Dataset schema (one row per query):
  query_id            string, length 32
  query               string, length 6 to 3.9k
  positive_passages   list, 1 to 21 items
  negative_passages   list, 10 to 100 items
  subset              string, one of 7 classes
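Each row pairs a query (a paper title) with annotated positive and negative passages, so the schema is easy to mirror in code. The sketch below is a minimal illustration of how such rows could be represented and loaded from a local JSONL export; the file name "scidocsrr.jsonl", the Passage/Record class names, and the loader itself are assumptions for illustration, not part of the dataset card.

```python
# Minimal sketch: represent and load rows matching the schema above.
# Assumes a local JSONL export named "scidocsrr.jsonl"; adjust to your setup.
import json
from dataclasses import dataclass
from typing import List

@dataclass
class Passage:
    docid: str   # 32-character hex identifier
    text: str    # passage body (abstract-length)
    title: str   # may be empty in this dump

@dataclass
class Record:
    query_id: str                      # 32-character hex identifier
    query: str                         # paper title used as the query
    positive_passages: List[Passage]   # 1-21 relevant passages
    negative_passages: List[Passage]   # 10-100 non-relevant passages
    subset: str                        # one of 7 subset labels, e.g. "scidocsrr"

def load_records(path: str) -> List[Record]:
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            records.append(Record(
                query_id=row["query_id"],
                query=row["query"],
                positive_passages=[Passage(**p) for p in row["positive_passages"]],
                negative_passages=[Passage(**p) for p in row["negative_passages"]],
                subset=row["subset"],
            ))
    return records

if __name__ == "__main__":
    recs = load_records("scidocsrr.jsonl")  # assumed local export of this split
    print(len(recs), "queries;",
          sum(len(r.positive_passages) for r in recs), "positive passages")
```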
a785601f50bad4ed3744ee1442d8116e
A Unit Selection Methodology for Music Generation Using Deep Neural Networks
[ { "docid": "8f47dc7401999924dba5cb3003194071", "text": "Few types of signal streams are as ubiquitous as music. Here we consider the problem of extracting essential ingredients of music signals, such as well-defined global temporal structure in the form of nested periodicities (or meter). Can we construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style? Because recurrent neural networks can in principle learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard recurrent neural networks (RNNs) often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing & counting and learning of context sensitive languages. In the current study we show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.", "title": "" }, { "docid": "67b5bd59689c325365ac765a17886169", "text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.", "title": "" }, { "docid": "9198e035c77e8798462dd97426ed0e67", "text": "In this paper, we propose a generic technique to model temporal dependencies and sequences using a combination of a recurrent neural network and a Deep Belief Network. Our technique, RNN-DBN, is an amalgamation of the memory state of the RNN that allows it to provide temporal information and a multi-layer DBN that helps in high level representation of the data. This makes RNN-DBNs ideal for sequence generation. Further, the use of a DBN in conjunction with the RNN makes this model capable of significantly more complex data representation than an RBM. We apply this technique to the task of polyphonic music generation.", "title": "" }, { "docid": "b7ddc52ae897720f50d3f092d8cfbdab", "text": "Markov chains are a well known tool to model temporal properties of many phenomena, from text structure to fluctuations in economics. Because they are easy to generate, Markovian sequences, i.e. 
temporal sequences having the Markov property, are also used for content generation applications such as text or music generation that imitate a given style. However, Markov sequences are traditionally generated using greedy, left-to-right algorithms. While this approach is computationally cheap, it is fundamentally unsuited for interactive control. This paper addresses the issue of generating steerable Markovian sequences. We target interactive applications such as games, in which users want to control, through simple input devices, the way the system generates a Markovian sequence, such as a text, a musical sequence or a drawing. To this aim, we propose to revisit Markov sequence generation as a branch and bound constraint satisfaction problem (CSP). We propose a CSP formulation of the basic Markovian hypothesis as elementary Markov Constraints (EMC). We propose algorithms that achieve domain-consistency for the propagators of EMCs, in an event-based implementation of CSP. We show how EMCs can be combined to estimate the global Markovian probability of a whole sequence, and accommodate for different species of Markov generation such as fixed order, variable-order, or smoothing. Such a formulation, although more costly than traditional greedy generation algorithms, yields the immense advantage of being naturally steerable, since control specifications can be represented by arbitrary additional constraints, without any modification of the generation algorithm. We illustrate our approach on simple yet combinatorial chord sequence and melody generation problems and give some performance results.", "title": "" } ]
[ { "docid": "785a0d51c9d105532a2e571afccd957b", "text": "Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, for those traditional facial recognition algorithms, the facial images are reshaped to a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples was obtained; in the second stage, discriminative locality alignment was utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared those traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts feature by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on the three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.", "title": "" }, { "docid": "a7665a6c0955b5d4ca2c4c8cdc183974", "text": "Deep learning has recently helped AI systems to achieve human-level performance in several domains, including speech recognition, object classification, and playing several types of games. The major benefit of deep learning is that it enables end-to-end learning of representations of the data on several levels of abstraction. However, the overall network architecture and the learning algorithms’ sensitive hyperparameters still need to be set manually by human experts. In this talk, I will discuss extensions of Bayesian optimization for handling this problem effectively, thereby paving the way to fully automated end-to-end learning. I will focus on speeding up Bayesian optimization by reasoning over data subsets and initial learning curves, sometimes resulting in 100-fold speedups in finding good hyperparameter settings. I will also show competition-winning practical systems for automated machine learning (AutoML) and briefly show related applications to the end-to-end optimization of algorithms for solving hard combinatorial problems. Bio. Frank Hutter is an Emmy Noether Research Group Lead (eq. Asst. Prof.) at the Computer Science Department of the University of Freiburg (Germany). He received his PhD from the University of British Columbia (2009). Frank’s main research interests span artificial intelligence, machine learning, combinatorial optimization, and automated algorithm design. He received a doctoral dissertation award from the Canadian Artificial Intelligence Association and, with his coauthors, several best paper awards (including from JAIR and IJCAI) and prizes in international competitions on machine learning, SAT solving, and AI planning. In 2016 he received an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning. 
Frontiers in Recurrent Neural Network Research", "title": "" }, { "docid": "6db5f103fa479fc7c7c33ea67d7950f6", "text": "Problem statement: To design, implement, and test an algorithm for so lving the square jigsaw puzzle problem, which has many applications in image processing, pattern recognition, and computer vision such as restoration of archeologica l artifacts and image descrambling. Approach: The algorithm used the gray level profiles of border pi xels for local matching of the puzzle pieces, which was performed using dynamic programming to facilita te non-rigid alignment of pixels of two gray level profiles. Unlike the classical best-first sea rch, the algorithm simultaneously located the neigh bors of a puzzle piece during the search using the wellknown Hungarian procedure, which is an optimal assignment procedure. To improve the search for a g lobal solution, every puzzle piece was considered as starting piece at various starting locations. Results: Experiments using four well-known images demonstrated the effectiveness of the proposed appr o ch over the classical piece-by-piece matching approach. The performance evaluation was based on a new precision performance measure. For all four test images, the proposed algorithm achieved 1 00% precision rate for puzzles up to 8×8. Conclusion: The proposed search mechanism based on simultaneou s all cation of puzzle pieces using the Hungarian procedure provided better performance than piece-by-piece used in classical methods.", "title": "" }, { "docid": "0084d9c69d79a971e7139ab9720dd846", "text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.", "title": "" }, { "docid": "29f6917a8eaf7958ffa3408a41e981a4", "text": "Reconstruction and rehabilitation following rhinectomy remains controversial and presents a complex problem. Although reconstruction with local and microvascular flaps is a valid option, the aesthetic results may not always be satisfactory. The aesthetic results achieved with a nasal prosthesis are excellent; however patient acceptance relies on a secure method of retention. 
The technique used and results obtained in a large series of patients undergoing rhinectomy and receiving zygomatic implants for the retention of a nasal prosthesis are described here. A total of 56 zygomatic implants (28 patients) were placed, providing excellent retention and durability with the loss of only one implant in 15 years.", "title": "" }, { "docid": "1b5bf2ef58a5f12e09f66e91d6472e56", "text": "High quality upsampling of sparse 3D point clouds is critically useful for a wide range of geometric operations such as reconstruction, rendering, meshing, and analysis. In this paper, we propose a data-driven algorithm that enables an upsampling of 3D point clouds without the need for hard-coded rules. Our approach uses a deep network with Chamfer distance as the loss function, capable of learning the latent features in point clouds belonging to different object categories. We evaluate our algorithm across different amplification factors, with upsampling learned and performed on objects belonging to the same category as well as different categories. We also explore the desirable characteristics of input point clouds as a function of the distribution of the point samples. Finally, we demonstrate the performance of our algorithm in single-category training versus multi-category training scenarios. The final proposed model is compared against a baseline, optimization-based upsampling method. The results indicate that our algorithm is capable of generating more accurate upsamplings with less Chamfer loss.", "title": "" }, { "docid": "4bf485a218fca405a4d8655bc2a2be86", "text": "In today’s competitive business environment, companies are facing challenges in dealing with big data issues for rapid decision making for improved productivity. Many manufacturing systems are not ready to manage big data due to the lack of smart analytics tools. Germany is leading a transformation toward 4th Generation Industrial Revolution (Industry 4.0) based on Cyber-Physical System based manufacturing and service innovation. As more software and embedded intelligence are integrated in industrial products and systems, predictive technologies can further intertwine intelligent algorithms with electronics and tether-free intelligence to predict product performance degradation and autonomously manage and optimize product service needs. This article addresses the trends of industrial transformation in big data environment as well as the readiness of smart predictive informatics tools to manage big data to achieve transparency and productivity. Keywords—Industry 4.0; Cyber Physical Systems; Prognostics and Health Management; Big Data;", "title": "" }, { "docid": "c843f4ba35aee9ef2ac7e852a1d489c4", "text": "We investigate the effect of a corporate culture of sustainability on multiple facets of corporate behavior and performance outcomes. Using a matched sample of 180 companies, we find that corporations that voluntarily adopted environmental and social policies many years ago termed as High Sustainability companies exhibit fundamentally different characteristics from a matched sample of firms that adopted almost none of these policies termed as Low Sustainability companies. In particular, we find that the boards of directors of these companies are more likely to be responsible for sustainability and top executive incentives are more likely to be a function of sustainability metrics. 
Moreover, they are more likely to have organized procedures for stakeholder engagement, to be more long-term oriented, and to exhibit more measurement and disclosure of nonfinancial information. Finally, we provide evidence that High Sustainability companies significantly outperform their counterparts over the long-term, both in terms of stock market and accounting performance. The outperformance is stronger in sectors where the customers are individual consumers instead of companies, companies compete on the basis of brands and reputations, and products significantly depend upon extracting large amounts of natural resources. Robert G. Eccles is a Professor of Management Practice at Harvard Business School. Ioannis Ioannou is an Assistant Professor of Strategic and International Management at London Business School. George Serafeim is an Assistant Professor of Business Administration at Harvard Business School, contact email: [email protected]. Robert Eccles and George Serafeim gratefully acknowledge financial support from the Division of Faculty Research and Development of the Harvard Business School. We would like to thank Christopher Greenwald for supplying us with the ASSET4 data. Moreover, we would like to thank Cecile Churet and Iordanis Chatziprodromou from Sustainable Asset Management for giving us access to their proprietary data. We are grateful to Chris Allen, Jeff Cronin, Christine Rivera, and James Zeitler for research assistance. We thank Ben Esty, Joshua Margolis, Costas Markides, Catherine Thomas and seminar participants at Boston College for helpful comments. We are solely responsible for any errors in this manuscript.", "title": "" }, { "docid": "a1ef2bce061c11a2d29536d7685a56db", "text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "title": "" }, { "docid": "99e4a4619e20bf0612c0db4249952874", "text": "Today, machine learning based on neural networks has become mainstream, in many application domains. A small subset of machine learning algorithms, called Convolutional Neural Networks (CNN), are considered as state-ofthe- art for many applications (e.g. video/audio classification). The main challenge in implementing the CNNs, in embedded systems, is their large computation, memory, and bandwidth requirements. To meet these demands, dedicated hardware accelerators have been proposed. Since memory is the major cost in CNNs, recent accelerators focus on reducing the memory accesses. In particular, they exploit data locality using either tiling, layer merging or intra/inter feature map parallelism to reduce the memory footprint. However, they lack the flexibility to interleave or cascade these optimizations. 
Moreover, most of the existing accelerators do not exploit compression that can simultaneously reduce memory requirements, increase the throughput, and enhance the energy efficiency. To tackle these limitations, we present a flexible accelerator called MOCHA. MOCHA has three features that differentiate it from the state-of-the-art: (i) the ability to compress input/ kernels, (ii) the flexibility to interleave various optimizations, and (iii) intelligence to automatically interleave and cascade the optimizations, depending on the dimension of a specific CNN layer and available resources. Post layout Synthesis results reveal that MOCHA provides up to 63% higher energy efficiency, up to 42% higher throughput, and up to 30% less storage, compared to the next best accelerator, at the cost of 26-35% additional area.", "title": "" }, { "docid": "d361dd8eaea9c8fa8d0a74e8f2161f4b", "text": "Gamification is commonly employed in designing interactive systems to enhance user engagement and motivations, or to trigger behavior change processes. Although some quantitative studies have been recently conducted aiming at measuring the effects of gamification on users’ behaviors and motivations, there is a shortage of qualitative studies able to capture the subjective experiences of users, when using gamified systems. The authors propose to investigate how users are engaged by the most common gamification techniques, by conducting a diary study followed by a series of six focus groups. From the findings gathered, they conclude the paper identifying some implications for the design of interactive systems that aim at supporting intrinsic motivations to engage their users. A Qualitative Investigation of Gamification: Motivational Factors in Online Gamified Services and Applications", "title": "" }, { "docid": "81919bc432dd70ed3e48a0122d91b9e4", "text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. 
vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.", "title": "" }, { "docid": "48b3ee93758294ffa7b24584c53cbda1", "text": "Engineering design problems requiring the construction of a cheap-to-evaluate 'surrogate' model f that emulates the expensive response of some black box f come in a variety of forms, but they can generally be distilled down to the following template. Here ffx is some continuous quality, cost or performance metric of a product or process defined by a k-vector of design variables x ∈ D ⊂ R k. In what follows we shall refer to D as the design space or design domain. Beyond the assumption of continuity, the only insight we can gain into f is through discrete observations or samples x ii → y ii = ffx ii i = 1 n. These are expensive to obtain and therefore must be used sparingly. The task is to use this sparse set of samples to construct an approximation f , which can then be used to make a cheap performance prediction for any design x ∈ D. Much of this book is made up of recipes for constructing f , given a set of samples. Excepting a few pathological cases, the mathematical formulations of these modelling approaches are well-posed, regardless of how the sampling plan X = x 1 x 2 x nn determines the spatial arrangement of the observations we have built them upon. Some models do require a minimum number n of data points but, once we have passed this threshold, we can use them to build an unequivocally defined surrogate. However, a well-posed model does not necessarily generalize well, that is it may still be poor at predicting unseen data, and this feature does depend on the sampling plan X. For example, measuring the performance of a design at the extreme values of its parameters may leave a great deal of interesting behaviour undiscovered, say, in the centre of the design space. Equally, spraying points liberally in certain parts of the inside of the domain, forcing the surrogate model to make far-reaching extrapolations elsewhere, may lead us to (false) global conclusions based on patchy, local knowledge of the objective landscape. Of course, we do not always have a choice in the matter. We may be using data obtained by someone else for some other purpose or the available observations may come from a variety of external sources and we may not be able to add to them. The latter situation often occurs in conceptual design, where we …", "title": "" }, { "docid": "472e9807c2f4ed6d1e763dd304f22c64", "text": "Commercial analytical database systems suffer from a high \"time-to-first-analysis\": before data can be processed, it must be modeled and schematized (a human effort), transferred into the database's storage layer, and optionally clustered and indexed (a computational effort). For many types of structured data, this upfront effort is unjustifiable, so the data are processed directly over the file system using the Hadoop framework, despite the cumulative performance benefits of processing this data in an analytical database system. In this paper we describe a system that achieves the immediate gratification of running MapReduce jobs directly over a file system, while still making progress towards the long-term performance benefits of database systems. The basic idea is to piggyback on MapReduce jobs, leverage their parsing and tuple extraction operations to incrementally load and organize tuples into a database system, while simultaneously processing the file system data. 
We call this scheme Invisible Loading, as we load fractions of data at a time at almost no marginal cost in query latency, but still allow future queries to run much faster.", "title": "" }, { "docid": "2e6e46a1224041ed2080395f82b7c49c", "text": "The image processing techniques are very useful for many applications such as biology, security, satellite imagery, personal photo, medicine, etc. The procedures of image processing such as image enhancement, image segmentation and feature extraction are used for fracture detection system.This paper uses Canny edge detection method for segmentation.Canny method produces perfect information from the bone image. The main aim of this research is to detect human lower leg bone fracture from X-Ray images. The proposed system has three steps, namely, preprocessing, segmentation, and fracture detection. In feature extraction step, this paper uses Hough transform technique for line detection in the image. Feature extraction is the main task of the system. The results from various experiments show that the proposed system is very accurate and efficient.", "title": "" }, { "docid": "c61107e9c5213ddb8c5e3b1b14dca661", "text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.", "title": "" }, { "docid": "8a9076c9212442e3f52b828ad96f7fe7", "text": "The building industry uses great quantities of raw materials that also involve high energy consumption. Choosing materials with high content in embodied energy entails an initial high level of energy consumption in the building production stage but also determines future energy consumption in order to fulfil heating, ventilation and air conditioning demands. This paper presents the results of an LCA study comparing the most commonly used building materials with some eco-materials using three different impact categories. The aim is to deepen the knowledge of energy and environmental specifications of building materials, analysing their possibilities for improvement and providing guidelines for materials selection in the eco-design of new buildings and rehabilitation of existing buildings. The study proves that the impact of construction products can be significantly reduced by promoting the use of the best techniques available and eco-innovation in production plants, substituting the use of finite natural resources for waste generated in other production processes, preferably available locally. This would stimulate competition between manufacturers to launch more eco-efficient products and encourage the use of the Environmental Product Declarations. 
This paper has been developed within the framework of the “LoRe-LCA Project” co-financed by the European Commission’s Intelligent Energy for Europe Program and the “PSE CICLOPE Project” co-financed by the Spanish Ministry of Science and Technology and the European Regional Development Fund. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3d5bbe4dcdc3ad787e57583f7b621e36", "text": "A miniaturized antenna employing a negative index metamaterial with modified split-ring resonator (SRR) and capacitance-loaded strip (CLS) unit cells is presented for Ultra wideband (UWB) microwave imaging applications. Four left-handed (LH) metamaterial (MTM) unit cells are located along one axis of the antenna as the radiating element. Each left-handed metamaterial unit cell combines a modified split-ring resonator (SRR) with a capacitance-loaded strip (CLS) to obtain a design architecture that simultaneously exhibits both negative permittivity and negative permeability, which ensures a stable negative refractive index to improve the antenna performance for microwave imaging. The antenna structure, with dimension of 16 × 21 × 1.6 mm³, is printed on a low dielectric FR4 material with a slotted ground plane and a microstrip feed. The measured reflection coefficient demonstrates that this antenna attains 114.5% bandwidth covering the frequency band of 3.4-12.5 GHz for a voltage standing wave ratio of less than 2 with a maximum gain of 5.16 dBi at 10.15 GHz. There is a stable harmony between the simulated and measured results that indicate improved nearly omni-directional radiation characteristics within the operational frequency band. The stable surface current distribution, negative refractive index characteristic, considerable gain and radiation properties make this proposed negative index metamaterial antenna optimal for UWB microwave imaging applications.", "title": "" }, { "docid": "baf9e931df45d010c44083973d1281fd", "text": "Error vector magnitude (EVM) is one of the widely accepted figure of merits used to evaluate the quality of communication systems. In the literature, EVM has been related to signal-to-noise ratio (SNR) for data-aided receivers, where preamble sequences or pilots are used to measure the EVM, or under the assumption of high SNR values. In this paper, this relation is examined for nondata-aided receivers and is shown to perform poorly, especially for low SNR values or high modulation orders. The EVM for nondata-aided receivers is then evaluated and its value is related to the SNR for quadrature amplitude modulation (QAM) and pulse amplitude modulation (PAM) signals over additive white Gaussian noise (AWGN) channels and Rayleigh fading channels, and for systems with IQ imbalances. The results show that derived equations can be used to reliably estimate SNR values using EVM measurements that are made based on detected data symbols. Thus, presented work can be quite useful for measurement devices such as vector signal analyzers (VSA), where EVM measurements are readily available.", "title": "" }, { "docid": "082e747ab9f93771a71e2b6147d253b2", "text": "Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals’ locations from the content they produce online or their online relations, but often are limited by the available location-related data. 
We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74% of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms.", "title": "" } ]
scidocsrr
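The positive/negative split in each record points to a reranking-style evaluation: a model scores every candidate passage against the query and is judged by how highly it ranks the annotated positives. The sketch below, which reuses the Passage/Record classes from the earlier snippet, illustrates one such evaluation with a deliberately naive token-overlap scorer and mean reciprocal rank; both the scorer and the metric choice are illustrative assumptions, not the dataset's official protocol.

```python
# Toy reranking evaluation over records loaded with load_records() above.
from typing import List

def overlap_score(query: str, text: str) -> float:
    # Naive lexical scorer: fraction of query tokens that appear in the passage.
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def reciprocal_rank(record) -> float:
    # Rank all candidates (positives + negatives) and return 1/rank of the
    # highest-ranked positive passage.
    pos_ids = {p.docid for p in record.positive_passages}
    candidates = record.positive_passages + record.negative_passages
    ranked = sorted(candidates,
                    key=lambda p: overlap_score(record.query, p.text),
                    reverse=True)
    for rank, passage in enumerate(ranked, start=1):
        if passage.docid in pos_ids:
            return 1.0 / rank
    return 0.0  # unreachable guard: positives are always among the candidates

def mean_reciprocal_rank(records: List) -> float:
    return sum(reciprocal_rank(r) for r in records) / len(records)
```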
1f2250209a2472bb1d660be549649ffe
Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization
[ { "docid": "2670c9d261edfb771d7e9673a282ea0b", "text": "In this paper a method is proposed to recover and interpret the 3D body structures of a person from a single view, provided that (1) at least six feature points on the head and a set of body joints are available on the image plane, and (2) the geometry of head and lengths of body segments formed by joints are known. First of all, the feature points on the head in the head-centered coordinate system and their image projections are used to determine a transformation matrix. Then, the camera position and orientations are extracted from the matrix. Finally, the 3D coordinates of the head points expressed in the camera-centered coordinate system are obtained. Starting from the coordinates of the neck, which is a head feature point, the 3D coordinates of other joints one-by-one are determined under the assumption of the fixed lengths of the body segments. A binary interpretation tree is used to represent the 2”-’ possible body structures, if a human body has n joints. To determine the final feasible body structures, physical and motion constraints are used to prune the interpretation tree. Formulas and rules required for the tree pruning are formulated. Experiments are used to illustrate the pruning powers of these constraints. In the two cases of input data chosen, a unique or nearly unique solution of the body structure is obtained. e 1985 Academic PI~SS, IIIC.", "title": "" }, { "docid": "8a1ba356c34935a2f3a14656138f0414", "text": "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.", "title": "" } ]
[ { "docid": "f4e6c5c5c7fccbf0f72ff681cd3a8762", "text": "Program specifications are important for many tasks during software design, development, and maintenance. Among these, temporal specifications are particularly useful. They express formal correctness requirements of an application's ordering of specific actions and events during execution, such as the strict alternation of acquisition and release of locks. Despite their importance, temporal specifications are often missing, incomplete, or described only informally. Many techniques have been proposed that mine such specifications from execution traces or program source code. However, existing techniques mine only simple patterns, or they mine a single complex pattern that is restricted to a particular set of manually selected events. There is no practical, automatic technique that can mine general temporal properties from execution traces.\n In this paper, we present Javert, the first general specification mining framework that can learn, fully automatically, complex temporal properties from execution traces. The key insight behind Javert is that real, complex specifications can be formed by composing instances of small generic patterns, such as the alternating pattern ((ab)) and the resource usage pattern ((ab c)). In particular, Javert learns simple generic patterns and composes them using sound rules to construct large, complex specifications. We have implemented the algorithm in a practical tool and conducted an extensive empirical evaluation on several open source software projects. Our results are promising; they show that Javert is scalable, general, and precise. It discovered many interesting, nontrivial specifications in real-world code that are beyond the reach of existing automatic techniques.", "title": "" }, { "docid": "6b5455a7e5b93cd754c0ad90a7181a4d", "text": "This paper reports an exploration of the concept of social intelligence in the context of designing home dialogue systems for an Ambient Intelligence home. It describes a Wizard of Oz experiment involving a robotic interface capable of simulating several human social behaviours. Our results show that endowing a home dialogue system with some social intelligence will: (a) create a positive bias in the user’s perception of technology in the home environment, (b) enhance user acceptance for the home dialogue system, and (c) trigger social behaviours by the user in relation to the home dialogue system. q 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "61662cfd286c06970243bc13d5eff566", "text": "This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the generalization performance of a SVM in a quantitative way. Unlike conventional approaches to learning text classifiers, which rely primarily on empirical evidence, this model explains why and when SVMs perform well for text classification. In particular, it addresses the following questions: Why can support vector machines handle the large feature spaces in text classification effectively? How is this related to the statistical properties of text? What are sufficient conditions for applying SVMs to text-classification problems successfully?", "title": "" }, { "docid": "f3dc6ab7d2d66604353f60fe1d7bd45a", "text": "Establishing end-to-end authentication between devices and applications in Internet of Things (IoT) is a challenging task. 
Due to heterogeneity in terms of devices, topology, communication and different security protocols used in IoT, existing authentication mechanisms are vulnerable to security threats and can disrupt the progress of IoT in realizing Smart City, Smart Home and Smart Infrastructure, etc. To achieve end-to-end authentication between IoT devices/applications, the existing authentication schemes and security protocols require a two-factor authentication mechanism. Therefore, as part of this paper we review the suitability of an authentication scheme based on One Time Password (OTP) for IoT and proposed a scalable, efficient and robust OTP scheme. Our proposed scheme uses the principles of lightweight Identity Based Elliptic Curve Cryptography scheme and Lamport's OTP algorithm. We evaluate analytically and experimentally the performance of our scheme and observe that our scheme with a smaller key size and lesser infrastructure performs on par with the existing OTP schemes without compromising the security level. Our proposed scheme can be implemented in real-time IoT networks and is the right candidate for two-factor authentication among devices, applications and their communications in IoT.", "title": "" }, { "docid": "f1d1a73f21dcd1d27da4e9d4a93c5581", "text": "Movements of interfaces can be analysed in terms of whether they are sensible, sensable and desirable. Sensible movements are those that users naturally perform; sensable are those that can be measured by a computer; and desirable movements are those that are required by a given application. We show how a systematic comparison of sensible, sensable and desirable movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: the Augurscope II, a mobile augmented reality interface for outdoors; the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs; and pointing flashlights at walls and posters in order to play sounds.", "title": "" }, { "docid": "78d7c61f7ca169a05e9ae1393712cd69", "text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-ofthe-art methods. 
Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15%.", "title": "" }, { "docid": "aaba5dc8efc9b6a62255139965b6f98d", "text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.", "title": "" }, { "docid": "ffdddba343bb0aa47fc101696ab3696d", "text": "The meaning of a sentence in a document is more easily determined if its constituent words exhibit cohesion with respect to their individual semantics. This paper explores the degree of cohesion among a document's words using lexical chains as a semantic representation of its meaning. Using a combination of diverse types of lexical chains, we develop a text document representation that can be used for semantic document retrieval. For our approach, we develop two kinds of lexical chains: (i) a multilevel flexible chain representation of the extracted semantic values, which is used to construct a fixed segmentation of these chains and constituent words in the text; and (ii) a fixed lexical chain obtained directly from the initial semantic representation from a document. The extraction and processing of concepts is performed using WordNet as a lexical database. The segmentation then uses these lexical chains to model the dispersion of concepts in the document. Representing each document as a high-dimensional vector, we use spherical k-means clustering to demonstrate that our approach performs better than previ-", "title": "" }, { "docid": "69275ddc999036a415b339a0a0219978", "text": "BACKGROUND\nDeveloping countries, including Ethiopia are experiencing a double burden of malnutrition. There is limited information about prevalence of overweight/obesity among school aged children in Ethiopia particularly in Bahir Dar city. Hence this study aimed to assess the prevalence of overweight/obesity and associated factors among school children aged 6-12 years at Bahir Dar City, Northwest Ethiopia.\n\n\nMETHODS\nA school based cross-sectional study was carried out. 
A total of 634 children were included in the study. Multi stage systematic random sampling technique was used. A multivariable logistic regression analysis was used to identify factors associated with overweight/obesity. The association between dependent and independent variables were assessed using odds ratio with 95% confidence interval and p-value ≤0.05 was considered statistically significant.\n\n\nRESULTS\nThe overall prevalence of overweight and/or obesity was 11.9% (95% CI, 9.3, 14.4) (out of which 8.8% were overweight and 3.1% were obese). Higher wealth status[adjusted OR = 3.14, 95% CI:1.17, 8.46], being a private school student [AOR = 2.21, 95% CI:1.09, 4.49], use of transportation to and from school [AOR = 2.53, 95% CI: 1.26,5.06], fast food intake [AOR = 3.88, 95% CI: 1.42,10.55], lack of moderate physical activity [AOR = 2.87, 95% CI: 1.21,6.82], low intake of fruit and vegetable [AOR = 6.45, 95% CI:3.19,13.06] were significant factors associated with overweight and obesity.\n\n\nCONCLUSION\nThis study revealed that prevalence of overweight/obesity among school aged children in Bahir Dar city is high. Thus, promoting healthy dietary habit, particularly improving fruit and vegetable intake is essential to reduce the burden of overweight and obesity. Furthermore, it is important to strengthen nutrition education about avoiding junk food consumption and encouraging regular physical activity.", "title": "" }, { "docid": "f847a04cb60bbbe5a2cd1ec1c4c9be6f", "text": "This letter presents a wideband patch antenna on a low-temperature cofired ceramic substrate for Local Multipoint Distribution Service band applications. Conventional rectangular patch antennas have a narrow bandwidth. The proposed via-wall structure enhances the electric field coupling between the stacked patches to achieve wideband characteristics. We designed same-side and opposite-side feeding configurations and report on the fabrication of an experimental 28-GHz antenna used to validate the design concept. Measurements correlate well with the simulation results, achieving a 10-dB impedance bandwidth of 25.4% (23.4-30.2 GHz).", "title": "" }, { "docid": "ea9e392bdca32154b95b2b0b424229c3", "text": "Multi-person pose estimation in images and videos is an important yet challenging task with many applications. Despite the large improvements in human pose estimation enabled by the development of convolutional neural networks, there still exist a lot of difficult cases where even the state-of-the-art models fail to correctly localize all body joints. This motivates the need for an additional refinement step that addresses these challenging cases and can be easily applied on top of any existing method. In this work, we introduce a pose refinement network (PoseRefiner) which takes as input both the image and a given pose estimate and learns to directly predict a refined pose by jointly reasoning about the input-output space. In order for the network to learn to refine incorrect body joint predictions, we employ a novel data augmentation scheme for training, where we model \"hard\" human pose cases. 
We evaluate our approach on four popular large-scale pose estimation benchmarks such as MPII Single- and Multi-Person Pose Estimation, PoseTrack Pose Estimation, and PoseTrack Pose Tracking, and report systematic improvement over the state of the art.", "title": "" }, { "docid": "cf7eff6c24f333b6bcf30ef8cd8686e0", "text": "For 4 decades, vigorous efforts have been based on the premise that early intervention for children of poverty and, more recently, for children with developmental disabilities can yield significant improvements in cognitive, academic, and social outcomes. The history of these efforts is briefly summarized and a conceptual framework presented to understand the design, research, and policy relevance of these early interventions. This framework, biosocial developmental contextualism, derives from social ecology, developmental systems theory, developmental epidemiology, and developmental neurobiology. This integrative perspective predicts that fragmented, weak efforts in early intervention are not likely to succeed, whereas intensive, high-quality, ecologically pervasive interventions can and do. Relevant evidence is summarized in 6 principles about efficacy of early intervention. The public policy challenge in early intervention is to contain costs by more precisely targeting early interventions to those who most need and benefit from these interventions. The empirical evidence on biobehavioral effects of early experience and early intervention has direct relevance to federal and state policy development and resource allocation.", "title": "" }, { "docid": "b7094d555b9b4c7197822027510a65aa", "text": "Vegetation indices have been used extensively to estimate the vegetation density from satellite and airborne images for many years. In this paper, we focus on one of the most popular of such indices, the normalized difference vegetation index (NDVI), and we introduce a statistical framework to analyze it. As the degree of vegetation increases, the corresponding NDVI values begin to saturate and cannot represent highly vegetated regions reliably. By adopting the statistical viewpoint, we show how to obtain a linearized and more reliable measure. While the NDVI uses only red and near-infrared bands, we use the statistical framework to introduce new indices using the blue and green bands as well. We compare these indices with that obtained by linearizing the NDVI with extensive experimental results on real IKONOS multispectral images.", "title": "" }, { "docid": "204a2331af6c32a502005d5d19f4fc10", "text": "This paper presents a detailed comparative study of spoke type brushless dc (SPOKE-BLDC) motors due to the operating conditions and designs a new type SPOKE-BLDC with flux barriers for high torque applications, such as tractions. The current dynamic analysis method considering the local magnetic saturation of the rotor and the instantaneous current by pwm driving circuit is developed based on the coupled finite element analysis with rotor dynamic equations. From this analysis, several new structures using the flux barriers are designed and the characteristics are compared in order to reduce the large torque ripple and improve the average torque of SPOKE-BLDC. 
From these results, it is confirmed that the flux barriers, which are inserted on the optimized position of the rotor, have made remarkable improvement for the torque characteristics of the SPOKE-BLDC.", "title": "" }, { "docid": "0cc665089be9aa8217baac32f0385f41", "text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.", "title": "" }, { "docid": "e3b3e4e75580f3dad0f2fb2b9e28fff4", "text": "The present study introduced an integrated method for the production of biodiesel from microalgal oil. Heterotrophic growth of Chlorella protothecoides resulted in the accumulation of high lipid content (55%) in cells. Large amount of microalgal oil was efficiently extracted from these heterotrophic cells by using n-hexane. Biodiesel comparable to conventional diesel was obtained from heterotrophic microalgal oil by acidic transesterification. The best process combination was 100% catalyst quantity (based on oil weight) with 56:1 molar ratio of methanol to oil at temperature of 30 degrees C, which reduced product specific gravity from an initial value of 0.912 to a final value of 0.8637 in about 4h of reaction time. The results suggested that the new process, which combined bioengineering and transesterification, was a feasible and effective method for the production of high quality biodiesel from microalgal oil.", "title": "" }, { "docid": "80ba326570f2e492eff3515ddcc2b3cf", "text": "Automatic program transformation tools can be valuable for programmers to help them with refactoring tasks, and for Computer Science students in the form of tutoring systems that suggest repairs to programming assignments. However, manually creating catalogs of transformations is complex and time-consuming. In this paper, we present REFAZER, a technique for automatically learning program transformations. REFAZER builds on the observation that code edits performed by developers can be used as input-output examples for learning program transformations. 
Example edits may share the same structure but involve different variables and subexpressions, which must be generalized in a transformation at the right level of abstraction. To learn transformations, REFAZER leverages state-of-the-art programming-by-example methodology using the following key components: (a) a novel domain-specific language (DSL) for describing program transformations, (b) domain-specific deductive algorithms for efficiently synthesizing transformations in the DSL, and (c) functions for ranking the synthesized transformations. We instantiate and evaluate REFAZER in two domains. First, given examples of code edits used by students to fix incorrect programming assignment submissions, we learn program transformations that can fix other students' submissions with similar faults. In our evaluation conducted on 4 programming tasks performed by 720 students, our technique helped to fix incorrect submissions for 87% of the students. In the second domain, we use repetitive code edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code. In our evaluation conducted on 56 scenarios of repetitive edits taken from three large C# open-source projects, REFAZER learns the intended program transformation in 84% of the cases using only 2.9 examples on average.", "title": "" }, { "docid": "d83ecee8e5f59ee8e6a603c65f952c22", "text": "PredPatt is a pattern-based framework for predicate-argument extraction. While it works across languages and provides a well-formed syntax-semantics interface for NLP tasks, a large-scale and reproducible evaluation has been lacking, which prevents comparisons between PredPatt and other related systems, and inhibits the updates of the patterns in PredPatt. In this work, we improve and evaluate PredPatt by introducing a large set of high-quality annotations converted from PropBank, which can also be used as a benchmark for other predicate-argument extraction systems. We compare PredPatt with other prominent systems and shows that PredPatt achieves the best precision and recall.", "title": "" }, { "docid": "23ba216f846eab3ff8c394ad29b507bf", "text": "The emergence of large-scale freeform shapes in architecture poses big challenges to the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. We cast the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints. The search space for optimization is mainly generated through controlled deviation from the design surface and tolerances on positional and normal continuity between neighboring panels. A novel 6-dimensional metric space allows us to quickly compute approximate inter-panel distances, which dramatically improves the performance of the optimization and enables the handling of complex arrangements with thousands of panels. 
The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.", "title": "" } ]
scidocsrr
676750cc6699250834bbba06c106c5c6
Cyber-Physical-Social Based Security Architecture for Future Internet of Things
[ { "docid": "de8e9537d6b50467d014451dcaae6c0e", "text": "With increased global interconnectivity, reliance on e-commerce, network services, and Internet communication, computer security has become a necessity. Organizations must protect their systems from intrusion and computer-virus attacks. Such protection must detect anomalous patterns by exploiting known signatures while monitoring normal computer programs and network usage for abnormalities. Current antivirus and network intrusion detection (ID) solutions can become overwhelmed by the burden of capturing and classifying new viral stains and intrusion patterns. To overcome this problem, a self-adaptive distributed agent-based defense immune system based on biological strategies is developed within a hierarchical layered architecture. A prototype interactive system is designed, implemented in Java, and tested. The results validate the use of a distributed-agent biological-system approach toward the computer-security problems of virus elimination and ID.", "title": "" }, { "docid": "e33dd9c497488747f93cfcc1aa6fee36", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" } ]
[ { "docid": "bc5b77c532c384281af64633fcf697a3", "text": "The purpose of this study was to investigate the effects of a 12-week resistance-training program on muscle strength and mass in older adults. Thirty-three inactive participants (60-74 years old) were assigned to 1 of 3 groups: high-resistance training (HT), moderate-resistance training (MT), and control. After the training period, both HT and MT significantly increased 1-RM body strength, the peak torque of knee extensors and flexors, and the midthigh cross-sectional area of the total muscle. In addition, both HT and MT significantly decreased the abdominal circumference. HT was more effective in increasing 1-RM strength, muscle mass, and peak knee-flexor torque than was MT. These data suggest that muscle strength and mass can be improved in the elderly with both high- and moderate-intensity resistance training, but high-resistance training can lead to greater strength gains and hypertrophy than can moderate-resistance training.", "title": "" }, { "docid": "fd4bd9edcaff84867b6e667401aa3124", "text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378", "title": "" }, { "docid": "5c819727ba80894e72531a62e402f0c4", "text": "omega-3 fatty acids, alpha-tocopherol, ascorbic acid, beta-carotene and glutathione determined in leaves of purslane (Portulaca oleracea), grown in both a controlled growth chamber and in the wild, were compared in composition to spinach. Leaves from both samples of purslane contained higher amounts of alpha-linolenic acid (18:3w3) than did leaves of spinach. Chamber-grown purslane contained the highest amount of 18:3w3. Samples from the two kinds of purslane contained higher leaves of alpha-tocopherol, ascorbic acid and glutathione than did spinach. Chamber-grown purslane was richer in all three and the amount of alpha-tocopherol was seven times higher than that found in spinach, whereas spinach was slightly higher in beta-carotene. One hundred grams of fresh purslane leaves (one serving) contain about 300-400 mg of 18:3w3; 12.2 mg of alpha-tocopherol; 26.6 mg of ascorbic acid; 1.9 mg of beta-carotene; and 14.8 mg of glutathione. We confirm that purslane is a nutritious food rich in omega-3 fatty acids and antioxidants.", "title": "" }, { "docid": "ede12c734b2fb65b427b3d47e1f3c3d8", "text": "Battery management systems in hybrid-electric-vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state-of-charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. 
In a series of three papers, we propose methods, based on extended Kalman filtering (EKF), that are able to accomplish these goals for a lithium ion polymer battery pack. We expect that they will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This third paper concludes the series by presenting five additional applications where either an EKF or results from EKF may be used in typical BMS algorithms: initializing state estimates after the vehicle has been idle for some time; estimating state-of-charge with dynamic error bounds on the estimate; estimating pack available dis/charge power; tracking changing pack parameters (including power fade and capacity fade) as the pack ages, and therefore providing a quantitative estimate of state-of-health; and determining which cells must be equalized. Results from pack tests are presented. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e4e26cc61b326f8d60dc3f32909d340c", "text": "We propose two secure protocols namely private equality test (PET) for single comparison and private batch equality test (PriBET) for batch comparisons of l-bit integers. We ensure the security of these secure protocols using somewhat homomorphic encryption (SwHE) based on ring learning with errors (ring-LWE) problem in the semi-honest model. In the PET protocol, we take two private integers input and produce the output denoting their equality or non-equality. Here the PriBET protocol is an extension of the PET protocol. So in the PriBET protocol, we take single private integer and another set of private integers as inputs and produce the output denoting whether single integer equals at least one integer in the set of integers or not. To serve this purpose, we also propose a new packing method for doing the batch equality test using few homomorphic multiplications of depth one. Here we have done our experiments at the 140-bit security level. For the lattice dimension 2048, our experiments show that the PET protocol is capable of doing any equality test of 8-bit to 2048-bit that require at most 107 milliseconds. Moreover, the PriBET protocol is capable of doing about 600 (resp., 300) equality comparisons per second for 32-bit (resp., 64-bit) integers. In addition, our experiments also show that the PriBET protocol can do more computations within the same time if the data size is smaller like 8-bit or 16-bit.", "title": "" }, { "docid": "1cc4048067cc93c2f1e836c77c2e06dc", "text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. 
Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.", "title": "" }, { "docid": "440436a887f73c599452dc57c689dc9d", "text": "This paper will explore the process of desalination by reverse osmosis (RO) and the benefits that it can contribute to society. RO may offer a sustainable solution to the water crisis, a global problem that is not going away without severe interference and innovation. This paper will go into depth on the processes involved with RO and how contaminants are removed from sea-water. Additionally, the use of significant pressures to force water through the semipermeable membranes, which only allow water to pass through them, will be investigated. Throughout the paper, the topics of environmental and economic sustainability will be covered. Subsequently, the two primary methods of desalination, RO and multi-stage flash distillation (MSF), will be compared. It will become clear that RO is a better method of desalination when compared to MSF. This paper will study examples of RO in action, including; the Carlsbad Plant, the Sorek Plant, and applications beyond the potable water industry. It will be shown that The Claude \"Bud\" Lewis Carlsbad Desalination Plant (Carlsbad), located in San Diego, California is a vital resource in the water economy of the area. The impact of the Sorek Plant, located in Tel Aviv, Israel will also be explained. Both plants produce millions of gallons of fresh, drinkable water and are vital resources for the people that live there.", "title": "" }, { "docid": "10496d5427035670d89f72a64b68047f", "text": "A challenge for human-computer interaction researchers and user interf ace designers is to construct information technologies that support creativity. This ambitious goal can be attained by building on an adequate understanding of creative processes. This article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1)Collect: learn from provious works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages, (3)Create: explore, compose, evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. Within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. A scenario about an architect illustrates the process of creative work within such an environment.", "title": "" }, { "docid": "c19b63a2c109c098c22877bcba8690ae", "text": "A monolithic current-mode pulse width modulation (PWM) step-down dc-dc converter with 96.7% peak efficiency and advanced control and protection circuits is presented in this paper. The high efficiency is achieved by \"dynamic partial shutdown strategy\" which enhances circuit speed with less power consumption. Automatic PWM and \"pulse frequency modulation\" switching boosts conversion efficiency during light load operation. The modified current sensing circuit and slope compensation circuit simplify the current-mode control circuit and enhance the response speed. A simple high-speed over-current protection circuit is proposed with the modified current sensing circuit. The new on-chip soft-start circuit prevents the power on inrush current without additional off-chip components. 
The dc-dc converter has been fabricated with a 0.6 mum CMOS process and measured 1.35 mm2 with the controller measured 0.27 mm2. Experimental results show that the novel on-chip soft-start circuit with longer than 1.5 ms soft-start time suppresses the power-on inrush current. This converter can operate at 1.1 MHz with supply voltage from 2.2 to 6.0 V. Measured power efficiency is 88.5-96.7% for 0.9 to 800 mA output current and over 85.5% for 1000 mA output current.", "title": "" }, { "docid": "cc5f814338606b92c92aa6caf2f4a3f5", "text": "The purpose of this study was to report the outcome of infants with antenatal hydronephrosis. Between May 1999 and June 2006, all patients diagnosed with isolated fetal renal pelvic dilatation (RPD) were prospectively followed. The events of interest were: presence of uropathy, need for surgical intervention, RPD resolution, urinary tract infection (UTI), and hypertension. RPD was classified as mild (5–9.9 mm), moderate (10–14.9 mm) or severe (≥15 mm). A total of 192 patients was included in the analysis; 114 were assigned to the group of non-significant findings (59.4%) and 78 to the group of significant uropathy (40.6%). Of 89 patients with mild dilatation, 16 (18%) presented uropathy. Median follow-up time was 24 months. Twenty-seven patients (15%) required surgical intervention. During follow-up, UTI occurred in 27 (14%) children. Of 89 patients with mild dilatation, seven (7.8%) presented UTI during follow-up. Renal function, blood pressure, and somatic growth were within normal range at last visit. The majority of patients with mild fetal RPD have no significant findings during infancy. Nevertheless, our prospective study has shown that 18% of these patients presented uropathy and 7.8% had UTI during a medium-term follow-up time. Our findings suggested that, in contrast to patients with moderate/severe RPD, infants with mild RPD do not require invasive diagnostic procedures but need strict clinical surveillance for UTI and progression of RPD.", "title": "" }, { "docid": "2f3bb54596bba8cd7a073ef91964842c", "text": "BACKGROUND AND PURPOSE\nRecent meta-analyses have suggested similar wound infection rates when using single- or multiple-dose antibiotic prophylaxis in the operative management of closed long bone fractures. In order to assist clinicians in choosing the optimal prophylaxis strategy, we performed a cost-effectiveness analysis comparing single- and multiple-dose prophylaxis.\n\n\nMETHODS\nA cost-effectiveness analysis comparing the two prophylactic strategies was performed using time horizons of 60 days and 1 year. Infection probabilities, costs, and quality-adjusted life days (QALD) for each strategy were estimated from the literature. All costs were reported in 2007 US dollars. A base case analysis was performed for the surgical treatment of a closed ankle fracture. Sensitivity analysis was performed for all variables, including probabilistic sensitivity analysis using Monte Carlo simulation.\n\n\nRESULTS\nSingle-dose prophylaxis results in lower cost and a similar amount of quality-adjusted life days gained. The single-dose strategy had an average cost of $2,576 for an average gain of 272 QALD. Multiple doses had an average cost of $2,596 for 272 QALD gained. These results are sensitive to the incidence of surgical site infection and deep wound infection for the single-dose treatment arm. 
Probabilistic sensitivity analysis using all model variables also demonstrated preference for the single-dose strategy.\n\n\nINTERPRETATION\nAssuming similar infection rates between the prophylactic groups, our results suggest that single-dose prophylaxis is slightly more cost-effective than multiple-dose regimens for the treatment of closed fractures. Extensive sensitivity analysis demonstrates these results to be stable using published meta-analysis infection rates.", "title": "" }, { "docid": "3aa58539c69d6706bc0a9ca0256cdf80", "text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.", "title": "" }, { "docid": "bf4a991dbb32ec1091a535750637dbd7", "text": "As cutting-edge experiments display ever more extreme forms of non-classical behavior, the prevailing view on the interpretation of quantum mechanics appears to be gradually changing. A (highly unscientific) poll taken at the 1997 UMBC quantum mechanics workshop gave the once alldominant Copenhagen interpretation less than half of the votes. The Many Worlds interpretation (MWI) scored second, comfortably ahead of the Consistent Histories and Bohm interpretations. It is argued that since all the above-mentioned approaches to nonrelativistic quantum mechanics give identical cookbook prescriptions for how to calculate things in practice, practical-minded experimentalists, who have traditionally adopted the “shut-up-and-calculate interpretation”, typically show little interest in whether cozy classical concepts are in fact real in some untestable metaphysical sense or merely the way we subjectively perceive a mathematically simpler world where the Schrödinger equation describes everything — and that they are therefore becoming less bothered by a profusion of worlds than by a profusion of words. Common objections to the MWI are discussed. 
It is argued that when environment-induced decoherence is taken into account, the experimental predictions of the MWI are identical to those of the Copenhagen interpretation except for an experiment involving a Byzantine form of “quantum suicide”. This makes the choice between them purely a matter of taste, roughly equivalent to whether one believes mathematical language or human language to be more fundamental.", "title": "" }, { "docid": "f274062a188fb717b8645e4d2352072a", "text": "CPU-FPGA heterogeneous acceleration platforms have shown great potential for continued performance and energy efficiency improvement for modern data centers, and have captured great attention from both academia and industry. However, it is nontrivial for users to choose the right platform among various PCIe and QPI based CPU-FPGA platforms from different vendors. This paper aims to find out what microarchitectural characteristics affect the performance, and how. We conduct our quantitative comparison and in-depth analysis on two representative platforms: QPI-based Intel-Altera HARP with coherent shared memory, and PCIe-based Alpha Data board with private device memory. We provide multiple insights for both application developers and platform designers.", "title": "" }, { "docid": "c9c29c091c9851920315c4d4b38b4c9f", "text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.", "title": "" }, { "docid": "fc07af4d49f7b359e484381a0a88aff7", "text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. 
In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.", "title": "" }, { "docid": "0cd2da131bf78526c890dae72514a8f0", "text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1ec62f70be9d006b7e1295ef8d9cb1e3", "text": "The aim of this research is to explore social media and its benefits especially from business-to-business innovation and related customer interface perspective, and to create a more comprehensive picture of the possibilities of social media for the business-to-business sector. Business-to-business context was chosen because it is in many ways a very different environment for social media than business-to-consumer context, and is currently very little academically studied. A systematic literature review on B2B use of social media and achieved benefits in the inn ovation con text was performed to answer the questions above and achieve the research goals. The study clearly demonstrates that not merely B2C's, as commonly believed, but also B2B's can benefit from the use of social media in a variety of ways. 
Concerning the broader classes of innovation-related benefits, the reported benefits of social media use referred to increased customer focus and understanding, increased level of customer service, and decreased time-to-market. The study contributes to the existing social media-related literature, because no earlier comprehensive academic studies were found on the use of social media in the innovation process in the context of the B2B customer interface.", "title": "" }, { "docid": "97c162261666f145da6e81d2aa9a8343", "text": "Shape optimization is a growing field of interest in many areas of academic research, marine design, and manufacturing. As part of the CREATE Ships Hydromechanics Product, an effort is underway to develop a computational tool set and process framework that can aid the ship designer in making informed decisions regarding the influence of the planned hull shape on its hydrodynamic characteristics, even at the earliest stages where decisions can have significant cost implications. The major goal of this effort is to utilize the increasing experience gained in using these methods to assess shape optimization techniques and how they might impact design for current and future naval ships. Additionally, this effort is aimed at establishing an optimization framework within the bounds of a collaborative design environment that will result in improved performance and better understanding of preliminary ship designs at an early stage. The initial effort demonstrated here is aimed at ship resistance, and examples are shown for full ship and localized bow dome shaping related to the Joint High Speed Sealift (JHSS) hull concept. Any ship design inherently involves optimization, as competing requirements and design parameters force the design to evolve, and as designers strive to deliver the most effective and efficient platform possible within the constraints of time, budget, and performance requirements. A significant number of applications of computational fluid dynamics (CFD) tools to hydrodynamic optimization, mostly for reducing calm-water drag and wave patterns, demonstrate a growing interest in optimization. In addition, more recent ship design programs within the US Navy illustrate some fundamental changes in mission and performance requirements, and future ship designs may be radically different from current ships in the fleet. One difficulty with designing such new concepts is the lack of experience from which to draw when performing design studies; thus, optimization techniques may be particularly useful. These issues point to a need for greater fidelity, robustness, and ease of use in the tools used in early stage ship design. The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program attempts to address this in its plan to develop and deploy sets of computational engineering design and analysis tools. It is expected that advances in computers will allow for highly accurate design and analysis studies that can be carried out throughout the design process. In order to evaluate candidate designs and explore the design space more thoroughly, shape optimization is an important component of the CREATE Ships Hydromechanics Product. The current program development plan includes fast parameterized codes to bound the design space and more accurate Reynolds-Averaged Navier-Stokes (RANS) codes to better define the geometry and performance of the specified hull forms. 
The potential for hydrodynamic shape optimization has been demonstrated for a variety of different hull forms, including multi-hulls, in related efforts (see e.g., Wilson et al, 2009; Stern et al).", "title": "" }, { "docid": "7a8a98b91680cbc63594cd898c3052c8", "text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than the current state-of-practice approach that supports policy-based access control.", "title": "" } ]
scidocsrr
1916caa83f81c9e0d9b79c11b423b711
The metabolic cost of neural information
[ { "docid": "ec93b4c61694916dd494e9376102726b", "text": "In 1969 Barlow introduced the phrase economy of impulses to express the tendency for successive neural systems to use lower and lower levels of cell firings to produce equivalent encodings. From this viewpoint, the ultimate economy of impulses is a neural code of minimal redundancy. The hypothesis motivating our research is that energy expenditures, e.g., the metabolic cost of recovering from an action potential relative to the cost of inactivity, should also be factored into the economy of impulses. In fact, coding schemes with the largest representational capacity are not, in general, optimal when energy expenditures are taken into account. We show that for both binary and analog neurons, increased energy expenditure per neuron implies a decrease in average firing rate if energy efficient information transmission is to be maintained.", "title": "" }, { "docid": "6d2903f82ec382b4214d9322e545e71f", "text": "We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog computation nor digital computation but, rather, a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.", "title": "" } ]
[ { "docid": "abb748541b980385e4b8bc477c5adc0e", "text": "Spin–orbit torque, a torque brought about by in-plane current via the spin–orbit interactions in heavy-metal/ferromagnet nanostructures, provides a new pathway to switch the magnetization direction. Although there are many recent studies, they all build on one of two structures that have the easy axis of a nanomagnet lying orthogonal to the current, that is, along the z or y axes. Here, we present a new structure with the third geometry, that is, with the easy axis collinear with the current (along the x axis). We fabricate a three-terminal device with a Ta/CoFeB/MgO-based stack and demonstrate the switching operation driven by the spin–orbit torque due to Ta with a negative spin Hall angle. Comparisons with different geometries highlight the previously unknown mechanisms of spin–orbit torque switching. Our work offers a new avenue for exploring the physics of spin–orbit torque switching and its application to spintronics devices.", "title": "" }, { "docid": "bdda2074b0ab2e12047d0702acb4d20a", "text": "Ferroptosis has emerged as a new form of regulated necrosis that is implicated in various human diseases. However, the mechanisms of ferroptosis are not well defined. This study reports the discovery of multiple molecular components of ferroptosis and its intimate interplay with cellular metabolism and redox machinery. Nutrient starvation often leads to sporadic apoptosis. Strikingly, we found that upon deprivation of amino acids, a more rapid and potent necrosis process can be induced in a serum-dependent manner, which was subsequently determined to be ferroptosis. Two serum factors, the iron-carrier protein transferrin and amino acid glutamine, were identified as the inducers of ferroptosis. We further found that the cell surface transferrin receptor and the glutamine-fueled intracellular metabolic pathway, glutaminolysis, played crucial roles in the death process. Inhibition of glutaminolysis, the essential component of ferroptosis, can reduce heart injury triggered by ischemia/reperfusion, suggesting a potential therapeutic approach for treating related diseases.", "title": "" }, { "docid": "a7317f06cf34e501cb169bdf805e7e34", "text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. 
The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.", "title": "" }, { "docid": "dd0bbc039e1bbc9e36ffe087e105cf56", "text": "Using a comparative analysis approach, this article examines the development, characteristics and issues concerning the discourse of modern Asian art in the twentieth century, with the aim of bringing into picture the place of Asia in the history of modernism. The wide recognition of the Western modernist canon as centre and universal displaces the contribution and significance of the non-Western world in the modern movement. From a cross-cultural perspective, this article demonstrates that modernism in the field of visual arts in Asia, while has had been complex and problematic, nevertheless emerged. Rather than treating Asian art as a generalized subject, this article argues that, with their subtly different notions of culture, identity and nationhood, the modernisms that emerged from various nations in this region are diverse and culturally specific. Through the comparison of various art-historical contexts in this region (namely China, India, Japan and Korea), this article attempts to map out some similarities as well as differences in their pursuit of an autonomous modernist representation.", "title": "" }, { "docid": "65580dfc9bdf73ef72b6a133ab19ccdd", "text": "A rotary piezoelectric motor design with simple structural components and the potential for miniaturization using a pretwisted beam stator is demonstrated in this paper. The beam acts as a vibration converter to transform axial vibration input from a piezoelectric element into combined axial-torsional vibration. The axial vibration of the stator modulates the torsional friction forces transmitted to the rotor. Prototype stators measuring 6.5 times 6.5 times 67.5 mm were constructed using aluminum (2024-T6) twisted beams with rectangular cross-section and multilayer piezoelectric actuators. The stall torque and no-load speed attained for a rectangular beam with an aspect ratio of 1.44 and pretwist helix angle of 17.7deg were 0.17 mNm and 840 rpm with inputs of 184.4 kHz and 149 mW, respectively. Operation in both clockwise and counterclockwise directions was obtained by choosing either 70.37 or 184.4 kHz for the operating frequency. The effects of rotor preload and power input on motor performance were investigated experimentally. 
The results suggest that motor efficiency is higher at low power input, and that efficiency increases with preload to a maximum beyond which it begins to drop.", "title": "" }, { "docid": "456b7ad01115d9bc04ca378f1eb6d7f2", "text": "Article history: Received 13 October 2007 Received in revised form 12 June 2008 Accepted 31 July 2008", "title": "" }, { "docid": "1c5f53fe8d663047a3a8240742ba47e4", "text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.", "title": "" }, { "docid": "068321516540ed9f5f05638bdfb7235a", "text": "Cloud of Things (CoT) is a computing model that combines the widely popular cloud computing with Internet of Things (IoT). One of the major problems with CoT is the latency of accessing distant cloud resources from the devices, where the data is captured. To address this problem, paradigms such as fog computing and Cloudlets have been proposed to interpose another layer of computing between the clouds and devices. Such a three-layered cloud-fog-device computing architecture is touted as the most suitable approach for deploying many next generation ubiquitous computing applications. Programming applications to run on such a platform is quite challenging because disconnections between the different layers are bound to happen in a large-scale CoT system, where the devices can be mobile. This paper presents a programming language and system for a three-layered CoT system. We illustrate how our language and system addresses some of the key challenges in the three-layered CoT. A proof-of-concept prototype compiler and runtime have been implemented and several example applications are developed using it.", "title": "" }, { "docid": "cfad1e8941f0a60f6978493c999a5850", "text": "We propose SecVisor, a tiny hypervisor that ensures code integrity for commodity OS kernels. In particular, SecVisor ensures that only user-approved code can execute in kernel mode over the entire system lifetime. This protects the kernel against code injection attacks, such as kernel rootkits. SecVisor can achieve this propertyeven against an attacker who controls everything but the CPU, the memory controller, and system memory chips. Further, SecVisor can even defend against attackers with knowledge of zero-day kernel exploits.\n Our goal is to make SecVisor amenable to formal verificationand manual audit, thereby making it possible to rule out known classes of vulnerabilities. To this end, SecVisor offers small code size and small external interface. We rely on memory virtualization to build SecVisor and implement two versions, one using software memory virtualization and the other using CPU-supported memory virtualization. The code sizes of the runtime portions of these versions are 1739 and 1112 lines, respectively. The size of the external interface for both versions of SecVisor is 2 hypercalls. 
It is easy to port OS kernels to SecVisor. We port the Linux kernel version 2.6.20 by adding 12 lines and deleting 81 lines, out of a total of approximately 4.3 million lines of code in the kernel.", "title": "" }, { "docid": "a88c0d45ca7859c050e5e76379f171e6", "text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.", "title": "" }, { "docid": "b1f98cbb045f8c15f53d284c9fa9d881", "text": "If the pace of increase in life expectancy in developed countries over the past two centuries continues through the 21st century, most babies born since 2000 in France, Germany, Italy, the UK, the USA, Canada, Japan, and other countries with long life expectancies will celebrate their 100th birthdays. Although trends differ between countries, populations of nearly all such countries are ageing as a result of low fertility, low immigration, and long lives. A key question is: are increases in life expectancy accompanied by a concurrent postponement of functional limitations and disability? The answer is still open, but research suggests that ageing processes are modifiable and that people are living longer without severe disability. This finding, together with technological and medical development and redistribution of work, will be important for our chances to meet the challenges of ageing populations.", "title": "" }, { "docid": "556c0c1662a64f484aff9d7556b2d0b5", "text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.", "title": "" }, { "docid": "f7bdf07ef7a45c3e261e4631743c1882", "text": "Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. 
However, they suffer from poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sample-efficient neural network algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learning deep RL-based dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.", "title": "" }, { "docid": "3ee021ed227247981e91566c2df4ac26", "text": "Particle filters have been applied with great success to various state estimation problems in robotics. However, particle filters often require extensive parameter tweaking in order to work well in practice. This is based on two observations. First, particle filters typically rely on independence assumptions such as \"the beams in a laser scan are independent given the robot's location in a map\". Second, even when the noise parameters of the dynamical system are perfectly known, the sample-based approximation can result in poor filter performance. In this paper we introduce CRF-filters, a novel variant of particle filtering for sequential state estimation. CRF-filters are based on conditional random fields, which are discriminative models that can handle arbitrary dependencies between observations. We show how to learn the parameters of CRF-filters based on labeled training data. Experiments using a robot equipped with a laser range-finder demonstrate that our technique is able to learn parameters of the robot's motion and sensor models that result in good localization performance, without the need of additional parameter tweaking.", "title": "" }, { "docid": "ab97caed9c596430c3d76ebda55d5e6e", "text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 µm CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.", "title": "" }, { "docid": "91f89990f9d41d3a92cbff38efc56b57", "text": "The ID3 algorithm is a classic classification algorithm in data mining, but it tends to select the attribute with many values, which is often not the correct choice and leads to wrong classifications. In an intrusion detection system, this results in false alarms and missed detections (omissions). To address this fault, an improved decision tree algorithm is proposed: through an improvement of the information gain formula, the correct attribute is selected. The decision tree is created after the collected data are classified correctly; the resulting tree is shallow with few branches, and a rule set is derived from it. 
Experimental results showed the effectiveness of the algorithm: the false alarm rate and omission rate decreased, the detection rate increased, and space consumption was reduced.", "title": "" }, { "docid": "f136f8249bf597db706806a795ee8791", "text": "Automotive systems are constantly increasing in complexity and size. Besides the increase in requirements specifications and related test specifications due to new systems and higher system interaction, we observe an increase in redundant specifications. As the predominant specification language (both for requirements and test cases) is still natural text, it is not easy to detect these redundancies. In principle, to detect these redundancies, each statement has to be compared to all others. This proves to be difficult because of the number and informal expression of the statements. In this paper we propose a solution to the problem of detecting redundant specification and test statements described in structured natural language. We propose a formalization process for requirements specification and test statements, allowing us to detect redundant statements and thus reduce the efforts for specification and validation. Specification Pattern Systems and Linear Temporal Logic provide the base for our process. We evaluated the method in the context of Mercedes-Benz Passenger Car Development. The results show that for the investigated sample set of test statements, we could detect about 30% of test steps as redundant. This indicates the savings potential of our approach.", "title": "" }, { "docid": "e09dcb9cdd7f9a8d1c0a0449fd9b11f8", "text": "Radio-frequency identification (RFID) is being widely used in supply chain and logistics applications for wireless identification and the tracking and tracing of goods, with excellent performance for the long-range interrogation of tagged pallets and cases (up to 4-6 m, with passive tags). Item-level tagging (ILT) has also received much attention, especially in the pharmaceutical and retail industries. Low-frequency (125-134 kHz) and high-frequency (HF) (13.56 MHz) RFID systems have traditionally been used for ILT applications, where the radio-frequency (RF) power from the reader is delivered to the passive tags by inductive coupling. Recently, ultra-HF (UHF) (840-960 MHz) near-field (NF) RFID systems [1] have attracted increasing attention because of the merits of the much higher reading speed and capability to detect a larger number of tags (bulk reading). A UHF NF RFID system is a valuable solution to implement a reliable short-range wireless link (up to a few tens of centimeters) for ILT applications. Because the tags can be made smaller, RFID-based applications can be extended to extremely minuscule items (e.g., retail apparel, jewelry, drugs, rented apparel) as well as the successful implementation of RFID-based storage spaces, smart conveyor belts, and shopping carts.", "title": "" }, { "docid": "227bbb2341b1c28b69d38fcf0a22b604", "text": "The growth in low-cost, low-power sensing and communication technologies is creating a pervasive network infrastructure called the Internet of Things (IoT), which enables a wide range of physical objects and environments to be monitored in fine spatial and temporal detail. The detailed, dynamic data that can be collected from these devices provide the basis for new business and government applications in areas such as public safety, transport logistics and environmental management. 
There has been growing interest in the IoT for realising smart cities, in order to maximise the productivity and reliability of urban infrastructure, such as minimising road congestion and making better use of the limited car parking facilities. In this work, we consider two smart car parking scenarios based on real-time car parking information that has been collected and disseminated by the City of San Francisco, USA and the City of Melbourne, Australia. We present a prediction mechanism for the parking occupancy rate using three feature sets with selected parameters to illustrate the utility of these features. Furthermore, we analyse the relative strengths of different machine learning methods in using these features for prediction.", "title": "" }, { "docid": "ee9ca88d092538a399d192cf1b9e9df6", "text": "The new user problem in recommender systems is still challenging, and there is not yet a unique solution that can be applied in any domain or situation. In this paper we analyze viable solutions to the new user problem in collaborative filtering (CF) that are based on the exploitation of user personality information: (a) personality-based CF, which directly improves the recommendation prediction model by incorporating user personality information, (b) personality-based active learning, which utilizes personality information for identifying additional useful preference data in the target recommendation domain to be elicited from the user, and (c) personality-based cross-domain recommendation, which exploits personality information to better use user preference data from auxiliary domains which can be used to compensate the lack of user preference data in the target domain. We benchmark the effectiveness of these methods on large datasets that span several domains, namely movies, music and books. Our results show that personality-aware methods achieve performance improvements that range from 6 to 94 % for users completely new to the system, while increasing the novelty of the recommended items by 3–40 % with respect to the non-personalized popularity baseline. We also discuss the limitations of our approach and the situations in which the proposed methods can be better applied, hence providing guidelines for researchers and practitioners in the field.", "title": "" } ]
scidocsrr
0f7906ae6cc949541333e43ff695879a
Statistical transformer networks: learning shape and appearance models via self supervision
[ { "docid": "de1f35d0e19cafc28a632984f0411f94", "text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.", "title": "" }, { "docid": "6936b03672c64798ca4be118809cc325", "text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.", "title": "" }, { "docid": "b7387928fe8307063cafd6723c0dd103", "text": "We introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing new radio domain appropriate transformations. This attention model allows the network to learn a localization network capable of synchronizing and normalizing a radio signal blindly with zero knowledge of the signal's structure based on optimization of the network for classification accuracy, sparse representation, and regularization. Using this architecture we are able to outperform our prior results in accuracy vs signal to noise ratio against an identical system without attention, however we believe such an attention model has implication far beyond the task of modulation recognition.", "title": "" }, { "docid": "4551ee1978ef563259c8da64cc0d1444", "text": "We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. 
We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6%. We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences.", "title": "" } ]
[ { "docid": "39c2c3e7f955425cd9aaad1951d13483", "text": "This paper proposes a novel nature-inspired algorithm called Multi-Verse Optimizer (MVO). The main inspirations of this algorithm are based on three concepts in cosmology: white hole, black hole, and wormhole. The mathematical models of these three concepts are developed to perform exploration, exploitation, and local search, respectively. The MVO algorithm is first benchmarked on 19 challenging test problems. It is then applied to five real engineering problems to further confirm its performance. To validate the results, MVO is compared with four well-known algorithms: Grey Wolf Optimizer, Particle Swarm Optimization, Genetic Algorithm, and Gravitational Search Algorithm. The results prove that the proposed algorithm is able to provide very competitive results and outperforms the best algorithms in the literature on the majority of the test beds. The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces. Note that the source codes of the proposed MVO algorithm are publicly available at http://www.alimirjalili.com/MVO.html .", "title": "" }, { "docid": "1afa72a646fcfa5dfe632126014f59be", "text": "The virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/) has served as a comprehensive repository of bacterial virulence factors (VFs) for >7 years. Bacterial virulence is an exciting and dynamic field, due to the availability of complete sequences of bacterial genomes and increasing sophisticated technologies for manipulating bacteria and bacterial genomes. The intricacy of virulence mechanisms offers a challenge, and there exists a clear need to decipher the 'language' used by VFs more effectively. In this article, we present the recent major updates of VFDB in an attempt to summarize some of the most important virulence mechanisms by comparing different compositions and organizations of VFs from various bacterial pathogens, identifying core components and phylogenetic clades and shedding new light on the forces that shape the evolutionary history of bacterial pathogenesis. In addition, the 2012 release of VFDB provides an improved user interface.", "title": "" }, { "docid": "fa03fe8103c69dbb8328db899400cce4", "text": "While deploying large scale heterogeneous robots in a wide geographical area, communicating among robots and robots with a central entity pose a major challenge due to robotic motion, distance and environmental constraints. In a cloud robotics scenario, communication challenges result in computational challenges as the computation is being performed at the cloud. Therefore fog nodes are introduced which shorten the distance between the robots and cloud and reduce the communication challenges. Fog nodes also reduce the computation challenges with extra compute power. However in the above scenario, maintaining continuous communication between the cloud and the robots either directly or via fog nodes is difficult. Therefore we propose a Distributed Cooperative Multi-robots Communication (DCMC) model where Robot to Robot (R2R), Robot to Fog (R2F) and Fog to Cloud (F2C) communications are being realized. Once the DCMC framework is formed, each robot establishes communication paths to maintain a consistent communication with the cloud. Further, due to mobility and environmental condition, maintaining link with a particular robot or a fog node becomes difficult. 
This requires pre-knowledge of the link quality such that appropriate R2R or R2F communication can be made possible. In a scenario where Global Positioning System (GPS) and continuous scanning of channels are not advisable due to energy or security constraints, we need an accurate link prediction mechanism. In this paper we propose a Collaborative Robotic based Link Prediction (CRLP) mechanism which predicts reliable communication and quantify link quality evolution in R2R and R2F communications without GPS and continuous channel scanning. We have validated our proposed schemes using joint Gazebo/Robot Operating System (ROS), MATLAB and Network Simulator (NS3) based simulations. Our schemes are efficient in terms of energy saving and accurate link prediction.", "title": "" }, { "docid": "95af5f635e876c4c66711e86fa25d968", "text": "Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human–Computer Interaction and automatic annotation, will benefit from a robust solution. In this paper, we discuss the characteristics of human motion analysis. We divide the analysis into a modeling and an estimation phase. Modeling is the construction of the likelihood function, estimation is concerned with finding the most likely pose given the likelihood surface. We discuss model-free approaches separately. This taxonomy allows us to highlight trends in the domain and to point out limitations of the current state of the art. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "83e7119065ededfd731855fe76e76207", "text": "Introduction: In recent years, the maturity model research has gained wide acceptance in the area of information systems and many Service Oriented Architecture (SOA) maturity models have been proposed. However, there are limited empirical studies on in-depth analysis and validation of SOA Maturity Models (SOAMMs). Objectives: The objective is to present a comprehensive comparison of existing SOAMMs to identify the areas of improvement and the research opportunities. Methods: A systematic literature review is conducted to explore the SOA adoption maturity studies. Results: A total of 20 unique SOAMMs are identified and analyzed in detail. A comparison framework is defined based on SOAMM design and usage support. The results provide guidance for SOA practitioners who are involved in selection, design, and implementation of SOAMMs. Conclusion: Although all SOAMMs propose a measurement framework, only a few SOAMMs provide guidance for selecting and prioritizing improvement measures. The current state of research shows that a gap exists in both prescriptive and descriptive purpose of SOAMM usage and it indicates the need for further research.", "title": "" }, { "docid": "936048690fb043434c3ee0060c5bf7a5", "text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "eef87d8905b621d2d0bb2b66108a56c1", "text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.", "title": "" }, { "docid": "2d73a7ab1e5a784d4755ed2fe44078db", "text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.", "title": "" }, { "docid": "18caf39ce8802f69a463cc1a4b276679", "text": "In this thesis we describe the formal verification of a fully IEEE compliant floating point unit (FPU). The hardware is verified on the gate-level against a formalization of the IEEE standard. The verification is performed using the theorem proving system PVS. The FPU supports both single and double precision floating point numbers, normal and denormal numbers, all four IEEE rounding modes, and exceptions as required by the standard. Beside the verification of the combinatorial correctness of the FPUs we pipeline the FPUs to allow the integration into an out-of-order processor. We formally define the correctness criterion the pipelines must obey in order to work properly within the processor. We then describe a new methodology based on combining model checking and theorem proving for the verification of the pipelines.", "title": "" }, { "docid": "9fc869c7e7d901e418b1b69d636cbd33", "text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. 
However, little is published on which parameters and design choices should be evaluated or selected, making correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50,000 different setups and found that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2", "title": "" }, { "docid": "9f660caf74f1708339f7ca2ee067dc95", "text": "Vehicle following and its effects on traffic flow have been an active area of research. Human driving involves reaction times, delays, and human errors that affect traffic flow adversely. One way to eliminate human errors and delays in vehicle following is to replace the human driver with a computer control system and sensors. The purpose of this paper is to develop an autonomous intelligent cruise control (AICC) system for automatic vehicle following, examine its effect on traffic flow, and compare its performance with that of the human driver models. The AICC system developed is not cooperative; i.e., it does not exchange information with other vehicles and yet is not susceptible to oscillations and "slinky" effects. The elimination of the "slinky" effect is achieved by using a safety distance separation rule that is proportional to the vehicle velocity (constant time headway) and by designing the control system appropriately. The performance of the AICC system is found to be superior to that of the human driver models considered. It has a faster and better transient response that leads to a much smoother and faster traffic flow. Computer simulations are used to study the performance of the proposed AICC system and analyze vehicle following in a single lane, without passing, under manual and automatic control. In addition, several emergency situations that include emergency stopping and cut-in cases were simulated. The simulation results demonstrate the effectiveness of the AICC system and its potentially beneficial effects on traffic flow.", "title": "" }, { "docid": "6ced60cadf69a3cd73bcfd6a3eb7705e", "text": "This review article summarizes the current literature regarding the analysis of running gait. It is compared to walking and sprinting. The current state of knowledge is presented as it fits in the context of the history of analysis of movement. The characteristics of the gait cycle and its relationship to potential and kinetic energy interactions are reviewed. The timing of electromyographic activity is provided. Kinematic and kinetic data (including center of pressure measurements, raw force plate data, joint moments, and joint powers) and the impact of changes in velocity on these findings are presented. 
The status of shoewear literature, alterations in movement strategies, the role of biarticular muscles, and the springlike function of tendons are addressed. This type of information can provide insight into injury mechanisms and training strategies. Copyright 1998 Elsevier Science B.V.", "title": "" }, { "docid": "842cd58edd776420db869e858be07de4", "text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a land-based relay. The resources are shared using either a time division or a frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.", "title": "" }, { "docid": "0aa566453fa3bd4bedec5ac3249d410a", "text": "The approach of using passage-level evidence for document retrieval has shown mixed results when it is applied to a variety of test beds with different characteristics. One main reason for the inconsistent performance is that there exists no unified framework to model the evidence of individual passages within a document. This paper proposes two probabilistic models to formally model the evidence of a set of top ranked passages in a document. The first probabilistic model follows the retrieval criterion that a document is relevant if any passage in the document is relevant, and models each passage independently. The second probabilistic model goes a step further and incorporates the similarity correlations among the passages. Both models are trained in a discriminative manner. Furthermore, we present a combination approach to combine the ranked lists of document retrieval and passage-based retrieval.\n An extensive set of experiments have been conducted on four different TREC test beds to show the effectiveness of the proposed discriminative probabilistic models for passage-based retrieval. The proposed algorithms are compared with a state-of-the-art document retrieval algorithm and a language model approach for passage-based retrieval. Furthermore, our combined approach has been shown to provide better results than both document retrieval and passage-based retrieval approaches.", "title": "" }, { "docid": "5aaba72970d1d055768e981f7e8e3684", "text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cache-conscious array hash table. Although fast with strings, there is currently no information in the research literature on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. 
We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space— for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.", "title": "" }, { "docid": "69ddedba98e93523f698529716cf2569", "text": "A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes are growing rapidly. Most of distributed graph processing methods require a lot of machines equipped with a total of thousands of CPU cores and a few terabyte main memory for handling billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in main memory of a single machine. Here, we propose a fast and scalable graph processing method GTS that handles even RMAT32 (64 billion edges) very efficiently only by using a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.", "title": "" }, { "docid": "89b54aa0009598a4cb159b196f3749ee", "text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.", "title": "" }, { "docid": "ad4596e24f157653a36201767d4b4f3b", "text": "We present a character-based model for joint segmentation and POS tagging for Chinese. 
The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets of different sizes, genres and annotation schemes. We obtain state-of-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.", "title": "" }, { "docid": "708915f99102f80b026b447f858e3778", "text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to the current task, improving learning speed. For temporal-difference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learning-rate parameter. There is no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) stability of learning under both on- and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear complexity λ-adaptation algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporal-difference learning methods in real-world problems.", "title": "" }, { "docid": "021bed3f2c2f09db1bad7d11108ee430", "text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. The Context: A Personal Reminiscence Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. 
The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but AMS SUBJECT CLASSIFICATION: 52C26", "title": "" } ]
scidocsrr
f9e1d9c1323a1e2e78f7fe6d59e30bee
Facial Expression Recognition Based on Facial Components Detection and HOG Features
[ { "docid": "1e2768be2148ff1fd102c6621e8da14d", "text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.", "title": "" } ]
[ { "docid": "8d3c4598b7d6be5894a1098bea3ed81a", "text": "Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined. © 2015 Elsevier Inc. All rights reserved. Retrieval practice or testing is one of the most powerful memory enhancers. Testing that follows shortly after learning benefits long-term retention more than studying the to-be-remembered material again (Roediger & Karpicke, 2006a, 2006b). This effect has been shown using a variety of materials and paradigms, such as text passages (e.g., Roediger & Karpicke, 2006a), paired associates (Allen, Mahler, & Estes, 1969), general knowledge questions (McDaniel & Fisher, 1991), and word and picture lists (e.g., McDaniel & Masson, 1985; Wheeler & Roediger, 1992; Wheeler, Ewers, & Buonanno, 2003). Testing effects have been observed in traditional lab as well as educational settings (Grimaldi & Karpicke, 2015; Larsen, Butler, & Roediger, 2008; McDaniel, Anderson, Derbish, & Morrisette, 2007). Testing not only improves long-term retention, it also enhances subsequent encoding (Pastötter, Schicker, Niedernhuber, & Bäuml, 2011), protects memories from the buildup of proactive interference (PI; Nunes & Weinstein, 2012; Wahlheim, 2014), and reduces the probability that the tested items intrude into subsequently studied lists (Szpunar, McDermott, & Roediger, 2008; Weinstein, McDermott, & Szpunar, 2011). The reduced PI and intrusion rates are assumed to reflect enhanced list discriminability or improved within-list organization. Enhanced list discriminability in turn helps participants distinguish different sets or sources of information and allows them to circumscribe the search set during retrieval to the relevant list (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). ∗ Correspondence to: Department of Psychology, Lehigh University, 17 Memorial Drive East, Bethlehem, PA 18015, USA. E-mail address: [email protected] http://dx.doi.org/10.1016/j.lmot.2015.01.004 0023-9690/© 2015 Elsevier Inc. All rights reserved. 24 A. Hupbach / Learning and Motivation 49 (2015) 23–30 If testing increases list discriminability, then it should also protect the tested list(s) from RI and intrusions from material that is encoded after retrieval practice. 
However, testing also necessarily reactivates a memory, and according to the reconsolidation account reactivation re-introduces plasticity into the memory trace, making it especially vulnerable to modifications (e.g., Dudai, 2004; Nader, Schafe, & LeDoux, 2000; for a recent review, see e.g., Hupbach, Gomez, & Nadel, 2013). Increased vulnerability to modification would suggest increased rather than reduced RI and intrusions. The few studies addressing this issue have yielded mixed results, with some suggesting that retrieval practice diminishes RI (Halamish & Bjork, 2011; Potts & Shanks, 2012), and others showing that retrieval practice can exacerbate the potential negative effects of post-retrieval learning (e.g., Chan & LaPaglia, 2013; Chan, Thomas, & Bulevich, 2009; Walker, Brakefield, Hobson, & Stickgold, 2003). Chan and colleagues (Chan & Langley, 2011; Chan et al., 2009; Thomas, Bulevich, & Chan, 2010) assessed the effects of testing on suggestibility in a misinformation paradigm. After watching a television episode, participants answered cuedrecall questions about it (retrieval practice) or performed an unrelated distractor task. Then, all participants read a narrative, which summarized the video but also contained some misleading information. A final cued-recall test revealed that participants in the retrieval practice condition recalled more misleading details and fewer correct details than participants in the distractor condition; that is, retrieval increased the misinformation effect (retrieval-enhanced suggestibility, RES). Chan et al. (2009) discuss two mechanisms that can explain this finding. First, since testing can potentiate subsequent new learning (e.g., Izawa, 1967; Tulving & Watkins, 1974), initial testing might have improved encoding of the misinformation. Indeed, when a modified final test was used, which encouraged the recall of both the correct information and the misinformation, participants in the retrieval practice condition recalled more misinformation than participants in the distractor condition (Chan et al., 2009). Second, retrieval might have rendered the memory more susceptible to interference by misinformation, an explanation that is in line with the reconsolidation account. Indeed, Chan and LaPaglia (2013) found reduced recognition of the correct information when retrieval preceded the presentation of misinformation (cf. Walker et al., 2003 for a similar effect in procedural memory). In contrast to Chan and colleagues’ findings, a study by Potts and Shanks (2012) suggests that testing protects memories from the negative influences of post-retrieval encoding of related material. Potts and Shanks asked participants to learn English–Swahili word pairs (List 1, A–B). One day later, one group of participants took a cued recall test of List 1 (testing condition) immediately before learning English–Finnish word pairs with the same English cues as were used in List 1 (List 2, A–C). Additionally, several control groups were implemented: one group was tested on List 1 without learning a second list, one group learned List 2 without prior retrieval practice, and one group did not participate in this session at all. On the third day, all participants took a final cued-recall test of List 1. Although retrieval practice per se did not enhance List 1 memory (i.e., no testing effect in the groups that did not learn List 2), it protected memory from RI (see Halamish & Bjork, 2011 for a similar result in a one-session study). 
Crucial for assessing the reconsolidation account is the comparison between the groups that learned List 2 either after List 1 recall or without prior List 1 recall. Contrary to the predictions derived from the reconsolidation account, final List 1 recall was enhanced when retrieval of List 1 preceded learning of List 2.1 While this clearly shows that testing counteracts RI, it would be premature to conclude that testing prevented the disruption of memory reconsolidation, because (a) retrieval practice without List 2 learning led to minimal forgetting between Day 2 and 3, while retrieval practice followed by List 2 learning led to significant memory decline, and (b) a reactivation condition that is independent from retrieval practice is missing. One could argue that repeating the cue words in List 2 likely reactivated memory for the original associations. It has been shown that the strength of reactivation (Detre, Natarajan, Gershman, & Norman, 2013) and the specific reminder structure (Forcato, Argibay, Pedreira, & Maldonado, 2009) determine whether or not a memory will be affected by post-reactivation procedures. The current study re-evaluates the question of how testing affects RI and intrusions. It uses a reconsolidation paradigm (Hupbach, Gomez, Hardt, & Nadel, 2007; Hupbach, Hardt, Gomez, & Nadel, 2008; Hupbach, Gomez, & Nadel, 2009; Hupbach, Gomez, & Nadel, 2011) to assess how testing in comparison to other reactivation procedures affects declarative memory. This paradigm will allow for a direct evaluation of the hypotheses that testing makes declarative memories vulnerable to interference, or that testing protects memories from the potential negative effects of subsequently learned material, as suggested by the list-separation hypothesis (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). This question has important practical implications. For instance, when students test their memory while preparing for an exam, will such testing increase or reduce interference and intrusions from information that is learned afterwards?", "title": "" }, { "docid": "69d42340c09303b69eafb19de7170159", "text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.", "title": "" }, { "docid": "2e3ffdd6e9ee0bfee5653c3f21422f7e", "text": "Neural networks have recently solved many hard problems in Machine Learning, but their impact in control remains limited. Trajectory optimization has recently solved many hard problems in robotic control, but using it online remains challenging. 
Here we leverage the high-fidelity solutions obtained by trajectory optimization to speed up the training of neural network controllers. The two learning problems are coupled using the Alternating Direction Method of Multipliers (ADMM). This coupling enables the trajectory optimizer to act as a teacher, gradually guiding the network towards better solutions. We develop a new trajectory optimizer based on inverse contact dynamics, and provide not only the trajectories but also the feedback gains as training data to the network. The method is illustrated on rolling, reaching, swimming and walking tasks.", "title": "" }, { "docid": "b5004502c5ce55f2327e52639e65d0b6", "text": "Public health applications using social media often require accurate, broad-coverage location information. However, the standard information provided by social media APIs, such as Twitter, covers a limited number of messages. This paper presents Carmen, a geolocation system that can determine structured location information for messages provided by the Twitter API. Our system utilizes geocoding tools and a combination of automatic and manual alias resolution methods to infer location structures from GPS positions and user-provided profile data. We show that our system is accurate and covers many locations, and we demonstrate its utility for improving influenza surveillance.", "title": "" }, { "docid": "e502cdbbbf557c8365b0d4b69745e225", "text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.", "title": "" }, { "docid": "17c6b63d850292f5f1c78e156103c3b4", "text": "Continual learning is the constant development of complex behaviors with no final end in mind. It is the process of learning ever more complicated skills by building on those skills already developed. In order for learning at one stage of development to serve as the foundation for later learning, a continual-learning agent should learn hierarchically. CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development, is proposed, described, tested, and evaluated in this dissertation. CHILD accumulates useful behaviors in reinforcement environments by using the Temporal Transition Hierarchies learning algorithm, also derived in the dissertation. This constructive algorithm generates a hierarchical, higher-order neural network that can be used for predicting context-dependent temporal sequences and can learn sequential-task benchmarks more than two orders of magnitude faster than competing neural-network systems. Consequently, CHILD can quickly solve complicated non-Markovian reinforcement-learning tasks and can then transfer its skills to similar but even more complicated tasks, learning these faster still. 
This continual-learning approach is made possible by the unique properties of Temporal Transition Hierarchies, which allow existing skills to be amended and augmented in precisely the same way that they were constructed in the first place.", "title": "" }, { "docid": "4fc276d2f0ca869d84d372f4bb4622ac", "text": "An electrocardiogram (ECG) is a bioelectrical signal which records the heart's electrical activity versus time. It is an important diagnostic tool for assessing heart functions. The early detection of arrhythmia is very important for cardiac patients. ECG arrhythmia can be defined as any of a group of conditions in which the electrical activity of the heart is irregular and can cause the heartbeat to be slow or fast. It can take place in a healthy heart and be of minimal consequence, but it may also indicate a serious problem that leads to stroke or sudden cardiac death. As the ECG signal is non-stationary, the arrhythmia may occur at random in the time-scale, which means the arrhythmia symptoms may not show up all the time but would manifest at certain irregular intervals during the day. Thus, automatic classification of arrhythmia is critical in clinical cardiology, especially for the treatment of patients in the intensive care unit. This project implements a simulation tool on the MATLAB platform to detect abnormalities in the ECG signal. The ECG signal is downloaded from the MIT-BIH Arrhythmia database; since this signal contains some noise and artifacts, pre-processing of the ECG signal is performed first. The pre-processing of the ECG signal is performed with the help of the Wavelet toolbox, wherein baseline wander removal, denoising and removal of high-frequency and low-frequency components are performed to improve the SNR of the ECG signal. The Wavelet toolbox is also used for feature extraction of the ECG signal. Classification of arrhythmia is based on basic classification rules. The complete project is implemented on the MATLAB platform. The performance of the algorithm is evaluated on the MIT-BIH Database. The different types of arrhythmia classes including normal beat, Tachycardia, Bradycardia and Myocardial Infarction (MI) are classified. Keywords: Db6, feature extraction, arrhythmia.", "title": "" }, { "docid": "41b8c1b04f11f5ac86d1d6e696007036", "text": "The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. 
As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to \"other voice' from a prerecorded tape.", "title": "" }, { "docid": "34e2eafd055e097e167afe7cb244f99b", "text": "This paper describes the functional verification effort during a specific hardware development program that included three of the largest ASICs designed at Nortel. These devices marked a transition point in methodology as verification took front and centre on the critical path of the ASIC schedule. Both the simulation and emulation strategies are presented. The simulation methodology introduced new techniques such as ASIC sub-system level behavioural modeling, large multi-chip simulations, and random pattern simulations. The emulation strategy was based on a plan that consisted of integrating parts of the real software on the emulated system. This paper describes how these technologies were deployed, analyzes the bugs that were found and highlights the bottlenecks in functional verification as systems become more complex.", "title": "" }, { "docid": "b42c9db51f55299545588a1ee3f7102f", "text": "With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field.", "title": "" }, { "docid": "b1d348e2095bd7054cc11bd84eb8ccdc", "text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullouspemphigoid antigen 1. The main laboratory diagnostic techniques-immunofluorescence mapping, transmission electron microscopy, and mutation analysis-will also be discussed. 
Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.", "title": "" }, { "docid": "9342e1adb849f07a385714a24ac2fea5", "text": "MOTIVATION\nIn 2001 and 2002, we published two papers (Bioinformatics, 17, 282-283, Bioinformatics, 18, 77-82) describing an ultrafast protein sequence clustering program called cd-hit. This program can efficiently cluster a huge protein database with millions of sequences. However, the applications of the underlying algorithm are not limited to only protein sequences clustering, here we present several new programs using the same algorithm including cd-hit-2d, cd-hit-est and cd-hit-est-2d. Cd-hit-2d compares two protein datasets and reports similar matches between them; cd-hit-est clusters a DNA/RNA sequence database and cd-hit-est-2d compares two nucleotide datasets. All these programs can handle huge datasets with millions of sequences and can be hundreds of times faster than methods based on the popular sequence comparison and database search tools, such as BLAST.", "title": "" }, { "docid": "5f70d96454e4a6b8d2ce63bc73c0765f", "text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.", "title": "" }, { "docid": "c8d56c100db663ba532df4766e458345", "text": "Decomposing sensory measurements into relevant parts is a fundamental prerequisite for solving complex tasks, e.g., in the field of mobile manipulation in domestic environments. In this paper, we present a fast approach to surface reconstruction in range images by means of approximate polygonal meshing. The obtained local surface information and neighborhoods are then used to 1) smooth the underlying measurements, and 2) segment the image into planar regions and other geometric primitives. 
An evaluation using publicly available data sets shows that our approach does not rank behind state-of-the-art algorithms while allowing to process range images at high frame rates.", "title": "" }, { "docid": "c3473e7fe7b46628d384cbbe10bfe74c", "text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.", "title": "" }, { "docid": "4159eacb27d820fd7cb93dfb9c605dd4", "text": "Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.", "title": "" }, { "docid": "b2e689cc561569f2c87e72aa955b54fe", "text": "Ensemble learning is attracting much attention from pattern recognition and machine learning domains for good generalization. Both theoretical and experimental researches show that combining a set of accurate and diverse classifiers will lead to a powerful classification system. An algorithm, called FS-PP-EROS, for selective ensemble of rough subspaces is proposed in this paper. 
Rough set-based attribute reduction is introduced to generate a set of reducts, and then each reduct is used to train a base classifier. We introduce an accuracy-guided forward search and post-pruning strategy to select part of the base classifiers for constructing an efficient and effective ensemble system. The experiments show that classification accuracies of ensemble systems with accuracy-guided forward search strategy will increase at first, arrive at a maximal value, then decrease in sequentially adding the base classifiers. We delete the base classifiers added after the maximal accuracy. The experimental results show that the proposed ensemble systems outperform bagging and random subspace methods in terms of accuracy and size of ensemble systems. FS-PP-EROS can keep or improve the classification accuracy with very few base classifiers, which leads to a powerful and compact classification system. 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "949da61747af5cd33cc56a2163b7f7cc", "text": "The tomato crop is an important staple in the Indian market with high commercial value and is produced in large quantities. Diseases are detrimental to the plant's health which in turn affects its growth. To ensure minimal losses to the cultivated crop, it is crucial to supervise its growth. There are numerous types of tomato diseases that target the crop's leaf at an alarming rate. This paper adopts a slight variation of the convolutional neural network model called LeNet to detect and identify diseases in tomato leaves. The main aim of the proposed work is to find a solution to the problem of tomato leaf disease detection using the simplest approach while making use of minimal computing resources to achieve results comparable to state of the art techniques. Neural network models employ automatic feature extraction to aid in the classification of the input image into respective disease classes. This proposed system has achieved an average accuracy of 94–95 % indicating the feasibility of the neural network approach even under unfavourable conditions.", "title": "" }, { "docid": "db5f5f0b7599f1e9b3ebe81139eab1e6", "text": "In the manufacturing industry, supply chain management is playing an important role in providing profit to the enterprise. Information that is useful in improving existing products and development of new products can be obtained from databases and ontology. The theory of inventive problem solving (TRIZ) supports designers of innovative product design by searching a knowledge base. The existing TRIZ ontology supports innovative design of specific products (Flashlight) for a TRIZ ontology. The research reported in this paper aims at developing a metaontology for innovative product design that can be applied to multiple products in different domain areas. The authors applied the semantic TRIZ to a product (Smart Fan) as an interim stage toward a metaontology that can manage general products and other concepts. Modeling real-world (Smart Pen and Smart Machine) ontologies is undertaken as an evaluation of the metaontology. This may open up new possibilities to innovative product designs. 
Innovative Product Design using Metaontology with Semantic TRIZ", "title": "" }, { "docid": "082b1c341435ce93cfab869475ed32bd", "text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory", "title": "" } ]
scidocsrr
9513ffa44c24f795dd573dbfd6b731fa
Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views
[ { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.", "title": "" }, { "docid": "4628128d1c5cf97fa538a8b750905632", "text": "A large body of recent work on object detection has focused on exploiting 3D CAD model databases to improve detection performance. Many of these approaches work by aligning exact 3D models to images using templates generated from renderings of the 3D models at a set of discrete viewpoints. However, the training procedures for these approaches are computationally expensive and require gigabytes of memory and storage, while the viewpoint discretization hampers pose estimation performance. We propose an efficient method for synthesizing templates from 3D models that runs on the fly - that is, it quickly produces detectors for an arbitrary viewpoint of a 3D model without expensive dataset-dependent training or template storage. Given a 3D model and an arbitrary continuous detection viewpoint, our method synthesizes a discriminative template by extracting features from a rendered view of the object and decorrelating spatial dependences among the features. Our decorrelation procedure relies on a gradient-based algorithm that is more numerically stable than standard decomposition-based procedures, and we efficiently search for candidate detections by computing FFT-based template convolutions. Due to the speed of our template synthesis procedure, we are able to perform joint optimization of scale, translation, continuous rotation, and focal length using Metropolis-Hastings algorithm. We provide an efficient GPU implementation of our algorithm, and we validate its performance on 3D Object Classes and PASCAL3D+ datasets.", "title": "" } ]
[ { "docid": "316aa66508daedc1b729283d6212bdb0", "text": "The purpose of this study is to examine the physiological effects of Shinrin-yoku (taking in the atmosphere of the forest). The subjects were 12 male students (22.8+/-1.4 yr). On the first day of the experiments, one group of 6 subjects was sent to a forest area, and the other group of 6 subjects was sent to a city area. On the second day, each group was sent to the opposite area for a cross check. In the forenoon, the subjects were asked to walk around their given area for 20 minutes. In the afternoon, they were asked to sit on chairs and watch the landscapes of their given area for 20 minutes. Cerebral activity in the prefrontal area and salivary cortisol were measured as physiological indices in the morning at the place of accommodation, before and after walking in the forest or city areas during the forenoon, and before and after watching the landscapes in the afternoon in the forest and city areas, and in the evening at the place of accommodation. The results indicated that cerebral activity in the prefrontal area of the forest area group was significantly lower than that of the group in the city area after walking; the concentration of salivary cortisol in the forest area group was significantly lower than that of the group in the city area before and after watching each landscape. The results of the physiological measurements show that Shinrin-yoku can effectively relax both people's body and spirit.", "title": "" }, { "docid": "7588bd6798d8c2fd891acaf3c64c675f", "text": "OBJECTIVE\nThis article presents a case report of a child with poor sensory processing and describes the disorders impact on the child's occupational behavior and the changes in occupational performance during 10 months of occupational therapy using a sensory integrative approach (OT-SI).\n\n\nMETHOD\nRetrospective chart review of assessment data and analysis of parent interview data are reviewed. Progress toward goals and objectives is measured using goal attainment scaling. Themes from parent interview regarding past and present occupational challenges are presented.\n\n\nRESULTS\nNotable improvements in occupational performance are noted on goal attainment scales, and these are consistent with improvements in behavior. Parent interview data indicate noteworthy progress in the child's ability to participate in home, school, and family activities.\n\n\nCONCLUSION\nThis case report demonstrates a model for OT-SI. The findings support the theoretical underpinnings of sensory integration theory: that improvement in the ability to process and integrate sensory input will influence adaptive behavior and occupational performance. Although these findings cannot be generalized, they provide preliminary evidence supporting the theory and the effectiveness of this approach.", "title": "" }, { "docid": "c3aaa53892e636f34d6923831a3b66bc", "text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. 
The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice than vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.", "title": "" }, { "docid": "5ac66257b2e43eb11ae906672acef904", "text": "Noticing that different information sources often provide complementary coverage of word sense and meaning, we propose a simple and yet effective strategy for measuring lexical semantics. Our model consists of a committee of vector space models built on a text corpus, Web search results and thesauruses, and measures the semantic word relatedness using the averaged cosine similarity scores. Despite its simplicity, our system correlates with human judgements better or similarly compared to existing methods on several benchmark datasets, including WordSim353.", "title": "" }, { "docid": "c2a59be58131149dcddfec02214423b8", "text": "Complex structures manufactured using low-pressure vacuum bag-only (VBO) prepreg processing are more susceptible to defects than flat laminates due to complex compaction conditions present at sharp corners. Consequently, effective defect mitigation strategies are required to produce structural parts. In this study, we investigated the relationships between laminate properties, processing conditions`, mold designs and part quality in order to develop science-based guidelines for the manufacture of complex parts. Generic laminates consisting of a central corner and two flanges were fabricated in a multi-part study that considered variation in corner angle and local curvature radius, the applied pressure during layup and cure, and the prepreg material and laminate thickness. The manufactured parts were analyzed in terms of microstructural fiber bed and resin distribution, thickness variation, and void content. The results indicated that defects observed in corner laminates were influenced by both mold design and processing conditions, and that optimal combinations of these factors can mitigate defects and improve quality.", "title": "" }, { "docid": "3dd4bfe71c3c141d9538e3b3eb72e8e1", "text": "This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting. In such a setting, there are differences between the distributions generating the training data (source domain) and the test data (target domain). 
The usual cross-validation procedure requires validation data, which can not be obtained from the unlabeled target data. The problem is that if one decides to use source validation data, the regularization parameter is underestimated. One possible solution is to scale the source validation data through importance weighting, but we show that this correction is not sufficient. We conclude the paper with an empirical analysis of the effect of several importance weight estimators on the estimation of the regularization parameter.", "title": "" }, { "docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5", "text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.", "title": "" }, { "docid": "8e077186aef0e7a4232eec0d8c73a5a2", "text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8df4ff8a2fbaf84b4bbd3aa647e946e8", "text": "One of the newly emerging carbon materials, nanodiamond (ND), has been exploited for use in traditional electric materials and this has extended into biomedical and pharmaceutical applications. Recently, NDs have attained significant interests as a multifunctional and combinational drug delivery system. 
ND studies have provided insights into granting new potentials with their wide ranging surface chemistry, complex formation with biopolymers, and combination with biomolecules. The studies that have proved ND inertness, biocompatibility, and low toxicity have made NDs much more feasible for use in real in vivo applications. This review gives an understanding of NDs in biomedical engineering and pharmaceuticals, focusing on the classified introduction of ND/drug complexes. In addition, the diverse potential applications that can be obtained with chemical modification are presented.", "title": "" }, { "docid": "8d092dfa88ba239cf66e5be35fcbfbcc", "text": "We present VideoWhisper, a novel approach for unsupervised video representation learning. Based on the observation that the frame sequence encodes the temporal dynamics of a video (e.g., object movement and event evolution), we treat the frame sequential order as a self-supervision to learn video representations. Unlike other unsupervised video feature learning methods based on frame-level feature reconstruction that is sensitive to visual variance, VideoWhisper is driven by a novel video “sequence-to-whisper” learning strategy. Specifically, for each video sequence, we use a prelearned visual dictionary to generate a sequence of high-level semantics, dubbed “whisper,” which can be considered as the language describing the video dynamics. In this way, we model VideoWhisper as an end-to-end sequence-to-sequence learning model using attention-based recurrent neural networks. This model is trained to predict the whisper sequence and hence it is able to learn the temporal structure of videos. We propose two ways to generate video representation from the model. Through extensive experiments on two real-world video datasets, we demonstrate that video representation learned by VideoWhisper is effective to boost fundamental multimedia applications such as video retrieval and event classification.", "title": "" }, { "docid": "d29b90dbce6f4dd7c2a3480239def8f9", "text": "This paper presents a design of permanent magnet machines (PM), such as the permanent magnet axial flux generator for wind turbine generated direct current voltage base on performance requirements. However recent developments in rare earth permanent magnet materials and power electronic devices has awakened interest in alternative generator topologies that can be used to produce direct voltage from wind energy using rectifier circuit convert alternating current to direct current. In preliminary tests the input mechanical energy to drive the rotor of the propose generator. This paper propose a generator which can change mechanical energy into electrical energy with the generator that contains bar magnets move relative generated flux magnetic offset winding coils in stator component. The results show that the direct current output power versus rotor speed of generator in various applications. These benefits present the axial flux permanent magnet generator with generated direct voltage at rated power 1500 W.", "title": "" }, { "docid": "72e255a72bef093425f591e891f0c477", "text": "REFERENCES 1. Fernández-Guarino M, Aldanondo I, González-García C, Garrido P, Marquet A, Pérez-García B, et al. Dermatosis perforante por gefinitib. Actas Dermosifiliogr 2006;97:208-11. 2. Gilaberte Y, Coscojuela C, Vázquez C, Roselló R, Vera J. Perforating folliculitis associated with tumor necrosis factor alpha inhibitors administered for rheumatoid arthritis. Br J Dermatol 2007;156:368-71. 3.
Vano-Galvan S, Moreno C, Medina J, Pérez-García B, García-López JL, Jaen P. Perforating dermatosis in a patient receiving bevacizumab. J Eur Acad Dermatol 2009;23:972-4. 4. Minami-Hori M, Ishida-Yamamoto A, Komatsu S, Iiduka H. Transient perforating folliculitis induced by sorafenib. J Dermatol 2010;37:833-4. 5. Wolber C, Udvardi A, Tatzreiter G, Schneeberger A, Volc-Platzer B. Perforating folliculitis, angioedema, hand-foot syndrome - multiple cutaneous side effects in a patient treated with sorafenib. J Dtsch Dermatol Ges 2009;7:449-52.", "title": "" }, { "docid": "540099388527a2e8dd5b43162b697fea", "text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.", "title": "" }, { "docid": "1448b02c9c14e086a438d76afa1b2fde", "text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.", "title": "" }, { "docid": "2f3bb54596bba8cd7a073ef91964842c", "text": "BACKGROUND AND PURPOSE\nRecent meta-analyses have suggested similar wound infection rates when using single- or multiple-dose antibiotic prophylaxis in the operative management of closed long bone fractures. In order to assist clinicians in choosing the optimal prophylaxis strategy, we performed a cost-effectiveness analysis comparing single- and multiple-dose prophylaxis.\n\n\nMETHODS\nA cost-effectiveness analysis comparing the two prophylactic strategies was performed using time horizons of 60 days and 1 year. Infection probabilities, costs, and quality-adjusted life days (QALD) for each strategy were estimated from the literature. All costs were reported in 2007 US dollars. A base case analysis was performed for the surgical treatment of a closed ankle fracture. 
Sensitivity analysis was performed for all variables, including probabilistic sensitivity analysis using Monte Carlo simulation.\n\n\nRESULTS\nSingle-dose prophylaxis results in lower cost and a similar amount of quality-adjusted life days gained. The single-dose strategy had an average cost of $2,576 for an average gain of 272 QALD. Multiple doses had an average cost of $2,596 for 272 QALD gained. These results are sensitive to the incidence of surgical site infection and deep wound infection for the single-dose treatment arm. Probabilistic sensitivity analysis using all model variables also demonstrated preference for the single-dose strategy.\n\n\nINTERPRETATION\nAssuming similar infection rates between the prophylactic groups, our results suggest that single-dose prophylaxis is slightly more cost-effective than multiple-dose regimens for the treatment of closed fractures. Extensive sensitivity analysis demonstrates these results to be stable using published meta-analysis infection rates.", "title": "" }, { "docid": "cf0b98dfd188b7612577c975e08b0c92", "text": "Depression is a major cause of disability world-wide. The present paper reports on the results of our participation to the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, ii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set and 0.52/0.81, respectively for the test set.", "title": "" }, { "docid": "20c3addef683da760967df0c1e83f8e3", "text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.", "title": "" }, { "docid": "29199ac45d4aa8035fd03e675406c2cb", "text": "This work presents an autonomous mobile robot in order to cover an unknown terrain “randomly”, namely entirely, unpredictably and evenly. 
This aim is very important, especially in military missions, such as the surveillance of terrains, the terrain exploration for explosives and the patrolling for intrusion in military facilities. The “heart” of the proposed robot is a chaotic motion controller, which is based on a chaotic true random bit generator. This generator has been implemented with a microcontroller, which converts the produced chaotic bit sequence, to the robot's motion. Experimental results confirm that this approach, with an appropriate sensor for obstacle avoidance, can obtain very satisfactory results in regard to the fast scanning of the robot’s workspace with unpredictable way. Key-Words: Autonomous mobile robot, terrain coverage, microcontroller, random bit generator, nonlinear system, chaos, Logistic map.", "title": "" }, { "docid": "acb3689c9ece9502897cebb374811f54", "text": "In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.", "title": "" }, { "docid": "2f7ba7501fcf379b643867c7d5a9d7bf", "text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.", "title": "" } ]
scidocsrr
532f1fa097be66f7ed8456dab410ca86
Adaptive nonlinear hierarchical control of a quad tilt-wing UAV
[ { "docid": "8de43a1cbdd9d5157aee6a67eca408d3", "text": "This paper presents two types of nonlinear controllers for an autonomous quadrotor helicopter. One type, a feedback linearization controller involves high-order derivative terms and turns out to be quite sensitive to sensor noise as well as modeling uncertainty. The second type involves a new approach to an adaptive sliding mode controller using input augmentation in order to account for the underactuated property of the helicopter, sensor noise, and uncertainty without using control inputs of large magnitude. The sliding mode controller performs very well under noisy conditions, and adaptation can effectively estimate uncertainty such as ground effects.", "title": "" }, { "docid": "adc9e237e2ca2467a85f54011b688378", "text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.", "title": "" } ]
[ { "docid": "61f0c91688994adf947f4cc61718421a", "text": "This article reports on experiences and lessons learned during incremental migration and architectural refactoring of a commercial mobile back end as a service to microservices architecture. It explains how the researchers adopted DevOps and how this facilitated a smooth migration.", "title": "" }, { "docid": "c9b221d052490f106ea9c6bc58b75c27", "text": "Food logging is recommended by dieticians for prevention and treatment of obesity, but currently available mobile applications for diet tracking are often too difficult and time-consuming for patients to use regularly. For this reason, we propose a novel approach to food journaling that uses speech and language understanding technology in order to enable efficient self-assessment of energy and nutrient consumption. This paper presents ongoing language understanding experiments conducted as part of a larger effort to create a nutrition dialogue system that automatically extracts food concepts from a user's spoken meal description. We first summarize the data collection and annotation of food descriptions performed via Amazon Mechanical Turk AMT, for both a written corpus and spoken data from an in-domain speech recognizer. We show that the addition of word vector features improves conditional random field CRF performance for semantic tagging of food concepts, achieving an average F1 test score of 92.4 on written data; we also demonstrate that a convolutional neural network CNN with no hand-crafted features outperforms the best CRF on spoken data, achieving an F1 test score of 91.3. We illustrate two methods for associating foods with properties: segmenting meal descriptions with a CRF, and a complementary method that directly predicts associations with a feed-forward neural network. Finally, we conduct an end-to-end system evaluation through an AMT user study with worker ratings of 83% semantic tagging accuracy.", "title": "" }, { "docid": "03daa46354d26c4a8aeabbe88fd2cb37", "text": "The rapid evolution of Internet-of-Things (IoT) technologies has led to an emerging need to make them smarter. A variety of applications now run simultaneously on an ARM-based processor. For example, devices on the edge of the Internet are provided with higher horsepower to be entrusted with storing, processing and analyzing data collected from IoT devices. This significantly improves efficiency and reduces the amount of data that needs to be transported to the cloud for data processing, analysis and storage. However, commodity OSes are prone to compromise. Once they are exploited, attackers can access the data on these devices. Since the data stored and processed on the devices can be sensitive, left untackled, this is particularly disconcerting. In this paper, we propose a new system, TrustShadow that shields legacy applications from untrusted OSes. TrustShadow takes advantage of ARM TrustZone technology and partitions resources into the secure and normal worlds. In the secure world, TrustShadow constructs a trusted execution environment for security-critical applications. This trusted environment is maintained by a lightweight runtime system that coordinates the communication between applications and the ordinary OS running in the normal world. The runtime system does not provide system services itself. Rather, it forwards requests for system services to the ordinary OS, and verifies the correctness of the responses. 
To demonstrate the efficiency of this design, we prototyped TrustShadow on a real chip board with ARM TrustZone support, and evaluated its performance using both microbenchmarks and real-world applications. We showed TrustShadow introduces only negligible overhead to real-world applications.", "title": "" }, { "docid": "0ea3451556904a534352cc7cb90b70a9", "text": "Policy agenda research is concerned with measuring the policymaker activities. Topic classification has proven a valuable tool for policy agenda research. However, manual topic coding is extremely costly and time-consuming. Supervised topic classification offers a cost-effective and reliable alternative, yet it introduces new challenges, the most significant of which are the training set coding, classifier design, and accuracy-efficiency trade-off. In this work, we address these challenges in the context of the recently launched Croatian Policy Agendas project. We describe a new policy agenda dataset, explore the many system design choices, and report on the insights gained. Our best-performing model reaches 77% and 68% of F1-score for major topics and subtopics, respectively.", "title": "" }, { "docid": "c46edb8a67c10ba5819a5eeeb0e62905", "text": "One of the most challenging projects in information systems is extracting information from unstructured texts, including medical document classification. I am developing a classification algorithm that classifies a medical document by analyzing its content and categorizing it under predefined topics from the Medical Subject Headings (MeSH). I collected a corpus of 50 full-text journal articles (N=50) from MEDLINE, which were already indexed by experts based on MeSH. Using natural language processing (NLP), my algorithm classifies the collected articles under MeSH subject headings. I evaluated the algorithm's outcome by measuring its precision and recall of resulting subject headings from the algorithm, comparing results to the actual documents' subject headings. The algorithm classified the articles correctly under 45% to 60% of the actual subject headings and got 40% to 53% of the total subject headings correct. This holds promising solutions for the global health arena to index and classify medical documents expeditiously.", "title": "" }, { "docid": "9419aa1cabec77e33ccea0c448e56b20", "text": "We consider in this paper the problem of noisy 1-bit matrix completion under a general non-uniform sampling distribution using the max-norm as a convex relaxation for the rank. A max-norm constrained maximum likelihood estimate is introduced and studied. The rate of convergence for the estimate is obtained. Information-theoretical methods are used to establish a minimax lower bound under the general sampling model. The minimax upper and lower bounds together yield the optimal rate of convergence for the Frobenius norm loss. Computational algorithms and numerical performance are also discussed.", "title": "" }, { "docid": "37feedcb9e527601cb28fe59b2526ab3", "text": "In this paper we present a covariance based tracking algorithm for intelligent video analysis to assist marine biologists in understanding the complex marine ecosystem in the Ken-Ding sub-tropical coral reef in Taiwan by processing underwater real-time videos recorded in open ocean. One of the most important aspects of marine biology research is the investigation of fish trajectories to identify events of interest such as fish preying, mating, schooling, etc. 
This task, of course, requires a reliable tracking algorithm able to deal with 1) the difficulties of following fish that have multiple degrees of freedom and 2) the possible varying conditions of the underwater environment. To accommodate these needs, we have developed a tracking algorithm that exploits covariance representation to describe the object’s appearance and statistical information and also to join different types of features such as location, color intensities, derivatives, etc. The accuracy of the algorithm was evaluated by using hand-labeled ground truth data on 30000 frames belonging to ten different videos, achieving an average performance of about 94%, estimated using multiple ratios that provide indication on how good is a tracking algorithm both globally (e.g. counting objects in a fixed range of time) and locally (e.g. in distinguish occlusions among objects).", "title": "" }, { "docid": "e896b306c5282da3b0fd58aaf635c027", "text": "In June 2011 the U.S. Supreme Court ruled that video games enjoy full free speech protections and that the regulation of violent game sales to minors is unconstitutional. The Supreme Court also referred to psychological research on violent video games as \"unpersuasive\" and noted that such research contains many methodological flaws. Recent reviews in many scholarly journals have come to similar conclusions, although much debate continues. Given past statements by the American Psychological Association linking video game and media violence with aggression, the Supreme Court ruling, particularly its critique of the science, is likely to be shocking and disappointing to some psychologists. One possible outcome is that the psychological community may increase the conclusiveness of their statements linking violent games to harm as a form of defensive reaction. However, in this article the author argues that the psychological community would be better served by reflecting on this research and considering whether the scientific process failed by permitting and even encouraging statements about video game violence that exceeded the data or ignored conflicting data. Although it is likely that debates on this issue will continue, a move toward caution and conservatism as well as increased dialogue between scholars on opposing sides of this debate will be necessary to restore scientific credibility. The current article reviews the involvement of the psychological science community in the Brown v. Entertainment Merchants Association case and suggests that it might learn from some of the errors in this case for the future.", "title": "" }, { "docid": "934ca8aa2798afd6e7cd4acceeed839a", "text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. 
The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.", "title": "" }, { "docid": "a8a4bad208ee585ae4b4a0b3c5afe97a", "text": "English-speaking children with specific language impairment (SLI) are known to have particular difficulty with the acquisition of grammatical morphemes that carry tense and agreement features, such as the past tense -ed and third-person singular present -s. In this study, an Extended Optional Infinitive (EOI) account of SLI is evaluated. In this account, -ed, -s, BE, and DO are regarded as finiteness markers. This model predicts that finiteness markers are omitted for an extended period of time for nonimpaired children, and that this period will be extended for a longer time in children with SLI. At the same time, it predicts that if finiteness markers are present, they will be used correctly. These predictions are tested in this study. Subjects were 18 5-year-old children with SLI with expressive and receptive language deficits and two comparison groups of children developing language normally: 22 CA-equivalent (5N) and 20 younger, MLU-equivalent children (3N). It was found that the children with SLI used nonfinite forms of lexical verbs, or omitted BE and DO, more frequently than children in the 5N and 3N groups. At the same time, like the normally developing children, when the children with SLI marked finiteness, they did so appropriately. Most strikingly, the SLI group was highly accurate in marking agreement on BE and DO forms. The findings are discussed in terms of the predictions of the EOI model, in comparison to other models of the grammatical limitations of children with SLI.", "title": "" }, { "docid": "afae709279cd8adeda2888089872d70e", "text": "One-class classification problemhas been investigated thoroughly for past decades. Among one of themost effective neural network approaches for one-class classification, autoencoder has been successfully applied for many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed in autoencoder neural network, we propose a simple and efficient one-class classifier based on extreme learning machine (ELM).The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning speed.The experimental evaluation conducted on several real-world benchmarks shows that the ELM based one-class classifier can learn hundreds of times faster than autoencoder and it is competitive over a variety of one-class classification methods.", "title": "" }, { "docid": "8cb33cec31601b096ff05426e5ffa848", "text": "Efficient actuation control of flapping-wing microrobots requires a low-power frequency reference with good absolute accuracy. To meet this requirement, we designed a fully-integrated 10MHz relaxation oscillator in a 40nm CMOS process. By adaptively biasing the continuous-time comparator, we are able to achieve a power consumption of 20μW, a 68% reduction to the conventional fixed bias design. A built-in self-calibration controller enables fast post-fabrication calibration of the clock frequency. 
Measurements show a frequency drift of 1.2% as the battery voltage changes from 3V to 4.1V.", "title": "" }, { "docid": "57c090eaab37e615b564ef8451412962", "text": "Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (opvi), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling—allowing inference to scale to massive data—as well as objectives that admit variational programs—a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of opvi on a mixture model and a generative model of images.", "title": "" }, { "docid": "a3c0a5a570c9c7d4fda363c6b8f792c5", "text": "How do children identify promising hypotheses worth testing? Many studies have shown that preschoolers can use patterns of covariation together with prior knowledge to learn causal relationships. However, covariation data are not always available and myriad hypotheses may be commensurate with substantive knowledge about content domains. We propose that children can identify high-level abstract features common to effects and their candidate causes and use these to guide their search. We investigate children’s sensitivity to two such high-level features — proportion and dynamics, and show that preschoolers can use these to link effects and candidate causes, even in the absence of other disambiguating information.", "title": "" }, { "docid": "d50c31e9b6ae64adc55a0c6fddb869cb", "text": "Dynamic simulation model of the actuator with two pneumatic artificial muscles in antagonistic connection was designed and built in Matlab Simulink environment. The basis for this simulation model was dynamic model of the pneumatic actuator based on advanced geometric muscle model. The main dynamics characteristics of such actuator were obtained by model simulation, as for example muscle force change, pressure change in muscle, arm position of the actuator. Simulation results will be used in design of control system of such actuator using model reference adaptive controller.", "title": "" }, { "docid": "1a638cef61762f6399df012e57b32998", "text": "Recurrent neural networks as fundamentally different neural network from feed-forward architectures was investigated for modelling of non linear behaviour of financial markets. Recurrent neural networks could be configured with the correct choice of parameters such as the number of neurons, the number of epochs, the amount of data and their relationship with the training data for predictions of financial markets. By exploring of learning and forecasting of the recurrent neural networks is observed the same effect: better learning, which often is described by the root mean square error does not guarantee a better prediction. 
There are such a recurrent neural networks settings where the best results of non linear time series forecasting could be obtained. New method of orthogonal input data was proposed, which improve process of EVOLINO RNN learning and forecasting. Citations: Nijolė Maknickienė, Aleksandras Vytautas Rutkauskas, Algirdas Maknickas. Investigation of Financial Market Prediction by Recurrent Neural Network – Innovative Infotechnologies for Science, Business and Education, ISSN 2029-1035 – 2(11) 2011 – Pp. 3-8.", "title": "" }, { "docid": "d2feed22afd1b6702ff4a8ebe160a5d7", "text": "Contactless payment systems represent cashless payments that do not require physical contact between the devices used in consumer payment and POS terminals by the merchant. Radio frequency identification (RFID) devices can be embedded in the most different forms, as the form of cards, key rings, built into a watch, mobile phones. This type of payment supports the three largest payment system cards: Visa (Visa Contactless), MasterCard (MasterCard PayPass) and American Express (ExpressPay). All these products are compliant with international ISO 14443 standard, which provides a unique system for payment globally. Implementation of contactless payment systems are based on same infrastructure that exists for the payment cards with magnetic strips and does not require additional investments by the firm and financial institutions, other than upgrading the existing POS terminals. Technological solutions used for the implementation are solutions based on ISO 14443 standard, Sony FeliCa technology, RFID tokens and NFC (Near Field Communication) systems. This paper describes the advantages of introducing contactless payment system based on RF technology through pilot projects conducted by VISA, MasterCard and American Express Company in order to confirm in practice the applicability of this technology.", "title": "" }, { "docid": "52504a4825bf773ced200a675d291dde", "text": "Natural Language Generation (NLG) is defined as the systematic approach for producing human understandable natural language text based on nontextual data or from meaning representations. This is a significant area which empowers human-computer interaction. It has also given rise to a variety of theoretical as well as empirical approaches. This paper intends to provide a detailed overview and a classification of the state-of-the-art approaches in Natural Language Generation. The paper explores NLG architectures and tasks classed under document planning, micro-planning and surface realization modules. Additionally, this paper also identifies the gaps existing in the NLG research which require further work in order to make NLG a widely usable technology.", "title": "" }, { "docid": "ea92d0563e89a4cd7cfcfe6fc690ed09", "text": "At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. 
In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their realvalued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.", "title": "" }, { "docid": "09f3bb814e259c74f1c42981758d5639", "text": "PURPOSE OF REVIEW\nThe application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases.\n\n\nRECENT FINDINGS\nMachine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural network are state-of-the art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation test, breath analysis, lung sound analysis and telemedicine with promising results in small-scale studies.\n\n\nSUMMARY\nOverall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.", "title": "" } ]
scidocsrr
14d11227c990c49308552e01212dc9c3
Humans prefer curved visual objects.
[ { "docid": "5afe5504566e60cbbb50f83501eee06c", "text": "This paper explores theoretical issues in ergonomics related to semantics and the emotional content of design. The aim is to find answers to the following questions: how to design products triggering \"happiness\" in one's mind; which product attributes help in the communication of positive emotions; and finally, how to evoke such emotions through a product. In other words, this is an investigation of the \"meaning\" that could be designed into a product in order to \"communicate\" with the user at an emotional level. A literature survey of recent design trends, based on selected examples of product designs and semantic applications to design, including the results of recent design awards, was carried out in order to determine the common attributes of their design language. A review of Good Design Award winning products that are said to convey and/or evoke emotions in the users has been done in order to define good design criteria. These criteria have been discussed in relation to user emotional responses and a selection of these has been given as examples.", "title": "" } ]
[ { "docid": "64e0a1345e5a181191c54f6f9524c96d", "text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.", "title": "" }, { "docid": "c2558388fb20454fa6f4653b1e4ab676", "text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.", "title": "" }, { "docid": "2a78461c1949b0cf6b119ae99c08847f", "text": "Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the handdesigned extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github. 
io/large-scale-curiosity/.", "title": "" }, { "docid": "bf14fb39f07e01bd6dc01b3583a726b6", "text": "To provide a general context for library implementations of open source software (OSS), the purpose of this paper is to assess and evaluate the awareness and adoption of OSS by the LIS professionals working in various engineering colleges of Odisha. The study is based on survey method and questionnaire technique was used for collection data from the respondents. The study finds that although the LIS professionals of engineering colleges of Odisha have knowledge on OSS, their uses in libraries are in budding stage. Suggests that for the widespread use of OSS in engineering college libraries of Odisha, a cooperative and participatory organisational system, positive attitude of authorities and LIS professionals, proper training provision for LIS professionals need to be developed.", "title": "" }, { "docid": "14838947ee3b95c24daba5a293067730", "text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.", "title": "" }, { "docid": "f08c6829b353c45b6a9a6473b4f9a201", "text": "In this paper, we study the Symmetric Regularized Long Wave (SRLW) equations by finite difference method. We design some numerical schemes which preserve the original conservative properties for the equations. The first scheme is two-level and nonlinear-implicit. Existence of its difference solutions are proved by Brouwer fixed point theorem. It is proved by the discrete energy method that the scheme is uniquely solvable, unconditionally stable and second-order convergent for U in L1 norm, and for N in L2 norm on the basis of the priori estimates. The second scheme is three-level and linear-implicit. Its stability and second-order convergence are proved. Both of the two schemes are conservative so can be used for long time computation. However, they are coupled in computing so need more CPU time. Thus we propose another three-level linear scheme which is not only conservative but also uncoupled in computation, and give the numerical analysis on it. Numerical experiments demonstrate that the schemes are accurate and efficient. 2007 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "af07a7f4ffe29dda52bca62a803272fe", "text": "OBJECTIVE\nTo evaluate the effectiveness and tolerance of intraarticular injection (IAI) of triamcinolone hexacetonide (TH) for the treatment of osteoarthritis (OA) of hand interphalangeal (IP) joints.\n\n\nMETHODS\nSixty patients who underwent IAI at the most symptomatic IP joint were randomly assigned to receive TH/lidocaine (LD; n = 30) with TH 20 mg/ml and LD 2%, or just LD (n = 30). The injected joint was immobilized with a splint for 48 h in both groups. Patients were assessed at baseline and at 1, 4, 8, and 12 weeks by a blinded observer. The following variables were assessed: pain at rest [visual analog scale (VAS)r], pain at movement (VASm), swelling (physician VASs), goniometry, grip and pinch strength, hand function, treatment improvement, daily requirement of paracetamol, and local adverse effects. The proposed treatment (IAI with TH/LD) was successful if statistical improvement (p < 0.05) was achieved in at least 2 of 3 VAS. Repeated-measures ANOVA test was used to analyze intervention response.\n\n\nRESULTS\nFifty-eight patients (96.67%) were women, and the mean age was 60.7 years (± 8.2). The TH/LD group showed greater improvement than the LD group for VASm (p = 0.014) and physician VASs (p = 0.022) from the first week until the end of the study. In other variables, there was no statistical difference between groups. No significant adverse effects were observed.\n\n\nCONCLUSION\nThe IAI with TH/LD has been shown to be more effective than the IAI with LD for pain on movement and joint swelling in patients with OA of the IP joints. Regarding pain at rest, there was no difference between groups.\n\n\nTRIAL REGISTRATION NUMBER\nClinicalTrials.gov (NCT02102620).", "title": "" }, { "docid": "e583cf382c9a58a6f09acfcb345a381f", "text": "DXC Technology were asked to participate in a Cyber Vulnerability Investigation into organizations in the Defense sector in the UK. Part of this work was to examine the influence of socio-technical and/or human factors on cyber security – where possible linking factors to specific technical risks. Initial research into the area showed that (commercially, at least) most approaches to developing security culture in organisations focus on end users and deal solely with training and awareness regarding identifying and avoiding social engineering attacks and following security procedures. The only question asked and answered is how to ensure individuals conform to security policy and avoid such attacks. But experience of recent attacks (e.g., Wannacry, Sony hacks) show that responses to cyber security requirements are not just determined by the end users’ level of training and awareness, but grow out of the wider organizational culture – with failures at different levels of the organization. This is a known feature of socio-technical research. As a result, we have sought to develop and apply a different approach to measuring security culture, based on discovering the distribution of beliefs and values (and resulting patterns of behavior) throughout the organization. Based on our experience, we show a way we can investigate these patterns of behavior and use them to identify socio-technical vulnerabilities by comparing current and ‘ideal’ behaviors. 
In doing so, we also discuss how this approach can be further developed and successfully incorporated into commercial practice, while retaining scientific validity.", "title": "" }, { "docid": "b50b43bcc69f840e4ba4e26529788cab", "text": "Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-ofthe-art detectors and observe that most hard false positives result from classification instead of localization. We conjecture that: (1) Shared feature representation is not optimal due to the mismatched goals of feature learning for classification and localization; (2) multi-task learning helps, yet optimization of the multi-task loss may result in sub-optimal for individual tasks; (3) large receptive field for different scales leads to redundant context information for small objects. We demonstrate the potential of detector classification power by a simple, effective, and widely-applicable Decoupled Classification Refinement (DCR) network. DCR samples hard false positives from the base classifier in Faster RCNN and trains a RCNN-styled strong classifier. Experiments show new stateof-the-art results on PASCAL VOC and COCO without any bells and whistles.", "title": "" }, { "docid": "fb1f3f300bcd48d99f0a553a709fdc89", "text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.", "title": "" }, { "docid": "c043e7a5d5120f5a06ef6decc06c184a", "text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. 
Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures", "title": "" }, { "docid": "5a573ae9fad163c6dfe225f59b246b7f", "text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.", "title": "" }, { "docid": "c7862136579a8340f22db5d6f3ee5f12", "text": "A novel lighting system was devised for 3D defect inspection in the wire bonding process. Gold wires of 20 microm in diameter were bonded to connect the integrated circuit (IC) chip with the substrate. Bonding wire defects can be classified as 2D type and 3D type. The 2D-type defects include missed, shifted, or shorted wires. These defects can be inspected from a 2D top-view image of the wire. The 3D-type bonding wire defects are sagging wires, and are difficult to inspect from a 2D top-view image. A structured lighting system was designed and developed to facilitate all 2D-type and 3D-type defect inspection. The devised lighting system can be programmed to turn the structured LEDs on or off independently. Experiments show that the devised illumination system is effective for wire bonding inspection and will be valuable for further applications.", "title": "" }, { "docid": "1ef2e54d021f9d149600f0bc7bebb0cd", "text": "The field of open-domain conversation generation using deep neural networks has attracted increasing attention from researchers for several years. However, traditional neural language models tend to generate safe, generic reply with poor logic and no emotion. 
In this paper, an emotional conversation generation orientated syntactically constrained bidirectional-asynchronous framework called E-SCBA is proposed to generate meaningful (logical and emotional) reply. In E-SCBA, pre-generated emotion keyword and topic keyword are asynchronously introduced into the reply during the generation, and the process of decoding is much different from the most existing methods that generates reply from the first word to the end. A newly designed bidirectional-asynchronous decoder with the multi-stage strategy is proposed to support this idea, which ensures the fluency and grammaticality of reply by making full use of syntactic constraint. Through the experiments, the results show that our framework not only improves the diversity of replies, but gains a boost on both logic and emotion compared with baselines as well.", "title": "" }, { "docid": "64ddf475e5fcf7407e4dfd65f95a68a8", "text": "Fuzzy PID controllers have been developed and applied to many fields for over a period of 30 years. However, there is no systematic method to design membership functions (MFs) for inputs and outputs of a fuzzy system. Then optimizing the MFs is considered as a system identification problem for a nonlinear dynamic system which makes control challenges. This paper presents a novel online method using a robust extended Kalman filter to optimize a Mamdani fuzzy PID controller. The robust extended Kalman filter (REKF) is used to adjust the controller parameters automatically during the operation process of any system applying the controller to minimize the control error. The fuzzy PID controller is tuned about the shape of MFs and rules to adapt with the working conditions and the control performance is improved significantly. The proposed method in this research is verified by its application to the force control problem of an electro-hydraulic actuator. Simulations and experimental results show that proposed method is effective for the online optimization of the fuzzy PID controller. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "74aaf19d143d86b52c09e726a70a2ac0", "text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. 
The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.", "title": "" }, { "docid": "e9358f48172423a421ef5edf6fe909f9", "text": "PURPOSE\nTo describe a modification of the computer self efficacy scale for use in clinical settings and to report on the modified scale's reliability and construct validity.\n\n\nMETHODS\nThe computer self efficacy scale was modified to make it applicable for clinical settings (for use with older people or people with disabilities using everyday technologies). The modified scale was piloted, then tested with patients in an Australian inpatient rehabilitation setting (n = 88) to determine the internal consistency using Cronbach's alpha coefficient. Construct validity was assessed by correlation of the scale with age and technology use. Factor analysis using principal components analysis was undertaken to identify important constructs within the scale.\n\n\nRESULTS\nThe modified computer self efficacy scale demonstrated high internal consistency with a standardised alpha coefficient of 0.94. Two constructs within the scale were apparent; using the technology alone, and using the technology with the support of others. Scores on the scale were correlated with age and frequency of use of some technologies thereby supporting construct validity.\n\n\nCONCLUSIONS\nThe modified computer self efficacy scale has demonstrated reliability and construct validity for measuring the self efficacy of older people or people with disabilities when using everyday technologies. This tool has the potential to assist clinicians in identifying older patients who may be more open to using new technologies to maintain independence.", "title": "" }, { "docid": "b12bae586bc49a12cebf11cca49c0386", "text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.", "title": "" }, { "docid": "2959b7da07ce8b0e6825819566bce9ab", "text": "Social isolation among the elderly is a concern in developed countries. Using a randomized trial, this study examined the effect of a social isolation prevention program on loneliness, depression, and subjective well-being of the elderly in Japan. Among the elderly people who relocated to suburban Tokyo, 63 who responded to a pre-test were randomized and assessed 1 and 6 months after the program. 
Four sessions of a group-based program were designed to prevent social isolation by improving community knowledge and networking with other participants and community \"gatekeepers.\" The Life Satisfaction Index A (LSI-A), Geriatric Depression Scale (GDS), Ando-Osada-Kodama (AOK) loneliness scale, social support, and other variables were used as outcomes of this study. A linear mixed model was used to compare 20 of the 21 people in the intervention group to 40 of the 42 in the control group, and showed that the intervention program had a significant positive effect on LSI-A, social support, and familiarity with services scores and a significant negative effect on AOK over the study period. The program had no significant effect on depression. The findings of this study suggest that programs aimed at preventing social isolation are effective when they utilize existing community resources, are tailor-made based on the specific needs of the individual, and target people who can share similar experiences.", "title": "" }, { "docid": "42979dd6ad989896111ef4de8d26b2fb", "text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.", "title": "" } ]
scidocsrr
20b26df87b72f9c20e5766af9f3aba34
Multi-label Image Classification with A Probabilistic Label Enhancement Model
[ { "docid": "ba8e974e77d49749c6b8ad2ce950fb64", "text": "We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.", "title": "" }, { "docid": "4820b3bfcf8c75011f5f5e1345be39c6", "text": "In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results.", "title": "" } ]
[ { "docid": "03b9d6b605dc388c9b70f6ca06fdc1ab", "text": "Swimming is one of the most popular recreational and competitive sporting activities. In the 2013/2014 swimming season, 9630 men and 12,333 women were registered with the National Collegiate Athletics Association in the USA. The repetitive nature of the swimming stroke and demanding training programs of its athletes raises a number of concerns regarding incidence and severity of injuries that a swimmer might experience during a competitive season. A number of risk factors have previously been identified but the level of evidence from individual studies, as well as the level of certainty that these factors predispose a swimmer to pain and injury, to our knowledge has yet to be critically evaluated in a systematic review. Therefore, the primary objective of this review is to conduct a systematic review to critically assess the published evidence for risk factors that may predispose a swimmer to shoulder pain and injury. Three electronic databases, ScienceDirect, PubMed and SpringerLink, were searched using keywords \"(Injury OR pain) AND (Swim*)\" and \"(Shoulder) AND (Swim*)\". Based on the inclusion and exclusion criteria, 2731 unique titles were identified and were analyzed to a final 29 articles. Only articles with a level of evidence of I, II and III were included according to robust study design and data analysis. The level of certainty for each risk factor was determined. No studies were determined to have a high level of certainty, clinical joint laxity and instability, internal/external rotation, previous history of pain and injury and competitive level were determined to have a moderate level of certainty. All other risk factors were evaluated as having a low level of certainty. Although several risk factors were identified from the reviewed studies, prospective cohort studies, larger sample sizes, consistent and robust measures of risk should be employed in future research.", "title": "" }, { "docid": "379ddbfc61fb7fac65c7662d1b54c1f1", "text": "There is a rapid growth in the use of voice-controlled intelligent personal assistants on mobile devices, such as Microsoft's Cortana, Google Now, and Apple's Siri. They significantly change the way users interact with search systems, not only because of the voice control use and touch gestures, but also due to the dialogue-style nature of the interactions and their ability to preserve context across different queries. Predicting success and failure of such search dialogues is a new problem, and an important one for evaluating and further improving intelligent assistants. While clicks in web search have been extensively used to infer user satisfaction, their significance in search dialogues is lower due to the partial replacement of clicks with voice control, direct and voice answers, and touch gestures.\n In this paper, we propose an automatic method to predict user satisfaction with intelligent assistants that exploits all the interaction signals, including voice commands and physical touch gestures on the device. First, we conduct an extensive user study to measure user satisfaction with intelligent assistants, and simultaneously record all user interactions. Second, we show that the dialogue style of interaction makes it necessary to evaluate the user experience at the overall task level as opposed to the query level. 
Third, we train a model to predict user satisfaction, and find that interaction signals that capture the user reading patterns have a high impact: when including all available interaction signals, we are able to improve the prediction accuracy of user satisfaction from 71% to 81% over a baseline that utilizes only click and query features.", "title": "" }, { "docid": "8994470e355b5db188090be731ee4fe9", "text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.", "title": "" }, { "docid": "f0ced6c4641ebdc46e1f5efe4c3080ce", "text": "This paper summarizes results of the 1st Contest on Semantic Description of Human Activities (SDHA), in conjunction with ICPR 2010. SDHA 2010 consists of three types of challenges, High-level Human Interaction Recognition Challenge, Aerial View Activity Classification Challenge, and Wide-Area Activity Search and Recognition Challenge. The challenges are designed to encourage participants to test existing methodologies and develop new approaches for complex human activity recognition scenarios in realistic environments. We introduce three new public datasets through these challenges, and discuss results of state-ofthe-art activity recognition systems designed and implemented by the contestants. A methodology using a spatio-temporal voting [19] successfully classified segmented videos in the UT-Interaction datasets, but had a difficulty correctly localizing activities from continuous videos. Both the method using local features [10] and the HMM based method [18] recognized actions from low-resolution videos (i.e. UT-Tower dataset) successfully. We compare their results in this paper.", "title": "" }, { "docid": "55658c75bcc3a12c1b3f276050f28355", "text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. 
Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC < 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.", "title": "" }, { "docid": "3ee6568a390b60b60c862c790b037bf5", "text": "In the commercial software development organizations, increased complexity of products, shortened development cycles and higher customer expectations of quality have placed a major responsibility on the areas of software debugging, testing, and verification. As this issue of the IBM Systems Journal illustrates, the technology is improving on all the three fronts. However, we observe that due to the informal nature of software development as a whole, the prevalent practices in the industry are still quite immature even in areas where there is existing technology. In addition, the technology and tools in the more advanced aspects are really not ready for a large scale commercial use.", "title": "" }, { "docid": "964f4f8c14432153d6001d961a1b5294", "text": "Although there are numerous search engines in the Web environment, no one could claim producing reliable results in all conditions. This problem is becoming more serious considering the exponential growth of the number of Web resources. In the response to these challenges, the meta-search engines are introduced to enhance the search process by devoting some outstanding search engines as their information resources. In recent years, some approaches are proposed to handle the result combination problem which is the fundamental problem in the meta-search environment. In this paper, a new merging/re-ranking method is introduced which uses the characteristics of the Web co-citation graph that is constructed from search engines and returned lists. The information extracted from the co-citation graph, is combined and enriched by the users' click-through data as their implicit feedback in an adaptive framework. Experimental results show a noticeable improvement against the basic method as well as some well-known meta-search engines.", "title": "" }, { "docid": "37a0c6ac688c7d7f2dd622ebbe3ec184", "text": "Prior research shows that directly applying phrase-based SMT on lexical tokens to migrate Java to C# produces much semantically incorrect code. A key limitation is the use of sequences in phrase-based SMT to model and translate source code with well-formed structures. We propose mppSMT, a divide-and-conquer technique to address that with novel training and migration algorithms using phrase-based SMT in three phases. First, mppSMT treats a program as a sequence of syntactic units and maps/translates such sequences in two languages to one another. Second, with a syntax-directed fashion, it deals with the tokens within syntactic units by encoding them with semantic symbols to represent their data and token types. This encoding via semantic symbols helps better migration of API usages. Third, the lexical tokens corresponding to each sememe are mapped or migrated. The resulting sequences of tokens are merged together to form the final migrated code. Such divide-and-conquer and syntax-direction strategies enable phrase-based SMT to adapt well to syntactical structures in source code, thus, improving migration accuracy.
Our empirical evaluation on several real-world systems shows that 84.8 -- 97.9% and 70 -- 83% of the migrated methods are syntactically and semantically correct, respectively. 26.3 -- 51.2% of total migrated methods are exactly matched to the human-written C# code in the oracle. Compared to Java2CSharp, a rule-based migration tool, it achieves higher semantic accuracy from 6.6 -- 57.7% relatively. Importantly, it does not require manual labeling for training data or manual definition of rules.", "title": "" }, { "docid": "531e30bf9610b82f6fc650652e6fc836", "text": "A versatile microreactor platform featuring a novel chemical-resistant microvalve array has been developed using combined silicon/polymer micromachining and a special polymer membrane transfer process. The basic valve unit in the array has a typical ‘transistor’ structure and a PDMS/parylene double-layer valve membrane. A robust multiplexing algorithm is also proposed for individual addressing of a large array using a minimal number of signal inputs. The in-channel microvalve is leakproof upon pneumatic actuation. In open status it introduces small impedance to the fluidic flow, and allows a significantly larger dynamic range of flow rates (∼ml min−1) compared with most of the microvalves reported. Equivalent electronic circuits were established by modeling the microvalves as PMOS transistors and the fluidic channels as simple resistors to provide theoretical prediction of the device fluidic behavior. The presented microvalve/reactor array showed excellent chemical compatibility in the tests with several typical aggressive chemicals including those seriously degrading PDMS-based microfluidic devices. Combined with the multiplexing strategy, this versatile array platform can find a variety of lab-on-a-chip applications such as addressable multiplex biochemical synthesis/assays, and is particularly suitable for those requiring tough chemicals, large flow rates and/or high-throughput parallel processing. As an example, the device performance was examined through the addressed synthesis of 30-mer DNA oligonucleotides followed by sequence validation using on-chip hybridization. The results showed leakage-free valve array addressing and proper synthesis in target reactors, as well as uniform flow distribution and excellent regional reaction selectivity. (Some figures in this article are in colour only in the electronic version) 0960-1317/06/081433+11$30.00 © 2006 IOP Publishing Ltd Printed in the UK 1433", "title": "" }, { "docid": "1a54c51a5488c1ca7e48d9260c4d907f", "text": "OBJECTIVES\nTo conduct a detailed evaluation, with meta-analyses, of the published evidence on milk and dairy consumption and the incidence of vascular diseases and diabetes. Also to summarise the evidence on milk and dairy consumption and cancer reported by the World Cancer Research Fund and then to consider the relevance of milk and dairy consumption to survival in the UK, a typical Western community. Finally, published evidence on relationships with whole milk and fat-reduced milks was examined.\n\n\nMETHODS\nProspective cohort studies of vascular disease and diabetes with baseline data on milk or dairy consumption and a relevant disease outcome were identified by searching MEDLINE, and reference lists in the relevant published reports. Meta-analyses of relationships in these reports were conducted. 
The likely effect of milk and dairy consumption on survival was then considered, taking into account the results of published overviews of relationships of these foods with cancer.\n\n\nRESULTS\nFrom meta-analysis of 15 studies the relative risk of stroke and/or heart disease in subjects with high milk or dairy consumption was 0.84 (95% CI 0.76, 0.93) and 0.79 (0.75, 0.82) respectively, relative to the risk in those with low consumption. Four studies reported incident diabetes as an outcome, and the relative risk in the subjects with the highest intake of milk or diary foods was 0.92 (0.86, 0.97).\n\n\nCONCLUSIONS\nSet against the proportion of total deaths attributable to the life-threatening diseases in the UK, vascular disease, diabetes and cancer, the results of meta-analyses provide evidence of an overall survival advantage from the consumption of milk and dairy foods.", "title": "" }, { "docid": "0802735955b52c1dae64cf34a97a33fb", "text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.", "title": "" }, { "docid": "dcc0a5ad40641c689da94c13055b5ffc", "text": "We present the design and evaluation of a wireless sensor network for early detection of forest fires. We first present the key aspects in modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System, and show how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. 
Then, we model the forest fire detection problem as a k-coverage problem in wireless sensor networks. In addition, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it only delivers the data that is of interest to the application. We validate several aspects of our design using simulation.", "title": "" }, { "docid": "cee853716fdafbced544a00732437769", "text": "In this paper, we emphasize the simulation of a decision making process applied to an international transport company from Romania. Our goal was to establish the most efficient decision, having in view three alternatives: acquisition of one additional truck, acquisition of two additional trucks and the outsourcing supplementary requests of transport services, over the trucks fleet capacity of the company, to other specialized transport companies. In view to determine mathematically the best decision, we applied the decision tree method, in the conditions in which the company's manager offered us a feasibility analysis concerning these alternatives. We also reveal in this paper the opportunity provided by WinQSB software functions, which reduce the time involved by solving this decision problem.", "title": "" }, { "docid": "c3d1470f049b9531c3af637408f5f9cb", "text": "Information and communication technology (ICT) is integral in today’s healthcare as a critical piece of support to both track and improve patient and organizational outcomes. Facilitating nurses’ informatics competency development through continuing education is paramount to enhance their readiness to practice safely and accurately in technologically enabled work environments. In this article, we briefly describe progress in nursing informatics (NI) and share a project exemplar that describes our experience in the design, implementation, and evaluation of a NI educational event, a one-day boot camp format that was used to provide foundational knowledge in NI targeted primarily at frontline nurses in Alberta, Canada. We also discuss the project outcomes, including lessons learned and future implications. Overall, the boot camp was successful to raise nurses’ awareness about the importance of informatics in nursing practice.", "title": "" }, { "docid": "2e770177ea9c68a8259f9f620c08abe0", "text": "Several defenses have increased the cost of traditional, low-level attacks that corrupt control data, e.g. return addresses saved on the stack, to compromise program execution. In response, creative adversaries have begun circumventing these defenses by exploiting programming errors to manipulate pointers to virtual tables, or vtables, of C++ objects. These attacks can hijack program control flow whenever a virtual method of a corrupted object is called, potentially allowing the attacker to gain complete control of the underlying system. In this paper we present SAFEDISPATCH, a novel defense to prevent such vtable hijacking by statically analyzing C++ programs and inserting sufficient runtime checks to ensure that control flow at virtual method call sites cannot be arbitrarily influenced by an attacker. We implemented SAFEDISPATCH as a Clang++/LLVM extension, used our enhanced compiler to build a vtable-safe version of the Google Chromium browser, and measured the performance overhead of our approach on popular browser benchmark suites. 
By carefully crafting a handful of optimizations, we were able to reduce average runtime overhead to just 2.1%.", "title": "" }, { "docid": "12b205881ead4d31ae668d52f4ba52c7", "text": "The general theory of side-looking synthetic aperture radar systems is developed. A simple circuit-theory model is developed; the geometry of the system determines the nature of the prefilter and the receiver (or processor) is the postfilter. The complex distributed reflectivity density appears as the input, and receiver noise is first considered as the interference which limits performance. Analysis and optimization are carried out for three performance criteria (resolution, signal-to-noise ratio, and least squares estimation of the target field). The optimum synthetic aperture length is derived in terms of the noise level and average transmitted power. Range-Doppler ambiguity limitations and optical processing are discussed briefly. The synthetic aperture concept for rotating target fields is described. It is observed that, for a physical aperture, a side-looking radar, and a rotating target field, the azimuth resolution is λ/α where α is the change in aspect angle over which the target field is viewed, The effects of phase errors on azimuth resolution are derived in terms of the power density spectrum of the derivative of the phase errors and the performance in the absence of phase errors.", "title": "" }, { "docid": "afc9fbf2db89a5220c897afcbabe028f", "text": "Evidence for viewpoint-specific image-based object representations have been collected almost entirely using exemplar-specific recognition tasks. Recent results, however, implicate image-based processes in more categorical tasks, for instance when objects contain qualitatively different 3D parts. Although such discriminations approximate class-level recognition. they do not establish whether image-based representations can support generalization across members of an object class. This issue is critical to any theory of recognition, in that one hallmark of human visual competence is the ability to recognize unfamiliar instances of a familiar class. The present study addresses this questions by testing whether viewpoint-specific representations for some members of a class facilitate the recognition of other members of that class. Experiment 1 demonstrates that familiarity with several members of a class of novel 3D objects generalizes in a viewpoint-dependent manner to cohort objects from the same class. Experiment 2 demonstrates that this generalization is based on the degree of familiarity and the degree of geometrical distinctiveness for particular viewpoints. Experiment 3 demonstrates that this generalization is restricted to visually-similar objects rather than all objects learned in a given context. These results support the hypothesis that image-based representations are viewpoint dependent, but that these representations generalize across members of perceptually-defined classes. 
More generally, these results provide evidence for a new approach to image-based recognition in which object classes are represented as cluster of visually-similar viewpoint-specific representations.", "title": "" }, { "docid": "3bb8d021eac7da49dc97f44e64414694", "text": "We study two sequence discriminative training criteria, i.e., Lattice-Free Maximum Mutual Information (LFMMI) and Connectionist Temporal Classification (CTC), for end-to-end training of Deep Bidirectional Long Short-Term Memory (DBLSTM) based character models of two offline English handwriting recognition systems with an input feature vector sequence extracted by Principal Component Analysis (PCA) and Convolutional Neural Network (CNN), respectively. We observe that refining CTC-trained PCA-DBLSTM model with an interpolated CTC and LFMMI objective function (\"CTC+LFMMI\") for several additional iterations achieves a relative Word Error Rate (WER) reduction of 24.6% and 13.9% on the public IAM test set and an in-house E2E test set, respectively. For a much better CTC-trained CNN-DBLSTM system, the proposed \"CTC+LFMMI\" method achieves a relative WER reduction of 19.6% and 8.3% on the above two test sets, respectively.", "title": "" } ]
scidocsrr
8567f97c37660a35899955986d3b6b5a
A Mutually Beneficial Integration of Data Mining and Information Extraction
[ { "docid": "98e025d04aaf1ba394d7c8ac537b40c9", "text": "The information age is characterized by a rapid growth in the amount of information available in electronic media. Traditional data handling methods are not adequate to cope with this information flood. Knowledge Discovery in Databases (KDD) is a new paradigm that focuses on computerized exploration of large amounts of data and on discovery of relevant and interesting patterns within them. While most work on KDD is concerned with structured databases, it is clear that this paradigm is required for handling the huge amount of information that is available only in unstructured textual form. To apply traditional KDD on texts it is necessary to impose some structure on the data that would be rich enough to allow for interesting KDD operations. On the other hand, we have to consider the severe limitations of current text processing technology and define rather simple structures that can be extracted from texts fairly automatically and in a reasonable cost. We propose using a text categorization paradigm to annotate text articles with meaningful concepts that are organized in hierarchical structure. We suggest that this relatively simple annotation is rich enough to provide the basis for a KDD framework, enabling data summarization, exploration of interesting patterns, and trend analysis. This research combines the KDD and text categorization paradigms and suggests advances to the state of the art in both areas.", "title": "" } ]
[ { "docid": "ac56668cdaad25e9df31f71bc6d64995", "text": "Hand-crafted illustrations are often more effective than photographs for conveying the shape and important features of an object, but they require expertise and time to produce. We describe an image compositing system and user interface that allow an artist to quickly and easily create technical illustrations from a set of photographs of an object taken from the same point of view under variable lighting conditions. Our system uses a novel compositing process in which images are combined using spatially-varying light mattes, enabling the final lighting in each area of the composite to be manipulated independently. We describe an interface that provides for the painting of local lighting effects (e.g. shadows, highlights, and tangential lighting to reveal texture) directly onto the composite. We survey some of the techniques used in illustration and lighting design to convey the shape and features of objects and describe how our system can be used to apply these techniques.", "title": "" }, { "docid": "b59a2c49364f3e95a2c030d800d5f9ce", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "ec58915a7fd321bcebc748a369153509", "text": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. 
These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.", "title": "" }, { "docid": "751689427492a952a5b1238c62f45db4", "text": "This work concerns the behavior study of a MPPT algorithm based on the incremental conductance. An open loop analysis of a photovoltaic chain, modeled in matlab-simulink, allows to extract the maximum power of the photovoltaic panel. The principle is based on the determination instantaneously of the conductance and its tendency materialized by a signed increment. A buck step down converter is used to adapt the voltage to its appropriate value to reach a maximal power extraction. This novel analysis method applied to the photovoltaic system is made under different atmospheric parameters. The performances are shown using Matlab/Simulink software.", "title": "" }, { "docid": "c697ce69b5ba77cce6dce93adaba7ee0", "text": "Online social networks play a major role in modern societies, and they have shaped the way social relationships evolve. Link prediction in social networks has many potential applications such as recommending new items to users, friendship suggestion and discovering spurious connections. Many real social networks evolve the connections in multiple layers (e.g. multiple social networking platforms). In this article, we study the link prediction problem in multiplex networks. As an example, we consider a multiplex network of Twitter (as a microblogging service) and Foursquare (as a location-based social network). We consider social networks of the same users in these two platforms and develop a meta-path-based algorithm for predicting the links. The connectivity information of the two layers is used to predict the links in Foursquare network. Three classical classifiers (naive Bayes, support vector machines (SVM) and K-nearest neighbour) are used for the classification task. Although the networks are not highly correlated in the layers, our experiments show that including the cross-layer information significantly improves the prediction performance. The SVM classifier results in the best performance with an average accuracy of 89%.", "title": "" }, { "docid": "a6ed481b6e3c0cf53e3e18f241b9489d", "text": "A generalization/specialization of the PARAFAC model is developed that improves its properties when applied to multi-way problems involving linearly dependent factors. Thismodel is called PARALIND (PARAllel profiles with LINear Dependences). Linear dependences can arise when the empirical sources of variation being modeled by factors are causally or logically linked during data generation, or circumstantially linked during data collection. For example, this can occur in a chemical context when end products are related to the precursor or in a psychological context when a single stimulus generates two incompatible feelings at once. For such cases, the most theoretically appropriate PARAFAC model has loading vectors that are linearly dependent in at least one mode, and when collinear, are nonunique in the others. However, standard PARAFAC analysis of fallible data will have neither of these features. Instead, latent linear dependences become high surface correlations and any latent nonuniqueness is replaced by a meaningless surface-level ‘unique orientation’ that optimally fits the particular random noise in that sample. 
To avoid these problems, any set of components that in theory should be rank deficient are re-expressed in PARALIND as a product of two matrices, one that explicitly represents their dependency relationships and another, with fewer columns, that captures their patterns of variation. To demonstrate the approach, we apply it first to fluorescence spectroscopy (excitation-emission matrices, EEM) data in which concentration values for two analytes covary exactly, and then to flow injection analysis (FIA) data in which subsets of columns are logically constrained to sum to a constant, but differently in each of twomodes. In the PARAFAC solutions of the EEMdata, all factors are ‘unique’ but this is onlymeaningful for two of the factors that are also unique at the latent level. In contrast, the PARALIND solutions directly display the extent and nature of partial nonuniqueness present at the latent level by exhibiting a corresponding partial uniqueness in their recovered loadings. For the FIA data, PARALIND constraints restore latent uniqueness to the concentration estimates. Comparison of the solutions shows that PARALINDmore accurately recovers latent structure, presumably because it uses fewer parameters and hence fits less error. Copyright 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "7d6e19b5a6db447ab4ea5df012d43da9", "text": "The ARPANET routing metric was revised in July 1987, resulting in substantial performance improvements, especially in terms of user delay and effective network capacity. These revisions only affect the individual link costs (or metrics) on which the PSN (packet switching node) bases its routing decisions. They do not affect the SPF (“shortest path first”) algorithm employed to compute routes (installed in May 1979). The previous link metric was packet delay averaged over a ten second interval, which performed effectively under light-to-moderate traffic conditions. However, in heavily loaded networks it led to routing instabilities and wasted link and processor bandwidth.\nThe revised metric constitutes a move away from the strict delay metric: it acts similar to a delay-based metric under lightly loads and to a capacity-based metric under heavy loads. It will not always result in shortest-delay paths. Since the delay metric produced shortest-delay paths only under conditions of light loading, the revised metric involves giving up the guarantee of shortest-delay paths under light traffic conditions for the sake of vastly improved performance under heavy traffic conditions.", "title": "" }, { "docid": "02d9153092f3cc2632810d4b46c272e8", "text": "ion in concept learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 45–58. Chen, S., & Chaiken, S. 1999. The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology: 73–96. New York: Guilford Press. Chi, M. T. H., Glaser, R., & Farr, M. J. 1998. The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum Associates. Claxton, G. 1998. Knowing without knowing why. Psychologist, 11(5): 217–220. Collins, H. M. 1982. The replication of experiments in physics. In B. Barnes & D. Edge (Eds.), Science in context: 94–116. Cambridge, MA: MIT Press. Cyert, R. M., & March, J. G. 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall. Dawes, R. M., Faust, D., & Meehl, P. E. 1989. Clinical versus actuarial judgment. Science, 31: 1668–1674. De Dreu, C. K. W. 2003. Time pressure and closing of the mind in negotiation. 
Organizational Behavior and Human Decision Processes, 91: 280–295. Denes-Raj, V., & Epstein, S. 1994. Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66: 819–829. Donaldson, T. 2003. Editor’s comments: Taking ethics seriously—A mission now more possible. Academy of Management Review, 28: 363–366. Dreyfus, H. L., & Dreyfus, S. E. 1986. Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press. Edland, A., & Svenson, O. 1993. Judgment and decision making under time pressure. In O. Svenson & A. J. Maule (Eds.), Time pressure and stress in human judgment and decision making: 27–40. New York: Plenum Press. Eisenhardt, K. 1989. Making fast strategic decisions in highvelocity environments. Academy of Management Jour-", "title": "" }, { "docid": "08edbcf4f974895cfa22d80ff32d48da", "text": "This paper describes a Non-invasive measurement of blood glucose of diabetic based on infrared spectroscopy. We measured the spectrum of human finger by using the Fourier transform infrared spectroscopy (FT-IR) of attenuated total reflection (ATR). In this paper, We would like to report the accuracy of the calibration models when we measured the blood glucose of diabetic.", "title": "" }, { "docid": "a62dc7e25b050addad1c27d92deee8b7", "text": "Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. 
Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.", "title": "" }, { "docid": "4aeefa15b326ed583c9f922d7b035ff6", "text": "In this paper, we present a Self-Supervised Neural Aggregation Network (SS-NAN) for human parsing. SS-NAN adaptively learns to aggregate the multi-scale features at each pixel \"address\". In order to further improve the feature discriminative capacity, a self-supervised joint loss is adopted as an auxiliary learning strategy, which imposes human joint structures into parsing results without resorting to extra supervision. The proposed SS-NAN is end-to-end trainable. SS-NAN can be integrated into any advanced neural networks to help aggregate features regarding the importance at different positions and scales and incorporate rich high-level knowledge regarding human joint structures from a global perspective, which in turn improve the parsing results. Comprehensive evaluations on the recent Look into Person (LIP) and the PASCAL-Person-Part benchmark datasets demonstrate the significant superiority of our method over other state-of-the-arts.", "title": "" }, { "docid": "aa622e064469291fedfadfe36afe3aef", "text": "Multiple kernel clustering (MKC), which performs kernel-based data fusion for data clustering, is an emerging topic. It aims at solving clustering problems with multiple cues. Most MKC methods usually extend existing clustering methods with a multiple kernel learning (MKL) setting. In this paper, we propose a novel MKC method that is different from those popular approaches. Centered kernel alignment—an effective kernel evaluation measure—is employed in order to unify the two tasks of clustering and MKL into a single optimization framework. To solve the formulated optimization problem, an efficient two-step iterative algorithm is developed. Experiments on several UCI datasets and face image datasets validate the effectiveness and efficiency of our MKC algorithm.", "title": "" }, { "docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. 
Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "aedfe7bac8e74fb5744d5fdf666ce998", "text": "This paper describes an approach for credit risk evaluation based on linear Support Vector Machines classifiers, combined with external evaluation and sliding window testing, with focus on application on larger datasets. It presents a technique for optimal linear SVM classifier selection based on particle swarm optimization technique, providing significant amount of focus on imbalanced learning issue. It is compared to other classifiers in terms of accuracy and identification of each class. Experimental classification performance results, obtained using real world financial dataset from SEC EDGAR database, lead to conclusion that proposed technique is capable to produce results, comparable to other classifiers, such as logistic regression and RBF network, and thus be can be an appealing option for future development of real credit risk evaluation models. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "016a07d2ddb55149708409c4c62c67e3", "text": "Cloud computing has emerged as a computational paradigm and an alternative to the conventional computing with the aim of providing reliable, resilient infrastructure, and with high quality of services for cloud users in both academic and business environments. However, the outsourced data in the cloud and the computation results are not always trustworthy because of the lack of physical possession and control over the data for data owners as a result of using to virtualization, replication and migration techniques. Since that the security protection the threats to outsourced data have become a very challenging and potentially formidable task in cloud computing, many researchers have focused on ameliorating this problem and enabling public auditability for cloud data storage security using remote data auditing (RDA) techniques. This paper presents a comprehensive survey on the remote data storage auditing in single cloud server domain and presents taxonomy of RDA approaches. The objective of this paper is to highlight issues and challenges to current RDA protocols in the cloud and the mobile cloud computing. We discuss the thematic taxonomy of RDA based on significant parameters such as security requirements, security metrics, security level, auditing mode, and update mode. The state-of-the-art RDA approaches that have not received much coverage in the literature are also critically analyzed and classified into three groups of provable data possession, proof of retrievability, and proof of ownership to present a taxonomy. It also investigates similarities and differences in such framework and discusses open research issues as the future directions in RDA research. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8f0d90a605829209c7b6d777c11b299d", "text": "Researchers and educators have designed curricula and resources for introductory programming environments such as Scratch, App Inventor, and Kodu to foster computational thinking in K-12. This paper is an empirical study of the effectiveness and usefulness of tiles and flashcards developed for Microsoft Kodu Game Lab to support students in learning how to program and develop games. 
In particular, we investigated the impact of physical manipulatives on 3rd -- 5th grade students' ability to understand, recognize, construct, and use game programming design patterns. We found that the students who used physical manipulatives performed well in rule construction, whereas the students who engaged more with the rule editor of the programming environment had better mental simulation of the rules and understanding of the concepts.", "title": "" }, { "docid": "650966f7be923fb91d48585e9cac10d5", "text": "OF THESIS WiFi AND LTE COEXISTENCE IN THE UNLICENSED SPECTRUM by R.A. Nadisanka Rupasinghe Florida International University, 2015 Miami, Florida Professor İsmail Güvenç, Major Professor Today, smart-phones have revolutionized wireless communication industry towards an era of mobile data. To cater for the ever increasing data traffic demand, it is of utmost importance to have more spectrum resources whereby sharing under-utilized spectrum bands is an effective solution. In particular, the 4G broadband Long Term Evolution (LTE) technology and its foreseen 5G successor will benefit immensely if their operation can be extended to the under-utilized unlicensed spectrum. In this thesis, first we analyze WiFi 802.11n and LTE coexistence performance in the unlicensed spectrum considering multi-layer cell layouts through system level simulations. We consider a time division duplexing (TDD)-LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly. Based on the initial findings, we propose a Q-Learning based dynamic duty cycle selection technique for configuring LTE transmission gaps, so that a satisfactory throughput is maintained both for LTE and WiFi systems. Simulation results show that the proposed approach can enhance the overall capacity performance by 19% and WiFi capacity performance by 77%, hence enabling effective coexistence of LTE and WiFi systems in the unlicensed band.", "title": "" }, { "docid": "3ddc2485ef256a38527efb6308d7526a", "text": "We study the K-armed dueling bandit problem, a variation of the standard stochastic bandit problem where the feedback is limited to relative comparisons of a pair of arms. We introduce a tight asymptotic regret lower bound that is based on the information divergence. An algorithm that is inspired by the Deterministic Minimum Empirical Divergence algorithm (Honda and Takemura, 2010) is proposed, and its regret is analyzed. The proposed algorithm is found to be the first one with a regret upper bound that matches the lower bound. Experimental comparisons of dueling bandit algorithms show that the proposed algorithm significantly outperforms existing ones.", "title": "" }, { "docid": "2b197191cce0bf1fe83a3d40fcec582f", "text": "BACKGROUND\nPatient delay in seeking medical attention could be a contributing cause in a substantial number of breast cancer deaths. The purpose of this study was to identify factors associated with long delay in order to identify specific groups in need of more intensive education regarding the signs of breast cancer and the importance of early treatment.\n\n\nMETHODS\nA study of 162 women with potential breast cancer symptoms was done in the area of Worcester, MA. Two methods of analysis were used. 
A case-control approach was used where the outcome variable was categorized into two groups of longer and shorter delay, and a survival analysis was used where the outcome variable was treated as a continuous variable.\n\n\nRESULTS\nIt was found that women with increasing symptoms were more likely to delay than women whose symptoms either decreased or remained the same. Women performing monthly breast self-examination and/or receiving at least bi-annual mammograms were much less likely to delay than women who performed breast self-examination or received mammograms less often. It was also found that women using family practitioners were less likely to delay than women using other types of physicians.\n\n\nCONCLUSIONS\nPatient delay continues to be a major problem in breast cancer, as 16% of the women here delayed at least two months before seeking help. This study presented a new and improved method for defining patient delay, which should be explored further in larger studies.", "title": "" }, { "docid": "511c4a62c32b32eb74761b0585564fe4", "text": "In the previous chapters, we proposed several features for writer identification, historical manuscript dating and localization separately. In this chapter, we present a summarization of the proposed features for different applications by proposing a joint feature distribution (JFD) principle to design novel discriminative features which could be the joint distribution of features on adjacent positions or the joint distribution of different features on the same location. Following the proposed JFD principle, we introduce seventeen features, including twelve textural-based and five grapheme-based features. We evaluate these features for different applications from four different perspectives to understand handwritten documents beyond OCR, by writer identification, script recognition, historical manuscript dating and localization.", "title": "" } ]
scidocsrr
a7ecc679e00a090a141312f80c738635
PowerSpy: Location Tracking using Mobile Device Power Analysis
[ { "docid": "5e286453dfe55de305b045eaebd5f8fd", "text": "Target tracking is an important element of surveillance, guidance or obstacle avoidance, whose role is to determine the number, position and movement of targets. The fundamental building block of a tracking system is a filter for recursive state estimation. The Kalman filter has been flogged to death as the work-horse of tracking systems since its formulation in the 60's. In this talk we look beyond the Kalman filter at sequential Monte Carlo methods, collectively referred to as particle filters. Particle filters have become a popular method for stochastic dynamic estimation problems. This popularity can be explained by a wave of optimism among practitioners that traditionally difficult nonlinear/non-Gaussian dynamic estimation problems can now be solved accurately and reliably using this methodology. The computational cost of particle filters have often been considered their main disadvantage, but with ever faster computers and more efficient particle filter algorithms, this argument is becoming less relevant. The talk is organized in two parts. First we review the historical development and current status of particle filtering and its relevance to target tracking. We then consider in detail several tracking applications where conventional (Kalman based) methods appear inappropriate (unreliable or inaccurate) and where we instead need the potential benefits of particle filters. 1 The paper was written together with David Salmond, QinetiQ, UK.", "title": "" }, { "docid": "74227709f4832c3978a21abb9449203b", "text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.", "title": "" } ]
[ { "docid": "64a730ce8aad5d4679409be43a291da7", "text": "Background In the last years, it has been seen a shifting on society's consumption patterns, from mass consumption to second-hand culture. Moreover, consumer's perception towards second-hand stores, has been changing throughout the history of second-hand markets, according to the society's values prevailing in each time. Thus, the purchase intentions regarding second-hand clothes are influence by motivational and moderating factors according to the consumer's perception. Therefore, it was employed the theory of Guiot and Roux (2010) on motivational factors towards second-hand shopping and previous researches on moderating factors towards second-hand shopping. Purpose The purpose of this study is to explore consumer's perception and their purchase intentions towards second-hand clothing stores. Method For this, a qualitative and abductive approach was employed, combined with an exploratory design. Semi-structured face-to-face interviews were conducted utilizing a convenience sampling approach. Conclusion The findings show that consumers perception and their purchase intentions are influenced by their age and the environment where they live. However, the environment affect people in different ways. From this study, it could be found that elderly consumers are influenced by values and beliefs towards second-hand clothes. Young people are very influenced by the concept of fashion when it comes to second-hand clothes. For adults, it could be observed that price and the sense of uniqueness driver their decisions towards second-hand clothes consumption. The main motivational factor towards second-hand shopping was price. On the other hand, risk of contamination was pointed as the main moderating factor towards second-hand purchase. The study also revealed two new motivational factors towards second-hand clothing shopping, such charity and curiosity. Managers of second-hand clothing stores can make use of these findings to guide their decisions, especially related to improvements that could be done in order to make consumers overcoming the moderating factors towards second-hand shopping. The findings of this study are especially useful for second-hand clothing stores in Borås, since it was suggested couple of improvements for those stores based on the participant's opinions.", "title": "" }, { "docid": "7ddc7a3fffc582f7eee1d0c29914ba1a", "text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. 
Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.", "title": "" }, { "docid": "75060c7027db4e75bc42f3f3c84cad9b", "text": "In this paper, we investigate whether superior performance on corporate social responsibility (CSR) strategies leads to better access to finance. We hypothesize that better access to finance can be attributed to a) reduced agency costs due to enhanced stakeholder engagement and b) reduced informational asymmetry due to increased transparency. Using a large cross-section of firms, we find that firms with better CSR performance face significantly lower capital constraints. Moreover, we provide evidence that both of the hypothesized mechanisms, better stakeholder engagement and transparency around CSR performance, are important in reducing capital constraints. The results are further confirmed using several alternative measures of capital constraints, a paired analysis based on a ratings shock to CSR performance, an instrumental variables and also a simultaneous equations approach. Finally, we show that the relation is driven by both the social and the environmental dimension of CSR.", "title": "" }, { "docid": "66382b88e0faa573251d5039ccd65d6c", "text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.", "title": "" }, { "docid": "6766977de80074325165a82eeb08d671", "text": "We synthesized the literature on gamification of education by conducting a review of the literature on gamification in the educational and learning context. Based on our review, we identified several game design elements that are used in education. These game design elements include points, levels/stages, badges, leaderboards, prizes, progress bars, storyline, and feedback. We provided examples from the literature to illustrate the application of gamification in the educational context.", "title": "" }, { "docid": "f83a16d393c78d6ba0e65a4659446e7e", "text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. 
It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.", "title": "" }, { "docid": "b8def7be21f014693589ae99385412dd", "text": "Automatic image captioning has received increasing attention in recent years. Although there are many English datasets developed for this problem, there is only one Turkish dataset and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish by using an automated translation tool and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.", "title": "" }, { "docid": "8dfdd829881074dc002247c9cd38eba8", "text": "The limited battery lifetime of modern embedded systems and mobile devices necessitates frequent battery recharging or replacement. Solar energy and small-size photovoltaic (PV) systems are attractive solutions to increase the autonomy of embedded and personal devices attempting to achieve perpetual operation. We present a battery less solar-harvesting circuit that is tailored to the needs of low-power applications. The harvester performs maximum-power-point tracking of solar energy collection under nonstationary light conditions, with high efficiency and low energy cost exploiting miniaturized PV modules. We characterize the performance of the circuit by means of simulation and extensive testing under various charging and discharging conditions. Much attention has been given to identify the power losses of the different circuit components. Results show that our system can achieve low power consumption with increased efficiency and cheap implementation. We discuss how the scavenger improves upon state-of-the-art technology with a measured power consumption of less than 1 mW. We obtain increments of global efficiency up to 80%, diverging from ideality by less than 10%. Moreover, we analyze the behavior of super capacitors. We find that the voltage across the supercapacitor may be an unreliable indicator for the stored energy under some circumstances, and this should be taken into account when energy management policies are used.", "title": "" }, { "docid": "249a09e24ce502efb4669603b54b433d", "text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. 
modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. 1 ar X iv :1 71 0. 09 30 2v 3 [ st at .M L ] 6 N ov 2 01 7", "title": "" }, { "docid": "b8cf5e3802308fe941848fea51afddab", "text": "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.", "title": "" }, { "docid": "43e5146e4a7723cf391b013979a1da32", "text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.", "title": "" }, { "docid": "0321ef8aeb0458770cd2efc35615e11c", "text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. 
To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.", "title": "" }, { "docid": "290b56471b64e150e40211f7a51c1237", "text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.", "title": "" }, { "docid": "4c16117954f9782b3a22aff5eb50537a", "text": "Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (e.g., image-to-image, video-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (e.g., image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g., variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. 
The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.", "title": "" }, { "docid": "3b7cfe02a34014c84847eea4790037e2", "text": "Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTLs requires costly on-site inspections. Accurate prediction of NTLs for customers using machine learning is therefore crucial. To date, related research largely ignore that the two classes of regular and non-regular customers are highly imbalanced, that NTL proportions may change and mostly consider small data sets, often not allowing to deploy the results in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has resulted in appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report their effectiveness on imbalanced and large real world data sets.", "title": "" }, { "docid": "aea4b65d1c30e80e7f60a52dbecc78f3", "text": "The aim of this paper is to automate the car and the car parking as well. It discusses a project which presents a miniature model of an automated car parking system that can regulate and manage the number of cars that can be parked in a given space at any given time based on the availability of parking spot. Automated parking is a method of parking and exiting cars using sensing devices. The entering to or leaving from the parking lot is commanded by an Android based application. We have studied some of the existing systems and it shows that most of the existing systems aren't completely automated and require a certain level of human interference or interaction in or with the system. The difference between our system and the other existing systems is that we aim to make our system as less human dependent as possible by automating the cars as well as the entire parking lot, on the other hand most existing systems require human personnel (or the car owner) to park the car themselves. To prove the effectiveness of the system proposed by us we have developed and presented a mathematical model which will be discussed in brief further in the paper.", "title": "" }, { "docid": "bb94ef2ab26fddd794a5b469f3b51728", "text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. 
Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1b5bc53b1039f3e7aecbc8dcb2f3b9a8", "text": "Agricultural lands occupy 37% of the earth's land surface. Agriculture accounts for 52 and 84% of global anthropogenic methane and nitrous oxide emissions. Agricultural soils may also act as a sink or source for CO2, but the net flux is small. Many agricultural practices can potentially mitigate greenhouse gas (GHG) emissions, the most prominent of which are improved cropland and grazing land management and restoration of degraded lands and cultivated organic soils. Lower, but still significant mitigation potential is provided by water and rice management, set-aside, land use change and agroforestry, livestock management and manure management. The global technical mitigation potential from agriculture (excluding fossil fuel offsets from biomass) by 2030, considering all gases, is estimated to be approximately 5500-6000Mt CO2-eq.yr-1, with economic potentials of approximately 1500-1600, 2500-2700 and 4000-4300Mt CO2-eq.yr-1 at carbon prices of up to 20, up to 50 and up to 100 US$ t CO2-eq.-1, respectively. In addition, GHG emissions could be reduced by substitution of fossil fuels for energy production by agricultural feedstocks (e.g. crop residues, dung and dedicated energy crops). The economic mitigation potential of biomass energy from agriculture is estimated to be 640, 2240 and 16 000Mt CO2-eq.yr-1 at 0-20, 0-50 and 0-100 US$ t CO2-eq.-1, respectively.", "title": "" }, { "docid": "d9214591462b0780ede6d58dab42f48c", "text": "Software testing in general and graphical user interface (GUI) testing in particular is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than the traditional and command-line interface testing. Some of the factors that make GUI testing different from the traditional software testing and significantly more difficult are: a large number of objects, different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. The existing testing techniques for the creation and management of test suites need to be adapted/enhanced for GUIs, and new testing techniques are desired to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. The proposed methodology organizes the testing activity into various levels. The tests created at a particular level can be reused at higher levels. This methodology extends the notion of modularity and reusability to the testing phase. 
The organization and management of the created test suites closely resembles the structure of the GUI under test.", "title": "" }, { "docid": "514d9326cb54cec16f4dfb05deca3895", "text": "Photo publishing in Social Networks and other Web2.0 applications has become very popular due to the pervasive availability of cheap digital cameras, powerful batch upload tools and a huge amount of storage space. A portion of uploaded images are of a highly sensitive nature, disclosing many details of the users' private life. We have developed a web service which can detect private images within a user's photo stream and provide support in making privacy decisions in the sharing context. In addition, we present a privacy-oriented image search application which automatically identifies potentially sensitive images in the result set and separates them from the remaining pictures.", "title": "" } ]
scidocsrr
abe66f029600b23d6f9401a51417505d
The Feature Selection and Intrusion Detection Problems
[ { "docid": "2568f7528049b4ffc3d9a8b4f340262b", "text": "We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, because they are occuring in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparable in classification and generalization.", "title": "" } ]
[ { "docid": "7c9cd59a4bb14f678c57ad438f1add12", "text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.", "title": "" }, { "docid": "bba15d88edc2574dcb3b12a78c3b2d57", "text": "Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higherorder probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improve as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) Robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each.", "title": "" }, { "docid": "7ce1646e0fe1bd83f9feb5ec20233c93", "text": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). 
Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.", "title": "" }, { "docid": "4dfb1fab364811cdd9cd7baa8c9ae0f3", "text": "Understanding the mechanisms of evolution of brain pathways for complex behaviours is still in its infancy. Making further advances requires a deeper understanding of brain homologies, novelties and analogies. It also requires an understanding of how adaptive genetic modifications lead to restructuring of the brain. Recent advances in genomic and molecular biology techniques applied to brain research have provided exciting insights into how complex behaviours are shaped by selection of novel brain pathways and functions of the nervous system. Here, we review and further develop some insights to a new hypothesis on one mechanism that may contribute to nervous system evolution, in particular by brain pathway duplication. Like gene duplication, we propose that whole brain pathways can duplicate and the duplicated pathway diverge to take on new functions. We suggest that one mechanism of brain pathway duplication could be through gene duplication, although other mechanisms are possible. We focus on brain pathways for vocal learning and spoken language in song-learning birds and humans as example systems. This view presents a new framework for future research in our understanding of brain evolution and novel behavioural traits.", "title": "" }, { "docid": "4073da56cc874ea71f5e8f9c1c376cf8", "text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. 
Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.", "title": "" }, { "docid": "f0bbe4e6d61a808588153c6b5fc843aa", "text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.", "title": "" }, { "docid": "094d027465ac59fda9ae67d62e83782f", "text": "In this paper, frequency domain techniques are used to derive the tracking properties of the recursive least squares (RLS) algorithm applied to an adaptive antenna array in a mobile fading environment, expanding the use of such frequency domain approaches for nonstationary RLS tracking to the interference canceling problem that characterizes the use of antenna arrays in mobile wireless communications. The analysis focuses on the effect of the exponential weighting of the correlation estimation filter and its effect on the estimations of the time variant autocorrelation matrix and cross-correlation vector. Specifically, the case of a flat Rayleigh fading desired signal applied to an array in the presence of static interferers is considered with an AR2 fading process approximating the Jakes’ fading model. The result is a mean square error (MSE) performance metric parameterized by the fading bandwidth and the RLS exponential weighting factor, allowing optimal parameter selection. The analytic results are verified and demonstrated with a simulation example.", "title": "" }, { "docid": "b14502732b07cfc3153cd419b01084e5", "text": "Functional logic programming and probabilistic programming have demonstrated the broad benefits of combining laziness (non-strict evaluation with sharing of the results) with non-determinism. 
Yet these benefits are seldom enjoyed in functional programming, because the existing features for non-strictness, sharing, and non-determinism in functional languages are tricky to combine.\n We present a practical way to write purely functional lazy non-deterministic programs that are efficient and perspicuous. We achieve this goal by embedding the programs into existing languages (such as Haskell, SML, and OCaml) with high-quality implementations, by making choices lazily and representing data with non-deterministic components, by working with custom monadic data types and search strategies, and by providing equational laws for the programmer to reason about their code.", "title": "" }, { "docid": "b677a4762ceb4ec6f9f1fc418a701982", "text": "NoSQL databases are the new breed of databases developed to overcome the drawbacks of RDBMS. The goal of NoSQL is to provide scalability, availability and meet other requirements of cloud computing. The common motivation of NoSQL design is to meet scalability and fail over. In most of the NoSQL database systems, data is partitioned and replicated across multiple nodes. Inherently, most of them use either Google's MapReduce or Hadoop Distributed File System or Hadoop MapReduce for data collection. Cassandra, HBase and MongoDB are mostly used and they can be termed as the representative of NoSQL world. This tutorial discusses the features of NoSQL databases in the light of CAP theorem.", "title": "" }, { "docid": "d440e08b7f2868459fbb31b94c15db5b", "text": "Recently, the necessity of hybrid-microgrid system has been proved as a modern power structure. This paper studies a power management system (PMS) in a hybrid network to control the power-flow procedure between DC and AC buses. The proposed architecture for PMS is designed to eliminate the power disturbances and manage the automatic connection among multiple sources. In this paper, PMS benefits from a 3-phase proportional resonance (PR) control ability to accurately adjust the inverter operation. Also, a Photo-Voltaic (PV) unit and a distributed generator (DG) are considered to supply the load demand power. Compared to the previous studies, the applied scheme has sufficient capability of quickly supplying the load in different scenarios with no network failures. The validity of implemented method is verified through the simulation results.", "title": "" }, { "docid": "0c70966c4dbe41458f7ec9692c566c1f", "text": "By 2012 the U.S. military had increased its investment in research and production of unmanned aerial vehicles (UAVs) from $2.3 billion in 2008 to $4.2 billion [1]. Currently UAVs are used for a wide range of missions such as border surveillance, reconnaissance, transportation and armed attacks. UAVs are presumed to provide their services at any time, be reliable, automated and autonomous. Based on these presumptions, governmental and military leaders expect UAVs to improve national security through surveillance or combat missions. To fulfill their missions, UAVs need to collect and process data. Therefore, UAVs may store a wide range of information from troop movements to environmental data and strategic operations. The amount and kind of information enclosed make UAVs an extremely interesting target for espionage and endangers UAVs of theft, manipulation and attacks. Events such as the loss of an RQ-170 Sentinel to Iranian military forces on 4th December 2011 [2] or the “keylogging” virus that infected an U.S. 
UAV fleet at Creech Air Force Base in Nevada in September 2011 [3] show that the efforts of the past to identify risks and harden UAVs are insufficient. Due to the increasing governmental and military reliance on UAVs to protect national security, the necessity of a methodical and reliable analysis of the technical vulnerabilities becomes apparent. We investigated recent attacks and developed a scheme for the risk assessment of UAVs based on the provided services and communication infrastructures. We provide a first approach to an UAV specific risk assessment and take into account the factors exposure, communication systems, storage media, sensor systems and fault handling mechanisms. We used this approach to assess the risk of some currently used UAVs: The “MQ-9 Reaper” and the “AR Drone”. A risk analysis of the “RQ-170 Sentinel” is discussed.", "title": "" }, { "docid": "7a3573bfb32dc1e081d43fe9eb35a23b", "text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment, indicates that the alignment links between Patty and WordNet have high accuracy, with Mean Reciprocal Rank (MRR) score 0.7 and Normalized Discounted Cumulative Gain (NDCG) score 0.73. As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.", "title": "" }, { "docid": "9e592238813d2bb28629f3dddaba109d", "text": "Traveling-wave array design techniques are applied to microstrip comb-line antennas in the millimeter-wave band. The simple design procedure is demonstrated. To neglect the effect of reflection waves in the design, a radiating element with a reflection-canceling slit and a stub-integrated radiating element are proposed. Matching performance is also improved.", "title": "" }, { "docid": "3e6aac2e0ff6099aabeee97dc1292531", "text": "A lthough ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.", "title": "" }, { "docid": "4cb0d0d6f1823f108a3fc32e0c407605", "text": "This paper describes a novel method to approximate instantaneous frequency of non-stationary signals through an application of fractional Fourier transform (FRFT). 
FRFT enables us to build a compact and accurate chirp dictionary for each windowed signal, thus the proposed approach offers improved computational efficiency, and good performance when compared with chirp atom method.", "title": "" }, { "docid": "5a805b6f9e821b7505bccc7b70fdd557", "text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage where detailed description and comparison -contrastive and comparativeanalysis will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive verses. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted through out the data analysis stage which then form the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicts that there are significant differences amongst the two TTs in relation to International Journal of Linguistics ISSN 1948-5425 2014, Vol. 6, No. 3 www.macrothink.org/ijl 119 the word choices including the lexical items and the other syntactic structure compared by the ST. These significant differences indicate some ideological transmission through translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.", "title": "" }, { "docid": "6c730f32b02ca58f66e98f9fc5181484", "text": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. 
We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.", "title": "" }, { "docid": "38e9aa4644edcffe87dd5ae497e99bbe", "text": "Hashtags, created by social network users, have gained a huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academia starts to use hashtags to reshape our understandings on how users interact with each other. #like4like is one of the most popular hashtags in Instagram with more than 290 million photos appended with it, when a publisher uses #like4like in one photo, it means that he will like back photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes this photo, and both of them aim to attract likes in Instagram. In this paper, we study whether #like4like indeed serves the purpose it is created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity has dramatically increased by 1,300 times from 2012 to 2016. Then, we study whether #like4like will attract likes for photo publishers; results show that it is not #like4like but actually photo contents attract more likes, and the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos will receive likes from #like4like publishers. However, results show that more than 90% of the publishers do not keep their promises, i.e., they will not like back others who like their #like4like photos; and for those who keep their promises, the photos which they like back are often randomly selected.", "title": "" }, { "docid": "c79510daa790e5c92e0c3899cc4a563b", "text": "Purpose – The purpose of this study is to interpret consumers’ emotion in their consumption experience in the context of mobile commerce from an experiential view. The study seeks to address concerns about the experiential aspects of mobile commerce regardless of the consumption type. For the purpose, the authors aims to propose a stimulus-organism-response (S-O-R) based model that incorporates both utilitarian and hedonic factors of consumers. Design/methodology/approach – A survey study was conducted to collect data from 293 mobile phone users. The questionnaire was administered in study classrooms, a library, or via e-mail. The measurement model and structural model were examined using LISREL 8.7. Findings – The results of this research implied that emotion played a significant role in the mobile consumption experience; hedonic factors had a positive effect on the consumption experience, while utilitarian factors had a negative effect on the consumption experience of consumers. The empirical findings also indicated that media richness was as important as subjective norms, and more important than convenience and self-efficacy. Originality/value – Few m-commerce studies have focused directly on the experiential aspects of consumption, including the hedonic experience and positive emotions among mobile device users. 
Applying the stimulus-organism-response (S-O-R) framework from the perspective of the experiential view, the current research model is developed to examine several utilitarian and hedonic factors in the context of the consumption experience, and indicates a comparison between the information processing (utilitarian) view and the experiential (hedonic) view of consumer behavior. It illustrates the relationships among six variables (i.e. convenience, media richness, subjective norms, self-efficacy, emotion, and consumption experience) in a mobile commerce context.", "title": "" }, { "docid": "d9bd23208ab6eb8688afea408a4c9eba", "text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.", "title": "" } ]
scidocsrr
b1a65283bf38ab004e803669d80844b7
Design of a Compact Actuation and Control System for Flexible Medical Robots
[ { "docid": "d815d2bcc9436f9c9751ce18f87d2fe4", "text": "Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10 °C.", "title": "" } ]
[ { "docid": "e7d3fae34553c61827b78e50c2e205ee", "text": "Speaker Identification (SI) is the process of identifying the speaker from a given utterance by comparing the voice biometrics of the utterance with those utterance models stored beforehand. SI technologies are taken a new direction due to the advances in artificial intelligence and have been used widely in various domains. Feature extraction is one of the most important aspects of SI, which significantly influences the SI process and performance. This systematic review is conducted to identify, compare, and analyze various feature extraction approaches, methods, and algorithms of SI to provide a reference on feature extraction approaches for SI applications and future studies. The review was conducted according to Kitchenham systematic review methodology and guidelines, and provides an in-depth analysis on proposals and implementations of SI feature extraction methods discussed in the literature between year 2011 and 2106. Three research questions were determined and an initial set of 535 publications were identified to answer the questions. After applying exclusion criteria 160 related publications were shortlisted and reviewed in this paper; these papers were considered to answer the research questions. Results indicate that pure Mel-Frequency Cepstral Coefficients (MFCCs) based feature extraction approaches have been used more than any other approach. Furthermore, other MFCC variations, such as MFCC fusion and cleansing approaches, are proven to be very popular as well. This study identified that the current SI research trend is to develop a robust universal SI framework to address the important problems of SI such as adaptability, complexity, multi-lingual recognition, and noise robustness. The results presented in this research are based on past publications, citations, and number of implementations with citations being most relevant. This paper also presents the general process of SI. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c2c8c8a40caea744e40eb7bf570a6812", "text": "OBJECTIVE\nTo investigate the association between single nucleotide polymorphisms (SNPs) of BARD1 gene and susceptibility of early-onset breast cancer in Uygur women in Xinjiang.\n\n\nMETHODS\nA case-control study was designed to explore the genotypes of Pro24Ser (C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene, detected by PCR-restriction fragment length polymorphism (PCR-RFLP) assay, in 144 early-onset breast cancer cases of Uygur women (≤ 40 years) and 136 cancer-free controls matched by age and ethnicity. The association between SNPs of BARD1 gene and risk of early-onset breast cancer in Uygur women was analyzed by unconditional logistic regression model.\n\n\nRESULTS\nEarly age at menarche, late age at first pregnancy, and positive family history of cancer may be important risk factors of early-onset breast cancer in Uygur women in Xinjiang. The frequencies of genotypes of Pro24Ser (C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene showed significant differences between the cancer cases and cancer-free controls (P < 0.05). Compared with wild-type genotype Pro24Ser CC, it showed a lower incidence of early-onset breast cancer in Uygur women with variant genotypes of Pro24Ser TT (OR = 0.117, 95%CI = 0.058 - 0.236), and dominance-genotype CT+TT (OR = 0.279, 95%CI = 0.157 - 0.494), or Arg378Ser CC (OR = 0.348, 95%CI = 0.145 - 0.834) and Val507Met AA(OR = 0.359, 95%CI = 0.167 - 0.774). 
Furthermore, SNPS in three polymorphisms would have synergistic effects on the risk of breast cancer. In addition, the SNP-SNP interactions of dominance-genotypes (CT+TT, GC+CC and GA+AA) showed a 52.1% lower incidence of early-onset breast cancer in Uygur women (OR = 0.479, 95%CI = 0.230 - 0.995). Stratified analysis indicated that the protective effect of carrying T variant genotype (CT/TT) in Pro24Ser and carrying C variant genotype (GC/CC) in Arg378Ser were more evident in subjects with early age at menarche and negative family history of cancer. With an older menarche age, the protective effect was weaker.\n\n\nCONCLUSIONS\nSNPs of Pro24Ser(C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene are associated with significantly decreased risk of early-onset breast cancer in Uygur women in Xinjiang. Early age at menarche and negative family history of cancer can enhance the protective effect of mutant allele.", "title": "" }, { "docid": "b01028ef40b1fda74d0621c430ce9141", "text": "ETRI Journal, Volume 29, Number 2, April 2007 A novel low-voltage CMOS current feedback operational amplifier (CFOA) is presented. This realization nearly allows rail-to-rail input/output operations. Also, it provides high driving current capabilities. The CFOA operates at supply voltages of ±0.75 V with a total standby current of 304 μA. The circuit exhibits a bandwidth better than 120 MHz and a current drive capability of ±1 mA. An application of the CFOA to realize a new all-pass filter is given. PSpice simulation results using 0.25 μm CMOS technology parameters for the proposed CFOA and its application are given.", "title": "" }, { "docid": "9490f117f153a16152237a5a6b08c0a3", "text": "Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. 
Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy.", "title": "" }, { "docid": "13aef8ba225dd15dd013e155c319310e", "text": "ness and Approximations Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as followsness and Approximations • This rather absurd attack goes as follows Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: • Since Turing computers can’t be realized fully, Turing computation is now another “myth.” Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: • Since Turing computers can’t be realized fully, Turing computation is now another “myth.” • The problem is that Davis fails to recognize that a lot of th hypercomputational models are abstract models that no one hopes to build in the near future. Thursday, June 9, 2011 Necessity of Noncomputable Reals Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. 
Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines • Kieu-type Quantum Computation Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends The Main Case Science of Sciences Part 1: Chain Store Paradox Part 2: Turing-level Actors Part 3:MDL Computational Learning Theory CLT-based Model of Science", "title": "" }, { "docid": "74f017db6e98b068b29698886caec368", "text": "Social networks have become an additional marketing channel that could be integrated with the traditional ones as a part of the marketing mix. The change in the dynamics of the marketing interchange between companies and consumers as introduced by social networks has placed a focus on the non-transactional customer behavior. In this new marketing era, the terms engagement and participation became the central non-transactional constructs, used to describe the nature of participants’ specific interactions and/or interactive experiences. These changes imposed challenges to the traditional one-way marketing, resulting in companies experimenting with many different approaches, thus shaping a successful social media approach based on the trial-and-error experiences. To provide insights to practitioners willing to utilize social networks for marketing purposes, our study analyzes the influencing factors in terms of characteristics of the content communicated by the company, such as media type, content type, posting day and time, over the level of online customer engagement measured by number of likes, comments and shares, and interaction duration for the domain of a Facebook brand page. Our results show that there is a different effect of the analyzed factors over individual engagement measures. We discuss the implications of our findings for social media marketing.", "title": "" }, { "docid": "44050ba52838a583e2efb723b10f0234", "text": "This paper presents a novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines. 
It results in the representation of a solid by the inequality The volume spline is based on use of the Green’s function for interpolation of scalar function values of a chosen “carrier” solid. Our algorithm is capable of generating highly concave and branching objects automatically. The particular case where the surface is reconstructed from cross-sections is discussed too. Potential applications of this algorithm are in tomography, image processing, animation and CAD f o r bodies with complex surfaces.", "title": "" }, { "docid": "4ddbdf0217d13c8b349137f1e59910d6", "text": "In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.", "title": "" }, { "docid": "31bb1b7237951dbec124caf832401a43", "text": "This Thesis is brought to you for free and open access. It has been accepted for inclusion in Dissertations and Theses by an authorized administrator of PDXScholar. For more information, please contact [email protected]. Recommended Citation Petersen, Amanda Mae, \"Beyond Black and White: An Examination of Afrocentric Facial Features and Sex in Criminal Sentencing\" (2014). Dissertations and Theses. Paper 1855.", "title": "" }, { "docid": "82df50c6c1c51b00d00d505dce80b7ab", "text": "This volume brings together a collection of extended versions of selected papers from two workshops on ontology learning, knowledge acquisition and related topics that were organized in the context of the European Conference on Artificial Intelligence (ECAI) 2004 and the International Conference on Knowledge Engineering and Management (EKAW) 2004. The volume presents current research in ontology learning, addressing three perspectives: methodologies that have been proposed to automatically extract information from texts and to give a structured organization to such knowledge, including approaches based on machine learning techniques; evaluation methods for ontology learning, aiming at defining procedures and metrics for a quantitative evaluation of the ontology learning task; and finally application scenarios that make ontology learning a challenging area in the context of real applications such as bio-informatics. 
According to the three perspectives mentioned above, the book is divided into three sections, each including a selection of papers addressing respectively the methods, the applications and the evaluation of ontology learning approaches. However, all selected papers pay considerably attention to the evaluation perspective, as this was a central topic of the ECAI 2004 workshop out of which most of the papers in this volume originate.", "title": "" }, { "docid": "ca550339bd91ba8e431f1e82fbaf5a99", "text": "In several previous papers and particularly in [3] we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves this problem without any search always finding all solutions (including no solutions or several solutions). It can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens’ problems on the chessboard, graph-coloring problems etc. Disadvantages of known SAT approaches for such problems were overcome by our new method.", "title": "" }, { "docid": "1f0dbec4f21549780d25aa81401494c6", "text": "Parallel scientific applications require high-performanc e I/O support from underlying file systems. A comprehensive understanding of the expected workload is t herefore essential for the design of high-performance parallel file systems. We re-examine the w orkload characteristics in parallel computing environments in the light of recent technology ad vances and new applications. We analyze application traces from a cluster with hundreds o f nodes. On average, each application has only one or two typical request sizes. Large requests fro m several hundred kilobytes to several megabytes are very common. Although in some applications, s mall requests account for more than 90% of all requests, almost all of the I/O data are transferre d by large requests. All of these applications show bursty access patterns. More than 65% of write req uests have inter-arrival times within one millisecond in most applications. By running the same be nchmark on different file models, we also find that the write throughput of using an individual out p t file for each node exceeds that of using a shared file for all nodes by a factor of 5. This indicate s that current file systems are not well optimized for file sharing.", "title": "" }, { "docid": "5c8923335dd4ee4c2123b5b3245fb595", "text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. It uses different classifiers viz; naive bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. 
We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.", "title": "" }, { "docid": "5edaa2ed52f29eeb9576ebdaeb819997", "text": "Alzheimer's disease (AD) is the most common neurodegenerative disorder characterized by cognitive and intellectual deficits and behavior disturbance. The electroencephalogram (EEG) has been used as a tool for diagnosing AD for several decades. The hallmark of EEG abnormalities in AD patients is a shift of the power spectrum to lower frequencies and a decrease in coherence of fast rhythms. These abnormalities are thought to be associated with functional disconnections among cortical areas resulting from death of cortical neurons, axonal pathology, cholinergic deficits, etc. This article reviews main findings of EEG abnormalities in AD patients obtained from conventional spectral analysis and nonlinear dynamical methods. In particular, nonlinear alterations in the EEG of AD patients, i.e. a decreased complexity of EEG patterns and reduced information transmission among cortical areas, and their clinical implications are discussed. For future studies, improvement of the accuracy of differential diagnosis and early detection of AD based on multimodal approaches, longitudinal studies on nonlinear dynamics of the EEG, drug effects on the EEG dynamics, and linear and nonlinear functional connectivity among cortical regions in AD are proposed to be investigated. EEG abnormalities of AD patients are characterized by slowed mean frequency, less complex activity, and reduced coherences among cortical regions. These abnormalities suggest that the EEG has utility as a valuable tool for differential and early diagnosis of AD.", "title": "" }, { "docid": "dd9e3513c4be6100b5d3b3f25469f028", "text": "Software testing is the process to uncover requirement, design and coding errors in the program. It is used to identify the correctness, completeness, security and quality of software products against a specification. Software testing is the process used to measure the quality of developed computer software. It exhibits all mistakes, errors and flaws in the developed software. There are many approaches to software testing, but effective testing of complex product is essentially a process of investigation, not merely a matter of creating and following route procedure. It is not possible to find out all the errors in the program. This fundamental problem in testing thus throws an open question, as to what would be the strategy we should adopt for testing. In our paper, we have described and compared the three most prevalent and commonly used software testing techniques for detecting errors, they are: white box testing, black box testing and grey box testing. KeywordsBlack Box; Grey Box; White Box.", "title": "" }, { "docid": "ca34e7cef347237a370fbf4772c77f3e", "text": "Given a set P of n points in the plane, we consider the problem of covering P with a minimum number of unit disks. This problem is known to be NP-hard. We present a simple 4-approximation algorithm for this problem which runs in O(n log n)-time. We also show how to extend this algorithm to other metrics, and to three dimensions.", "title": "" }, { "docid": "c4c3a9572659543c5cd5d1bb50a13bee", "text": "Optic disc (OD) is a key structure in retinal images. It serves as an indicator to detect various diseases such as glaucoma and changes related to new vessel formation on the OD in diabetic retinopathy (DR) or retinal vein occlusion. 
OD is also essential to locate structures such as the macula and the main vascular arcade. Most existing methods for OD localization are rule-based, either exploiting the OD appearance properties or the spatial relationship between the OD and the main vascular arcade. The detection of OD abnormalities has been performed through the detection of lesions such as hemorrhaeges or through measuring cup to disc ratio. Thus these methods result in complex and inflexible image analysis algorithms limiting their applicability to large image sets obtained either in epidemiological studies or in screening for retinal or optic nerve diseases. In this paper, we propose an end-to-end supervised model for OD abnormality detection. The most informative features of the OD are learned directly from retinal images and are adapted to the dataset at hand. Our experimental results validated the effectiveness of this current approach and showed its potential application.", "title": "" }, { "docid": "ea937e1209c270a7b6ab2214e0989fed", "text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.", "title": "" }, { "docid": "d8b3eb944d373741747eb840a18a490b", "text": "Natural scenes contain large amounts of geometry, such as hundreds of thousands or even millions of tree leaves and grass blades. Subtle lighting effects present in such environments usually include a significant amount of occlusion effects and lighting variation. These effects are important for realistic renderings of such natural environments; however, plausible lighting and full global illumination computation come at prohibitive costs especially for interactive viewing. As a solution to this problem, we present a simple approximation to integrated visibility over a hemisphere (ambient occlusion) that allows interactive rendering of complex and dynamic scenes. Based on a set of simple assumptions, we show that our method allows the rendering of plausible variation in lighting at modest additional computation and little or no precomputation, for complex and dynamic scenes.", "title": "" }, { "docid": "9b2dc34302b69ca863e4bcca26e09c96", "text": "Two opposing theories have been proposed to explain competitive advantage of firms. First, the market-based view (MBV) is focused on product or market positions and competition while second, the resource-based view (RBV) aims at explaining success by inwardly looking at unique resources and capabilities of a firm. Research has been struggling to distinguish impacts of these theories for illuminating performance. 
Business models are seen as an important concept to systemize the business and value creation logic of firms by defining different core components. Thus, this paper tries to assess associations between these components and MBV or RBV perspectives by applying content analysis. Two of the business model components were found to have strong links with the MBV while three of them showed indications of their roots lying in the resource-based perspective. These results are discussed and theorized in a final step by suggesting frameworks of the corresponding perspectives for further explaining competitive advantage.", "title": "" } ]
scidocsrr
bcc1689bceba390d7ad85220196a559f
Using CoreSight PTM to Integrate CRA Monitoring IPs in an ARM-Based SoC
[ { "docid": "35258abbafac62dbfbd0be08617e95bf", "text": "Code Reuse Attacks (CRAs) recently emerged as a new class of security exploits. CRAs construct malicious programs out of small fragments (gadgets) of existing code, thus eliminating the need for code injection. Existing defenses against CRAs often incur large performance overheads or require extensive binary rewriting and other changes to the system software. In this paper, we examine a signature-based detection of CRAs, where the attack is detected by observing the behavior of programs and detecting the gadget execution patterns. We first demonstrate that naive signature-based defenses can be defeated by introducing special “delay gadgets” as part of the attack. We then show how a software-configurable signature-based approach can be designed to defend against such stealth CRAs, including the attacks that manage to use longer-length gadgets. The proposed defense (called SCRAP) can be implemented entirely in hardware using simple logic at the commit stage of the pipeline. SCRAP is realized with minimal performance cost, no changes to the software layers and no implications on binary compatibility. Finally, we show that SCRAP generates no false alarms on a wide range of applications.", "title": "" }, { "docid": "92d5ebd49670681a5d43ba90731ae013", "text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.", "title": "" }, { "docid": "eb12e9e10d379fcbc156e94c3b447ce1", "text": "Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.\n We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. 
The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.", "title": "" } ]
[ { "docid": "0c67628fb24c8cbd4a8e49fb30ba625e", "text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is its modeling the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncovers some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.", "title": "" }, { "docid": "95e0dfb614103b8beece915dd744e384", "text": "Failure diagnosis is the process of identifying the causes of impairment in a system’s function based on observable symptoms, i.e., determining which fault led to an observed failure. Since multiple faults can often lead to very similar symptoms, failure diagnosis is often the first line of defense when things go wrong a prerequisite before any corrective actions can be undertaken. The results of diagnosis also provide data about a system’s operational fault profile for use in offline resilience evaluation. While diagnosis has historically been a largely manual process requiring significant human input, techniques to automate as much of the process as possible have significantly grown in importance in many industries including telecommunications, internet services, automotive systems, and aerospace. This chapter presents a survey of automated failure diagnosis techniques including both model-based and model-free approaches. Industrial applications of these techniques in the above domains are presented, and finally, future trends and open challenges in the field are discussed.", "title": "" }, { "docid": "bc0fa704763199526c4f28e40fa11820", "text": "GPFS is a distributed file system run on some of the largest supercomputers and clusters. Through it's deployment, the authors have been able to gain a number of key insights into the methodology of developing a distributed file system which can reliably scale and maintain POSIX semantics. Achieving the necessary throughput requires parallel access for reading, writing and updating metadata. It is a process that is accomplished mostly through distributed locking.", "title": "" }, { "docid": "001b5a976b6b6ccb15ab80ead4617422", "text": "Multivariate time-series modeling and forecasting is an important problem with numerous applications. Traditional approaches such as VAR (vector auto-regressive) models and more recent approaches such as RNNs (recurrent neural networks) are indispensable tools in modeling time-series data. In many multivariate time series modeling problems, there is usually a significant linear dependency component, for which VARs are suitable, and a nonlinear component, for which RNNs are suitable. Modeling such times series with only VAR or only RNNs can lead to poor predictive performance or complex models with large training times. In this work, we propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs. 
R2N2s can be trained using existing algorithms for VARs and RNNs. Through an extensive empirical evaluation on two real world datasets (aviation and climate domains), we show that R2N2 is competitive, usually better than VAR or RNN, used alone. We also show that R2N2 is faster to train as compared to an RNN, while requiring less number of hidden units.", "title": "" }, { "docid": "31f65e3f22aa1d6c05a17efc7e8a9b41", "text": "Methods The Fenofi brate Intervention and Event Lowering in Diabetes (FIELD) study was a multinational randomised trial of 9795 patients aged 50–75 years with type 2 diabetes mellitus. Eligible patients were randomly assigned to receive fenofi brate 200 mg/day (n=4895) or matching placebo (n=4900). At each clinic visit, information concerning laser treatment for diabetic retinopathy—a prespecifi ed tertiary endpoint of the main study—was gathered. Adjudication by ophthalmologists masked to treatment allocation defi ned instances of laser treatment for macular oedema, proliferative retinopathy, or other eye conditions. In a substudy of 1012 patients, standardised retinal photography was done and photographs graded with Early Treatment Diabetic Retinopathy Study (ETDRS) criteria to determine the cumulative incidence of diabetic retinopathy and its component lesions. Analyses were by intention to treat. This study is registered as an International Standard Randomised Controlled Trial, number ISRCTN64783481.", "title": "" }, { "docid": "ae8e043f980d313499433d49aa90467c", "text": "During the last few years, Convolutional Neural Networks are slowly but surely becoming the default method solve many computer vision related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods by using density occupancy grids representations for the input data, and integrating them into a supervised Convolutional Neural Network architecture. An extensive experimentation was carried out, using ModelNet - a large-scale 3D CAD models dataset - to train and test the system, to prove that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.", "title": "" }, { "docid": "4b8a46065520d2b7489bf0475321c726", "text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. 
Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.", "title": "" }, { "docid": "5ce4e0532bf1f6f122f62b37ba61384e", "text": "Media violence poses a threat to public health inasmuch as it leads to an increase in real-world violence and aggression. Research shows that fictional television and film violence contribute to both a short-term and a long-term increase in aggression and violence in young viewers. Television news violence also contributes to increased violence, principally in the form of imitative suicides and acts of aggression. Video games are clearly capable of producing an increase in aggression and violence in the short term, although no long-term longitudinal studies capable of demonstrating long-term effects have been conducted. The relationship between media violence and real-world violence and aggression is moderated by the nature of the media content and characteristics of and social influences on the individual exposed to that content. Still, the average overall size of the effect is large enough to place it in the category of known threats to public health.", "title": "" }, { "docid": "8f65f1971405e0c225e3625bb682a2d4", "text": "We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al. Shapenet: an information-rich 3d model repository, 2015. arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2015) as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018. arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach is able to compete with the fully supervised baseline of Dai et al. 
(in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster.", "title": "" }, { "docid": "de1f35d0e19cafc28a632984f0411f94", "text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.", "title": "" }, { "docid": "34cd47ff49e316f26e5596bc9717fd6d", "text": "In this paper, a BGA package having a ARM SoC chip is introduced, which has component-type embedded decoupling capacitors (decaps) for good power integrity performance of core power. To evaluate and confirm the impact of embedded decap on core PDN (power distribution network), two different packages were manufactured with and without the embedded decaps. The self impedances of system-level core PDN were simulated in frequency-domain and On-chip DvD (Dynamic Voltage Drop) simulations were performed in time-domain in order to verify the system-level impact of package embedded decap. There was clear improvement of system-level core PDN performance in middle frequency range when package embedded decaps were employed. In conclusion, the overall system-level core PDN for ARM SoC could meet the target impedance in frequency-domain as well as the target On-chip DvD level by having package embedded decaps.", "title": "" }, { "docid": "2c442933c4729e56e5f4f46b5b8071d6", "text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.", "title": "" }, { "docid": "5a549dbf3037a45a49c9f8f2e91b7aeb", "text": "How can we reuse existing knowledge, in the form of available datasets, when solving a new and apparently unrelated target task from a set of unlabeled data? 
In this work we make a first contribution to answer this question in the context of image classification. We frame this quest as an active learning problem and use zero-shot classifiers to guide the learning process by linking the new task to the the existing classifiers. By revisiting the dual formulation of adaptive SVM, we reveal two basic conditions to choose greedily only the most relevant samples to be annotated. On this basis we propose an effective active learning algorithm which learns the best possible target classification model with minimum human labeling effort. Extensive experiments on two challenging datasets show the value of our approach compared to the state-of-the-art active learning methodologies, as well as its potential to reuse past datasets with minimal effort for future tasks.", "title": "" }, { "docid": "9db9902c0e9d5fc24714554625a04c7a", "text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.", "title": "" }, { "docid": "efd7512694ed378cb111c94e53890c89", "text": "Recent years have seen a significant growth and increased usage of large-scale knowledge resources in both academic research and industry. We can distinguish two main types of knowledge resources: those that store factual information about entities in the form of semantic relations (e.g., Freebase), namely so-called knowledge graphs, and those that represent general linguistic knowledge (e.g., WordNet or UWN). In this article, we present a third type of knowledge resource which completes the picture by connecting the two first types. Instances of this resource are graphs of semantically-associated relations (sar-graphs), whose purpose is to link semantic relations from factual knowledge graphs with their linguistic representations in human language. We present a general method for constructing sar-graphs using a languageand relation-independent, distantly supervised approach which, apart from generic language processing tools, relies solely on the availability of a lexical semantic resource, providing sense information for words, as well as a knowledge base containing seed relation instances. Using these seeds, our method extracts, validates and merges relationspecific linguistic patterns from text to create sar-graphs. To cope with the noisily labeled data arising in a distantly supervised setting, we propose several automatic pattern confidence estimation strategies, and also show how manual supervision can be used to improve the quality of sar-graph instances. We demonstrate the applicability of our method by constructing sar-graphs for 25 semantic relations, of which we make a subset publicly available at http://sargraph.dfki.de. We believe sar-graphs will prove to be useful linguistic resources for a wide variety of natural language processing tasks, and in particular for information extraction and knowledge base population. 
We illustrate their usefulness with experiments in relation extraction and in computer assisted language learning.", "title": "" }, { "docid": "35ed2c8db6b143629e806b68741e9977", "text": "Nowadays, smart wristbands have become one of the most prevailing wearable devices as they are small and portable. However, due to the limited size of the touch screens, smart wristbands typically have poor interactive experience. There are a few works appropriating the human body as a surface to extend the input. Yet by using multiple sensors at high sampling rates, they are not portable and are energy-consuming in practice. To break this stalemate, we proposed a portable, cost efficient text-entry system, termed ViType, which firstly leverages a single small form factor sensor to achieve a practical user input with much lower sampling rates. To enhance the input accuracy with less vibration information introduced by lower sampling rate, ViType designs a set of novel mechanisms, including an artificial neural network to process the vibration signals, and a runtime calibration and adaptation scheme to recover the error due to temporal instability. Extensive experiments have been conducted on 30 human subjects. The results demonstrate that ViType is robust to fight against various confounding factors. The average recognition accuracy is 94.8% with an initial training sample size of 20 for each key, which is 1.52 times higher than the state-of-the-art on-body typing system. Furthermore, when turning on the runtime calibration and adaptation system to update and enlarge the training sample size, the accuracy can reach around 98% on average during one month.", "title": "" }, { "docid": "dfba47fd3b84d6346052b559568a0c21", "text": "Understanding gaming motivations is important given the growing trend of incorporating game-based mechanisms in non-gaming applications. In this paper, we describe the development and validation of an online gaming motivations scale based on a 3-factor model. Data from 2,071 US participants and 645 Hong Kong and Taiwan participants is used to provide a cross-cultural validation of the developed scale. Analysis of actual in-game behavioral metrics is also provided to demonstrate predictive validity of the scale.", "title": "" }, { "docid": "ac4a1b85c72984fb0f25e3603651b8db", "text": "Deep Reinforcement Learning (deep RL) has made several breakthroughs in recent years in applications ranging from complex control tasks in unmanned vehicles to game playing. Despite their success, deep RL still lacks several important capacities of human intelligence, such as transfer learning, abstraction and interpretability. Deep Symbolic Reinforcement Learning (DSRL) seeks to incorporate such capacities to deep Q-networks (DQN) by learning a relevant symbolic representation prior to using Q-learning. In this paper, we propose a novel extension of DSRL, which we call Symbolic Reinforcement Learning with Common Sense (SRL+CS), offering a better balance between generalization and specialization, inspired by principles of common sense when assigning rewards and aggregating Q-values. Experiments reported in this paper show that SRL+CS learns consistently faster than Q-learning and DSRL, achieving also a higher accuracy. In the hardest case, where agents were trained in a deterministic environment and tested in a random environment, SRL+CS achieves nearly 100% average accuracy compared to DSRL’s 70% and DQN’s 50% accuracy. 
To the best of our knowledge, this is the first case of near perfect zero-shot transfer learning using Reinforcement Learning.", "title": "" }, { "docid": "56321ec6dfc3d4c55fc99125e942cf44", "text": "The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed – such as cross-validation or percentage splits without proper instance definition – prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most reallife settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.", "title": "" }, { "docid": "8b3431783f1dc699be1153ad80348d3e", "text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”", "title": "" } ]
scidocsrr
4c79c8887f81f5276b328f84cd4c847e
Safe and Reliable Path Planning for the Autonomous Vehicle Verdino
[ { "docid": "e0f5f73eb496b77cddc5820fb6306f4b", "text": "Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenét-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.", "title": "" }, { "docid": "5e9dce428a2bcb6f7bc0074d9fe5162c", "text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.", "title": "" } ]
[ { "docid": "bdda2074b0ab2e12047d0702acb4d20a", "text": "Ferroptosis has emerged as a new form of regulated necrosis that is implicated in various human diseases. However, the mechanisms of ferroptosis are not well defined. This study reports the discovery of multiple molecular components of ferroptosis and its intimate interplay with cellular metabolism and redox machinery. Nutrient starvation often leads to sporadic apoptosis. Strikingly, we found that upon deprivation of amino acids, a more rapid and potent necrosis process can be induced in a serum-dependent manner, which was subsequently determined to be ferroptosis. Two serum factors, the iron-carrier protein transferrin and amino acid glutamine, were identified as the inducers of ferroptosis. We further found that the cell surface transferrin receptor and the glutamine-fueled intracellular metabolic pathway, glutaminolysis, played crucial roles in the death process. Inhibition of glutaminolysis, the essential component of ferroptosis, can reduce heart injury triggered by ischemia/reperfusion, suggesting a potential therapeutic approach for treating related diseases.", "title": "" }, { "docid": "fd7b4fb86b650c18cbc1d720679d94d5", "text": "Thermal sensors are used in modern microprocessors to provide information for: 1) throttling at the maximum temperature of operation, and 2) fan regulation at temperatures down to 50°C. Today's microprocessors are thermally limited in many applications, so accurate temperature readings are essential in order to maximize performance. There are fairly large thermal gradients across the core, which vary for different instructions, so it is necessary to position thermal sensors near hot-spots. In addition, the locations of the hot-spots may not be predictable during the design phase. Thus it is necessary for hot-spot sensors to be small enough to be moved late in the design cycle or even after first Silicon.", "title": "" }, { "docid": "cafaea34fd2183d6c43db3f46adde2f2", "text": "Currently, filling, smoothing, or recontouring the face through the use of injectable fillers is one of the most popular forms of cosmetic surgery. Because these materials promise a more youthful appearance without anesthesia in a noninvasive way, various fillers have been used widely in different parts of the world. However, most of these fillers have not been approved by the Food and Drug Administration, and their applications might cause unpleasant disfiguring complications. This report describes a case of foreign body granuloma in the cheeks secondary to polyethylene glycol injection and shows the possible complications associated with the use of facial fillers.", "title": "" }, { "docid": "27d7f7935c235a3631fba6e3df08f623", "text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality humanlabeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. 
It trails but is not far from the currently best systems for the class CHEM.", "title": "" }, { "docid": "cb815a01960490760e2ac581e26f4486", "text": "To solve the weakly-singular Volterra integro-differential equations, the combined method of the Laplace Transform Method and the Adomian Decomposition Method is used. As a result, series solutions of the equations are constructed. In order to explore the rapid decay of the equations, the pade approximation is used. The results present validity and great potential of the method as a powerful algorithm in order to present series solutions for singular kind of differential equations.", "title": "" }, { "docid": "32ef354fff832d438e02ce5800f0909f", "text": "In this fast life, everyone is in hurry to reach their destinations. In this case waiting for the buses is not reliable. People who rely on the public transport their major concern is to know the real time location of the bus for which they are waiting for and the time it will take to reach their bus stop. This information helps people in making better travelling decisions. This paper gives the major challenges in the public transport system and discuses various approaches to intelligently manage it. Current position of the bus is acquired by integrating GPS device on the bus and coordinates of the bus are sent by either GPRS service provided by GSM networks or SMS or RFID. GPS device is enabled on the tracking device and this information is sent to centralized control unit or directly at the bus stops using RF receivers. This system is further integrated with the historical average speeds of each segment. This is done to improve the accuracy by including the factors like volume of traffic, crossings in each segment, day and time of day. People can track information using LEDs at bus stops, SMS, web application or Android application. GPS coordinates of the bus when sent to the centralized server where various arrival time estimation algorithms are applied using historical speed patterns.", "title": "" }, { "docid": "d15b28262bf453a19fe69c6b17b4d727", "text": "Eye gaze tracking is a promising input method which is gradually finding its way into the mainstream. An obvious question to arise is whether it can be used for point-and-click tasks, as an alternative for mouse or touch. Pointing with gaze is both fast and natural, although its accuracy is limited. There are still technical challenges with gaze tracking, as well as inherent physiological limitations. Furthermore, providing an alternative to clicking is challenging.\n We are considering use cases where input based purely on gaze is desired, and the click targets are discrete user interface (UI) elements which are too small to be reliably resolved by gaze alone, e.g., links in hypertext. We present Actigaze, a new gaze-only click alternative which is fast and accurate for this scenario. A clickable user interface element is selected by dwelling on one of a set of confirm buttons, based on two main design contributions: First, the confirm buttons stay on fixed positions with easily distinguishable visual identifiers such as colors, enabling procedural learning of the confirm button position. Secondly, UI elements are associated with confirm buttons through the visual identifiers in a way which minimizes the likelihood of inadvertent clicks. 
We evaluate two variants of the proposed click alternative, comparing them against the mouse and another gaze-only click alternative.", "title": "" }, { "docid": "01fa6041e3a2c555c0e58a41a5521f8e", "text": "This paper presents a detailed description of finite control set model predictive control (FCS-MPC) applied to power converters. Several key aspects related to this methodology are, in depth, presented and compared with traditional power converter control techniques, such as linear controllers with pulsewidth-modulation-based methods. The basic concepts, operating principles, control diagrams, and results are used to provide a comparison between the different control strategies. The analysis is performed on a traditional three-phase voltage source inverter, used as a simple and comprehensive reference frame. However, additional topologies and power systems are addressed to highlight differences, potentialities, and challenges of FCS-MPC. Among the conclusions are the feasibility and great potential of FCS-MPC due to present-day signal-processing capabilities, particularly for power systems with a reduced number of switching states and more complex operating principles, such as matrix converters. In addition, the possibility to address different or additional control objectives easily in a single cost function enables a simple, flexible, and improved performance controller for power-conversion systems.", "title": "" }, { "docid": "8cd701723c72b16dfe7d321cb657ee31", "text": "A coupled-inductor double-boost inverter (CIDBI) is proposed for microinverter photovoltaic (PV) module system, and the control strategy applied to it is analyzed. Also, the operation principle of the proposed inverter is discussed and the gain from dc to ac is deduced in detail. The main attribute of the CIDBI topology is the fact that it generates an ac output voltage larger than the dc input one, depending on the instantaneous duty cycle and turns ratio of the coupled inductor as well. This paper points out that the gain is proportional to the duty cycle approximately when the duty cycle is around 0.5 and the synchronized pulsewidth modulation can be applicable to this novel inverter. Finally, the proposed inverter servers as a grid inverter in the grid-connected PV system and the experimental results show that the CIDBI can implement the single-stage PV-grid-connected power generation competently and be of small volume and high efficiency by leaving out the transformer or the additional dc-dc converter.", "title": "" }, { "docid": "15a32d88604b2894b9a6f323907fac1d", "text": "We examined closely the cerebellar circuit model that we have proposed previously. The model granular layer generates a finite but very long sequence of active neuron populations without recurrence, which is able to represent the passage of time. For all the possible binary patterns fed into mossy fibres, the circuit generates the same number of different sequences of active neuron populations. Model Purkinje cells that receive parallel fiber inputs from neurons in the granular layer learn to stop eliciting spikes at the timing instructed by the arrival of signals from the inferior olive. These functional roles of the granular layer and Purkinje cells are regarded as a liquid state generator and readout neurons, respectively. 
Thus, the cerebellum that has been considered to date as a biological counterpart of a perceptron is reinterpreted to be a liquid state machine that possesses powerful information processing capability more than a perceptron.", "title": "" }, { "docid": "7a12529d179d9ca6b94dbac57c54059f", "text": "A novel design of a hand functions task training robotic system was developed for the stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke person using the electromyography (EMG) signals measured from the hemiplegic side. This training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable to drive 2 degrees of freedom (DOFs) of each finger at the same time. Powered by the linear actuator, the finger assembly achieves 55 degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and 65 degree range of motion (ROM) at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit for different finger length. With this task training system, stroke subject can open and close their impaired hand using their own intention to carry out some of the daily living tasks.", "title": "" }, { "docid": "424bf67761e234f6cf85eacabf38a502", "text": "Due to poor efficiencies of Incandescent Lamps (ILs), Fluorescent Lamps (FLs) and Compact Fluorescent Lamps (CFLs) are increasingly used in residential and commercial applications. This proliferation of FLs and CFLs increases the harmonics level in distribution systems that could affect power systems and end users. In order to quantify the harmonics produced by FLs and CFLs precisely, accurate modelling of these loads are required. Matlab Simulink is used to model and simulate the full models of FLs and CFLs to give close results to the experimental measurements. Moreover, a Constant Load Power (CLP) model is also modelled and its results are compared with the full models of FLs and CFLs. This CLP model is much faster to simulate and easier to model than the full model. Such models help engineers and researchers to evaluate the harmonics exist within households and commercial buildings.", "title": "" }, { "docid": "3f11c629670d986b8a266bae08e8a8d0", "text": "SURVIVAL ANALYSIS APPROACH FOR EARLY PREDICTION OF STUDENT DROPOUT by SATTAR AMERI December 2015 Advisor: Dr. Chandan Reddy Major: Computer Science Degree: Master of Science Retention of students at colleges and universities has long been a concern for educators for many decades. The consequences of student attrition are significant for both students, academic staffs and the overall institution. Thus, increasing student retention is a long term goal of any academic institution. The most vulnerable students at all institutions of higher education are the freshman students, who are at the highest risk of dropping out at the beginning of their study. Consequently, the early identification of “at-risk” students is a crucial task that needs to be addressed precisely. In this thesis, we develop a framework for early prediction of student success using survival analysis approach. We propose time-dependent Cox (TD-Cox), which is based on the Cox proportional hazard regression model and also captures time-varying factors to address the challenge of predicting dropout students as well as the semester that the dropout will occur, to enable proactive interventions. 
This is critical in student retention problem because not only correctly classifying whether student is going to dropout is important but also when this is going to happen is crucial to investigate. We evaluate our method on real student data collected at Wayne State University. The results show that the proposed Cox-based framework can predict the student dropout and the semester of dropout with high accuracy and precision compared to the other alternative state-of-the-art methods.", "title": "" }, { "docid": "acd458070c613d23618ccb9b4620da56", "text": "The Intelligent vehicle (IV) is experiencing revolutionary growth in research and industry, but it still suffers from many security vulnerabilities. Traditional security methods are incapable to provide secure IV communication. The major issues in IV communication, are trust, data accuracy and reliability of communication data in the communication channel. Blockchain technology works for the crypto currency, Bit-coin, which is recently used to build trust and reliability in peer-topeer networks having similar topologies as IV Communication. In this paper, we are proposing, Intelligent Vehicle-Trust Point (IV-TP) mechanism for IV communication among IVs using Blockchain technology. The IVs communicated data provides security and reliability using our proposed IV-TP. Our IV-TP mechanism provides trustworthiness for vehicles behavior, and vehicles legal and illegal action. Our proposal presents a reward based system, an exchange of some IV-TP among IVs, during successful communication. For the data management of the IVTP, we are using blockchain technology in the intelligent transportation system (ITS), which stores all IV-TP details of every vehicle and is accessed ubiquitously by IVs. In this paper, we evaluate our proposal with the help of intersection use case scenario for intelligent vehicles communication. Keywords— Blockchain, intelligent vehicles, security, component; ITS", "title": "" }, { "docid": "3b8bf3bda424456acec8a02a2240f18c", "text": "Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities being now shared. For example, vulnerabilities found in APIs no longer affect only individual projects but instead might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task since many of the existing resources required for such analysis still rely on proprietary knowledge representation. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services, to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study, to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.", "title": "" }, { "docid": "f5bd155887dd2e40ad2d7a26bb5a6391", "text": "The field of research in digital humanities is undergoing a rapid transformation in recent years. 
A deep reflection on the current needs of the agents involved that takes into account key issues such as the inclusion of citizens in the creation and consumption of the cultural resources offered, the volume and complexity of datasets, available infrastructures, etcetera, is necessary. Present technologies make it possible to achieve projects that were impossible until recently, but the field is currently facing the challenge of proposing frameworks and systems to generalize and reproduce these proposals in other knowledge domains with similar but heterogeneous data sets. The track \"New trends in digital humanities\" of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM 2016), tries to set the basis of good practice in digital humanities by reflecting on models, technologies and methods to carry the transformation out.", "title": "" }, { "docid": "5794c31579595f8267bbad9278fe5fd2", "text": "Designed based on the underactuated mechanism, HIT/DLR Prosthetic Hand is a multi-sensory flve-flngered bio- prosthetic hand. Similarly with adult's hand, it is simple constructed and comprises 13 joints. Three motors actuate the thumb, index finger and the other three fingers each. Actuated by a motor, the thumb can move along cone surface, which resembles human thumb and is superior in the appearance. Driven by another motor and transmitted by springs, the mid finger, ring finger and little finger can envelop objects with complex shape. The appearance designation and sensory system are introduced. The grasp experiments are presented in detail. The hand has been greatly improved from HIT-ARhand. It was verified from experimentations, the hand has strong capability of self adaptation, can accomplish precise and power grasp for objects with complex shape.", "title": "" }, { "docid": "2f9e93892a013452df2cce84374ab7d7", "text": "Minimum cut/maximum flow algorithms on graphs have emerged as an increasingly useful tool for exactor approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style \"push -relabel\" methods and algorithms based on Ford-Fulkerson style \"augmenting paths.\" We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow/min-cut algorithm is available upon request for research purposes.", "title": "" }, { "docid": "926be2a7a6202ca50aa57b1a54e8e8cd", "text": "Human needs with technical devices are increasing rapidly. In order to meet their requirements the system should be accurate and fast. The fastness and accuracy of a system depends on its intra and inter peripherals/algorithms. In the view of this, the proposed paper came into existence. 
It focuses on the development of the Fast Fourier Transform (FFT) algorithm, based on Decimation-In-Time (DIT) domain, called Radix-4 DIT-FFT algorithm. VHDL is used as a design entity and for simulation Xilinx ISE. The synthesis results show that the computation for calculating the 256-bit 64-point FFT is efficient in terms of speed and is implemented on FPGA Spartan-3E kit.", "title": "" }, { "docid": "a502e61d4717396d37fa9a53ad604616", "text": "Hyperthermia, the procedure of raising the temperature of a part of or the whole body above normal for a defined period of time, is applied alone or as an adjunctive with various established cancer treatment modalities such as radiotherapy and chemotherapy. Clinical hyperthermia falls into three broad categories, namely, (1) localized hyperthermia, (2) regional hyperthermia, and (3) whole-body hyperthermia (WBH). Because of the various problems associated with each type of treatment, different heating techniques have evolved. In this article, background information on the biological rationale and current status of technologies concerning heating equipment for the application of hyperthermia to human cancer treatment are provided. The results of combinations of other modalities such as radiotherapy or chemotherapy with hyperthermia as a new treatment strategy are summarized. The article concludes with a discussion of challenges and opportunities for the future.", "title": "" } ]
scidocsrr
6c3fdeae4358c225363f50e856a465a2
Bayesian Affect Control Theory
[ { "docid": "3ef6a2d1c125d5c7edf60e3ceed23317", "text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10 and 10 states respectively. Our MonteCarlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.", "title": "" }, { "docid": "31a1a5ce4c9a8bc09cbecb396164ceb4", "text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.", "title": "" } ]
[ { "docid": "7b6cf139cae3e9dae8a2886ddabcfef0", "text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.", "title": "" }, { "docid": "f5e3014f479556cde21321cf1ce8f9e3", "text": "Physiological signals are widely used to perform medical assessment for monitoring an extensive range of pathologies, usually related to cardio-vascular diseases. Among these, both PhotoPlethysmoGraphy (PPG) and Electrocardiography (ECG) signals are those more employed. PPG signals are an emerging non-invasive measurement technique used to study blood volume pulsations through the detection and analysis of the back-scattered optical radiation coming from the skin. ECG is the process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin. In the present paper we propose a physiological ECG/PPG \"combo\" pipeline using an innovative bio-inspired nonlinear system based on a reaction-diffusion mathematical model, implemented by means of the Cellular Neural Network (CNN) methodology, to filter PPG signal by assigning a recognition score to the waveforms in the time series. The resulting \"clean\" PPG signal exempts from distortion and artifacts is used to validate for diagnostic purpose an EGC signal simultaneously detected for a same patient. The multisite combo PPG-ECG system proposed in this work overpasses the limitations of the state of the art in this field providing a reliable system for assessing the above-mentioned physiological parameters and their monitoring over time for robust medical assessment. The proposed system has been validated and the results confirmed the robustness of the proposed approach.", "title": "" }, { "docid": "70ea3e32d4928e7fd174b417ec8b6d0e", "text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. 
We also shed light on the properties of deep networks in relation to the geometry of the loss function.", "title": "" }, { "docid": "ae186ad5dce2bd3b32ffa993d33625a5", "text": "We present a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR). We acquire spherical light field datasets with two novel light field camera rigs designed for portable and efficient light field acquisition. We introduce a novel real-time light field reconstruction algorithm that uses a per-view geometry and a disk-based blending field. We also demonstrate how to use a light field prefiltering operation to project from a high-quality offline reconstruction model into our real-time model while suppressing artifacts. We introduce a practical approach for compressing light fields by modifying the VP9 video codec to provide high quality compression with real-time, random access decompression.\n We combine these components into a complete light field system offering convenient acquisition, compact file size, and high-quality rendering while generating stereo views at 90Hz on commodity VR hardware. Using our system, we built a freely available light field experience application called Welcome to Light Fields featuring a library of panoramic light field stills for consumer VR which has been downloaded over 15,000 times.", "title": "" }, { "docid": "efc48edab4d039b94a87d473e0158033", "text": "The classifier built from a data set with a highly skewed class distribution generally predicts the more frequently occurring classes much more often than the infrequently occurring classes. This is largely due to the fact that most classifiers are designed to maximize accuracy. In many instances, such as for medical diagnosis, this classification behavior is unacceptable because the minority class is the class of primary interest (i.e., it has a much higher misclassification cost than the majority class). In this paper we compare three methods for dealing with data that has a skewed class distribution and nonuniform misclassification costs. The first method incorporates the misclassification costs into the learning algorithm while the other two methods employ oversampling or undersampling to make the training data more balanced. In this paper we empirically compare the effectiveness of these methods in order to determine which produces the best overall classifier—and under what circumstances.", "title": "" }, { "docid": "8abedc8a3f3ad84c940e38735b759745", "text": "Degeneration is a senescence process that occurs in all living organisms. Although tremendous efforts have been exerted to alleviate this degenerative tendency, minimal progress has been achieved to date. The nematode, Caenorhabditis elegans (C. elegans), which shares over 60% genetic similarities with humans, is a model animal that is commonly used in studies on genetics, neuroscience, and molecular gerontology. However, studying the effect of exercise on C. elegans is difficult because of its small size unlike larger animals. To this end, we fabricated a flow chamber, called \"worm treadmill,\" to drive worms to exercise through swimming. In the device, the worms were oriented by electrotaxis on demand. After the exercise treatment, the lifespan, lipofuscin, reproductive capacity, and locomotive power of the worms were analyzed. The wild-type and the Alzheimer's disease model strains were utilized in the assessment. 
Although degeneration remained irreversible, both exercise-treated strains indicated an improved tendency compared with their control counterparts. Furthermore, low oxidative stress and lipofuscin accumulation were also observed among the exercise-treated worms. We conjecture that escalated antioxidant enzymes imparted the worms with an extra capacity to scavenge excessive oxidative stress from their bodies, which alleviated the adverse effects of degeneration. Our study highlights the significance of exercise in degeneration from the perspective of the simple life form, C. elegans.", "title": "" }, { "docid": "7c829563e98a6c75eb9b388bf0627271", "text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.", "title": "" }, { "docid": "fd809ccbf0042b84147e88e4009ab894", "text": "Professional sports is a roughly $500 billion dollar industry that is increasingly data-driven. In this paper we show how machine learning can be applied to generate a model that could lead to better on-field decisions by managers of professional baseball teams. 
Specifically we show how to use regularized linear regression to learn pitcher-specific predictive models that can be used to help decide when a starting pitcher should be replaced. A key step in the process is our method of converting categorical variables (e.g., the venue in which a game is played) into continuous variables suitable for the regression. Another key step is dealing with situations in which there is an insufficient amount of data to compute measures such as the effectiveness of a pitcher against specific batters. \n For each season we trained on the first 80% of the games, and tested on the rest. The results suggest that using our model could have led to better decisions than those made by major league managers. Applying our model would have led to a different decision 48% of the time. For those games in which a manager left a pitcher in that our model would have removed, the pitcher ended up performing poorly 60% of the time.", "title": "" }, { "docid": "e2be1b93be261deac59b5afde2f57ae1", "text": "The electronic and transport properties of carbon nanotube has been investigated in presence of ammonia gas molecule, using Density Functional Theory (DFT) based ab-initio approach. The model of CNT sensor has been build using zigzag (7, 0) CNT with a NH3 molecule adsorbed on its surface. The presence of NH3 molecule results in increase of CNT band gap. From the analysis of I-V curve, it is observed that the adsorption of NH3 leads to different voltage and current curve in comparison to its pristine state confirms the presence of NH3.", "title": "" }, { "docid": "5e7976392b26e7c2172d2e5c02d85c57", "text": "A multiprocessor virtual machine benefits its guest operating system in supporting scalable job throughput and request latency—useful properties in server consolidation where servers require several of the system processors for steady state or to handle load bursts. Typical operating systems, optimized for multiprocessor systems in their use of spin-locks for critical sections, can defeat flexible virtual machine scheduling due to lock-holder preemption and misbalanced load. The virtual machine must assist the guest operating system to avoid lock-holder preemption and to schedule jobs with knowledge of asymmetric processor allocation. We want to support a virtual machine environment with flexible scheduling policies, while maximizing guest performance. This paper presents solutions to avoid lock-holder preemption for both fully virtualized and paravirtualized environments. Experiments show that we can nearly eliminate the effects of lock-holder preemption. Furthermore, the paper presents a scheduler feedback mechanism that despite the presence of asymmetric processor allocation achieves optimal and fair load balancing in the guest operating system.", "title": "" }, { "docid": "f0af945042c44b20d6bd9f81a0b21b6b", "text": "We investigate a technique to adapt unsupervised word embeddings to specific applications, when only small and noisy labeled datasets are available. Current methods use pre-trained embeddings to initialize model parameters, and then use the labeled data to tailor them for the intended task. However, this approach is prone to overfitting when the training is performed with scarce and noisy data. To overcome this issue, we use the supervised data to find an embedding subspace that fits the task complexity. All the word representations are adapted through a projection into this task-specific subspace, even if they do not occur on the labeled dataset. 
This approach was recently used in the SemEval 2015 Twitter sentiment analysis challenge, attaining state-of-the-art results. Here we show results improving those of the challenge, as well as additional experiments in a Twitter Part-Of-Speech tagging task.", "title": "" }, { "docid": "babdf14e560236f5fcc8a827357514e5", "text": "Email: [email protected] Abstract: The NP-hard (complete) team orienteering problem is a particular vehicle routing problem with the aim of maximizing the profits gained from visiting control points without exceeding a travel cost limit. The team orienteering problem has a number of applications in several fields such as athlete recruiting, technician routing and tourist trip. Therefore, solving optimally the team orienteering problem would play a major role in logistic management. In this study, a novel randomized population constructive heuristic is introduced. This heuristic constructs a diversified initial population for population-based metaheuristics. The heuristics proved its efficiency. Indeed, experiments conducted on the well-known benchmarks of the team orienteering problem show that the initial population constructed by the presented heuristic wraps the best-known solution for 131 benchmarks and good solutions for a great number of benchmarks.", "title": "" }, { "docid": "5a601e08824185bafeb94ac432b6e92e", "text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.", "title": "" }, { "docid": "ede6bef7b623e95cf99b1d7c85332abb", "text": "The design of a temperature compensated IC on-chip oscillator and a low voltage detection circuitry sharing the bandgap reference is described. The circuit includes a new bandgap isolation strategy to reduce oscillator noise coupled through the current sources. The IC oscillator provides a selectable clock (11.6 MHz or 21.4 MHz) with digital trimming to minimize process variations. After fine-tuning the oscillator to the target frequency, the temperature compensated voltage and current references guarantees less than /spl plusmn/2.5% frequency variation from -40 to 125/spl deg/C, when operating from 3 V to 5 V of power supply. The low voltage detection circuit monitors the supply voltage applied to the system and generates the appropriate warning or even initiates a system shutdown before the in-circuit SoC presents malfunction. The module was implemented in a 0.5 /spl mu/m CMOS technology, occupies an area of 360 /spl times/ 530 /spl mu/m/sub 2/ and requires no external reference or components.", "title": "" }, { "docid": "1eb6558dab37b34d3c7c261654535104", "text": "We present a learning framework for abstracting complex shapes by learning to assemble objects using 3D volumetric primitives. In addition to generating simple and geometrically interpretable explanations of 3D objects, our framework also allows us to automatically discover and exploit consistent structure in the data. 
We demonstrate that using our method allows predicting shape representations which can be leveraged for obtaining a consistent parsing across the instances of a shape collection and constructing an interpretable shape similarity measure. We also examine applications for image-based prediction as well as shape manipulation.", "title": "" }, { "docid": "e4b6dbd8238160457f14aacb8f9717ff", "text": "Abs t r ac t . The PKZIP program is one of the more widely used archive/ compression programs on personM, computers. It also has many compatible variants on other computers~ and is used by most BBS's and ftp sites to compress their archives. PKZIP provides a stream cipher which allows users to scramble files with variable length keys (passwords). In this paper we describe a known pla.intext attack on this cipher, which can find the internal representation of the key within a few hours on a personal computer using a few hundred bytes of known plaintext. In many cases, the actual user keys can also be found from the internal representation. We conclude that the PKZIP cipher is weak, and should not be used to protect valuable data.", "title": "" }, { "docid": "4096499f4e34f6c1f0c3bb0bb63fb748", "text": "A detailed examination of evolving traffic characteristics, operator requirements, and network technology trends suggests a move away from nonblocking interconnects in data center networks (DCNs). As a result, recent efforts have advocated oversubscribed networks with the capability to adapt to traffic requirements on-demand. In this paper, we present the design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs. Leveraging runtime reconfigurable optical devices, OSA dynamically changes its topology and link capacities, thereby achieving unprecedented flexibility to adapt to dynamic traffic patterns. Extensive analytical simulations using both real and synthetic traffic patterns demonstrate that OSA can deliver high bisection bandwidth (60%-100% of the nonblocking architecture). Implementation and evaluation of a small-scale functional prototype further demonstrate the feasibility of OSA.", "title": "" }, { "docid": "1d9b50bf7fa39c11cca4e864bbec5cf3", "text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.", "title": "" }, { "docid": "5f49c93d7007f0f14f1410ce7805b29a", "text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. 
Encouraging and guiding patients towards improved self-monitoring is the prerequisite for using active self-control strategies and for increasing self-efficacy expectations. This includes developing and practising pain-coping strategies such as attention diversion and enjoyment training. Particular importance is attached to establishing activity regulation that structures an appropriate balance between recovery and demand phases. Intervention options here include teaching relaxation techniques, problem-solving training, specific skills training, and elements of cognitive therapy. Building alternative cognitive and action-oriented coping approaches serves to improve the handling of internal and external stressors. The beneficial conditions of group-dynamic processes are exploited. Individual therapeutic interventions address specific psychological comorbidities and provide individual support for occupational and social reintegration. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.", "title": "" }, { "docid": "50c639dfa7063d77cda26666eabeb969", "text": "This paper addresses the problem of detecting people in two dimensional range scans. Previous approaches have mostly used pre-defined features for the detection and tracking of people. We propose an approach that utilizes a supervised learning technique to create a classifier that facilitates the detection of people. In particular, our approach applies AdaBoost to train a strong classifier from simple features of groups of neighboring beams corresponding to legs in range data. Experimental results carried out with laser range data illustrate the robustness of our approach even in cluttered office environments.", "title": "" } ]
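The leg-detection passage above trains AdaBoost on simple geometric features of groups of neighboring laser beams. A minimal, hedged sketch of that idea follows; the three features and the training setup here are illustrative assumptions, not the feature set evaluated in the paper.

```python
# Illustrative sketch only: boosting simple geometric features of laser-scan
# segments to label them as leg / not-leg. The features below are assumed
# placeholders, not the features used in the paper above.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def segment_features(points):
    """points: (N, 2) array of x/y range readings forming one beam group."""
    points = np.asarray(points, dtype=float)
    width = np.linalg.norm(points[0] - points[-1])             # segment width
    centroid = points.mean(axis=0)
    spread = np.linalg.norm(points - centroid, axis=1).mean()  # compactness
    return np.array([len(points), width, spread])

def train_leg_detector(segments, labels):
    """segments: list of (N, 2) arrays; labels: 1 for leg, 0 otherwise."""
    X = np.stack([segment_features(s) for s in segments])
    clf = AdaBoostClassifier(n_estimators=100)  # decision stumps by default
    return clf.fit(X, np.asarray(labels))
```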
scidocsrr
69d75f0ed2895666580edf3eaab50778
Dual-polarized Vivaldi array for X- and Ku-Band
[ { "docid": "873be467576bff16904d7abc6c961394", "text": "A bunny ear shaped combline element for dual-polarized compact aperture arrays is presented which provides relatively low noise temperature and low level cross polarization over a wide bandwidth and wide scanning angles. The element is corrugated along the outer edges between the elements to control the complex mutual coupling at high scan angles. This produces nearly linear polarized waves in the principle planes and lower than -10 dB cross polarization in the intercardinal plane. To achieve a low noise temperature, only metal conductors are used, which also results in a low cost of manufacture. Dual linear polarization or circular polarization can be realized by adopting two different arrangements of the orthogonal elements. The performances for both mechanical arrangements are investigated. The robustness of the new design over the conventional Vivaldi-type antennas is highlighted.", "title": "" } ]
[ { "docid": "3efaaabf9a93460bace2e70abc71801d", "text": "BACKGROUND\nNumerous studies report an association between social support and protection from depression, but no systematic review or meta-analysis exists on this topic.\n\n\nAIMS\nTo review systematically the characteristics of social support (types and source) associated with protection from depression across life periods (childhood and adolescence; adulthood; older age) and by study design (cross-sectional v cohort studies).\n\n\nMETHOD\nA systematic literature search conducted in February 2015 yielded 100 eligible studies. Study quality was assessed using a critical appraisal checklist, followed by meta-analyses.\n\n\nRESULTS\nSources of support varied across life periods, with parental support being most important among children and adolescents, whereas adults and older adults relied more on spouses, followed by family and then friends. Significant heterogeneity in social support measurement was noted. Effects were weaker in both magnitude and significance in cohort studies.\n\n\nCONCLUSIONS\nKnowledge gaps remain due to social support measurement heterogeneity and to evidence of reverse causality bias.", "title": "" }, { "docid": "5f41bc81a483dd4deb5e70272d32ac77", "text": "In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.", "title": "" }, { "docid": "45356e33e51d8d2e2bfb6365d8269a69", "text": "We survey research on self-driving cars published in the literature focusing on autonomous cars developed since the DARPA challenges, which are equipped with an autonomy system that can be categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacles mapping, moving obstacles detection and tracking, road mapping, traffic signalization detection and recognition, among others. 
The decision-making system is commonly partitioned as well into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the UFES's car, IARA. Finally, we list prominent autonomous research cars developed by technology companies and reported in the media.", "title": "" }, { "docid": "e051c1dafe2a2f45c48a79c320894795", "text": "In this paper we present a graph-based model that, utilizing relations between groups of System-calls, detects whether an unknown software sample is malicious or benign, and classifies a malicious software to one of a set of known malware families. More precisely, we utilize the System-call Dependency Graphs (or, for short, ScD-graphs), obtained by traces captured through dynamic taint analysis. We design our model to be resistant against strong mutations applying our detection and classification techniques on a weighted directed graph, namely Group Relation Graph, or Gr-graph for short, resulting from ScD-graph after grouping disjoint subsets of its vertices. For the detection process, we propose the $$\\Delta $$ Δ -similarity metric, and for the process of classification, we propose the SaMe-similarity and NP-similarity metrics consisting the SaMe-NP similarity. Finally, we evaluate our model for malware detection and classification showing its potentials against malicious software measuring its detection rates and classification accuracy.", "title": "" }, { "docid": "9cdc7b6b382ce24362274b75da727183", "text": "Collaborative spectrum sensing is subject to the attack of malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's report for spectrum sensing. Many existing attacker-detection schemes are based on the knowledge of the attacker's strategy and thus apply the Bayesian attacker detection. However, in practical cognitive radio systems the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on the abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called independent attack), it is shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.", "title": "" }, { "docid": "7e994507b7d1986bbc02411b221e9223", "text": "Users of online social networks voluntarily participate in different user groups or communities. 
Research suggests the presence of strong local community structure in these social networks, i.e., users tend to meet other people via mutual friendship. Recently, different approaches have considered community structure information for increasing the link prediction accuracy. Nevertheless, these approaches consider that users belong to just one community. In this paper, we propose three measures for the link prediction task which take into account all different communities that users belong to. We perform experiments for both unsupervised and supervised link prediction strategies. The evaluation method considers the link imbalance problem. Results show that our proposals outperform state-of-the-art unsupervised link prediction measures and help to improve the link prediction task approached as a supervised strategy.", "title": "" }, { "docid": "0444b38c0d20c999df4cb1294b5539c3", "text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. In the proposed fully redundant adder (vs. semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest comparable high-performance designs and outperform all previous similar adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has led to a VLSI-friendly recursive partial product reduction tree. Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using the TSMC 0.13 μm standard CMOS process under various timing constraints.", "title": "" }, { "docid": "802d86ab9c6f55c25abe7c2b0c78544e", "text": "The initial vision for intelligent tutoring systems involved powerful, multi-faceted systems that would leverage rich models of students and pedagogies to create complex learning interactions. But the intelligent tutoring systems used at scale today are much simpler.
In this article, I present hypotheses on the factors underlying this development, and discuss the potential of educational data mining driving human decision-making as an alternate paradigm for online learning, focusing on intelligence amplification rather than artificial intelligence.", "title": "" }, { "docid": "a297eea91a94a2945f6860b405205681", "text": "AIM\nThe aim of this study was to determine the treatment outcome of the use of a porcine monolayer collagen matrix (mCM) to augment peri-implant soft tissue in conjunction with immediate implant placement as an alternative to the patient's own connective tissue.\n\n\nMATERIALS AND METHODS\nA total of 27 implants were placed immediately in 27 patients (14 males and 13 females, with a mean age of 52.2 years) with simultaneous augmentation of the soft tissue by the use of an mCM. The patients were randomly divided into two groups: Group I: An envelope flap was created and the mCM was left coronally uncovered, and group II: A coronally repositioned flap was created and the mCM was covered by the mucosa. Soft-tissue thickness (STTh) was measured at the time of surgery (T0) and 6 months postoperatively (T1) using a customized stent. Cone beam computed tomographies (CBCTs) were taken from 12 representative cases at T1. A stringent plaque control regimen was enforced in all the patients during the 6-month observation period.\n\n\nRESULTS\nMean STTh change was similar in both groups (0.7 ± 0.2 and 0.7 ± 0.1 mm in groups I and II respectively). The comparison of STTh between T0 and T1 showed a statistically significant increase of soft tissue in both groups I and II as well as in the total examined population (p < 0.001). The STTh change as well as matrix thickness loss were comparable in both groups (p > 0.05). The evaluation of the CBCTs did not show any signs of resorption of the buccal bone plate.\n\n\nCONCLUSION\nWithin the limitations of this study, it could be concluded that the collagen matrix used in conjunction with immediate implant placement leads to an increased thickness of peri-implant soft tissue independent of the flap creation technique and could be an alternative to a connective tissue graft.\n\n\nCLINICAL SIGNIFICANCE\nThe collagen matrix used seems to be a good alternative to the patient's own connective tissue and could be used for soft tissue augmentation around dental implants.", "title": "" }, { "docid": "06ba1eeef81df1b9a8888fd33f29855e", "text": "Hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods. We examine the utility of using near-infrared hyperspectral images for the recognition of faces over a database of 200 subjects. The hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter to provide 31 bands over the near-infrared (0.7 μm-1.0 μm). Spectral measurements over the near-infrared allow the sensing of subsurface tissue structure which is significantly different from person to person, but relatively stable over time. The local spectral properties of human tissue are nearly invariant to face orientation and expression which allows hyperspectral discriminants to be used for recognition over a large range of poses and expressions. We describe a face recognition algorithm that exploits spectral measurements for multiple facial tissue types.
We demonstrate experimentally that this algorithm can be used to recognize faces over time in the presence of changes in facial pose", "title": "" }, { "docid": "62ba312d26ffbbfdd52130c08031905f", "text": "The effects of intravascular laser irradiation of blood (ILIB), with 405 and 632.8 nm on serum blood sugar (BS) level, were comparatively studied. Twenty-four diabetic type 2 patients received 14 sessions of ILIB with blue and red lights. BS was measured before and after therapy. Serum BS decreased highly significant after ILIB with both red and blue lights (p < 0.0001), but we did not find significant difference between red and blue lights. The ILIB effect would be of benefit in the clinical treatment of diabetic type 2 patients, irrespective of lasers (blue or red lights) that are used.", "title": "" }, { "docid": "c55ddf94419271b6eed9358684750ca4", "text": "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard technique for pruning weights naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the lottery ticket hypothesis: dense, randomly-initialized feed-forward networks contain subnetworks (winning tickets) that—when trained in isolation—arrive at comparable test accuracy in a comparable number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Furthermore, the winning tickets we find above that size learn faster than the original network and exhibit higher test accuracy.", "title": "" }, { "docid": "e471e41553bf7c229a38f3d226ff8a28", "text": "Large AC machines are sometimes fed by multiple inverters. This paper presents the complete steady-state analysis of the PM synchronous machine with multiplex windings, suitable for driving by multiple independent inverters. Machines with 4, 6 and 9 phases are covered in detail. Particular attention is given to the magnetic interactions not only between individual phases, but between channels or groups of phases. This is of interest not only for determining performance and designing control systems, but also for analysing fault tolerance. It is shown how to calculate the necessary self- and mutual inductances and how to reduce them to a compact dq-axis model without loss of detail.", "title": "" }, { "docid": "c5078747fa86e925850fcd93df28219d", "text": "Experimentally investigating the relationship between moral judgment and action is difficult when the action of interest entails harming others. 
We adopt a new approach to this problem by placing subjects in an immersive, virtual reality environment that simulates the classic \"trolley problem.\" In this moral dilemma, the majority of research participants behaved as \"moral utilitarians,\" either (a) acting to cause the death of one individual in order to save the lives of five others, or (b) abstaining from action, when that action would have caused five deaths versus one. Confirming the emotional distinction between moral actions and omissions, autonomic arousal was greater when the utilitarian outcome required action, and increased arousal was associated with a decreased likelihood of utilitarian-biased behavior. This pattern of results held across individuals of different gender, age, and race.", "title": "" }, { "docid": "6bd7a3d4b330972328257d958ec2730e", "text": "Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems. Our extensive numerical experiments demonstrate that the presented method outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.", "title": "" }, { "docid": "e15e979459e2a32e30447cb8f8b3a142", "text": "In the conventional unequal-split Wilkinson power divider, poor selectivity for each transmission path is usually a problem. To surmount this obstacle, the parallel coupled-line bandpass filter structure is utilized as an impedance transformer as well as a band selector for each transmission path in the proposed unequal-split Wilkinson power dividers. However, the bandpass filters in the proposed dividers require careful design because they may not be functional under certain conditions. For example, the odd-order coupled-line filters are not appropriate for impedance transformers in the proposed unequal-split dividers and high-isolation requirement. Using the even-order coupled-line filter transformers, this study proposes two types of unequal-split Wilkinson power dividers. The first type of the proposed dividers arranges two filter transformers near two output ports, respectively, and is capable of achieving a highly remarkable isolation between the two output ports and a good band selection in each transmission path. Specifically, not only the operating band but also the lower and higher stopbands can achieve highly favorable isolation for this type of divider. By arranging the load impedance of each port properly, the second type of the proposed dividers, which has only one filter transformer to be shared by each transmission path near the input port, is also proposed to provide effective isolation between two output ports and favorable selectivity in each transmission path.", "title": "" }, { "docid": "6dd3764687fa2f319b3162694be9fd62", "text": "Pump, compressor and fan systems often have a notable energy savings potential, which can be identified by monitoring their operation for instance by a frequency converter and model-based estimation methods. In such cases, sensorless determination of the system operating state relies on the accurate estimation of the motor rotational speed and shaft torque, which is commonly available in vector- and direct-torque-controlled frequency converters. 
However, frequency converter manufacturers seldom publish the expected estimation accuracies for the rotational speed and shaft torque. In this paper, the accuracy of these motor estimates is studied by laboratory measurements for a vector-controlled frequency converter both in the steady and dynamical states. In addition, the effect of the flux optimization feature on the estimation accuracy is studied. Finally, the impact of erroneous motor estimates on the flow rate estimation is demonstrated in the paper.", "title": "" }, { "docid": "110b84730ce059caf1463a4565aedc66", "text": "Due to increasing bandwidth requirements, Ethernet technology is emerging in embedded systems application areas such as automotive, avionics, and industrial control. In the automotive domain, Ethernet enables integration of cameras, radars, and fusion to build active safety and automated driving systems. While Ethernet provides the necessary communication bandwidth, solutions are needed to satisfy stringent dependability and temporal requirements of such safety-critical systems. This paper introduces an asynchronous traffic scheduling algorithm, which gives low delay guarantees in a switched Ethernet network, while maintaining a low implementation complexity. We present a timing analysis and demonstrate the tightness of the delay bounds by extensive simulation experiments.", "title": "" }, { "docid": "b3dd3c4325f4ef963d1bf4b5c64816c0", "text": "The Internet was originally designed to facilitate communication and research activities. However, the dramatic increase in the use of the Internet in recent years has led to pathological use (Internet addiction). This study is a preliminary investigation of the extent of Internet addiction in school children 16-18 years old in India. The Davis Online Cognition Scale (DOCS) was used to assess pathological Internet use. On the basis of total scores obtained (N = 100) on the DOCS, two groups were identified--dependents (18) and non-dependents (21), using mean +/- 1/2 SD as the criterion for selection. The UCLA loneliness scale was also administered to the subjects. Significant behavioral and functional usage differences were revealed between the two groups. Dependents were found to delay other work to spend time online, lose sleep due to late-night logons, and feel life would be boring without the Internet. The hours spent on the Internet by dependents were greater than those of non-dependents. On the loneliness measure, significant differences were found between the two groups, with the dependents scoring higher than the non-dependents.", "title": "" }, { "docid": "b24fe8a5357af646dd2706c62a46eb25", "text": "This paper presents an intelligent adaptive system for the integration of haptic output in graphical user interfaces. The system observes the user’s actions, extracts meaningful features, and generates a user and application specific model. When the model is sufficiently detailled, it is used to predict the widget which is most likely to be used next by the user. Upon entering this widget, two magnets in a specialized mouse are activated to stop the movement, so target acquisition becomes easier and more comfortable. Besides the intelligent control system, we will present several methods to generate haptic cues which might be integrated in multimodal user interfaces in the future.", "title": "" } ]
scidocsrr
866b5adf9b30ad1cc8e870a72114fbcf
FAST VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE BURSTS
[ { "docid": "bf707a96f7059b4c4f62d38255bb8333", "text": "We present a system to detect passenger cars in aerial images along the road directions where cars appear as small objects. We pose this as a 3D object recognition problem to account for the variation in viewpoint and the shadow. We started from psychological tests to find important features for human detection of cars. Based on these observations, we selected the boundary of the car body, the boundary of the front windshield, and the shadow as the features. Some of these features are affected by the intensity of the car and whether or not there is a shadow along it. This information is represented in the structure of the Bayesian network that we use to integrate all features. Experiments show very promising results even on some very challenging images.", "title": "" } ]
[ { "docid": "2f58e94218fb0a46b9f654c1141b192d", "text": "How can the development of ideas in a scientific field be studied over time? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006. We induce topic clusters using Latent Dirichlet Allocation, and examine the strength of each topic over time. Our methods find trends in the field including the rise of probabilistic methods starting in 1988, a steady increase in applications, and a sharp decline of research in semantics and understanding between 1978 and 2001, possibly rising again after 2001. We also introduce a model of the diversity of ideas, topic entropy, using it to show that COLING is a more diverse conference than ACL, but that both conferences as well as EMNLP are becoming broader over time. Finally, we apply Jensen-Shannon divergence of topic distributions to show that all three conferences are converging in the topics they cover.", "title": "" }, { "docid": "b1383088b26636e6ac13331a2419f794", "text": "This paper investigates the problem of blurring caused by motion during image capture of text documents. Motion blurring prevents proper optical character recognition of the document text contents. One area of such applications is to deblur name card images obtained from handheld cameras. In this paper, a complete motion deblurring procedure for document images has been proposed. The method handles both uniform linear motion blur and uniform acceleration motion blur. Experiments on synthetic and real-life blurred images prove the feasibility and reliability of this algorithm provided that the motion is not too irregular. The restoration procedure consumes only small amount of computation time.", "title": "" }, { "docid": "6d411b994567b18ea8ab9c2b9622e7f5", "text": "Nearly half a century ago, psychiatrist John Bowlby proposed that the instinctual behavioral system that underpins an infant’s attachment to his or her mother is accompanied by ‘‘internal working models’’ of the social world—models based on the infant’s own experience with his or her caregiver (Bowlby, 1958, 1969/1982). These mental models were thought to mediate, in part, the ability of an infant to use the caregiver as a buffer against the stresses of life, as well as the later development of important self-regulatory and social skills. Hundreds of studies now testify to the impact of caregivers’ behavior on infants’ behavior and development: Infants who most easily seek and accept support from their parents are considered secure in their attachments and are more likely to have received sensitive and responsive caregiving than insecure infants; over time, they display a variety of socioemotional advantages over insecure infants (Cassidy & Shaver, 1999). Research has also shown that, at least in older children and adults, individual differences in the security of attachment are indeed related to the individual’s representations of social relations (Bretherton & Munholland, 1999). Yet no study has ever directly assessed internal working models of attachment in infancy. In the present study, we sought to do so.", "title": "" }, { "docid": "c2277b2502f5f64c7c7c7c03f992187c", "text": "Purpose – To provide useful references for manufacturing industry which guide the linkage of business strategies and performance indicators for information security projects. 
Design/methodology/approach – This study uses balanced scorecard (BSC) framework to set up performance index for information security management in organizations. Moreover, BSC used is to strengthen the linkage between foundational performance indicators and progressive business strategy theme. Findings – The general model of information security management builds the strategy map with 12 strategy themes and 35 key performance indicators are established. The development of strategy map also express how to link strategy themes to key performance indicators. Research limitations/implications – The investigation of listed manufacturing companies in Taiwan may limit the application elsewhere. Practical implications – Traditional performance measurement system like return on investment, sales growth is not enough to describe and manage intangible assets. This study based on BSC to measure information security management performance can provide the increasing value from improving measures and management insight in modern business. Originality/value – This study combines the information security researches and organizational performance studies. The result helps organizations to assess values of information security projects and consider how to link projects performance to business strategies.", "title": "" }, { "docid": "4b8ee1a2e6d80a0674e2ff8f940d16f9", "text": "Classification and knowledge extraction from complex spatiotemporal brain data such as EEG or fMRI is a complex challenge. A novel architecture named the NeuCube has been established in prior literature to address this. A number of key points in the implementation of this framework, including modular design, extensibility, scalability, the source of the biologically inspired spatial structure, encoding, classification, and visualisation tools must be considered. A Python version of this framework that conforms to these guidelines has been implemented.", "title": "" }, { "docid": "b4b06fc0372537459de882b48152c4c9", "text": "As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) concluding with an approach towards the maintenance of dignity in human-robot relationships.", "title": "" }, { "docid": "634b4d0d8dd5e7f49986c75cb07b1822", "text": "Handwriting of Chinese has long been an important skill in East Asia. 
However, automatic generation of handwritten Chinese characters poses a great challenge due to the large number of characters. Various machine learning techniques have been used to recognize Chinese characters, but few works have studied the handwritten Chinese character generation problem, especially with unpaired training data. In this work, we formulate the Chinese handwritten character generation as a problem that learns a mapping from an existing printed font to a personalized handwritten style. We further propose DenseNet CycleGAN to generate Chinese handwritten characters. Our method is applied not only to commonly used Chinese characters but also to calligraphy work with aesthetic values. Furthermore, we propose content accuracy and style discrepancy as the evaluation metrics to assess the quality of the handwritten characters generated. We then use our proposed metrics to evaluate the generated characters from CASIA dataset as well as our newly introduced Lanting calligraphy dataset.", "title": "" }, { "docid": "42aca9ffdd5c0d2a2f310280d12afa1a", "text": "Communication skills courses are an essential component of undergraduate and postgraduate training and effective communication skills are actively promoted by medical defence organisations as a means of decreasing litigation. This article discusses active listening, a difficult discipline for anyone to practise, and examines why this is particularly so for doctors. It draws together themes from key literature in the field of communication skills, and examines how these theories apply in general practice.", "title": "" }, { "docid": "02b3d38c997026f07cc3be3da8664e72", "text": "Introduction The Situation Awareness Global Assessment Technique (SAGAT), is a global tool developed to assess SA across all of its elements based on a comprehensive assessment of operator SA requirements (Endsley, 1987b; 1988b; 1990c). Using SAGAT, a simulation employing a system of interest is frozen at randomly selected times and operators are queried as to their perceptions of the situation at that time. The system displays are blanked and the simulation is suspended while subjects quickly answer questions about their current perceptions of the situation. As a global measure, SAGAT includes queries about all operator SA requirements, including Level 1 (perception of data), Level 2 (comprehension of meaning) and Level 3 (projection of the near future) components. This includes a consideration of system functioning and status as well as relevant features of the external environment. SAGAT queries allow for detailed information about subject SA to be collected on an element by element basis that can be evaluated against reality, thus providing an objective assessment of operator SA. This type of assessment is a direct measure of SA — it taps into the operator's perceptions rather than infers them from behaviors that may be influenced by many other factors besides SA. Furthermore it does not require subjects or observers to make judgments about situation knowledge on the basis of incomplete information, as subjective assessments do. By collecting samples of SA data in this manner, situation perceptions can be collected immediately (while fresh in the operators’ minds), reducing numerous problems incurred when collecting data on mental events after the fact, but not incurring intrusiveness problems associated with on-line questioning. 
Multiple “snapshots” of operators’ SA can be acquired in this way, providing an index of the quality of SA provided by a particular design. By including queries across the full spectrum of an operator’s SA requirements, this approach minimizes possible biasing of attention, as subjects cannot prepare for the queries in advance since they could be queried over almost every aspect of the situation to which they would normally attend. The method is not without some costs, however, as a detailed analysis of SA requirements is required in order to develop the battery of queries to be administered.", "title": "" }, { "docid": "2dfad4f4b0d69085341dfb64d6b37d54", "text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.", "title": "" }, { "docid": "e0204cd8b28063beb87a312f3619e21b", "text": "We describe in this paper Hydra, an ensemble of convolutional neural networks (CNN) for geospatial land classification. The idea behind Hydra is to create an initial CNN that is coarsely optimized but provides a good starting pointing for further optimization, which will serve as the Hydra’s body. Then, the obtained weights are fine tuned multiple times to form an ensemble of CNNs that represent the Hydra’s heads. By doing so, we were able to reduce the training time while maintaining the classification performance of the ensemble. We created ensembles using two state-of-the-art CNN architectures, ResNet and DenseNet, to participate in the Functional Map of the World challenge. With this approach, we finished the competition in third place. We also applied the proposed framework to the NWPU-RESISC45 database and achieved the best reported performance so far. Code and CNN models are available at https://github.com/maups/hydra-fmow.", "title": "" }, { "docid": "995655a6a9f662d33e0525b3ea236ce4", "text": "A well-known problem in the design of operating systems is the selection of a resource allocation policy that will prevent deadlock. Deadlock is the situation in which resources have been allocated to various tasks in such a way that none of the tasks can continue. The various published solutions have been somewhat restrictive: either they do not handle the problem in sufficient generality or they suggest policies which will on occasion refuse a request which could have been safely granted. 
Algorithms are presented which examine a request in the light of the current allocation of resources and determine whether or not the granting of the request will introduce the possibility of a deadlock. Proofs given in the appendixes show that the conditions imposed by the algorithms are both necessary and sufficient to prevent deadlock. The algorithms have been successfully used in the THE system.", "title": "" }, { "docid": "4d57b0dbc36c2eb058285b4a5b6c102c", "text": "OBJECTIVE\nThis study was planned to investigate the efficacy of neuromuscular rehabilitation and Johnstone Pressure Splints in the patients who had ataxic multiple sclerosis.\n\n\nMETHODS\nTwenty-six outpatients with multiple sclerosis were the subjects of the study. The control group (n = 13) was given neuromuscular rehabilitation, whereas the study group (n = 13) was treated with Johnstone Pressure Splints in addition.\n\n\nRESULTS\nIn pre- and posttreatment data, significant differences were found in sensation, anterior balance, gait parameters, and Expanded Disability Status Scale (p < 0.05). An important difference was observed in walking-on-two-lines data within the groups (p < 0.05). There also was a statistically significant difference in pendular movements and dysdiadochokinesia (p < 0.05). When the posttreatment values were compared, there was no significant difference between sensation, anterior balance, gait parameters, equilibrium and nonequilibrium coordination tests, Expanded Disability Status Scale, cortical onset latency, and central conduction time of somatosensory evoked potentials and motor evoked potentials (p > 0.05). Comparison of values revealed an important difference in cortical onset-P37 peak amplitude of somatosensory evoked potentials (right limbs) in favor of the study group (p < 0.05).\n\n\nCONCLUSIONS\nAccording to our study, it was determined that physiotherapy approaches were effective in decreasing the ataxia. We conclude that the combination of suitable physiotherapy techniques is effective in multiple sclerosis rehabilitation.", "title": "" }, { "docid": "cc3fbbff0a4d407df0736ef9d1be5dd0", "text": "The purpose of this study is to examine the effect of brand image benefits on satisfaction and loyalty intention in the context of color cosmetic products. Five brand image benefits consisting of functional, social, symbolic, experiential and appearance enhancement were investigated. A survey carried out on 97 females showed that the functional and appearance enhancement benefits significantly affect loyalty intention. Four of the brand image benefits: functional, social, experiential, and appearance enhancement are positively related to overall satisfaction. The results also indicated that overall satisfaction does influence customers' loyalty. The results imply that marketers should focus on brand image benefits in their effort to achieve customer loyalty.", "title": "" }, { "docid": "998591070fcfeaa307c5a6c807eabc30", "text": "Efficient vertical mobility is a critical component of tall building development and construction. This paper investigates recent advances in elevator technology and examines their impact on tall building development. It maps out, organizes, and collates complex and scattered information on multiple aspects of elevator design, and presents them in an accessible and non-technical discourse.
Importantly, the paper contextualizes recent technological innovations by examining their implementations in recent major projects including One World Trade Center in New York; Shanghai Tower in Shanghai; Burj Khalifa in Dubai; Kingdom Tower in Jeddah, Saudi Arabia; and the green retrofit project of the Empire State Building in New York. Further, the paper discusses future vertical transportation models including a vertical subway concept, a space lift, and electromagnetic levitation technology. As these new technological advancements in elevator design empower architects to create new forms and shapes of large-scale, mixed-use developments, this paper concludes by highlighting the need for interdisciplinary research in incorporating elevators in skyscrapers.", "title": "" }, { "docid": "5aa7b8f78bea23dcdd0a083cb88ba6eb", "text": "PURPOSE\nParents, professionals, and policy makers need information on the long-term prognosis for children with communication disorders. Our primary purpose in this report was to help fill this gap by profiling the family, educational, occupational, and quality of life outcomes of young adults at 25 years of age (N = 244) from the Ottawa Language Study, a 20-year, prospective, longitudinal study of a community sample of individuals with (n = 112) and without (n = 132) a history of early speech and/or language impairments. A secondary purpose of this report was to use data from earlier phases of the study to predict important, real-life outcomes at age 25.\n\n\nMETHOD\nParticipants were initially identified at age 5 and subsequently followed at 12, 19, and 25 years of age. Direct assessments were conducted at all 4 time periods in multiple domains (demographic, communicative, cognitive, academic, behavioral, and psychosocial).\n\n\nRESULTS\nAt age 25, young adults with a history of language impairments showed poorer outcomes in multiple objective domains (communication, cognitive/academic, educational attainment, and occupational status) than their peers without early communication impairments and those with early speech-only impairments. However, those with language impairments did not differ in subjective perceptions of their quality of life from those in the other 2 groups. Objective outcomes at age 25 were predicted differentially by various combinations of multiple, interrelated risk factors, including poor language and reading skills, low family socioeconomic status, low performance IQ, and child behavior problems. Subjective well-being, however, was primarily associated with strong social networks of family, friends, and others.\n\n\nCONCLUSION\nThis information on the natural history of communication disorders may be useful in answering parents' questions, anticipating challenges that children with language disorders might encounter, and planning services to address those issues.", "title": "" }, { "docid": "57334078030a2b2d393a7c236d6a3a1c", "text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. 
Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.", "title": "" }, { "docid": "ba129dec7a922884759bfec3f5f3048e", "text": "Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73%, achieved by Joty et al. (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy.", "title": "" }, { "docid": "0eff5b8ec08329b4a5d177baab1be512", "text": "In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.", "title": "" }, { "docid": "5e75a4ea83600736c601e46cb18aa2c9", "text": "This paper deals with a low-cost 24GHz Doppler radar sensor for traffic surveillance. 
The basic building blocks of the transmit/receive chain, namely the antennas, the balanced power amplifier (PA), the dielectric resonator oscillator (DRO), the low noise amplifier (LNA) and the down conversion diode mixer are presented, underlining the key technologies and manufacturing approaches by means of which the required performance can be attained while keeping industrial costs extremely low.", "title": "" } ]
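For the 24 GHz traffic sensor described in the last passage above, the measured quantity is the Doppler shift of the returned signal. As a hedged reminder of the standard relation (not taken from the paper), for a continuous-wave radar at carrier frequency f_0 and target radial speed v_r,

$$f_d = \frac{2\, v_r\, f_0}{c},$$

so at f_0 = 24 GHz a vehicle moving at 50 km/h (about 13.9 m/s) produces f_d ≈ 2 · 13.9 · 24×10⁹ / (3×10⁸) ≈ 2.2 kHz, i.e. an audio-frequency beat that is easy to digitize after the down-conversion mixer.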
scidocsrr
e7ca898a2a3ba288b2ae071ee3330a46
Autonomous Driving in Traffic: Boss and the Urban Challenge
[ { "docid": "e77c136b2d3e4afb36b27eeda946a37d", "text": "We present the motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultra-reliability, high-speed operation, complex inter-vehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically-feasible actions with two higher-level planners for generating long range plans in both on-road and unstructured areas of the environment. In this Part II of a two-part paper, we describe the unstructured planning component of this system used for navigating through parking lots and recovering from anomalous on-road scenarios. We provide examples and results from ldquoBossrdquo, an autonomous SUV that has driven itself over 3000 kilometers and competed in, and won, the Urban Challenge.", "title": "" } ]
[ { "docid": "d552b6beeea587bc014a4c31cabee121", "text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.", "title": "" }, { "docid": "4a9b4668296561b3522c3c57c64220c1", "text": "Hyperspectral imagery, which contains hundreds of spectral bands, has the potential to better describe the biological and chemical attributes on the plants than multispectral imagery and has been evaluated in this paper for the purpose of crop yield estimation. The spectrum of each pixel in a hyperspectral image is considered as a linear combinations of the spectra of the vegetation and the bare soil. Recently developed linear unmixing approaches are evaluated in this paper, which automatically extracts the spectra of the vegetation and bare soil from the images. The vegetation abundances are then computed based on the extracted spectra. In order to reduce the influences of this uncertainty and obtain a robust estimation results, the vegetation abundances extracted on two different dates on the same fields are then combined. The experiments are carried on the multidate hyperspectral images taken from two grain sorghum fields. The results show that the correlation coefficients between the vegetation abundances obtained by unsupervised linear unmixing approaches are as good as the results obtained by supervised methods, where the spectra of the vegetation and bare soil are measured in the laboratory. In addition, the combination of vegetation abundances extracted on different dates can improve the correlations (from 0.6 to 0.7).", "title": "" }, { "docid": "305679866d219b0856ed48230f30c549", "text": "The contingency table is a work horse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these privacy is not only ethically mandated, but frequently legally as well. Consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at leas one of privacy, accuracy, and consistency (among multiple released tables). We propose a solution that provides strong guarantees for all three desiderata simultaneously.\n Our approach can be viewed as a special case of a more general approach for producing synthetic data: Any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. 
From these tables alone-and hence without weakening privacy--we will find and output the \"nearest\" consistent set of marginals. Interestingly, this set is no farther than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself.\n The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube.", "title": "" }, { "docid": "4cb41f9de259f18cd8fe52d2f04756a6", "text": "The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch Postcode Lottery Each week, the Dutch Postcode Lottery (PCL) randomly selects a postal code, and distributes cash and a new BMW to lottery participants in that code. We study the effects of these shocks on lottery winners and their neighbors. Consistent with the life-cycle hypothesis, the effects on winners’ consumption are largely confined to cars and other durables. Consistent with the theory of in-kind transfers, the vast majority of BMW winners liquidate their BMWs. We do, however, detect substantial social effects of lottery winnings: PCL nonparticipants who live next door to winners have significantly higher levels of car consumption than other nonparticipants. JEL Classification: D12, C21", "title": "" }, { "docid": "749800c4dae57eb13b5c3df9e0c302a0", "text": "In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods.
 In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on the quantification of the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement.
 The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as a unit-differences plot and as a percentage-differences plot.
 The B&A plot method only defines the intervals of agreement; it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals (a minimal numerical sketch of these limits of agreement follows this passage list).
 The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies.", "title": "" }, { "docid": "09985252933e82cf1615dabcf1e6d9a2", "text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.", "title": "" }, { "docid": "56002273444d2078d5db47671255555a", "text": "The credit card has become the most popular mode of payment for both online as well as regular purchase, in cases of fraud associated with it are also rising. Credit card frauds are increasing day by day regardless of various techniques developed for its detection. Fraudsters are so experts that they generate new ways of committing fraudulent transactions each day which demands constant innovation for its detection techniques. Most of the techniques based on Artificial Intelligence, Fuzzy Logic, Neural Network, Logistic Regression, Naïve Bayesian, Machine Learning, Sequence Alignment, Decision tree, Bayesian network, meta learning, Genetic programming etc., these are evolved in detecting various credit card fraudulent transactions. This paper presents a survey of various techniques used in various credit card fraud detection mechanisms.", "title": "" }, { "docid": "b53f2f922661bfb14bf2181236fad566", "text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. 
A naively Appeared in the proceedings of the ICML 2013, Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. Such issues of a mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training. Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). These works essentially construct a classifier using the labeled source data, and Often, the number of such labelled target samples are not sufficient to train a robust model using target data alone. DLID: Deep Learning for Domain Adaptation by Interpolating between Domains impose structural constraints on the classifier using unlabeled target data. A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model (DLID): Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representation of the source and target inputs. 
However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model out-performs the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 2. An Overview of DLID At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation. Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006), have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pretraining. The key difference is that in DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassman manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. 
Let the set of data samples for the source domain S be denoted by DS , and that of the target domain T be denoted by DT . Starting with all the source data samples DS , we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from DT , and decrease the proportion of samples drawn from DS . In particular, let p ∈ [1, . . . , P ] be an index over the P datasets we generate. Then we have Dp = DS for p = 1, Dp = DT for p = P . For p ∈ [2, . . . , P − 1], datasets Dp and Dp+1 are created in a way so that the proportion of samples from DT in Dp is less than in Dp+1. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between S and T . DLID: Deep Learning for Domain Adaptation by Interpolating between Domains", "title": "" }, { "docid": "785c716d4f127a5a5fee02bc29aeb352", "text": "In this paper we propose a novel, improved, phase generated carrier (PGC) demodulation algorithm based on the PGC-differential-cross-multiplying approach (PGC-DCM). The influence of phase modulation amplitude variation and light intensity disturbance (LID) on traditional PGC demodulation algorithms is analyzed theoretically and experimentally. An experimental system for remote no-contact microvibration measurement is set up to confirm the stability of the improved PGC algorithm with LID. In the experiment, when the LID with a frequency of 50 Hz and the depth of 0.3 is applied, the signal-to-noise and distortion ratio (SINAD) of the improved PGC algorithm is 19 dB, higher than the SINAD of the PGC-DCM algorithm, which is 8.7 dB.", "title": "" }, { "docid": "c2c81d5f7c1be2f6a877811cd61f055d", "text": "Since the cognitive revolution of the sixties, representation has served as the central concept of cognitive theory and representational theories of mind have provided the establishment view in cognitive science (Fodor, 1980; Gardner, 1985; Vera & Simon, 1993). Central to this line of thinking is the belief that knowledge exists solely in the head, and instruction involves finding the most efficient means for facilitating the “acquisition” of this knowledge (Gagne, Briggs, & Wager, 1993). Over the last two decades, however, numerous educational psychologists and instructional designers have begun abandoning cognitive theories that emphasize individual thinkers and their isolated minds. Instead, these researchers have adopted theories that emphasize the social and contextualized nature of cognition and meaning (Brown, Collins, & Duguid, 1989; Greeno, 1989, 1997; Hollan, Hutchins, & Kirsch, 2000; Lave & Wenger, 1991; Resnick, 1987; Salomon, 1993). Central to these reconceptualizations is an emphasis on contextualized activity and ongoing participation as the core units of analysis (Barab & Kirshner, 2001; Barab & Plucker, 2002; Brown & Duguid, 1991; Cook & Yanow, 1993;", "title": "" }, { "docid": "5c30ecda39e41e2b32659e12c9585ba6", "text": "We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. 
Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.", "title": "" }, { "docid": "77ac3a28ffa420a1e4f1366d36b4c188", "text": " Call-Exner bodies are present in ovarian follicles of a range of species including human and rabbit, and in a range of human ovarian tumors. We have also found structures resembling Call-Exner bodies in bovine preantral and small antral follicles. Hematoxylin and eosin staining of single sections of bovine ovaries has shown that 30% of preantral follicles with more than one layer of granulosa cells and 45% of small (less than 650 μm) antral follicles have at least one Call-Exner body composed of a spherical eosinophilic region surrounded by a rosette of granulosa cells. Alcian blue stains the spherical eosinophilic region of the Call-Exner bodies. Electron microscopy has demonstrated that some Call-Exner bodies contain large aggregates of convoluted basal lamina, whereas others also contain regions of unassembled basal-lamina-like material. Individual chains of the basal lamina components type IV collagen (α1 to α5) and laminin (α1, β2 and δ1) have been immunolocalized to Call-Exner bodies in sections of fresh-frozen ovaries. Bovine Call-Exner bodies are presumably analogous to Call-Exner bodies in other species but are predominantly found in preantral and small antral follicles, rather than large antral follicles. With follicular development, the basal laminae of Call-Exner bodies change in their apparent ratio of type IV collagen to laminin, similar to changes observed in the follicular basal lamina, suggesting that these structures have a common cellular origin.", "title": "" }, { "docid": "bf5874dc1fc1c968d7c41eb573d8d04a", "text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.", "title": "" }, { "docid": "5b1f814b7d8f1495733f0dc391449296", "text": "A class of digital linear phase finite impulse response (FIR) filters for decimation (sampling rate decrease) and interpolation (sampling rate increase) is presented. They require no multipliers and use limited storage, making them an economical alternative to conventional implementations for certain applications. A digital filter in this class consists of cascaded ideal integrator stages operating at a high sampling rate and an equal number of comb stages operating at a low sampling rate. Together, a single integrator-comb pair produces a uniform FIR.
Design procedures and examples are given for both decimation and interpolation filters with the emphasis on frequency response and register width.", "title": "" }, { "docid": "e5eb79b313dad91de1144cd0098cde15", "text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.", "title": "" }, { "docid": "183e715ca8e5c329ba58387d31e2f0f7", "text": "We develop a system for 3D object retrieval based on sketched feature lines as input. For objective evaluation, we collect a large number of query sketches from human users that are related to an existing data base of objects. The sketches turn out to be generally quite abstract with large local and global deviations from the original shape. Based on this observation, we decide to use a bag-of-features approach over computer generated line drawings of the objects. We develop a targeted feature transform based on Gabor filters for this system. We can show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our, as well as other approaches, based on the gathered sketches. In the resulting comparison, our approach is significantly better than any other system described so far.", "title": "" }, { "docid": "fe11fc1282a7efc34a9efe0e81fb21d6", "text": "Increased complexity in modern embedded systems has presented various important challenges with regard to side-channel attacks. In particular, it is common to deploy SoC-based target devices with high clock frequencies in security-critical scenarios; understanding how such features align with techniques more often deployed against simpler devices is vital from both destructive (i.e., attack) and constructive (i.e., evaluation and/or countermeasure) perspectives. In this paper, we investigate electromagnetic-based leakage from three different means of executing cryptographic workloads (including the general purpose ARM core, an on-chip co-processor, and the NEON core) on the AM335x SoC. 
Our conclusion is that addressing challenges of the type above is feasible, and that key recovery attacks can be conducted with modest resources.", "title": "" }, { "docid": "c94e5133c083193227b26a9fb35a1fbd", "text": "Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called \"Virtual KITTI\", automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.", "title": "" }, { "docid": "4f2fa6ee3a5e7a4b9a7472993b992439", "text": "PURPOSE\nThe purpose of this research was to develop and evaluate a severity rating score for fecal incontinence, the Fecal Incontinence Severity Index.\n\n\nMETHODS\nThe Fecal Incontinence Severity Index is based on a type x frequency matrix. The matrix includes four types of leakage commonly found in the fecal incontinent population: gas, mucus, and liquid and solid stool and five frequencies: one to three times per month, once per week, twice per week, once per day, and twice per day. The Fecal Incontinence Severity Index was developed using both colon and rectal surgeons and patient input for the specification of the weighting scores.\n\n\nRESULTS\nSurgeons and patients had very similar weightings for each of the type x frequency combinations; significant differences occurred for only 3 of the 20 different weights. The Fecal Incontinence Severity Index score of a group of patients with fecal incontinence (N = 118) demonstrated significant correlations with three of the four scales found in a fecal incontinence quality-of-life scale.\n\n\nCONCLUSIONS\nEvaluation of the Fecal Incontinence Severity Index indicates that the index is a tool that can be used to assess severity of fecal incontinence. Overall, patient and surgeon ratings of severity are similar, with minor differences associated with the accidental loss of solid stool.", "title": "" }, { "docid": "ffd7afcf6e3b836733b80ed681e2a2b9", "text": "The emergence of cloud management systems, and the adoption of elastic cloud services enable dynamic adjustment of cloud hosted resources and provisioning. In order to effectively provision for dynamic workloads presented on cloud platforms, an accurate forecast of the load on the cloud resources is required. In this paper, we investigate various forecasting methods presented in recent research, identify and adapt evaluation metrics used in literature and compare forecasting methods on prediction performance. We investigate the performance gain of ensemble models when combining three of the best performing models into one model. 
We find that our 30th-order autoregression model and feed-forward neural network method perform best when evaluated on Google's cluster dataset using the provision-specific metrics identified. We also show an improvement in forecasting accuracy when evaluating two ensemble models.", "title": "" } ]
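As referenced in the Bland-Altman passage in the list above, the agreement analysis reduces to a mean difference (bias) and 95% limits of agreement around it. A minimal NumPy sketch of that computation is given below; the 1.96 multiplier assumes approximately normally distributed differences, and the example data are invented for illustration. Whether the resulting limits are acceptable still has to be judged against a priori clinical criteria, as the passage notes.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement for two paired measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = b - a                       # differences of method B relative to method A
    mean = (a + b) / 2.0               # pairwise means (x-axis of the B&A plot)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # ~95% of differences expected here
    pct_diff = 100.0 * diff / mean     # percentage-differences variant of the plot
    return {"bias": bias, "limits_of_agreement": loa,
            "mean": mean, "diff": diff, "pct_diff": pct_diff}

# Invented example: two hypothetical assays measuring the same analyte
a = [10.2, 11.5, 9.8, 12.0, 10.9]
b = [10.6, 11.2, 10.1, 12.4, 11.3]
print(bland_altman(a, b)["limits_of_agreement"])
```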
scidocsrr
e981811cd59cecbef0fe719bccc6914a
On the Algorithmic Implementation of Stochastic Discrimination
[ { "docid": "8bb5acdafefc35f6c1adf00cfa47ac2c", "text": "A general method is introduced for separating points in multidimensional spaces through the use of stochastic processes. This technique is called stochastic discrimination.", "title": "" } ]
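The passage above defines stochastic discrimination only at a high level: separating classes by combining many stochastically generated, individually weak components. The toy sketch below illustrates that general idea rather than the specific algorithm analyzed in the cited work: it samples random axis-aligned boxes, keeps those whose coverage is enriched for one class on training data, and classifies by averaging the kept indicator functions. The box-sampling scheme, the enrichment gap, and the thresholding rule are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_box(X):
    """Sample a random axis-aligned box inside the bounding box of the data."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    a, b = rng.uniform(lo, hi), rng.uniform(lo, hi)
    return np.minimum(a, b), np.maximum(a, b)

def in_box(X, box):
    low, high = box
    return np.all((X >= low) & (X <= high), axis=1)

def fit(X, y, n_components=500, min_gap=0.05, max_tries=50000):
    """Keep boxes 'enriched' for class 1, then set a score threshold on training data."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    comps, tries = [], 0
    while len(comps) < n_components and tries < max_tries:
        tries += 1
        box = random_box(X)
        inside = in_box(X, box)
        if not inside.any():
            continue
        gap = inside[y == 1].mean() - inside[y == 0].mean()  # class-1 vs class-0 coverage
        if gap > min_gap:                                    # crude enrichment condition
            comps.append(box)
    scores = np.mean([in_box(X, b) for b in comps], axis=0)  # averaged weak indicators
    thr = 0.5 * (scores[y == 1].mean() + scores[y == 0].mean())
    return comps, thr

def predict(X, comps, thr):
    X = np.asarray(X, dtype=float)
    scores = np.mean([in_box(X, b) for b in comps], axis=0)
    return (scores > thr).astype(int)
```

On a simple two-class dataset this behaves like a crude ensemble of region-based weak learners; the cited work is concerned with performing such combination with the uniformity and projectability guarantees of stochastic discrimination, which this sketch does not attempt.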
[ { "docid": "c2b0dfb06f82541fca0d2700969cf0d9", "text": "Magnetic resonance is an exceptionally powerful and versatile measurement technique. The basic structure of a magnetic resonance experiment has remained largely unchanged for almost 50 years, being mainly restricted to the qualitative probing of only a limited set of the properties that can in principle be accessed by this technique. Here we introduce an approach to data acquisition, post-processing and visualization—which we term ‘magnetic resonance fingerprinting’ (MRF)—that permits the simultaneous non-invasive quantification of multiple important properties of a material or tissue. MRF thus provides an alternative way to quantitatively detect and analyse complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to identify the presence of a specific target material or tissue, which will increase the sensitivity, specificity and speed of a magnetic resonance study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern-recognition algorithm, MRF inherently suppresses measurement errors and can thus improve measurement accuracy.", "title": "" }, { "docid": "f18dffe56c54c537bae8862a85132a32", "text": "A vast territory for research is open from mimicking the behaviour of microorganisms to defend themselves from competitors. Antibiotics secreted by bacteria or fungi can be copied to yield efficient molecules which are active against infectious diseases. On the other hand, nanotechnology provides novel techniques to probe and manipulate single atoms and molecules. Nanoparticles are finding a large variety of biomedical and pharmaceutical applications, since their size scale can be similar to that of biological molecules (e.g. proteins, DNA) and structures (e.g. viruses and bacteria). They are currently being used in imaging (El-Sayed et al., 2005), biosensing (Medintz et al.,2005), biomolecules immobilization (Carmona-Ribeiro, 2010a), gene and drug delivery (Carmona-Ribeiro, 2003; CarmonaRibeiro, 2010b) and vaccines (O ́Hagan et al., 2000; Lincopan & Carmona-Ribeiro, 2009; Lincopan et al., 2009). They can also incorporate antimicrobial agents (antibiotics, metals, peptides, surfactants and lipids), can be the antimicrobial agent or used to produce antimicrobial devices. Antimicrobial agents found in Nature can sucessfully be copied for synthesis of novel biomimetic but synthetic compounds. In this review, synthetic cationic surfactants and lipids, natural and synthetic peptides or particles, and hybrid antimicrobial films are overviewed unraveling novel antimicrobial approaches against infectious diseases.", "title": "" }, { "docid": "ecb146ae27419d9ca1911dc4f13214c1", "text": "In this paper, a simple mix integer programming for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it a model to describe the location problem for green supply chain. Sequently, IBM Watson implosion technologh (WIT) tool was introduced to describe them and solve them. By changing the price of crude oil, we illustrate the its impact on distribution center locations and transportation mode option for green supply chain. 
From the cases studies, we have known that, as the crude oil price increasing, the profits of the whole supply chain will decrease, carbon emission will also decrease to some degree, while the number of opened distribution center will increase.", "title": "" }, { "docid": "c8a7330443596d17fefe9f081b7ea5a4", "text": "The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.", "title": "" }, { "docid": "4f846635e4f23b7630d0c853559f71dc", "text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH.). The generation of free radicals can also be produced by 6-hydroxydopamine or MPTP which destroys striatal dopaminergic neurons causing parkinsonism in experimental animals as well as human beings. Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.", "title": "" }, { "docid": "e6107ac6d0450bb1ce4dab713e6dcffa", "text": "Enterprises collect a large amount of personal data about their customers. 
Even though enterprises promise privacy to their customers using privacy statements or P3P, there is no methodology to enforce these promises throughout and across multiple enterprises. This article describes the Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data. Its comprehensive privacy-specific access control language expresses restrictions on the access to personal data, possibly shared between multiple enterprises. E-P3P separates the enterprise-specific deployment policy from the privacy policy that covers the complete life cycle of collected data. E-P3P introduces a viable separation of duty between the three “administrators” of a privacy system: The privacy officer designs and deploys privacy policies, the security officer designs access control policies, and the customers can give consent while selecting opt-in and opt-out choices. To appear in2nd Workshop on Privacy Enhancing Technologies , Lecture Notes in Computer Science. Springer Verlag, 2002. Copyright c © Springer", "title": "" }, { "docid": "cf998ec01aefef7cd80d2fdd25e872e1", "text": "Shunting inhibition, a conductance increase with a reversal potential close to the resting potential of the cell, has been shown to have a divisive effect on subthreshold excitatory postsynaptic potential amplitudes. It has therefore been assumed to have the same divisive effect on firing rates. We show that shunting inhibition actually has a subtractive effecton the firing rate in most circumstances. Averaged over several interspike intervals, the spiking mechanism effectively clamps the somatic membrane potential to a value significantly above the resting potential, so that the current through the shunting conductance is approximately independent of the firing rate. This leads to a subtractive rather than a divisive effect. In addition, at distal synapses, shunting inhibition will also have an approximately subtractive effect if the excitatory conductance is not small compared to the inhibitory conductance. Therefore regulating a cell's passive membrane conductancefor instance, via massive feedbackis not an adequate mechanism for normalizing or scaling its output.", "title": "" }, { "docid": "d7e2654767d1178871f3f787f7616a94", "text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. 
In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.", "title": "" }, { "docid": "be3640467394a0e0b5a5035749b442e9", "text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project.[1](3) Data pre-processing is a step of the Knowledge discovery in databases (KDD) process that reduces the complexity of the data and offers better conditions to subsequent analysis. Through this the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing technique is also useful for association rules algo. LikeAprior, Partitioned, Princer-search algo. and many more algos.", "title": "" }, { "docid": "08aa9d795464d444095bbb73c067c2a9", "text": "Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome​ 1​ by calling genetic variants present in an individual using billions of short, errorful sequence reads​ 2​ . Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome​ 3,4​ . Here we show that a deep convolutional neural network​ 5​ can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the \"highest performance\" award for SNPs in a FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data. 
Main Text Calling genetic variants from NGS data has proven challenging because NGS reads are not only errorful (with rates from ~0.1-10%) but result from a complex error process that depends on properties of the instrument, preceding data processing tools, and the genome sequence itself​. State-of-the-art variant callers use a variety of statistical techniques to model these error processes and thereby accurately identify differences between the reads and the reference genome caused by real genetic variants and those arising from errors in the reads​. For example, the widely-used GATK uses logistic regression to model base errors, hidden Markov models to compute read likelihoods, and naive Bayes classification to identify variants, which are then filtered to remove likely false positives using a Gaussian mixture model peer-reviewed) is the author/funder. All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/092890 doi: bioRxiv preprint first posted online Dec. 14, 2016; Poplin et al. Creating a universal SNP and small indel variant caller with deep neural networks. with hand-crafted features capturing common error modes​ 6​ . These techniques allow the GATK to achieve high but still imperfect accuracy on the Illumina sequencing platform​ . Generalizing these models to other sequencing technologies has proven difficult due to the need for manual retuning or extending these statistical models (see e.g. Ion Torrent​ 8,9​ ), a major problem in an area with such rapid technological progress​ 1​ . Here we describe a variant caller for NGS data that replaces the assortment of statistical modeling components with a single, deep learning model. Deep learning is a revolutionary machine learning technique applicable to a variety of domains, including image classification​ 10​ , translation​ , gaming​ , and the life sciences​ 14–17​ . This toolchain, which we call DeepVariant, (Figure 1) begins by finding candidate SNPs and indels in reads aligned to the reference genome with high-sensitivity but low specificity. The deep learning model, using the Inception-v2 architecture​ , emits probabilities for each of the three diploid genotypes at a locus using a pileup image of the reference and read data around each candidate variant (Figure 1). The model is trained using labeled true genotypes, after which it is frozen and can then be applied to novel sites or samples. Throughout the following experiments, DeepVariant was trained on an independent set of samples or variants to those being evaluated. This deep learning model has no specialized knowledge about genomics or next-generation sequencing, and yet can learn to call genetic variants more accurately than state-of-the-art methods. When applied to the Platinum Genomes Project NA12878 data​ 18​ , DeepVariant produces a callset with better performance than the GATK when evaluated on the held-out chromosomes of the Genome in a Bottle ground truth set (Figure 2A). For further validation, we sequenced 35 replicates of NA12878 using a standard whole-genome sequencing protocol and called variants on 27 replicates using a GATK best-practices pipeline and DeepVariant using a model trained on the other eight replicates (see methods). Not only does DeepVariant produce more accurate results but it does so with greater consistency across a variety of quality metrics (Figure 2B). 
To further confirm the performance of DeepVariant, we submitted variant calls for a blinded sample, NA24385, to the Food and Drug Administration-sponsored variant calling ​ Truth Challenge​ in May 2016 and won the \"highest performance\" award for SNPs by an independent team using a different evaluation methodology. Like many variant calling algorithms, GATK relies on a model that assumes read errors are independent​ . Though long-recognized as an invalid assumption​ 2​ , the true likelihood function that models multiple reads simultaneously is unknown​ 6,19,20​ . Because DeepVariant presents an image of all of the reads relevant for a putative variant together, the convolutional neural network (CNN) is able to account for the complex dependence among the reads by virtue of being a universal approximator​ 21​ . This manifests itself as a tight concordance between the estimated probability of error from the likelihood function and the observed error rate, as seen in Figure 2C where DeepVariant's CNN is well calibrated, strikingly more so than the GATK. That the CNN has approximated this true, but unknown, inter-dependent likelihood function is the essential technical advance enabling us to replace the hand-crafted statistical models in other approaches with a single deep learning model and still achieve such high performance in variant calling. 2 peer-reviewed) is the author/funder. All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/092890 doi: bioRxiv preprint first posted online Dec. 14, 2016; Poplin et al. Creating a universal SNP and small indel variant caller with deep neural networks. We further explored how well DeepVariant’s CNN generalizes beyond its training data. First, a model trained with read data aligned to human genome build GRCh37 and applied to reads aligned to GRCh38 has similar performance (overall F1 = 99.45%) to one trained on GRCh38 and then applied to GRCh38 (overall F1 = 99.53%), thereby demonstrating that a model learned from one version of the human genome reference can be applied to other versions with effectively no loss in accuracy (Table S1). Second, models learned using human reads and ground truth data achieve high accuracy when applied to a mouse dataset​ 22​ (F1 = 98.29%), out-performing training on the mouse data itself (F1 = 97.84%, Table S4). This last experiment is especially demanding as not only do the species differ but nearly all of the sequencing parameters do as well: 50x 2x148bp from an Illumina TruSeq prep sequenced on a HiSeq 2500 for the human sample and 27x 2x100bp reads from a custom sequencing preparation run on an Illumina Genome Analyzer II for mouse​ . Thus, DeepVariant is robust to changes in sequencing depth, preparation protocol, instrument type, genome build, and even species. The practical benefits of this capability is substantial, as DeepVariant enables resequencing projects in non-human species, which often have no ground truth data to guide their efforts​ , to leverage the large and growing ground truth data in humans. To further assess its capabilities, we trained DeepVariant to call variants in eight datasets from Genome in a Bottle​ 24​ that span a variety of sequencing instruments and protocols, including whole genome and exome sequencing technologies, with read lengths from fifty to many thousands of basepairs (Table 1 and S6). 
We used the already processed BAM files to introduce additional variability as these BAMs differ in their alignment and cleaning steps. The results of this experiment all exhibit a characteristic pattern: the candidate variants have the highest sensitivity but a low PPV (mean 57.6%), which varies significantly by dataset. After retraining, all of the callsets achieve high PPVs (mean of 99.3%) while largely preserving the candidate callset sensitivity (mean loss of 2.3%). The high PPVs and low loss of sensitivity indicate that DeepVariant can learn a model that captures the technology-specific error processes in sufficient detail to separate real variation from false positives with high fidelity for many different sequencing technologies. As we already shown above that DeepVariant performs well on Illumina WGS data, we analyze here the behavior of DeepVariant on two non-Illumina WGS datasets and two exome datasets from Illumina and Ion Torrent. The SOLID and Pacific Biosciences (PacBio) WGS datasets have high error rates in the candidate callsets. SOLID (13.9% PPV for SNPs, 96.2% for indels, and 14.3% overall) has many SNP artifacts from the mapping short, color-space reads. The PacBio dataset is the opposite, with many false indels (79.8% PPV for SNPs, 1.4% for indels, and 22.1% overall) due to this technology's high indel error rate. Training DeepVariant to call variants in an exome is likely to be particularly challenging. Exomes have far fewer variants (~20-30k)​ than found in a whole-genome (~4-5M)​ 26​ . T", "title": "" }, { "docid": "1490331d46b8c19fce0a94e072bff502", "text": "We explore the reliability and validity of a self-report measure of procrastination and conscientiousness designed for use with thirdto fifth-grade students. The responses of 120 students are compared with teacher and parent ratings of the student. Confirmatory and exploratory factor analyses were also used to examine the structure of the scale. Procrastination and conscientiousness are highly correlated (inversely); evidence suggests that procrastination and conscientiousness are aspects of the same construct. Procrastination and conscientiousness are correlated with the Physiological Anxiety subscale of the Revised Children’s Manifest Anxiety Scale, and with the Task (Mastery) and Avoidance (Task Aversiveness) subscales of Skaalvik’s (1997) Goal Orientation Scales. Both theoretical implications and implications for interventions are discussed. © 2002 Wiley Periodicals, Inc.", "title": "" }, { "docid": "797ab17a7621f4eaa870a8eb24f8b94d", "text": "A single-photon avalanche diode (SPAD) with enhanced near-infrared (NIR) sensitivity has been developed, based on 0.18 μm CMOS technology, for use in future automotive light detection and ranging (LIDAR) systems. The newly proposed SPAD operating in Geiger mode achieves a high NIR photon detection efficiency (PDE) without compromising the fill factor (FF) and a low breakdown voltage of approximately 20.5 V. These properties are obtained by employing two custom layers that are designed to provide a full-depletion layer with a high electric field profile. Experimental evaluation of the proposed SPAD reveals an FF of 33.1% and a PDE of 19.4% at 870 nm, which is the laser wavelength of our LIDAR system. The dark count rate (DCR) measurements shows that DCR levels of the proposed SPAD have a small effect on the ranging performance, even if the worst DCR (12.7 kcps) SPAD among the test samples is used. 
Furthermore, with an eye toward vehicle installations, the DCR is measured over a wide temperature range of 25-132 °C. The ranging experiment demonstrates that target distances are successfully measured in the distance range of 50-180 cm.", "title": "" }, { "docid": "a306ea0a425a00819b81ea7f52544cfb", "text": "Early research in electronic markets seemed to suggest that E-Commerce transactions would result in decreased costs for buyers and sellers alike, and would therefore ultimately lead to the elimination of intermediaries from electronic value chains. However, a careful analysis of the structure and functions of electronic marketplaces reveals a different picture. Intermediaries provide many value-adding functions that cannot be easily substituted or ‘internalised’ through direct supplier-buyer dealings, and hence mediating parties may continue to play a significant role in the E-Commerce world. In this paper we provide an analysis of the potential roles of intermediaries in electronic markets and we articulate a number of hypotheses for the future of intermediation in such markets. Three main scenarios are discussed: the disintermediation scenario where market dynamics will favour direct buyer-seller transactions, the reintermediation scenario where traditional intermediaries will be forced to differentiate themselves and reemerge in the electronic marketplace, and the cybermediation scenario where wholly new markets for intermediaries will be created. The analysis suggests that the likelihood of each scenario dominating a given market is primarily dependent on the exact functions that intermediaries play in each case. A detailed discussion of such functions is presented in the paper, together with an analysis of likely outcomes in the form of a contingency model for intermediation in electronic markets.", "title": "" }, { "docid": "8e521a935f4cc2008146e4153a2bc3b5", "text": "The research work on supply-chain management has primarily focused on the study of materials flow and very little work has been done on the study of upstream flow of money. In this paper we study the flow of money in a supply chain from the viewpoint of a supply chain partner who receives money from the downstream partners and makes payments to the upstream partners. The objective is to schedule all payments within the constraints of the receipt of the money. A penalty is to be paid if payments are not made within the specified time. Any unused money in a given period can be invested to earn an interest. The problem is computationally complex and non-intuitive because of its dynamic nature. The incoming and outgoing monetary flows never stop and are sometimes unpredictable. For tractability purposes we first develop an integer programming model to represent the static problem, where monetary in-flows and out-flows are known before hand. We demonstrate that even the static problem is NP-Complete. First we develop a heuristic to solve this static problem. Next, the insights derived from the static problem analysis are used to develop two heuristics to solve the various level of dynamism of the problem. The performances of all these heuristics are measured and presented. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9547b04b76e653c8b4854ae193b4319f", "text": "© 2017 Western Digital Corporation or its affiliates. 
All rights reserved Emerging fast byte-addressable non-volatile memory (eNVM) technologies such as ReRAM and 3D Xpoint are projected to offer two orders of magnitude higher performance than flash. However, the existing solid-state drive (SSD) architecture optimizes for flash characteristics and is not adequate to exploit the full potential of eNVMs due to architectural and I/O interface (e.g., PCIe, SATA) limitations. To improve the storage performance and reduce the host main memory requirement for KVS, we propose a novel SSD architecture that extends the semantic of SSD with the KVS features and implements indexing capability inside SSD. It has in-storage processing engine that implements key-value operations such as get, put and delete to efficiently operate on KV datasets. The proposed system introduces a compute channel interface to offload key-value operations down to the SSD that significantly reduces the operating system, file system and other software overhead. This SSD achieves 4.96 Mops/sec get and 3.44 Mops/sec put operations and shows better scalability with increasing number of keyvalue pairs as compared to flash-based NVMe (flash-NVMe) and DRAMbased NVMe (DRAM-NVMe) devices. With decreasing DRAM size by 75%, its performance decreases gradually, achieving speedup of 3.23x as compared to DRAM-NVMe. This SSD significantly improves performance and reduces memory by exploiting the fine grain parallelism within a controller and keeping data movement local to effectively utilize eNVM bandwidth and eliminating the superfluous data movement between the host and the SSD. Abstract", "title": "" }, { "docid": "f480c08eea346215ccd01e21e9acfe81", "text": "In the era of big data, recommender system (RS) has become an effective information filtering tool that alleviates information overload for Web users. Collaborative filtering (CF), as one of the most successful recommendation techniques, has been widely studied by various research institutions and industries and has been applied in practice. CF makes recommendations for the current active user using lots of users’ historical rating information without analyzing the content of the information resource. However, in recent years, data sparsity and high dimensionality brought by big data have negatively affected the efficiency of the traditional CF-based recommendation approaches. In CF, the context information, such as time information and trust relationships among the friends, is introduced into RS to construct a training model to further improve the recommendation accuracy and user’s satisfaction, and therefore, a variety of hybrid CF-based recommendation algorithms have emerged. In this paper, we mainly review and summarize the traditional CF-based approaches and techniques used in RS and study some recent hybrid CF-based recommendation approaches and techniques, including the latest hybrid memory-based and model-based CF recommendation algorithms. Finally, we discuss the potential impact that may improve the RS and future direction. 
In this paper, we aim at introducing the recent hybrid CF-based recommendation techniques fusing social networks to solve data sparsity and high dimensionality, and provide a novel point of view to improve the performance of RS, thereby presenting a useful resource on state-of-the-art research results for future researchers.", "title": "" }, { "docid": "7eac260700c56178533ec687159ac244", "text": "A chat robot is a computer program that simulates human conversation, or chat, through artificial intelligence; here, an intelligent chat bot is used to give information or answers to any bank-related question asked by a user. It acts much like a virtual assistant, so people feel as if they are talking with a real person. It speaks the same language we do and can answer questions. In banks, at user care centres and enquiry desks, human staff are insufficient and usually take a long time to process a single request, which wastes time and also reduces the quality of user service. The primary goal of this chat bot is that users can state their queries in plain English and the chat bot resolves them with an appropriate response in return. The proposed system would help duplicate the user service experience, with the one difference that no employee is involved, and yet the queries get attended to and resolved. It can extend into daily life by providing solutions for help desks, telephone answering systems, and user care centers. This paper defines the dataset that we have prepared from FAQs of bank websites, and the architecture and methodology used for developing such a chatbot. This paper also discusses the comparison of seven ML classification algorithms used for determining the class of the input to the chat bot.", "title": "" }, { "docid": "21cbea6b83aa89b61d8dab91abcf1b99", "text": "We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines, that makes the computation time independent from the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows entire end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method on tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or is on par with state-of-the-art approaches while being significantly faster and having favorable properties like domain-independence. Our source code is available on GitHub.", "title": "" }, { "docid": "241609f10f9f5afbf6a939833b642a69", "text": "Heterogeneous or co-processor architectures are becoming an important component of high productivity computing systems (HPCS). In this work the performance of a GPU based HPCS is compared with the performance of a commercially available FPGA based HPC. Contrary to previous approaches that focussed on specific examples, a broader analysis is performed by considering processes at an architectural level. 
A set of benchmarks is employed that use different process architectures in order to exploit the benefits of each technology. These include the asynchronous pipelines common to \"map\" tasks, a partially synchronous tree common to \"reduce\" tasks and a fully synchronous, fully connected mesh. We show that the GPU is more productive than the FPGA architecture for most of the benchmarks and conclude that FPGA-based HPCS is being marginalised by GPUs.", "title": "" } ]
scidocsrr
9cd4fddb361734c215782018b8b9a529
Video games and prosocial behavior: A study of the effects of non-violent, violent and ultra-violent gameplay
[ { "docid": "b117e0e32d754f59c7d3eacdc609f63b", "text": "Mass media campaigns are widely used to expose high proportions of large populations to messages through routine uses of existing media, such as television, radio, and newspapers. Exposure to such messages is, therefore, generally passive. Such campaigns are frequently competing with factors, such as pervasive product marketing, powerful social norms, and behaviours driven by addiction or habit. In this Review we discuss the outcomes of mass media campaigns in the context of various health-risk behaviours (eg, use of tobacco, alcohol, and other drugs, heart disease risk factors, sex-related behaviours, road safety, cancer screening and prevention, child survival, and organ or blood donation). We conclude that mass media campaigns can produce positive changes or prevent negative changes in health-related behaviours across large populations. We assess what contributes to these outcomes, such as concurrent availability of required services and products, availability of community-based programmes, and policies that support behaviour change. Finally, we propose areas for improvement, such as investment in longer better-funded campaigns to achieve adequate population exposure to media messages.", "title": "" }, { "docid": "6fb168b933074250236980742e33f064", "text": "Recent research reveals that playing prosocial video games increases prosocial cognitions, positive affect, and helpful behaviors [Gentile et al., 2009; Greitemeyer and Osswald, 2009, 2010, 2011]. These results are consistent with the social-cognitive models of social behavior such as the general learning model [Buckley and Anderson, 2006]. However, no experimental studies have examined such effects on children. Previous research on violent video games suggests that short-term effects of video games are largely based on priming of existing behavioral scripts. Thus, it is unclear whether younger children will show similar effects. This research had 9-14 years olds play a prosocial, neutral, or violent video game, and assessed helpful and hurtful behaviors simultaneously through a new tangram measure. Prosocial games increased helpful and decreased hurtful behavior, whereas violent games had the opposite effects.", "title": "" } ]
[ { "docid": "162f46d8f789e39423b8cc80cae2461c", "text": "Various key-value (KV) stores are widely employed for data management to support Internet services as they offer higher efficiency, scalability, and availability than relational database systems. The log-structured merge tree (LSM-tree) based KV stores have attracted growing attention because they can eliminate random writes and maintain acceptable read performance. Recently, as the price per unit capacity of NAND flash decreases, solid state disks (SSDs) have been extensively adopted in enterprise-scale data centers to provide high I/O bandwidth and low access latency. However, it is inefficient to naively combine LSM-tree-based KV stores with SSDs, as the high parallelism enabled within the SSD cannot be fully exploited. Current LSM-tree-based KV stores are designed without assuming SSD's multi-channel architecture.\n To address this inadequacy, we propose LOCS, a system equipped with a customized SSD design, which exposes its internal flash channels to applications, to work with the LSM-tree-based KV store, specifically LevelDB in this work. We extend LevelDB to explicitly leverage the multiple channels of an SSD to exploit its abundant parallelism. In addition, we optimize scheduling and dispatching polices for concurrent I/O requests to further improve the efficiency of data access. Compared with the scenario where a stock LevelDB runs on a conventional SSD, the throughput of storage system can be improved by more than 4X after applying all proposed optimization techniques.", "title": "" }, { "docid": "f9c56d14c916bff37ab69bd949c30b04", "text": "We have examined 365 versions of Linux. For every versio n, we counted the number of instances of common (global) coupling between each of the 17 kernel modules and all the other modules in that version of Linux. We found that the num ber of instances of common coupling grows exponentially with version number. This result is significant at the 99.99% level, and no additional variables are needed to explain this increase. On the other hand, the number of lines of code in each kernel modules grows only linearly with v ersion number. We conclude that, unless Linux is restructured with a bare minimum of common c upling, the dependencies induced by common coupling will, at some future date, make Linu x exceedingly hard to maintain without inducing regression faults.", "title": "" }, { "docid": "dda120b6a1e76b0920f831325e9529da", "text": "This paper describes a practical and systematic procedure for modeling and identifying the flight dynamics of small, low-cost, fixed-wing uninhabited aerial vehicles (UAVs). The procedure is applied to the Ultra Stick 25e flight test vehicle of the University of Minnesota UAV flight control research group. The procedure hinges on a general model structure for fixed-wing UAV flight dynamics derived using first principles analysis. Wind tunnel tests and simplifying assumptions are applied to populate the model structure with an approximation of the Ultra Stick 25e flight dynamics. This baseline model is used to design informative flight experiments for the subsequent frequency domain system identification. The final identified model is validated against separately acquired time domain flight data.", "title": "" }, { "docid": "aea440261647e7e7d9880c0929c04f0d", "text": "This paper deals with parking space detection by using ultrasonic sensor. Using the multiple echo function, the accuracy of edge detection was increased. 
After inspecting effect on the multiple echo function in indoor experiment, we applied to 11 types of vehicles in real parking environment and made experiments on edge detection with various values of resolution. We can scan parking space more accurately in real parking environment. We propose the diagonal sensor to get information about the side of parking space. Our proposed method has benefit calculation and implementation is very simple.", "title": "" }, { "docid": "e74a57805aef21974b263b65f5d4b67a", "text": "Status epilepticus (SE) may cause death or severe sequelae unless seizures are terminated promptly. Various types of SE exist, and treatment should be adjusted to the specific type. Yet some basic guiding principles are broadly applicable: (1) early treatment is most effective, (2) benzodiazepines are the best first line agents, (3) electroencephalography should be used to confirm the termination of seizures in patients who are not alert and to monitor therapy in refractory cases, and (4) close attention to the appearance of systemic complications (from the SE per se or from the medications used to treat it) is essential. This article expands on these principles and summarizes current knowledge on the definition, classification, diagnosis, and treatment of SE.", "title": "" }, { "docid": "3fd8092faee792a316fb3d1d7c2b6244", "text": "The complete dynamics model of a four-Mecanum-wheeled robot considering mass eccentricity and friction uncertainty is derived using the Lagrange’s equation. Then based on the dynamics model, a nonlinear stable adaptive control law is derived using the backstepping method via Lyapunov stability theory. In order to compensate for the model uncertainty, a nonlinear damping term is included in the control law, and the parameter update law with σ-modification is considered for the uncertainty estimation. Computer simulations are conducted to illustrate the suggested control approach.", "title": "" }, { "docid": "6557347e1c0ebf014842c9ae2c77dbed", "text": "----------------------------------------------------------------------ABSTRACT-------------------------------------------------------------Steganography is derived from the Greek word steganos which literally means “Covered” and graphy means “Writing”, i.e. covered writing. Steganography refers to the science of “invisible” communication. For hiding secret information in various file formats, there exists a large variety of steganographic techniques some are more complex than others and all of them have respective strong and weak points. The Least Significant Bit (LSB) embedding technique suggests that data can be hidden in the least significant bits of the cover image and the human eye would be unable to notice the hidden image in the cover file. This technique can be used for hiding images in 24-Bit, 8-Bit, Gray scale format. This paper explains the LSB Embedding technique and Presents the evaluation for various file formats.", "title": "" }, { "docid": "f68f82e0d7f165557433580ad1e3e066", "text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. 
Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. q 1997 Academic Press", "title": "" }, { "docid": "c26919afa32708786ae7f96b88883ed9", "text": "A Privacy Enhancement Technology (PET) is an application or a mechanism which allows users to protect the privacy of their personally identifiable information. Early PETs were about enabling anonymous mailing and anonymous browsing, but lately there have been active research and development efforts in many other problem domains. This paper describes the first pattern language for developing privacy enhancement technologies. Currently, it contains 12 patterns. These privacy patterns are not limited to a specific problem domain; they can be applied to design anonymity systems for various types of online communication, online data sharing, location monitoring, voting and electronic cash management. The pattern language guides a developer when he or she is designing a PET for an existing problem, or innovating a solution for a new problem.", "title": "" }, { "docid": "b829049a8abf47f8f13595ca54eaa009", "text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.", "title": "" }, { "docid": "31abfd6e4f6d9e56bc134ffd7c7b7ffc", "text": "Online social networks like Facebook recommend new friends to users based on an explicit social network that users build by adding each other as friends. The majority of earlier work in link prediction infers new interactions between users by mainly focusing on a single network type. 
However, users also form several implicit social networks through their daily interactions like commenting on people’s posts or rating similarly the same products. Prior work primarily exploited both explicit and implicit social networks to tackle the group/item recommendation problem that recommends to users groups to join or items to buy. In this paper, we show that auxiliary information from the useritem network fruitfully combines with the friendship network to enhance friend recommendations. We transform the well-known Katz algorithm to utilize a multi-modal network and provide friend recommendations. We experimentally show that the proposed method is more accurate in recommending friends when compared with two single source path-based algorithms using both synthetic and real data sets.", "title": "" }, { "docid": "2d54a447df50a31c6731e513bfbac06b", "text": "Lumbar intervertebral disc diseases are among the main causes of lower back pain (LBP). Desiccation is a common disease resulting from various reasons and ultimately most people are affected by desiccation at some age. We propose a probabilistic model that incorporates intervertebral disc appearance and contextual information for automating the diagnosis of lumbar disc desiccation. We utilize a Gibbs distribution for processing localized lumbar intervertebral discs' appearance and contextual information. We use 55 clinical T2-weighted MRI for lumbar area and achieve over 96% accuracy on a cross validation experiment.", "title": "" }, { "docid": "4d0e3b6681c45d6cc89ddc98fb6d447a", "text": "Voxel-based modeling techniques are known for their robustness and flexibility. However, they have three major shortcomings: (1) Memory intensive, since a large number of voxels are needed to represent high-resolution models (2) Computationally expensive, since a large number of voxels need to be visited (3) Computationally expensive isosurface extraction is needed to visualize the results. We describe techniques which alleviate these by taking advantage of self-similarity in the data making voxel-techniques practical and attractive. We describe algorithms for MEMS process emulation, isosurface extraction and visualization which utilize these techniques.", "title": "" }, { "docid": "c00470d69400066d11374539052f4a86", "text": "When individuals learn facts (e.g., foreign language vocabulary) over multiple study sessions, the temporal spacing of study has a significant impact on memory retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield lower cued-recall accuracy than intermediate intervals. Appropriate spacing of study can double retention on educationally relevant time scales. We introduce a Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM’s prediction is based on empirical data characterizing forgetting of the material following a single study session. MCM is a synthesis of two existing memory models (Staddon, Chelaru, & Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated and incompatible, but we show they share a core feature that allows them to be integrated. MCM can determine study schedules that maximize the durability of learning, and has implications for education and training. MCM can be cast either as a neural network with inputs that fluctuate over time, or as a cascade of leaky integrators. 
MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account for human declarative memory.", "title": "" }, { "docid": "6d262139067d030c3ebb1169e93c6422", "text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.", "title": "" }, { "docid": "cdca4a6cb35cbc674c06465c742dfe50", "text": "The generation of new lymphatic vessels through lymphangiogenesis and the remodelling of existing lymphatics are thought to be important steps in cancer metastasis. The past decade has been exciting in terms of research into the molecular and cellular biology of lymphatic vessels in cancer, and it has been shown that the molecular control of tumour lymphangiogenesis has similarities to that of tumour angiogenesis. Nevertheless, there are significant mechanistic differences between these biological processes. We are now developing a greater understanding of the specific roles of distinct lymphatic vessel subtypes in cancer, and this provides opportunities to improve diagnostic and therapeutic approaches that aim to restrict the progression of cancer.", "title": "" }, { "docid": "6d9393c95ca9c6534c98c0d0a4451fbc", "text": "The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the Challenge Set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. 
Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.", "title": "" }, { "docid": "db4c2238363a173ba1c1e28da809d567", "text": "In most applications of Ground Penetrating Radar (GPR), it is very important to combine the radar with an accurate positioning system. This allows solving errors in the localisation of buried objects, which may be generated by measurement conditions such as the soil slope, in the case of a ground-coupled GPR, and the aerial vehicle altitude, in the case of a GPR mounted on a drone or helicopter. This paper presents the implementation of a low-cost system for positioning, tracking and trimming of GPR data. The proposed system integrates Global Positioning System (GPS) data with those of an Inertial Measurement Unit (IMU). So far, the electronic board including GPS and IMU was designed, developed and tested in the laboratory. As a next step, GPR results will be collected in outdoor scenarios of practical interest and the accuracy of data measured by using our positioning system will be compared to the accuracy of data measured without using it.", "title": "" }, { "docid": "e56abb473e262fec3c0260202564be0a", "text": "This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: “An android is a robot” vs. “Snowcap is unmistakable”. Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.", "title": "" }, { "docid": "985df151ccbc9bf47b05cffde47a6342", "text": "This paper establishes the criteria to ensure stable operation of two-stage, bidirectional, isolated AC-DC converters. The bi-directional converter is analyzed in the context of a building block module (BBM) that enables a fully modular architecture for universal power flow conversion applications (AC-DC, DC-AC and DC-DC). The BBM consists of independently controlled AC-DC and isolated DC-DC converters that are cascaded for bidirectional power flow applications. The cascaded converters have different control objectives in different directions of power flow. This paper discusses methods to obtain the appropriate input and output impedances that determine stability in the context of bi-directional AC-DC power conversion. Design procedures to ensure stable operation with minimal interaction between the cascaded stages are presented. The analysis and design methods are validated through extensive simulation and hardware results.", "title": "" } ]
scidocsrr
54d666b4b04de6cb9f79d5cd8fbffff5
"What happens if..." Learning to Predict the Effect of Forces in Images
[ { "docid": "503ccd79172e5b8b3cc3a26cf0d1b485", "text": "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360◦ full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image-based object detector, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.", "title": "" } ]
[ { "docid": "9e0de5990eb093698628b8f625a5be6b", "text": "A team of RAND Corporation researchers projected in 2005 that rapid adoption of health information technology (IT) could save the United States more than $81 billion annually. Seven years later the empirical data on the technology's impact on health care efficiency and safety are mixed, and annual health care expenditures in the United States have grown by $800 billion. In our view, the disappointing performance of health IT to date can be largely attributed to several factors: sluggish adoption of health IT systems, coupled with the choice of systems that are neither interoperable nor easy to use; and the failure of health care providers and institutions to reengineer care processes to reap the full benefits of health IT. We believe that the original promise of health IT can be met if the systems are redesigned to address these flaws by creating more-standardized systems that are easier to use, are truly interoperable, and afford patients more access to and control over their health data. Providers must do their part by reengineering care processes to take full advantage of efficiencies offered by health IT, in the context of redesigned payment models that favor value over volume.", "title": "" }, { "docid": "08dbe11a42f7018966c9ca2db5c1fa98", "text": "Person re-identification has important applications in video surveillance. It is particularly challenging because observed pedestrians undergo significant variations across camera views, and there are a large number of pedestrians to be distinguished given small pedestrian images from surveillance videos. This chapter discusses different approaches of improving the key components of a person reidentification system, including feature design, feature learning and metric learning, as well as their strength and weakness. It provides an overview of various person reidentification systems and their evaluation on benchmark datasets. Mutliple benchmark datasets for person re-identification are summarized and discussed. The performance of some state-of-the-art person identification approaches on benchmark datasets is compared and analyzed. It also discusses a few future research directions on improving benchmark datasets, evaluation methodology and system desgin.", "title": "" }, { "docid": "b0b024072e7cde0b404a9be5862ecdd1", "text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. 
Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.", "title": "" }, { "docid": "0f58d491e74620f43df12ba0ec19cda8", "text": "Latent Dirichlet allocation (LDA) (Blei, Ng, Jordan 2003) is a fully generative statistical language model on the content and topics of a corpus of documents. In this paper we apply a modification of LDA, the novel multi-corpus LDA technique for web spam classification. We create a bag-of-words document for every Web site and run LDA both on the corpus of sites labeled as spam and as non-spam. In this way collections of spam and non-spam topics are created in the training phase. In the test phase we take the union of these collections, and an unseen site is deemed spam if its total spam topic probability is above a threshold. As far as we know, this is the first web retrieval application of LDA. We test this method on the UK2007-WEBSPAM corpus, and reach a relative improvement of 11% in F-measure by a logistic regression based combination with strong link and content baseline classifiers.", "title": "" }, { "docid": "570d08da0139a6910423e4a41e76d8b1", "text": "One of the most important application areas of signal processing (SP) is, without a doubt, the software-defined radio (SDR) field [1]-[3]. Although their introduction dates back to the 1980s, SDRs are now becoming the dominant technology in radio communications, thanks to the dramatic development of SP-optimized programmable hardware, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs). Today, the computational throughput of these devices is such that sophisticated SP tasks can be efficiently handled, so that both the baseband and intermediate frequency (IF) sections of current communication systems are usually implemented, according to the SDR paradigm, by the FPGA's reconfigurable circuitry (e.g., [4]-[6]), or by the software running on DSPs.", "title": "" }, { "docid": "6970acb72318375a5af6aa03ad634f7e", "text": "BACKGROUND\nMyopia is an important public health problem because it is common and is associated with increased risk for chorioretinal degeneration, retinal detachment, and other vision- threatening abnormalities. In animals, ocular elongation and myopia progression can be lessened with atropine treatment. This study provides information about progression of myopia and atropine therapy for myopia in humans.\n\n\nMETHODS\nA total of 214 residents of Olmsted County, Minnesota (118 girls and 96 boys, median age, 11 years; range 6 to 15 years) received atropine for myopia from 1967 through 1974. Control subjects were matched by age, sex, refractive error, and date of baseline examination to 194 of those receiving atropine. Duration of treatment with atropine ranged from 18 weeks to 11.5 years (median 3.5 years).\n\n\nRESULTS\nMedian followup from initial to last refraction in the atropine group (11.7 years) was similar to that in the control group (12.4 years). Photophobia and blurred vision were frequently reported, but no serious adverse effects were associated with atropine therapy. 
Mean myopia progression during atropine treatment adjusted for age and refractive error (0.05 diopters per year) was significantly less than that among control subjects (0.36 diopters per year)(P<.001). Final refractions standardized to the age of 20 years showed a greater mean level of myopia in the control group (3.78 diopters) than in the atropine group (2.79 diopters) (P<.001).\n\n\nCONCLUSIONS\nThe data support the view that atropine therapy is associated with decreased progression of myopia and that beneficial effects remain after treatment has been discontinued.", "title": "" }, { "docid": "b8c5aa7628cf52fac71b31bb77ccfac0", "text": "Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of onand off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.", "title": "" }, { "docid": "3fe30c4d898ec34b83a36efbba8019ff", "text": "Find the secret to improve the quality of life by reading this introduction to pattern recognition statistical structural neural and fuzzy logic approaches. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how well-known the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life.", "title": "" }, { "docid": "5868ec5c17bf7349166ccd0600cc6b07", "text": "Secure devices are often subject to attacks and behavioural analysis in order to inject faults on them and/or extract otherwise secret information. Glitch attacks, sudden changes on the power supply rails, are a common technique used to inject faults on electronic devices. Detectors are designed to catch these attacks. As the detectors become more efficient, new glitches that are harder to detect arise. Common glitch detection approaches, such as directly monitoring the power rails, can potentially find it hard to detect fast glitches, as these become harder to differentiate from noise. 
This paper proposes a design which, instead of monitoring the power rails, monitors the effect of a glitch on a sensitive circuit, hence reducing the risk of detecting noise as glitches.", "title": "" }, { "docid": "644d262f1d2f64805392c15506764558", "text": "In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision field about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed significant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic.", "title": "" }, { "docid": "7aded3885476c7d37228855916255d79", "text": "The web is a rich resource of structured data. There has been an increasing interest in using web structured data for many applications such as data integration, web search and question answering. In this paper, we present DEXTER, a system to find product sites on the web, and detect and extract product specifications from them. Since product specifications exist in multiple product sites, our focused crawler relies on search queries and backlinks to discover product sites. To perform the detection, and handle the high diversity of specifications in terms of content, size and format, our system uses supervised learning to classify HTML fragments (e.g., tables and lists) present in web pages as specifications or not. To perform large-scale extraction of the attribute-value pairs from the HTML fragments identified by the specification detector, DEXTER adopts two lightweight strategies: a domain-independent and unsupervised wrapper method, which relies on the observation that these HTML fragments have very similar structure; and a combination of this strategy with a previous approach, which infers extraction patterns by annotations generated by automatic but noisy annotators. The results show that our crawler strategy to locate product specification pages is effective: (1) it discovered 1.46M product specification pages from 3,005 sites and 9 different categories; (2) the specification detector obtains high values of F-measure (close to 0.9) over a heterogeneous set of product specifications; and (3) our efficient wrapper methods for attribute-value extraction get very high values of precision (0.92) and recall (0.95) and obtain better results than a state-of-the-art, supervised rule-based wrapper.", "title": "" }, { "docid": "b6ea053b02ebdb3519effdd55a4acf16", "text": "The naive Bayes classifier is an efficient classification model that is easy to learn and has a high accuracy in many domains. However, it has two main drawbacks: (i) its classification accuracy decreases when the attributes are not independent, and (ii) it can not deal with nonparametric continuous attributes. In this work we propose a method that deals with both problems, and learns an optimal naive Bayes classifier. 
The method includes two phases, discretization and structural improvement, which are repeated alternately until the classification accuracy can not be improved. Discretization is based on the minimum description length principle. To deal with dependent and irrelevant attributes, we apply a structural improvement method that eliminates and/or joins attributes, based on mutual and conditional information measures. The method has been tested in two different domains with good results", "title": "" }, { "docid": "eb4f7427eb73ac0a0486e8ecb2172b52", "text": "In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As it is customary in iterative optimization, in each iteration the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed form solution (per iteration) which is of low computational complexity, the latter property being particularly strong in our inverse version. The proposed schemes are tested against the forward additive Lucas-Kanade and the simultaneous inverse compositional algorithm through simulations. Under noisy conditions and photometric distortions our forward version achieves more accurate alignments and exhibits faster convergence whereas our inverse version has similar performance as the simultaneous inverse compositional algorithm but at a lower computational complexity.", "title": "" }, { "docid": "3d4a112bd166027a526e57f4969b3bd6", "text": "Two acid phosphatases isolated from culturedIpomoea (moring glory) cells were separated by column chromatography on DEAE-cellulose. The two acid phosphatases have different pH optima (pH 4.8–5.0 and 6.0) and do not require the presence of divalent ions. The enzymes possess high activity toward pyrophosphate,p-nitrophenylphosphate, nucleoside di- and triphosphates, and much less activity toward nucleoside monophosphates and sugar esters. The two phosphatases differ from each other in Michaelis constants, in the degree of inhibition by arsenate, fluoride and phosphate and have quantitative differences of substrate specificity. In addition, they also differ in their response to various ions.", "title": "" }, { "docid": "c7e3fc9562a02818bba80d250241511d", "text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. 
We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.", "title": "" }, { "docid": "db93b1e7b56f0d37c69fce9094b72bc3", "text": "The Man-In-The-Middle (MITM) attack is one of the most well known attacks in computer security, representing one of the biggest concerns for security professionals. MITM targets the actual data that flows between endpoints, and the confidentiality and integrity of the data itself. In this paper, we extensively review the literature on MITM to analyse and categorize the scope of MITM attacks, considering both a reference model, such as the open systems interconnection (OSI) model, as well as two specific widely used network technologies, i.e., GSM and UMTS. In particular, we classify MITM attacks based on several parameters, like location of an attacker in the network, nature of a communication channel, and impersonation techniques. Based on an impersonation techniques classification, we then provide execution steps for each MITM class. We survey existing countermeasures and discuss the comparison among them. Finally, based on our analysis, we propose a categorisation of MITM prevention mechanisms, and we identify some possible directions for future research.", "title": "" }, { "docid": "cab97e23b7aa291709ecf18e29f580cf", "text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.", "title": "" }, { "docid": "e4a1f577cb232f6f76fba149a69db58f", "text": "During software development, the activities of requirements analysis, functional specification, and architectural design all require a team of developers to converge on a common vision of what they are developing. There have been remarkably few studies of conceptual design during real projects. In this paper, we describe a detailed field study of a large industrial software project. We observed the development team's conceptual design activities for three months with follow-up observations and discussions over the following eight months. In this paper, we emphasize the organization of the project and how patterns of collaboration affected the team's convergence on a common vision. 
Three observations stand out: First, convergence on a common vision was not only painfully slow but was punctuated by several reorientations of direction; second, the design process seemed to be inherently forgetful, involving repeated resurfacing of previously discussed issues; finally, a conflict of values persisted between team members responsible for system development and those responsible for overseeing the development process. These findings have clear implications for collaborative support tools and process interventions.", "title": "" }, { "docid": "5b131fbca259f07bd1d84d4f61761903", "text": "We aimed to identify a blood flow restriction (BFR) endurance exercise protocol that would both maximize cardiopulmonary and metabolic strain, and minimize the perception of effort. Twelve healthy males (23 ± 2 years, 75 ± 7 kg) performed five different exercise protocols in randomized order: HI, high-intensity exercise starting at 105% of the incremental peak power (P peak); I-BFR30, intermittent BFR at 30% P peak; C-BFR30, continuous BFR at 30% P peak; CON30, control exercise without BFR at 30% P peak; I-BFR0, intermittent BFR during unloaded exercise. Cardiopulmonary, gastrocnemius oxygenation (StO2), capillary lactate ([La]), and perceived exertion (RPE) were measured. V̇O2, ventilation (V̇ E), heart rate (HR), [La] and RPE were greater in HI than all other protocols. However, muscle StO2 was not different between HI (set1—57.8 ± 5.8; set2—58.1 ± 7.2%) and I-BRF30 (set1—59.4 ± 4.1; set2—60.5 ± 6.6%, p < 0.05). While physiologic responses were mostly similar between I-BFR30 and C-BFR30, [La] was greater in I-BFR30 (4.2 ± 1.1 vs. 2.6 ± 1.1 mmol L−1, p = 0.014) and RPE was less (5.6 ± 2.1 and 7.4 ± 2.6; p = 0.014). I-BFR30 showed similar reduced muscle StO2 compared with HI, and increased blood lactate compared to C-BFR30 exercise. Therefore, this study demonstrate that endurance cycling with intermittent BFR promotes muscle deoxygenation and metabolic strain, which may translate into increased endurance training adaptations while minimizing power output and RPE.", "title": "" } ]
scidocsrr
f6c5620afa78588d3bfef71f6690a2fc
Automatic Video Summarization by Graph Modeling
[ { "docid": "e5261ee5ea2df8bae7cc82cb4841dea0", "text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "title": "" }, { "docid": "aea474fcacb8af1d820413b5f842056f", "text": ".4 video sequence can be reprmented as a trajectory curve in a high dmensiond feature space. This video curve can be an~yzed by took Mar to those devdoped for planar cnrv=. h partidar, the classic biiary curve sphtting algorithm has been fonnd to be a nseti tool for video analysis. With a spEtting condition that checks the dimension&@ of the curve szgrnent being spht, the video curve can be recursivdy sirnpMed and repr~ented as a tree stmcture, and the framm that are fomtd to be junctions betieen curve segments at Merent, lev& of the tree can be used as ke-fiarn~s to summarize the tideo sequences at Merent levds of det ti. The-e keyframes can be combmed in various spatial and tempord configurations for browsing purposes. We describe a simple video player that displays the ke.fiarn~ seqnentifly and lets the user change the summarization level on the fly tith an additiond shder. 1.1 Sgrrlficance of the Problem Recent advances in digitd technology have promoted video as a vdnable information resource. I$le can now XCaS Se lected &ps from archives of thousands of hours of video footage host instantly. This new resource is e~citing, yet the sheer volume of data makes any retried task o~emhehning and its dcient. nsage impowible. Brow= ing tools that wodd flow the user to qnitiy get an idea of the content of video footage are SW important ti~~ ing components in these video database syst-Fortunately, the devdopment of browsing took is a very active area of research [13, 16, 17], and pow~ solutions are in the horizon. Browsers use as balding blocks subsets of fiarnes c~ed ke.frames, sdected because they smnmarize the video content better than their neighbors. Obviously, sdecting one keytiarne per shot does not adeqnatdy surnPermisslonlo rna~edigitalorhardcopi= of aftorpartof this v:ork for personalor classroomuse is granted v;IIhouIfee providedlhat copies are nol made or distributed for profitor commercial advantage, andthat copiesbear!hrsnoticeandihe full citationon ihe first page.To copyoxhem,se,IOrepublishtopostonservers or lo redistribute10 lists, requiresprior specific pzrrnisston znt’or a fe~ AChl hlultimedia’9S. BnsIol.UK @ 199sAchi 1-5s11>036s!9s/000s S.oo 211 marize the complex information content of long shots in which camera pan and zoom as we~ as object motion pr~ gr=sivdy unvd entirely new situations. Shots shotid be sampled by a higher or lower density of keyfrarnes according to their activity level. 
Sampbg techniques that would attempt to detect sigficant information changes simply by looking at pairs of frames or even several consecutive frames are bound to lack robustness in presence of noise, such as jitter occurring during camera motion or sudden ~urnination changes due to fluorescent Eght ticker, glare and photographic flash. kterestin~y, methods devdoped to detect perceptually signi$mnt points and &continuities on noisy 2D curves have succes~y addressed this type of problem, and can be extended to the mdtidimensiond curves that represent video sequences. h this paper, we describe an algorithm that can de compose a curve origin~y defined in a high dmensiond space into curve segments of low dimension. In partictiar, a video sequence can be mapped to a high dimensional polygonal trajectory curve by mapping each frame to a time dependent feature usctor, and representing these feature vectors as points. We can apply this algorithm to segment the curve of the video sequence into low ditnensiond curve segments or even fine segments. Th=e segments correspond to video footage where activity is low and frames are redundant. The idea is to detect the constituent segments of the video curoe rather than attempt to lomte the jtmctions between these segments directly. In such a dud aPProach, the curve is decomposed into segments \\vhich exkibit hearity or low dirnensiontity. Curvature discontinuiti~ are then assigned to the junctions between these segments. Detecting generrd stmcture in the video curves to derive frame locations of features such as cuts and shot transitions, rather than attempting to locate the features thernsdv~ by Iocrd analysis of frame changes, ensures that the detected positions of these features are more stable in the presence of noise which is effectively faltered out. h addition, the proposed technique butids a binary tree representation of a video sequence where branches cent tin frarn= corresponding to more dettied representations of the sequence. The user can view the video sequence at coarse or fine lev& of detds, zooming in by displaying keyfrantes corresponding to the leaves of the tree, or zooming out by displaying keyframes near the root of the tree. ●", "title": "" } ]
[ { "docid": "298d3280deb3bb326314a7324d135911", "text": "BACKGROUND\nUterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.\n\n\nCASE\nA 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass originated from posterior uterine wall resected using hysteroscopy. Final pathology report revealed a submucous uterine leiomyoma.\n\n\nSUMMARY AND CONCLUSION\nSubmucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy.", "title": "" }, { "docid": "8dc9f29e305d66590948896de2e0a672", "text": "Affective events are events that impact people in positive or negative ways. When people discuss an event, people understand not only the affective polarity but also the reason for the event being positive or negative. In this paper, we aim to categorize affective events based on the reasons why events are affective. We propose that an event is affective to people often because the event describes or indicates the satisfaction or violation of certain kind of human needs. For example, the event “I broke my leg” affects people negatively because the need to be physically healthy is violated. “I play computer games” has a positive affect on people because the need to have fun is probably satisfied. To categorize affective events in narrative human language, we define seven common human need categories and introduce a new data set of randomly sampled affective events with manual human need annotations. In addition, we explored two types of methods: a LIWC lexicon based method and supervised classifiers to automatically categorize affective event expressions with respect to human needs. Experiments show that these methods achieved moderate performance on this task.", "title": "" }, { "docid": "77d0786af4c5eee510a64790af497e25", "text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. 
Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.", "title": "" }, { "docid": "3cceb3792d55bd14adb579bb9e3932ec", "text": "BACKGROUND\nTrastuzumab, a monoclonal antibody against human epidermal growth factor receptor 2 (HER2; also known as ERBB2), was investigated in combination with chemotherapy for first-line treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nMETHODS\nToGA (Trastuzumab for Gastric Cancer) was an open-label, international, phase 3, randomised controlled trial undertaken in 122 centres in 24 countries. Patients with gastric or gastro-oesophageal junction cancer were eligible for inclusion if their tumours showed overexpression of HER2 protein by immunohistochemistry or gene amplification by fluorescence in-situ hybridisation. Participants were randomly assigned in a 1:1 ratio to receive a chemotherapy regimen consisting of capecitabine plus cisplatin or fluorouracil plus cisplatin given every 3 weeks for six cycles or chemotherapy in combination with intravenous trastuzumab. Allocation was by block randomisation stratified by Eastern Cooperative Oncology Group performance status, chemotherapy regimen, extent of disease, primary cancer site, and measurability of disease, implemented with a central interactive voice recognition system. The primary endpoint was overall survival in all randomised patients who received study medication at least once. This trial is registered with ClinicalTrials.gov, number NCT01041404.\n\n\nFINDINGS\n594 patients were randomly assigned to study treatment (trastuzumab plus chemotherapy, n=298; chemotherapy alone, n=296), of whom 584 were included in the primary analysis (n=294; n=290). Median follow-up was 18.6 months (IQR 11-25) in the trastuzumab plus chemotherapy group and 17.1 months (9-25) in the chemotherapy alone group. Median overall survival was 13.8 months (95% CI 12-16) in those assigned to trastuzumab plus chemotherapy compared with 11.1 months (10-13) in those assigned to chemotherapy alone (hazard ratio 0.74; 95% CI 0.60-0.91; p=0.0046). The most common adverse events in both groups were nausea (trastuzumab plus chemotherapy, 197 [67%] vs chemotherapy alone, 184 [63%]), vomiting (147 [50%] vs 134 [46%]), and neutropenia (157 [53%] vs 165 [57%]). Rates of overall grade 3 or 4 adverse events (201 [68%] vs 198 [68%]) and cardiac adverse events (17 [6%] vs 18 [6%]) did not differ between groups.\n\n\nINTERPRETATION\nTrastuzumab in combination with chemotherapy can be considered as a new standard option for patients with HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nFUNDING\nF Hoffmann-La Roche.", "title": "" }, { "docid": "59932c6e6b406a41d814e651d32da9b2", "text": "The purpose of this study was to examine the effects of virtual reality simulation (VRS) on learning outcomes and retention of disaster training. The study used a longitudinal experimental design using two groups and repeated measures. A convenience sample of associate degree nursing students enrolled in a disaster course was randomized into two groups; both groups completed web-based modules; the treatment group also completed a virtually simulated disaster experience. Learning was measured using a 20-question multiple-choice knowledge assessment pre/post and at 2 months following training. Results were analyzed using the generalized linear model. 
Independent and paired t tests were used to examine the between- and within-participant differences. The main effect of the virtual simulation was strongly significant (p < .0001). The VRS effect demonstrated stability over time. In this preliminary examination, VRS is an instructional method that reinforces learning and improves learning retention.", "title": "" }, { "docid": "a6872c1cab2577547c9a7643a6acd03e", "text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.", "title": "" }, { "docid": "7dead097d1055a713bb56f9369eb1f98", "text": "Web applications vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of web application vulnerabilities in last decade is growing constantly. Improper input validation and sanitization are reasons for most of them. The most important of these vulnerabilities based on improper input validation and sanitization is SQL injection (SQLI) vulnerability. The primary focus of our research was to develop a reliable black-box vulnerability scanner for detecting SQLI vulnerability - SQLIVDT (SQL Injection Vulnerability Detection Tool). The black-box approach is based on simulation of SQLI attacks against web applications. Thus, the scope of analysis is limited to HTTP responses and HTML pages received from the application server. In order to achieve efficient SQLI vulnerability detection, an efficient algorithm for HTML page similarity detection is used. The proposed tool showed promising results as compared to six well-known web application scanners.", "title": "" }, { "docid": "edd9795ce024f8fed8057992cf3f4279", "text": "INTRODUCTION\nIdiopathic talipes equinovarus is the most common congenital defect characterized by the presence of a congenital dysplasia of all musculoskeletal tissues distal to the knee. For many years, the treatment has been based on extensive surgery after manipulation and cast trial. Owing to poor surgical results, Ponseti developed a new treatment protocol consisting of manipulation with cast and an Achilles tenotomy. The new technique requires 4 years of orthotic management to guarantee good results. The most recent studies have emphasized how difficult it is to comply with the orthotic posttreatment protocol. Poor compliance has been attributed to parent's low educational and low income level. 
The purpose of the study is to evaluate if poor compliance is due to the complexity of the orthotic use or if it is related to family education, cultural, or income factors.\n\n\nMETHOD\nFifty-three patients with 73 idiopathic talipes equinovarus feet were treated with the Ponseti technique and followed for 48 months after completing the cast treatment. There was a male predominance (72%). The mean age at presentation was 1 month (range: 1 wk to 7 mo). Twenty patients (38%) had bilateral involvement, 17 patients (32%) had right side affected, and 16 patients (30%) had the left side involved. The mean time of manipulation and casting treatment was 6 weeks (range: 4 to 10 wk). Thirty-eight patients (72%) required Achilles tenotomy as stipulated by the protocol. Recurrence was considered if there was a deterioration of the Dimeglio severity score requiring remanipulation and casting.\n\n\nRESULTS\nTwenty-four out of 73 feet treated by our service showed the evidence of recurrence (33%). Sex, age at presentation, cast treatment duration, unilateral or bilateral, severity score, the necessity of Achilles tenotomy, family educational, or income level did not reveal any significant correlation with the recurrence risk. Noncompliance with the orthotic use showed a significant correlation with the recurrence rate. The noncompliance rate did not show any correlation with the patient demographic data or parent's education level, insurance, or cultural factors as proposed previously.\n\n\nCONCLUSION\nThe use of the brace is extremely relevant with the Ponseti technique outcome (recurrence) in the treatment of idiopathic talipes equinovarus. Noncompliance is not related to family education, cultural, or income level. The Ponseti postcasting orthotic protocol needs to be reevaluated to a less demanding option to improve outcome and brace compliance.", "title": "" }, { "docid": "7db00719532ab0d9b408d692171d908f", "text": "The real-time monitoring of human movement can provide valuable information regarding an individual's degree of functional ability and general level of activity. This paper presents the implementation of a real-time classification system for the types of human movement associated with the data acquired from a single, waist-mounted triaxial accelerometer unit. The major advance proposed by the system is to perform the vast majority of signal processing onboard the wearable unit using embedded intelligence. In this way, the system distinguishes between periods of activity and rest, recognizes the postural orientation of the wearer, detects events such as walking and falls, and provides an estimation of metabolic energy expenditure. A laboratory-based trial involving six subjects was undertaken, with results indicating an overall accuracy of 90.8% across a series of 12 tasks (283 tests) involving a variety of movements related to normal daily activities. Distinction between activity and rest was performed without error; recognition of postural orientation was carried out with 94.1% accuracy, classification of walking was achieved with less certainty (83.3% accuracy), and detection of possible falls was made with 95.6% accuracy. Results demonstrate the feasibility of implementing an accelerometry-based, real-time movement classifier using embedded intelligence", "title": "" }, { "docid": "a2842352924cbd1deff52976425a0bd6", "text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. 
In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.", "title": "" }, { "docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d", "text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. 
The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.", "title": "" },
    { "docid": "8ed2fa021e5b812de90795251b5c2b64", "text": "A new implicit surface fitting method for surface reconstruction from scattered point data is proposed. The method combines an adaptive partition of unity approximation with least-squares RBF fitting and is capable of generating a high quality surface reconstruction. Given a set of points scattered over a smooth surface, first a sparse set of overlapped local approximations is constructed. The partition of unity generated from these local approximants already gives a faithful surface reconstruction. The final reconstruction is obtained by adding compactly supported RBFs. The main feature of the developed approach consists of using various regularization schemes which lead to economical, yet accurate surface reconstruction.", "title": "" },
    { "docid": "99fdab0b77428f98e9486d1cc7430757", "text": "Self-Organizing Maps (SOMs) are a well-known unsupervised neural network approach used for clustering, and they are very efficient in handling large, high-dimensional datasets. Because SOMs can be applied to large, complex data, they can be implemented to detect credit card fraud. Online banking and e-commerce have experienced rapid growth over the past years and will continue to grow, so it is necessary to keep an eye on fraudsters and find ways to reduce the rate of fraud. This paper focuses on real-time credit card fraud detection and presents a new and innovative approach to detecting fraud with the help of a SOM. Keywords— Self-Organizing Map, Unsupervised Learning, Transaction Introduction The rapid growth in credit card issuers, online merchants and card users has made them very conscious of online fraud. Card users want to make safe transactions while purchasing their goods; banks, on the other hand, want to differentiate legitimate from fraudulent users. Merchants, who are affected the most because they hold no evidence such as a digital signature, want to sell their goods only to legitimate users and need a secure system that protects them from heavy losses. Our self-organizing map approach can work on large, complex datasets and can cluster even unfamiliar data. It is an unsupervised neural network that works even in the absence of an external teacher and provides fruitful results in detecting credit card fraud. It is interesting to note that credit card fraud affects the card owner the least and the merchant the most. The existing legislation, card holder protection policies and insurance schemes affect the merchant the most and the customer the least. The card-issuing bank also has to pay administrative and infrastructure costs. Studies show that the average time lag between the fraudulent transaction date and the charge-back notification can be as high as 72 days, giving the fraudster sufficient time to cause severe damage. In this paper, you will first see a brief survey of different approaches to credit card fraud detection systems. In Section 2 we explain the design and architecture of the SOM used to detect credit card fraud. Section 3 presents the results. Finally, conclusions are presented in Section 4. 
A Survey of Credit Card Fraud Detection Fraud detection systems work by trying to identify anomalies in an environment [1]. In the early stages, research focused on rule-based expert systems, whose rules were constructed from the input of many fraud experts within the bank [2]. Their output, however, proved poor, because a rule-based expert system relies entirely on prior information about the data set, which is generally not easily available in the case of credit card fraud. Since then, Artificial Neural Networks (ANNs) have been widely used and have solved very complex problems efficiently [3]. Some believe that unsupervised methods are best for detecting credit card fraud because they work well even in the absence of an external teacher, whereas supervised methods are based on prior knowledge of the data and always need an external teacher. Unsupervised methods have been used [4] [5] to detect anomalies such as fraud. They do not cluster the data but provide a ranking over the list of all segments, indicating how anomalous a segment is compared to the whole data set or to other segments [6]. Dempster-Shafer theory [1] is also able to detect anomalous data; an experiment used D-S theory to detect infected e-mails, which is relevant because in the modern era new card information is sent by banks through e-mails. Various other approaches have also been used to detect credit card fraud, one of which is the ID3 pre-pruning method, in which a decision tree is formed to detect anomalous data [7]. Artificial Neural Networks are another efficient and intelligent method for detecting credit card fraud; a compound method based on rule-based systems and ANNs was used to detect credit card fraud by Brause et al. [8]. Our work is based on a self-organizing map, an unsupervised approach to detecting credit card fraud. We focus on detecting anomalous data by forming clusters so that legitimate and fraudulent transactions can be differentiated. The collection of data and its pre-processing are also explained with an example in fraud detection. SYSTEM DESIGN ARCHITECTURE The SOM works well in detecting credit card fraud, and its interesting properties have already been discussed. Here we provide a detailed prototype and the working of the SOM in fraud detection. Our Approach to Detect Credit Card Fraud Using SOM Our approach to real-time credit card fraud detection is modelled as a prototype. It is a multilayered approach: 1. Initial selection of the data set. 2. Conversion of data from a symbolic to a numerical data set. 3. Implementation of the SOM. 4. A layer of further review and decision making. This multilayered approach works well in the detection of credit card fraud. Because the approach is based on a SOM, it finally clusters the data into fraudulent and genuine sets; by further review, the sets can be analyzed and a proper decision taken based on those results. The algorithm implemented to detect credit card fraud using a self-organizing map is represented in Figure 1: 1. Initially choose all neurons (weight vectors wi) randomly. 2. For each input vector Ii { 2.1) Convert all symbolic input to numerical input by applying mean and standard deviation formulas. 2.2) Perform the initial authentication process, such as verification of PIN, address, expiry date, etc. } 3. 
Choose the learning rate parameter α randomly, e.g. 0.5. 4. Initially update all neurons for each input vector Ii. 5. Apply the unsupervised approach to separate the transactions into fraudulent and non-fraudulent clusters. 5.1) Iterate until a specific cluster is formed for the input vector. 6. By applying the SOM we can divide the transactions into a fraudulent vector (Fk) and a genuine vector (Gk). 7. Perform a manual review decision. 8. Obtain the optimized result. Figure 1: Algorithm to detect Credit Card Fraud Initial Selection of Data Set Input vectors are generally high-dimensional, real-world quantities which will be fed to a neuron matrix. These quantities are generally divided as follows [9]: Figure 2: Division of Transactions to form an Input Matrix Account-related quantities include the account number, currency of the account, account opening date, last date of credit or debit, available balance, etc. Customer-related quantities include the customer id and customer type (e.g. high profile, low profile). Transaction-related quantities include the transaction number, location, currency and its timestamp. Conversion of Symbolic Data into Numeric In credit card fraud detection, all of the banking transaction data is symbolic (for example location, name, customer id, etc.), so there is a need to convert the symbolic data into numeric form. Conversion of this data uses a normal distribution mechanism on the basis of frequency. The data is normalized using Z = (Ni − M) / S, where Ni is the frequency of occurrence of a particular entity, M is the mean and S is the standard deviation. After this procedure we arrive at normalized values [9]. Implementation of SOM After obtaining all the normalized values, we build an input vector matrix. A weight vector is then selected randomly; this is generally termed the neuron matrix, and its dimension is the same as that of the input vector matrix. A learning parameter α is also chosen at random; its value is a small positive number that can be adjusted during the process. The commonly used similarity measure is the Euclidean distance given by equation (1): jX(p) = minj ||X − Wj(p)|| = sqrt(Σi [xi − wij(p)]²), (1) where j = 1, 2, ..., m, W is the neuron (weight) matrix and X is the input vector. The main output of the SOM is the set of patterns and clusters it produces as output vectors. In credit card fraud detection, the clusters take the form of a fraudulent set and a genuine set, represented as Fk and Gk respectively. Review and Decision Making The clustering of the input data into fraudulent and genuine sets shows the categories of transactions performed more frequently as well as rarely by each customer. Since the SOM uncovers relationships as well as hidden patterns, we get more accurate results. If the extent of suspicious activity exceeds a certain threshold value, that transaction can be sent for review, which reduces overall processing time and complexity. Results The number of transactions taken in Test 1, Test 2, Test 3 and Test 4 are 500, 1000, 1500 and 2000, respectively. Compared to the ID3 algorithm, our approach gives much more efficient results, as shown in Figure 3. Conclusion The results show that the SOM performs well in detecting credit card fraud. All parameters are verified and well represented in the plots. 
The uniqueness of our approach lies in using the normalization and clustering mechanism of SOM of detecting credit card fraud. This helps in detecting hidden patterns of the transactions which cannot be identified to the other traditional method. With appropriate no of weight neurons and with help of thousands of iterations the network is trained and then result is verified to new transactions. The concept of normalization will help to normalize the values in other fraud cases and SOM will be helpful in detecting anomalies in credit card fraud cas", "title": "" }, { "docid": "f7f609ebb1a0fcf789e5e2e5fe463718", "text": "Individuals with generalized anxiety disorder (GAD) display poor emotional conflict adaptation, a cognitive control process requiring the adjustment of performance based on previous-trial conflict. It is unclear whether GAD-related conflict adaptation difficulties are present during tasks without emotionally-salient stimuli. We examined conflict adaptation using the N2 component of the event-related potential (ERP) and behavioral responses on a Flanker task from 35 individuals with GAD and 35 controls. Groups did not differ on conflict adaptation accuracy; individuals with GAD also displayed intact RT conflict adaptation. In contrast, individuals with GAD showed decreased amplitude N2 principal component for conflict adaptation. Correlations showed increased anxiety and depressive symptoms were associated with longer RT conflict adaptation effects and lower ERP amplitudes, but not when separated by group. We conclude that individuals with GAD show reduced conflict-related component processes that may be influenced by compensatory activity, even in the absence of emotionally-salient stimuli.", "title": "" }, { "docid": "e6bb946ea2984ccb54fd37833bb55585", "text": "11 Automatic Vehicles Counting and Recognizing (AVCR) is a very challenging topic in transport engineering having important implications for the modern transport policies. Implementing a computer-assisted AVCR in the most vital districts of a country provides a large amount of measurements which are statistically processed and analyzed, the purpose of which is to optimize the decision-making of traffic operation, pavement design, and transportation planning. Since the advent of computer vision technology, video-based surveillance of road vehicles has become a key component in developing autonomous intelligent transportation systems. In this context, this paper proposes a Pattern Recognition system which employs an unsupervised clustering algorithm with the objective of detecting, counting and recognizing a number of dynamic objects crossing a roadway. This strategy defines a virtual sensor, whose aim is similar to that of an inductive-loop in a traditional mechanism, i.e. to extract from the traffic video streaming a number of signals containing anarchic information about the road traffic. Then, the set of signals is filtered with the aim of conserving only motion’s significant patterns. Resulted data are subsequently processed by a statistical analysis technique so as to estimate and try to recognize a number of clusters corresponding to vehicles. 
Finite Mixture Models fitted by the EM algorithm are used to assess such clusters, which provides ∗Corresponding author Email addresses: [email protected] (Hana RABBOUCH), [email protected] (Foued SAÂDAOUI), [email protected] (Rafaa MRAIHI) Preprint submitted to Journal of LTEX Templates April 21, 2017", "title": "" }, { "docid": "4d84b8dbcd0d5922fa3b20287b75c449", "text": "We investigate an efficient parallelization of the most common iterative sparse tensor decomposition algorithms on distributed memory systems. A key operation in each iteration of these algorithms is the matricized tensor times Khatri-Rao product (MTTKRP). This operation amounts to element-wise vector multiplication and reduction depending on the sparsity of the tensor. We investigate a fine and a coarse-grain task definition for this operation, and propose hypergraph partitioning-based methods for these task definitions to achieve the load balance as well as reduce the communication requirements. We also design a distributed memory sparse tensor library, HyperTensor, which implements a well-known algorithm for the CANDECOMP-/PARAFAC (CP) tensor decomposition using the task definitions and the associated partitioning methods. We use this library to test the proposed implementation of MTTKRP in CP decomposition context, and report scalability results up to 1024 MPI ranks. We observed up to 194 fold speedups using 512 MPI processes on a well-known real world data, and significantly better performance results with respect to a state of the art implementation.", "title": "" }, { "docid": "6c682f3412cc98eac5ae2a2356dccef7", "text": "Since their inception, micro-size light emitting diode (µLED) arrays based on III-nitride semiconductors have emerged as a promising technology for a range of applications. This paper provides an overview on a decade progresses on realizing III-nitride µLED based high voltage single-chip AC/DC-LEDs without power converters to address the key compatibility issue between LEDs and AC power grid infrastructure; and high-resolution solid-state self-emissive microdisplays operating in an active driving scheme to address the need of high brightness, efficiency and robustness of microdisplays. These devices utilize the photonic integration approach by integrating µLED arrays on-chip. Other applications of nitride µLED arrays are also discussed.", "title": "" }, { "docid": "14fe7deaece11b3d4cd4701199a18599", "text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.", "title": "" }, { "docid": "041772bbad50a5bf537c0097e1331bdd", "text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. 
In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.", "title": "" }, { "docid": "d1eed1d7875930865944c98fbab5f7e1", "text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.", "title": "" } ]
scidocsrr
be3bde921a65f73375afbcdd6a19940a
Intergroup emotions: explaining offensive action tendencies in an intergroup context.
[ { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" } ]
[ { "docid": "bc57dfee1a00d7cfb025a1a5840623f8", "text": "Production and consumption relationship shows that marketing plays an important role in enterprises. In the competitive market, it is very important to be able to sell rather than produce. Nowadays, marketing is customeroriented and aims to meet the needs and expectations of customers to increase their satisfaction. While creating a marketing strategy, an enterprise must consider many factors. Which is why, the process can and should be considered as a multi-criteria decision making (MCDM) case. In this study, marketing strategies and marketing decisions in the new-product-development process has been analyzed in a macro level. To deal quantitatively with imprecision or uncertainty, fuzzy sets theory has been used throughout the analysis.", "title": "" }, { "docid": "f267f44fe9463ac0114335959f9739fa", "text": "HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. A client selects and retrieves per segment the most suited quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.", "title": "" }, { "docid": "59c83aa2f97662c168316f1a4525fd4d", "text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. 
Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.", "title": "" }, { "docid": "765e766515c9c241ffd2d84572fd887f", "text": "The cost of reconciling consistency and state management with high availability is highly magnified by the unprecedented scale and robustness requirements of today’s Internet applications. We propose two strategies for improving overall availability using simple mechanisms that scale over large applications whose output behavior tolerates graceful degradation. We characterize this degradation in terms of harvest and yield, and map it directly onto engineering mechanisms that enhance availability by improving fault isolation, and in some cases also simplify programming. By collecting examples of related techniques in the literature and illustrating the surprising range of applications that can benefit from these approaches, we hope to motivate a broader research program in this area. 1. Motivation, Hypothesis, Relevance Increasingly, infrastructure services comprise not only routing, but also application-level resources such as search engines [15], adaptation proxies [8], and Web caches [20]. These applications must confront the same operational expectations and exponentially-growing user loads as the routing infrastructure, and consequently are absorbing comparable amounts of hardware and software. The current trend of harnessing commodity-PC clusters for scalability and availability [9] is reflected in the largest web server installations. These sites use tens to hundreds of PC’s to deliver 100M or more read-mostly page views per day, primarily using simple replication or relatively small data sets to increase throughput. The scale of these applications is bringing the wellknown tradeoff between consistency and availability [4] into very sharp relief. In this paper we propose two general directions for future work in building large-scale robust systems. Our approaches tolerate partial failures by emphasizing simple composition mechanisms that promote fault containment, and by translating possible partial failure modes into engineering mechanisms that provide smoothlydegrading functionality rather than lack of availability of the service as a whole. The approaches were developed in the context of cluster computing, where it is well accepted [22] that one of the major challenges is the nontrivial software engineering required to automate partial-failure handling in order to keep system management tractable. 2. Related Work and the CAP Principle In this discussion, strong consistency means singlecopy ACID [13] consistency; by assumption a stronglyconsistent system provides the ability to perform updates, otherwise discussing consistency is irrelevant. High availability is assumed to be provided through redundancy, e.g. data replication; data is considered highly available if a given consumer of the data can always reach some replica. Partition-resilience means that the system as whole can survive a partition between data replicas. Strong CAP Principle. Strong Consistency, High Availability, Partition-resilience: Pick at most 2. The CAP formulation makes explicit the trade-offs in designing distributed infrastructure applications. 
It is easy to identify examples of each pairing of CAP, outlining the proof by exhaustive example of the Strong CAP Principle: CA without P: Databases that provide distributed transactional semantics can only do so in the absence of a network partition separating server peers. CP without A: In the event of a partition, further transactions to an ACID database may be blocked until the partition heals, to avoid the risk of introducing merge conflicts (and thus inconsistency). AP without C: HTTP Web caching provides clientserver partition resilience by replicating documents, but a client-server partition prevents verification of the freshness of an expired replica. In general, any distributed database problem can be solved with either expiration-based caching to get AP, or replicas and majority voting to get PC (the minority is unavailable). In practice, many applications are best described in terms of reduced consistency or availability. For example, weakly-consistent distributed databases such as Bayou [5] provide specific models with well-defined consistency/availability tradeoffs; disconnected filesystems such as Coda [16] explicitly argued for availability over strong consistency; and expiration-based consistency mechanisms such as leases [12] provide fault-tolerant consistency management. These examples suggest that there is a Weak CAP Principle which we have yet to characterize precisely: The stronger the guarantees made about any two of strong consistency, high availability, or resilience to partitions, the weaker the guarantees that can be made about the third. 3. Harvest, Yield, and the CAP Principle Both strategies we propose for improving availability with simple mechanisms rely on the ability to broaden our notion of “correct behavior” for the target application, and then exploit the tradeoffs in the CAP principle to improve availability at large scale. We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query. Yield is the common metric and is typically measured in “nines”: “four-nines availability” means a completion probability of . In practice, good HA systems aim for four or five nines. In the presence of faults there is typically a tradeoff between providing no answer (reducing yield) and providing an imperfect answer (maintaining yield, but reducing harvest). Some applications do not tolerate harvest degradation because any deviation from the single well-defined correct behavior renders the result useless. For example, a sensor application that must provide a binary sensor reading (presence/absence) does not tolerate degradation of the output.1 On the other hand, some applications tolerate graceful degradation of harvest: online aggregation [14] allows a user to explicitly trade running time for precision and confidence in performing arithmetic aggregation queries over a large dataset, thereby smoothly trading harvest for response time, which is particularly useful for approximate answers and for avoiding work that looks unlikely to be worthwhile based on preliminary results. At first glance, it would appear that this kind of degradation applies only to queries and not to updates. 
However, the model can be applied in the case of “single-location” updates: those changes that are localized to a single node (or technically a single partition). In this case, updates that 1This is consistent with the use of the term yield in semiconductor manufacturing: typically, each die on a wafer is intolerant to harvest degradation, and yield is defined as the fraction of working dice on a wafer. affect reachable nodes occur correctly but have limited visibility (a form of reduced harvest), while those that require unreachable nodes fail (reducing yield). These localized changes are consistent exactly because the new values are not available everywhere. This model of updates fails for global changes, but it is still quite useful for many practical applications, including personalization databases and collaborative filtering. 4. Strategy 1: Trading Harvest for Yield— Probabilistic Availability Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures), and Internet-based servers are dependent on the best-effort Internet for true availability. Therefore availability maps naturally to probabilistic approaches, and it is worth addressing probabilistic systems directly, so that we can understand and limit the impact of faults. This requires some basic decisions about what needs to be available and the expected nature of faults. For example, node faults in the Inktomi search engine remove a proportional fraction of the search database. Thus in a 100-node cluster a single-node fault reduces the harvest by 1% during the duration of the fault (the overall harvest is usually measured over a longer interval). Implicit in this approach is graceful degradation under multiple node faults, specifically, linear degradation in harvest. By randomly placing data on nodes, we can ensure that the 1% lost is a random 1%, which makes the average-case and worstcase fault behavior the same. In addition, by replicating a high-priority subset of data, we reduce the probability of losing that data. This gives us more precise control of harvest, both increasing it and reducing the practical impact of missing data. Of course, it is possible to replicate all data, but doing so may have relatively little impact on harvest and yield despite significant cost, and in any case can never ensure 100% harvest or yield because of the best-effort Internet protocols the service relies on. As a similar example, transformation proxies for thin clients [8] also trade harvest for yield, by degrading results on demand to match the capabilities of clients that might otherwise be unable to get results at all. Even when the 100%-harvest answer is useful to the client, it may still be preferable to trade response time for harvest when clientto-server bandwidth is limited, for example, by intelligent degradation to low-bandwidth formats [7]. 5. Strategy 2: Application Decomposition and Orthogonal Mechanisms Some large applications can be decomposed into subsystems that are independently intolerant to harvest degradation (i.e. they fail by reducing yield), but whose independent failure allows the overall application to continue functioning with reduced utility. The application as a whole is then tolerant of harvest degradation. A good decomposition has at least one actual benefit and one potential benefit. 
The actual benefi", "title": "" }, { "docid": "227f23f0357e0cad280eb8e6dec4526b", "text": "This paper presents an iterative and analytical approach to optimal synthesis of a multiplexer with a star-junction. Two types of commonly used lumped-element junction models, namely, nonresonant node (NRN) type and resonant type, are considered and treated in a uniform way. A new circuit equivalence called phased-inverter to frequency-invariant reactance inverter transformation is introduced. It allows direct adoption of the optimal synthesis theory of a bandpass filter for synthesizing channel filters connected to a star-junction by converting the synthesized phase shift to the susceptance compensation at the junction. Since each channel filter is dealt with individually and alternately, when synthesizing a multiplexer with a high number of channels, good accuracy can still be maintained. Therefore, the approach can be used to synthesize a wide range of multiplexers. Illustrative examples of synthesizing a diplexer with a common resonant type of junction and a triplexer with an NRN type of junction are given to demonstrate the effectiveness of the proposed approach. A prototype of a coaxial resonator diplexer according to the synthesized circuit model is fabricated to validate the synthesized result. Excellent agreement is obtained.", "title": "" }, { "docid": "a8d6fe9d4670d1ccc4569aa322f665ee", "text": "Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data restraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.", "title": "" }, { "docid": "6aa9eaad1024bf49e24eabc70d5d153d", "text": "High-quality documentary photo series have a special place in rhinoplasty. The exact photographic reproduction of the nasal contours is an essential part of surgical planning, documentation and follow-up of one’s own work. Good photographs can only be achieved using suitable technology and with a good knowledge of photography. Standard operating procedures are also necessary. The photographic equipment should consist of a digital single-lens reflex camera, studio flash equipment and a suitable room for photography with a suitable backdrop. The high standards required cannot be achieved with simple photographic equipment. The most important part of the equipment is the optics. Fixed focal length lenses with a focal length of about 105 mm are especially suited to this type of work. Nowadays, even a surgeon without any photographic training is in a position to produce a complete series of clinical images. With digital technology, any of us can take good photographs. 
The correct exposure, the right depth of focus for the key areas of the nose and the right camera angle are the decisive factors in a good image series. Up to six standard images are recommended in the literature for the proper documentation of nasal surgery. The most important are frontal, three quarters and profile views. In special cases, close-up images may also be necessary. Preparing a professional image series is labour-intensive and very expensive. Large hospitals no longer employ professional photographers. Despite this, we must strive to maintain a high standard of photodocumenation for publications and to ensure that cases can be compared at congresses.", "title": "" }, { "docid": "d0a6ca9838f8844077fdac61d1d75af1", "text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-", "title": "" }, { "docid": "82835828a7f8c073d3520cdb4b6c47be", "text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.", "title": "" }, { "docid": "48e917ffb0e5636f5ca17b3242c07706", "text": "Two studies examined the influence of approach and avoidance social goals on memory for and evaluation of ambiguous social information. Study 1 found that individual differences in avoidance social goals were associated with greater memory of negative information, negatively biased interpretation of ambiguous social cues, and a more pessimistic evaluation of social actors. 
Study 2 experimentally manipulated social goals and found that individuals high in avoidance social motivation remembered more negative information and expressed more dislike for a stranger in the avoidance condition than in the approach condition. Results suggest that avoidance social goals are associated with emphasizing potential threats when making sense of the social environment.", "title": "" }, { "docid": "9666ac68ee1aeb8ce18ccd2615cdabb2", "text": "As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware and other malicious downloads, tricking their way onto personal devices, organizations need to consider their information security policies. Malicious programs can download onto a personal device without a user even knowing. This can have disastrous results for both an organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications, and loss of productivity for organizations. This is a difficult challenge. Organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that challenge organizations and the use of BYODs. After analysis of large volumes of research, the previous studies addressed single issues. We integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Primary results of the study are positive with the framework reducing access control issues. Keywords—Bring your own device; access control; policy; security", "title": "" }, { "docid": "ec237c01100bf6afa26f3b01a62577f3", "text": "Polyphenols are secondary metabolites of plants and are generally involved in defense against ultraviolet radiation or aggression by pathogens. In the last decade, there has been much interest in the potential health benefits of dietary plant polyphenols as antioxidant. Epidemiological studies and associated meta-analyses strongly suggest that long term consumption of diets rich in plant polyphenols offer protection against development of cancers, cardiovascular diseases, diabetes, osteoporosis and neurodegenerative diseases. Here we present knowledge about the biological effects of plant polyphenols in the context of relevance to human health.", "title": "" }, { "docid": "61d8761f3c6a8974d0384faf9a084b53", "text": "With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove the artifacts. 
A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into “malignant” and “benign” cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), while 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.", "title": "" }, { "docid": "9d0ea524b8f591d9ea337a8c789e51c1", "text": "Abstract—The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.", "title": "" }, { "docid": "458470e18ce2ab134841f76440cfdc2b", "text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "title": "" }, { "docid": "f407ea856f2d00dca1868373e1bd9e2f", "text": "Software industry is heading towards centralized computin g. Due to this trend data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. 
Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can can be bought from an external service provider. From the customers’ point of view the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hard ware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salerforce.co m and Google are examples of firms that already have working solutions on the market. Recently also Microsoft released a preview version of its cloud platform called the Azure. Earl y adopters can test the platform and development tools free of charge.[2, 3, 4] The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examinin g how Azure platform works, the benefits of Azure platform are explored. The most important benefit in Microsoft’s solu tion is that it resembles existing Windows environment a lot . Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to cloud is easy. This partially stems from the fact that Azure’s servic es can be exploited by an application whether it is run locally or in the cloud.", "title": "" }, { "docid": "eec33c75a0ec9b055a857054d05bcf54", "text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.", "title": "" }, { "docid": "e985d20f75d29c24fda39135e0e54636", "text": "Software testing is a highly complex and time consu ming activityIt is even difficult to say when tes ing is complete. 
The effective combination of black box (external) a nd white box (internal) testing is known as Gray-bo x testing. Gray box testing is a powerful idea if one knows something about how the product works on the inside; one can test it b etter, even from the outside. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. It is not to be confused with white box testing, testi ng approach that attempts to cover the internals of the product in detail. Gray box testing is a test strategy based partly on internal s. This paper will present all the three methodolog y Black-box, White-box, Graybox and how this method has been applied to validat e cri ical software systems. KeywordsBlack-box, White-box, Gray-box or Grey-box Introduction In most software projects, testing is not given the necessary attention. Statistics reveal that the ne arly 30-40% of the effort goes into testing irrespective of the type of project; h ardly any time is allocated for testing. The comput er industry is changing at a very rapid pace. In order to keep pace with a rapidly ch anging computer industry, software test must develo p methods to verify and validate software for all aspects of the product li fecycle. Test case design techniques can be broadly split into two main categories: Black box & White box. Black box + White box = Gray Box Spelling: Note that Gray is also spelt as Grey. Hence Gray Box Testing and Grey Box Testing mean the same. Gray Box testing is a technique to test the applica tion with limited knowledge of the internal working s of an application. In software testing, the term the more you know the be tter carries a lot of weight when testing an applic ation. Mastering the domain of a system always gives the t ester an edge over someone with limited domain know ledge. Unlike black box testing, where the tester only tests the applicatio n's user interface, in Gray box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scena rios when making the test plan. The gray-box testing goes mainly with the testing of web applications b ecause it considers high-level development, operati ng environment, and compatibility conditions. During b lack-box or white-box analysis it is harder to iden tify problems, related to endto-end data flow. Context-specific problems, associ ated with web site testing are usually found during gray-box verifying. Bridge between Black Box and White Box – ISSN 2277-1956/V2N1-175-185 Testing Methods Fig 1: Classification 1. Black Box Testing Black box testing is a software testing techniques in which looking at the internal code structure, implementation details and knowledge of internal pa ths of the software. testing is based entirely on the software requireme nts and specifications. Black box testing is best suited for rapid test sce nario testing and quick Web Service Services provides quick feedback on the functional re diness of operations t better suited for operations that have enumerated necessary. It is used for finding the following errors: 1. Incorrect or missing functions 2. Interface errors 3. Errors in data structures or External database access 4. Performance errors 5. 
Initialization and termination errors Example A tester, without knowledge of the internal structu res of a website, tests the web pages by using a br owse ; providing inputs (clicks, keystrokes) and verifying the outputs agai nst the expected outcome. Levels Applicable To Black Box testing method is applicable to all levels of the software testing process: Testing, and Acceptance Testing. The higher the level, and hence the bigger and more c mplex the box, the mo method comes into use. Black Box Testing Techniques Following are some techniques that can be used for esigning black box tests. Equivalence partitioning Equivalence Partitioning is a software test design technique that involves selecting representative values from each partition as test data. Boundary Value Analysis Boundary Value Analysis is a software test design t echnique that involves determination of boundaries for selecting values that are at the boundaries and jus t inside/outside of the boundaries as test data. Cause Effect Graphing Cause Effect Graphing is a software test design tec hnique that involves identifying the cases (input c onditions) and conditions), producing a CauseEffect Graph, and generating test cases accordingly . Gray Box Testing Technique", "title": "" }, { "docid": "7ad4f52279e85f8e20239e1ea6c85bbb", "text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.", "title": "" }, { "docid": "4825e492dc1b7b645a5b92dde0c766cd", "text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency. 
The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.", "title": "" } ]
scidocsrr
1fb2020d50c3431d79a881ab8be753f5
EEG-based estimation of mental fatigue by using KPCA-HMM and complexity parameters
[ { "docid": "17c12cc27cd66d0289fe3baa9ab4124d", "text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.", "title": "" } ]
[ { "docid": "976f16e21505277525fa697876b8fe96", "text": "A general technique for obtaining intermediate-band crystal filters from prototype low-pass (LP) networks which are neither symmetric nor antimetric is presented. This immediately enables us to now realize the class of low-transient responses. The bandpass (BP) filter appears as a cascade of symmetric lattice sections, obtained by partitioning the LP prototype filter, inserting constant reactances where necessary, and then applying the LP to BP frequency transformation. Manuscript received January 7, 1974; revised October 9, 1974. The author is with the Systems Development Division, Westinghouse Electric Corporation, Baltimore, Md. The cascade is composed of only two fundamental sections. Finally, the method introduced is illustrated with an example.", "title": "" }, { "docid": "d5c0950e12e76c5c63b92ef7cd002782", "text": "In recent years, machine learning approaches have been successfully applied for analysis of neuroimaging data, to help in the context of disease diagnosis. We provide, in this paper, an overview of recent support vector machine-based methods developed and applied in psychiatric neuroimaging for the investigation of schizophrenia. In particular, we focus on the algorithms implemented by our group, which have been applied to classify subjects affected by schizophrenia and healthy controls, comparing them in terms of accuracy results with other recently published studies. First we give a description of the basic terminology used in pattern recognition and machine learning. Then we separately summarize and explain each study, highlighting the main features that characterize each method. Finally, as an outcome of the comparison of the results obtained applying the described different techniques, conclusions are drawn in order to understand how much automatic classification approaches can be considered a useful tool in understanding the biological underpinnings of schizophrenia. We then conclude by discussing the main implications achievable by the application of these methods into clinical practice.", "title": "" }, { "docid": "0868f1ccd67db523026f1650b03311ba", "text": "Conflict with humans over livestock and crops seriously undermines the conservation prospects of India's large and potentially dangerous mammals such as the tiger (Panthera tigris) and elephant (Elephas maximus). This study, carried out in Bhadra Tiger Reserve in south India, estimates the extent of material and monetary loss incurred by resident villagers between 1996 and 1999 in conflicts with large felines and elephants, describes the spatiotemporal patterns of animal damage, and evaluates the success of compensation schemes that have formed the mainstay of loss-alleviation measures. Annually each household lost an estimated 12% (0.9 head) of their total holding to large felines, and approximately 11% of their annual grain production (0.82 tonnes per family) to elephants. Compensations awarded offset only 5% of the livestock loss and 14% of crop losses and were accompanied by protracted delays in the processing of claims. Although the compensation scheme has largely failed to achieve its objective of alleviating loss, its implementation requires urgent improvement if reprisal against large wild mammals is to be minimized. 
Furthermore, innovative schemes of livestock and crop insurance need to be tested as alternatives to compensations.", "title": "" }, { "docid": "b988525d515588da8becc18c2aa21e82", "text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.", "title": "" }, { "docid": "06f4ec7c6425164ee7fc38a8b26b8437", "text": "In this paper we present a decomposition strategy for solving large scheduling problems using mathematical programming methods. Instead of formulating one huge and unsolvable MILP problem, we propose a decomposition scheme that generates smaller programs that can often be solved to global optimality. The original problem is split into subproblems in a natural way using the special features of steel making and avoiding the need for expressing the highly complex rules as explicit constraints. We present a small illustrative example problem, and several real-world problems to demonstrate the capabilities of the proposed strategy, and the fact that the solutions typically lie within 1-3% of the global optimum.", "title": "" }, { "docid": "cb6223183d3602d2e67aafc0b835a405", "text": "Electrocardiogram is widely used to diagnose the congestive heart failure (CHF). It is the primary noninvasive diagnostic tool that can guide in the management and follow-up of patients with CHF. Heart rate variability (HRV) signals which are nonlinear in nature possess the hidden signatures of various cardiac diseases. Therefore, this paper proposes a nonlinear methodology, empirical mode decomposition (EMD), for an automated identification and classification of normal and CHF using HRV signals. In this work, HRV signals are subjected to EMD to obtain intrinsic mode functions (IMFs). From these IMFs, thirteen nonlinear features such as approximate entropy $$ (E_{\\text{ap}}^{x} ) $$ ( E ap x ) , sample entropy $$ (E_{\\text{s}}^{x} ) $$ ( E s x ) , Tsallis entropy $$ (E_{\\text{ts}}^{x} ) $$ ( E ts x ) , fuzzy entropy $$ (E_{\\text{f}}^{x} ) $$ ( E f x ) , Kolmogorov Sinai entropy $$ (E_{\\text{ks}}^{x} ) $$ ( E ks x ) , modified multiscale entropy $$ (E_{{{\\text{mms}}_{y} }}^{x} ) $$ ( E mms y x ) , permutation entropy $$ (E_{\\text{p}}^{x} ) $$ ( E p x ) , Renyi entropy $$ (E_{\\text{r}}^{x} ) $$ ( E r x ) , Shannon entropy $$ (E_{\\text{sh}}^{x} ) $$ ( E sh x ) , wavelet entropy $$ (E_{\\text{w}}^{x} ) $$ ( E w x ) , signal activity $$ (S_{\\text{a}}^{x} ) $$ ( S a x ) , Hjorth mobility $$ (H_{\\text{m}}^{x} ) $$ ( H m x ) , and Hjorth complexity $$ (H_{\\text{c}}^{x} ) $$ ( H c x ) are extracted. Then, different ranking methods are used to rank these extracted features, and later, probabilistic neural network and support vector machine are used for differentiating the highly ranked nonlinear features into normal and CHF classes. 
We have obtained an accuracy, sensitivity, and specificity of 97.64, 97.01, and 98.24 %, respectively, in identifying the CHF. The proposed automated technique is able to identify the person having CHF alarming (alerting) the clinicians to respond quickly with proper treatment action. Thus, this method may act as a valuable tool for increasing the survival rate of many cardiac patients.", "title": "" }, { "docid": "f6ac111d3ece47f9881a4f1b0ce6d4be", "text": "An Enterprise Framework (EF) is a software architecture. Such frameworks expose a rich set of semantics and modeling paradigms for developing and extending enterprise applications. EFs are, by design, the cornerstone of an organization’s systems development activities. EFs offer a streamlined and flexible alternative to traditional tools and applications which feature numerous point solutions integrated into complex and often inflexible environments. Enterprise Frameworks play an important role since they allow reuse of design knowledge and offer techniques for creating reference models and scalable architectures for enterprise integration. These models and architectures are sufficiently flexible and powerful to be used at multiple levels, e.g. from the integration of the planning systems of geographically distributed factories, to generate a global virtual factory, down to the monitoring and control system for a single production cell. These frameworks implement or enforce well-documented standards for component integration and collaboration. The architecture of an Enterprise framework provides for ready integration with new or existing components. It defines how these components must interact with the framework and how objects will collaborate. In addition, it defines how developers' work together to develop and extend enterprise applications based on the framework. Therefore, the goal of an Enterprise framework is to reduce complexity and lifecycle costs of enterprise systems, while ensuring flexibility.", "title": "" }, { "docid": "6514ddb39c465a8ca207e24e60071e7f", "text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.", "title": "" }, { "docid": "7fed6f57ba2e17db5986d47742dc1a9c", "text": "Partial Least Squares Regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm being the leading PLSR algorithm because of its speed and efficiency. 
Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and the robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method.", "title": "" }, { "docid": "08e121203b159b7d59f17d65a33580f4", "text": "Coded structured light is an optical technique based on active stereovision that obtains the shape of objects. One shot techniques are based on projecting a unique light pattern with an LCD projector so that grabbing an image with a camera, a large number of correspondences can be obtained. Then, a 3D reconstruction of the illuminated object can be recovered by means of triangulation. The most used strategy to encode one-shot patterns is based on De Bruijn sequences. In This work a new way to design patterns using this type of sequences is presented. The new coding strategy minimises the number of required colours and maximises both the resolution and the accuracy.", "title": "" }, { "docid": "38438e6a0bd03ad5f076daa1f248d001", "text": "In recent years, research on reading-compr question and answering has drawn intense attention in Language Processing. However, it is still a key issue to the high-level semantic vector representation of quest paragraph. Drawing inspiration from DrQA [1], wh question and answering system proposed by Facebook, tl proposes an attention-based question and answering 11 adds the binary representation of the paragraph, the par; attention to the question, and the question's attentioi paragraph. Meanwhile, a self-attention calculation m proposed to enhance the question semantic vector reption. Besides, it uses a multi-layer bidirectional Lon: Term Memory(BiLSTM) networks to calculate the h semantic vector representations of paragraphs and q Finally, bilinear functions are used to calculate the pr of the answer's position in the paragraph. The expe results on the Stanford Question Answering Dataset(SQl development set show that the F1 score is 80.1% and tl 71.4%, which demonstrates that the performance of the is better than that of the model of DrQA, since they inc 2% and 1.3% respectively.", "title": "" }, { "docid": "a27660db1d7d2a6724ce5fd8991479f7", "text": "An electromyographic (EMG) activity pattern for individual muscles in the gait cycle exhibits a great deal of intersubject, intermuscle and context-dependent variability. Here we examined the issue of common underlying patterns by applying factor analysis to the set of EMG records obtained at different walking speeds and gravitational loads. To this end healthy subjects were asked to walk on a treadmill at speeds of 1, 2, 3 and 5 kmh(-1) as well as when 35-95% of the body weight was supported using a harness. We recorded from 12-16 ipsilateral leg and trunk muscles using both surface and intramuscular recording and determined the average, normalized EMG of each record for 10-15 consecutive step cycles. 
We identified five basic underlying factors or component waveforms that can account for about 90% of the total waveform variance across different muscles during normal gait. Furthermore, while activation patterns of individual muscles could vary dramatically with speed and gravitational load, both the limb kinematics and the basic EMG components displayed only limited changes. Thus, we found a systematic phase shift of all five factors with speed in the same direction as the shift in the onset of the swing phase. This tendency for the factors to be timed according to the lift-off event supports the idea that the origin of the gait cycle generation is the propulsion rather than heel strike event. The basic invariance of the factors with walking speed and with body weight unloading implies that a few oscillating circuits drive the active muscles to produce the locomotion kinematics. A flexible and dynamic distribution of these basic components to the muscles may result from various descending and proprioceptive signals that depend on the kinematic and kinetic demands of the movements.", "title": "" }, { "docid": "b6a8f45bd10c30040ed476b9d11aa908", "text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.", "title": "" }, { "docid": "39e332a58625a12ef3e14c1a547a8cad", "text": "This paper presents an overview of the recent achievements in the held of substrate integrated waveguides (SIW) technology, with particular emphasis on the modeling strategy and design considerations of millimeter-wave integrated circuits as well as the physical interpretation of the operation principles and loss mechanisms of these structures. The most common numerical methods for modeling both SIW interconnects and circuits are presented. Some considerations and guidelines for designing SIW structures, interconnects and circuits are discussed, along with the physical interpretation of the major issues related to radiation leakage and losses. Examples of SIW circuits and components operating in the microwave and millimeter wave bands are also reported, with numerical and experimental results.", "title": "" }, { "docid": "49517920ddecf10a384dc3e98e39459b", "text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. 
Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.", "title": "" }, { "docid": "5f8ac79ad733d031ecaff19a748666e2", "text": "Decision making techniques used to help evaluate current suppliers should aim at classifying performance of individual suppliers against desired levels of performance so as to devise suitable action plans to increase suppliers' performance and capabilities. Moreover, decision making related to what course of action to take for a particular supplier depends on the evaluation of short and long term factors of performance, as well as on the type of item to be supplied. However, most of the propositions found in the literature do not consider the type of supplied item and are more suitable for ordering suppliers rather than categorizing them. To deal with this limitation, this paper presents a new approach based on fuzzy inference combined with the simple fuzzy grid method to help decisionmaking in the supplier evaluation for development. This approach follows a procedure for pattern classification based on decision rules to categorize supplier performance according to the item category so as to indicate strengths and weaknesses of current suppliers, helping decision makers review supplier development action plans. Applying the method to a company in the automotive sector shows that it brings objectivity and consistency to supplier evaluation, supporting consensus building through the decision making process. Critical items can be identified which aim at proposing directives for managing and developing suppliers for leverage, bottleneck and strategic items. It also helps to identify suppliers in need of attention or suppliers that should be replaced. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d09b5d295fb78756cc6141471a2415a3", "text": "One-point (or n-point) crossover has the property that schemata exhibited by both parents are ‘respected’transferred to the offspring without disruption. In addition, new schemata may, potentially, be created by combination of the genes on which the parents differ. Some argue that the preservation of similarity is the important aspect of crossover, and that the combination of differences (key to the building-block hypothesis) is unlikely to be valuable. In this paper, we discuss the operation of recombination on a hierarchical buildingblock problem. Uniform crossover, which preserves similarity, fails on this problem. Whereas, one-point crossover, that both preserves similarity and combines differences, succeeds. In fact, a somewhat perverse recombination operator, that combines differences but destroys schemata that are common to both parents, also succeeds. Thus, in this problem, combination of schemata from dissimilar parents is required, and preserving similarity is not required. The test problem represents an extreme case, but it serves to illustrate the different aspects of recombination that are available in regular operators such as one-point crossover.", "title": "" }, { "docid": "0d0fae25e045c730b68d63e2df1dfc7f", "text": "It is very difficult to over-emphasize the benefits of accurate data. 
Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.", "title": "" }, { "docid": "75233d6d94fec1f43fa02e8043470d4d", "text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.", "title": "" }, { "docid": "81e49c8763f390e4b86968ff91214b5a", "text": "Choreographies allow business and service architects to specify with a global perspective the requirements of applications built over distributed and interacting software entities. While being a standard for the abstract specification of business workflows and collaboration between services, the Business Process Modeling Notation (BPMN) has only been recently extended into BPMN 2.0 to support an interaction model of choreography, which, as opposed to interconnected interface models, is better suited to top-down development processes. An important issue with choreographies is real-izability, i.e., whether peers obtained via projection from a choreography interact as prescribed in the choreography requirements. In this work, we propose a realizability checking approach for BPMN 2.0 choreographies. Our approach is formally grounded on a model transformation into the LOTOS NT process algebra and the use of equivalence checking. It is also completely tool-supported through interaction with the Eclipse BPMN 2.0 editor and the CADP process algebraic toolbox.", "title": "" } ]
scidocsrr
15eed311643a5d0a05b3196ffd168eed
Temporal Data in Relational Database Systems: A Comparison
[ { "docid": "44f41d363390f6f079f2e67067ffa36d", "text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢", "title": "" } ]
[ { "docid": "e4578f9c8ebe99988528b876b162b65a", "text": "This paper concerns the form-finding problem for general and symmetric tensegrity structures with shape constraints. A number of different geometries are treated and several fundamental properties of tensegrity structures are identified that simplify the form-finding problem. The concept of a tensegrity invariance (similarity) transformation is defined and it is shown that tensegrity equilibrium is preserved under affine node position transformations. This result provides the basis for a new tensegrity form-finding tool. The generality of the problem formulation makes it suitable for the automated generation of the equations and their derivatives. State-of-the-art numerical algorithms are applied to solve several example problems. Examples are given for tensegrity plates, shell-class symmetric tensegrity structures and structures generated by applying similarity transformation. 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1fa2b4aa557c0efef7a53717dbe0c3fe", "text": "Many birds use grounded running (running without aerial phases) in a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring loaded inverted pendulum) template are rare. To obtain template related parameters of quails (e.g. leg stiffness) we used x-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between legs (which we termed aperture angle), and fixed it about 30ms before touchdown. In simulations with the BSLIP we compared this swing leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting with the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% leg rest length, in the vast majority of cases the bipedal SLIP could accomplish at least 50 steps to fall. The fixed angle of attack strategy was not feasible. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations.", "title": "" }, { "docid": "25793a93fec7a1ccea0869252a8a0141", "text": "Condition monitoring of induction motors is a fast emerging technology for online detection of incipient faults. It avoids unexpected failure of a critical system. Approximately 30-40% of faults of induction motors are stator faults. This work presents a comprehensive review of various stator faults, their causes, detection parameters/techniques, and latest trends in the condition monitoring technology. It is aimed at providing a broad perspective on the status of stator fault monitoring to researchers and application engineers using induction motors. 
A list of 183 research publications on the subject is appended for quick reference.", "title": "" }, { "docid": "6bee9f6c4a240cc53049f183d8079c62", "text": "This study aims to analyze the benefits of improved multiscale reasoning for object detection and localization with deep convolutional neural networks. To that end, an efficient and general object detection framework which operates on scale volumes of a deep feature pyramid is proposed. In contrast to the proposed approach, most current state-of-the-art object detectors operate on a single-scale in training, while testing involves independent evaluation across scales. One benefit of the proposed approach is in better capturing of multi-scale contextual information, resulting in significant gains in both detection performance and localization quality of objects on the PASCAL VOC dataset and a multi-view highway vehicles dataset. The joint detection and localization scale-specific models are shown to especially benefit detection of challenging object categories which exhibit large scale variation as well as detection of small objects.", "title": "" }, { "docid": "b32b02b7230b6d5520e30de6b19b7496", "text": "We prove that an adiabatic theorem generally holds for slow tapers in photonic crystals and other strongly grated waveguides with arbitrary index modulation, exactly as in conventional waveguides. This provides a guaranteed pathway to efficient and broad-bandwidth couplers with, e.g., uniform waveguides. We show that adiabatic transmission can only occur, however, if the operating mode is propagating (nonevanescent) and guided at every point in the taper. Moreover, we demonstrate how straightforward taper designs in photonic crystals can violate these conditions, but that adiabaticity is restored by simple design principles involving only the independent band structures of the intermediate gratings. For these and other analyses, we develop a generalization of the standard coupled-mode theory to handle arbitrary nonuniform gratings via an instantaneous Bloch-mode basis, yielding a continuous set of differential equations for the basis coefficients. We show how one can thereby compute semianalytical reflection and transmission through crystal tapers of almost any length, using only a single pair of modes in the unit cells of uniform gratings. Unlike other numerical methods, our technique becomes more accurate as the taper becomes more gradual, with no significant increase in the computation time or memory. We also include numerical examples comparing to a well-established scattering-matrix method in two dimensions.", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "d8e60dc8378fe39f698eede2b6687a0f", "text": "Today's complex software systems are neither secure nor reliable. 
The rudimentary software protection primitives provided by current hardware forces systems to run many distrusting software components (e.g., procedures, libraries, plugins, modules) in the same protection domain, or otherwise suffer degraded performance from address space switches.\n We present CODOMs (COde-centric memory DOMains), a novel architecture that can provide finer-grained isolation between software components with effectively zero run-time overhead, all at a fraction of the complexity of other approaches. An implementation of CODOMs in a cycle-accurate full-system x86 simulator demonstrates that with the right hardware support, finer-grained protection and run-time performance can peacefully coexist.", "title": "" }, { "docid": "25921de89de837e2bcd2a815ec181564", "text": "Satellite-based Global Positioning Systems (GPS) have enabled a variety of location-based services such as navigation systems, and become increasingly popular and important in our everyday life. However, GPS does not work well in indoor environments where walls, floors and other construction objects greatly attenuate satellite signals. In this paper, we propose an Indoor Positioning System (IPS) based on widely deployed indoor WiFi systems. Our system uses not only the Received Signal Strength (RSS) values measured at the current location but also the previous location information to determine the current location of a mobile user. We have conducted a large number of experiments in the Schorr Center of the University of Nebraska-Lincoln, and our experiment results show that our proposed system outperforms all other WiFi-based RSS IPSs in the comparison, and is 5% more accurate on average than others. iii ACKNOWLEDGMENTS Firstly, I would like to express my heartfelt gratitude to my advisor and committee chair, Professor Lisong Xu and the co-advisor Professor Zhigang Shen for their constant encouragement and guidance throughout the course of my master's study and all the stages of the writing of this thesis. Without their consistent and illuminating instruction, this thesis work could not have reached its present form. Their technical and editorial advice and infinite patience were essential for the completion of this thesis. I feel privileged to have had the opportunity to study under them. I thank Professor Ziguo Zhong and Professor Mehmet Vuran for serving on my Master's Thesis defense committee, and their involvement has greatly improved and clarified this work. I specially thank Prof Ziguo Zhong again, since his support has always been very generous in both time and research resources. I thank all the CSE staff and friends, for their friendship and for all the memorable times in UNL. I would like to thank everyone who has helped me along the way. At last, I give my deepest thanks go to my parents for their self-giving love and support throughout my life.", "title": "" }, { "docid": "3d332b3ae4487a7272ae1e2204965f98", "text": "Robots are increasingly present in modern industry and also in everyday life. Their applications range from health-related situations, for assistance to elderly people or in surgical operations, to automatic and driver-less vehicles (on wheels or flying) or for driving assistance. Recently, an interest towards robotics applied in agriculture and gardening has arisen, with applications to automatic seeding and cropping or to plant disease control, etc. Autonomous lawn mowers are succesful market applications of gardening robotics. 
In this paper, we present a novel robot that is developed within the TrimBot2020 project, funded by the EU H2020 program. The project aims at prototyping the first outdoor robot for automatic bush trimming and rose pruning.", "title": "" }, { "docid": "1f0afd99242e7de4bfe837142011451b", "text": "This research examines the business impact of online reviews. It empirically investigates the influence of numerical and textual reviews on product sales performance. We use a Joint Sentiment-Topic model to extract the topics and associated sentiments in review texts. We further propose that numerical rating mediates the effects of textual sentiments. Findings not only contribute to the knowledge of how eWOM impacts product sales, but also illustrate how numerical rating and textual reviews interplay while shaping product sales. In practice, the findings help online vendors strategize business analytics operations by focusing on more relevant aspects that ultimately drive sales.", "title": "" }, { "docid": "2ce36ce9de500ba2367b1af83ac3e816", "text": "We examine whether the information content of the earnings report, as captured by the earnings response coefficient (ERC), increases when investors’ uncertainty about the manager’s reporting objectives decreases, as predicted in Fischer and Verrecchia (2000). We use the 2006 mandatory compensation disclosures as an instrument to capture a decrease in investors’ uncertainty about managers’ incentives and reporting objectives. Employing a difference-in-differences design and exploiting the staggered adoption of the new rules, we find a statistically and economically significant increase in ERC for treated firms relative to control firms, largely driven by profit firms. Cross-sectional tests suggest that the effect is more pronounced in subsets of firms most affected by the new rules. Our findings represent the first empirical evidence of a role of compensation disclosures in enhancing the information content of financial reports. JEL Classification: G38, G30, G34, M41", "title": "" }, { "docid": "37d353f5b8f0034209f75a3848580642", "text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. Other key factors that differentiate NR from the current data repositories is the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.", "title": "" }, { "docid": "e5fa2011c64c3e1f7e9d97f545579d2b", "text": "Remote health monitoring (RHM) can help save the cost burden of unhealthy lifestyles. 
Of increased popularity is the use of smartphones to collect data, measure physical activity, and provide coaching and feedback to users. One challenge with this method is to improve adherence to prescribed medical regimens. In this paper we present a new battery optimization method that increases the battery lifetime of smartphones which monitor physical activity. We designed a system, WANDA-CVD, to test our battery optimization method. The focus of this report describes our in-lab pilot study and a study aimed at reducing cardiovascular disease (CVD) in young women, the Women's Heart Health study. Conclusively, our battery optimization technique improved battery lifetime by 300%. This method also increased participant adherence to the remote health monitoring system in the Women's Heart Health study by 53%.", "title": "" }, { "docid": "846d3587de5fd84a3d25d5c746cfd702", "text": "A methodological study on significance of image processing and its applications in the field of computer vision is carried out here. During an image processing operation the input given is an image and its output is an enhanced high quality image as per the techniques used. Image processing usually referred as digital image processing, but optical and analog image processing also are possible. Our study provides a solid introduction to image processing along with segmentation techniques, computer vision fundamentals and its applied applications that will be of worth to the image processing and computer vision research communities.", "title": "" }, { "docid": "6c160e73840b0baeb9dd88cbea68becc", "text": "We report a case of an 11-year-old girl with virginal breast hypertrophy; a rare condition characterised by rapid breast enlargement in the peripubertal period. In this paper we highlight complexities of management in this age group.", "title": "" }, { "docid": "dadcea041dcc49d7d837cb8c938830f3", "text": "Software Defined Networking (SDN) has been proposed as a drastic shift in the networking paradigm, by decoupling network control from the data plane and making the switching infrastructure truly programmable. The key enabler of SDN, OpenFlow, has seen widespread deployment on production networks and its adoption is constantly increasing. Although openness and programmability are primary features of OpenFlow, security is of core importance for real-world deployment. In this work, we perform a security analysis of OpenFlow using STRIDE and attack tree modeling methods, and we evaluate our approach on an emulated network testbed. The evaluation assumes an attacker model with access to the network data plane. Finally, we propose appropriate counter-measures that can potentially mitigate the security issues associated with OpenFlow networks. Our analysis and evaluation approach are not exhaustive, but are intended to be adaptable and extensible to new versions and deployment contexts of OpenFlow.", "title": "" }, { "docid": "4175a43d90c597a9c875a8bfafe05977", "text": "Exploitable software vulnerabilities pose severe threats to its information security and privacy. Although a great amount of efforts have been dedicated to improving software security, research on quantifying software exploitability is still in its infancy. In this work, we propose ExploitMeter, a fuzzing-based framework of quantifying software exploitability that facilitates decision-making for software assurance and cyber insurance. 
Designed to be dynamic, efficient and rigorous, ExploitMeter integrates machine learning-based prediction and dynamic fuzzing tests in a Bayesian manner. Using 100 Linux applications, we conduct extensive experiments to evaluate the performance of ExploitMeter in a dynamic environment.", "title": "" }, { "docid": "21b9b7995cabde4656c73e9e278b2bf5", "text": "Topic modeling techniques have been recently applied to analyze and model source code. Such techniques exploit the textual content of source code to provide automated support for several basic software engineering activities. Despite these advances, applications of topic modeling in software engineering are frequently suboptimal. This can be attributed to the fact that current state-of-the-art topic modeling techniques tend to be data intensive. However, the textual content of source code, embedded in its identifiers, comments, and string literals, tends to be sparse in nature. This prevents classical topic modeling techniques, typically used to model natural language texts, to generate proper models when applied to source code. Furthermore, the operational complexity and multi-parameter calibration often associated with conventional topic modeling techniques raise important concerns about their feasibility as data analysis models in software engineering. Motivated by these observations, in this paper we propose a novel approach for topic modeling designed for source code. The proposed approach exploits the basic assumptions of the cluster hypothesis and information theory to discover semantically coherent topics in software systems. Ten software systems from different application domains are used to empirically calibrate and configure the proposed approach. The usefulness of generated topics is empirically validated using human judgment. Furthermore, a case study that demonstrates thet operation of the proposed approach in analyzing code evolution is reported. The results show that our approach produces stable, more interpretable, and more expressive topics than classical topic modeling techniques without the necessity for extensive parameter calibration.", "title": "" }, { "docid": "ea29b3421c36178680ae63c16b9cecad", "text": "Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. 
We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.", "title": "" }, { "docid": "a0d90497749ff73b3c971a49fa35ffa9", "text": "Current energy policies address environmental issues including environmentally friendly technologies to increase energy supplies and encourage cleaner, more efficient energy use, and address air pollution, greenhouse effect, global warming, and climate change. The biofuel policy aims to promote the use in transport of fuels made from biomass, as well as other renewable fuels. Biofuels provide the prospect of new economic opportunities for people in rural areas in oil importer and developing countries. The central policy of biofuel concerns job creation, greater efficiency in the general business environment, and protection of the environment. Projections are important tools for long-term planning and policy settings. Renewable energy sources that use indigenous resources have the potential to provide energy services with zero or almost zero emissions of both air pollutants and greenhouse gases. Biofuels are expected to reduce dependence on imported petroleum with associated political and economic vulnerability, reduce greenhouse gas emissions and other pollutants, and revitalize the economy by increasing demand and prices for agricultural products. 2009 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
5f5b949a4f90253e6585c69ecc2325e1
Four Principles of Memory Improvement: A Guide to Improving Learning Efficiency
[ { "docid": "660d47a9ffc013f444954f3f210de05e", "text": "Taking tests enhances learning. But what happens when one cannot answer a test question-does an unsuccessful retrieval attempt impede future learning or enhance it? The authors examined this question using materials that ensured that retrieval attempts would be unsuccessful. In Experiments 1 and 2, participants were asked fictional general-knowledge questions (e.g., \"What peace treaty ended the Calumet War?\"). In Experiments 3-6, participants were shown a cue word (e.g., whale) and were asked to guess a weak associate (e.g., mammal); the rare trials on which participants guessed the correct response were excluded from the analyses. In the test condition, participants attempted to answer the question before being shown the answer; in the read-only condition, the question and answer were presented together. Unsuccessful retrieval attempts enhanced learning with both types of materials. These results demonstrate that retrieval attempts enhance future learning; they also suggest that taking challenging tests-instead of avoiding errors-may be one key to effective learning.", "title": "" }, { "docid": "4d7cd44f2bbe9896049a7868165bd415", "text": "Testing previously studied information enhances long-term memory, particularly when the information is successfully retrieved from memory. The authors examined the effect of unsuccessful retrieval attempts on learning. Participants in 5 experiments read an essay about vision. In the test condition, they were asked about embedded concepts before reading the passage; in the extended study condition, they were given a longer time to read the passage. To distinguish the effects of testing from attention direction, the authors emphasized the tested concepts in both conditions, using italics or bolded keywords or, in Experiment 5, by presenting the questions but not asking participants to answer them before reading the passage. Posttest performance was better in the test condition than in the extended study condition in all experiments--a pretesting effect--even though only items that were not successfully retrieved on the pretest were analyzed. The testing effect appears to be attributable, in part, to the role unsuccessful tests play in enhancing future learning.", "title": "" }, { "docid": "3faeedfe2473dc837ab0db9eb4aefc4b", "text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "42d5712d781140edbc6a35703d786e15", "text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance", "title": "" }, { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. 
They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "bd3374fefa94fbb11d344d651c0f55bc", "text": "Extensive study has been conducted in the detection of license plate for the applications in intelligent transportation system (ITS). However, these results are all based on images acquired at a resolution of 640 times 480. In this paper, a new method is proposed to extract license plate from the surveillance video which is shot at lower resolution (320 times 240) as well as degraded by video compression. Morphological operations of bottom-hat and morphology gradient are utilized to detect the LP candidates, and effective schemes are applied to select the correct one. The average rates of correct extraction and false alarms are 96.62% and 1.77%, respectively, based on the experiments using more than four hours of video. The experimental results demonstrate the effectiveness and robustness of the proposed method", "title": "" }, { "docid": "e776c87ec35d67c6acbdf79d8a5cac0a", "text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.", "title": "" }, { "docid": "512d29a398f51041466884f4decec84a", "text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. 
By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.2", "title": "" }, { "docid": "113b8cfda23cf7e8b3d7b4821d549bf7", "text": "A load dependent zero-current detector is proposed in this paper for speeding up the transient response when load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current according to sudden load variations. At the beginning of load variation from heavy to light loads, the sensed voltage compared with higher voltage to discharge the overshoot output voltage for achieving fast transient response. Besides, for an adaptive reversed current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level. Simulation results demonstrate that the ZCD circuit permits the reverse current flowing back into n-type power MOSFET at the beginning of load variations. The settling time is decreased to about 35 mus when load current suddenly changes from 500mA to 10 mA.", "title": "" }, { "docid": "dc5bb80426556e3dd9090a705d3e17b4", "text": "OBJECTIVES\nThe aim of this study was to locate the scientific literature dealing with addiction to the Internet, video games, and cell phones and to characterize the pattern of publications in these areas.\n\n\nMETHODS\nOne hundred seventy-nine valid articles were retrieved from PubMed and PsycINFO between 1996 and 2005 related to pathological Internet, cell phone, or video game use.\n\n\nRESULTS\nThe years with the highest numbers of articles published were 2004 (n = 42) and 2005 (n = 40). The most productive countries, in terms of number of articles published, were the United States (n = 52), China (n = 23), the United Kingdom (n = 17), Taiwan (n = 13), and South Korea (n = 9). The most commonly used language was English (65.4%), followed by Chinese (12.8%) and Spanish (4.5%). Articles were published in 96 different journals, of which 22 published 2 or more articles. The journal that published the most articles was Cyberpsychology & Behavior (n = 41). Addiction to the Internet was the most intensely studied (85.3%), followed by addiction to video games (13.6%) and cell phones (2.1%).\n\n\nCONCLUSIONS\nThe number of publications in this area is growing, but it is difficult to conduct precise searches due to a lack of clear terminology. To facilitate retrieval, bibliographic databases should include descriptor terms referring specifically to Internet, video games, and cell phone addiction as well as to more general addictions involving communications and information technologies and other behavioral addictions.", "title": "" }, { "docid": "b240041ea6a885151fd39d863b9217dc", "text": "Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. 
Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature.", "title": "" }, { "docid": "43ef67c897e7f998b1eb7d3524d514f4", "text": "This brief proposes a delta-sigma modulator that operates at extremely low voltage without using a clock boosting technique. To maintain the advantages of a discrete-time integrator in oversampled data converters, a mixed differential difference amplifier (DDA) integrator is developed that removes the input sampling switch in a switched-capacitor integrator. Conventionally, many low-voltage delta-sigma modulators have used high-voltage generating circuits to boost the clock voltage levels. A mixed DDA integrator with both a switched-resistor and a switched-capacitor technique is developed to implement a discrete-time integrator without clock boosted switches. The proposed mixed DDA integrator is demonstrated by a third-order delta-sigma modulator with a feedforward topology. The fabricated modulator shows a 68-dB signal-to-noise-plus-distortion ratio for a 20-kHz signal bandwidth with an oversampling ratio of 80. The chip consumes 140 μW of power at a true 0.4-V power supply, which is the lowest voltage without a clock boosting technique among the state-of-the-art modulators in this signal band.", "title": "" }, { "docid": "106fefb169c7e95999fb411b4e07954e", "text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.", "title": "" }, { "docid": "e797fbf7b53214df32d5694527ce5ba3", "text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. 
Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.", "title": "" }, { "docid": "2f17160c9f01aa779b1745a57e34e1aa", "text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.", "title": "" }, { "docid": "0b5f0cd5b8d49d57324a0199b4925490", "text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although, the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3). Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. 
The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.", "title": "" }, { "docid": "06502355f6db37b73806e9e57476e749", "text": "BACKGROUND\nBecause the trend of pharmacotherapy is toward controlling diet rather than administration of drugs, in our study we examined the probable relationship between Creatine (Cr) or Whey (Wh) consumption and anesthesia (analgesia effect of ketamine). Creatine and Wh are among the most favorable supplements in the market. Whey is a protein, which is extracted from milk and is a rich source of amino acids. Creatine is an amino acid derivative that can change to ATP in the body. Both of these supplements result in Nitric Oxide (NO) retention, which is believed to be effective in N-Methyl-D-aspartate (NMDA) receptor analgesia.\n\n\nOBJECTIVES\nThe main question of this study was whether Wh and Cr are effective on analgesic and anesthetic characteristics of ketamine and whether this is related to NO retention or amino acids' features.\n\n\nMATERIALS AND METHODS\nWe divided 30 male Wistar rats to three (n = 10) groups; including Cr, Wh and sham (water only) groups. Each group was administered (by gavage) the supplements for an intermediate dosage during 25 days. After this period, they became anesthetized using a Ketamine-Xylazine (KX) and their time to anesthesia and analgesia, and total sleep time were recorded.\n\n\nRESULTS\nData were analyzed twice using the SPSS 18 software with Analysis of Variance (ANOVA) and post hoc test; first time we expunged the rats that didn't become anesthetized and the second time we included all of the samples. There was a significant P-value (P < 0.05) for total anesthesia time in the second analysis. Bonferroni multiple comparison indicated that the difference was between Cr and Sham groups (P < 0.021).\n\n\nCONCLUSIONS\nThe data only indicated that there might be a significant relationship between Cr consumption and total sleep time. Further studies, with rats of different gender and different dosage of supplement and anesthetics are suggested.", "title": "" }, { "docid": "5bf2c4a187b35ad5c4e69aef5eb9ffea", "text": "In the last decade, the research of the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. 
Thus, there exists growing demand to explore appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations in views of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones were developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The majority of usability problems found by usability testing and additional problems were discovered by the proposed checklist. It is expected that the usability checklist proposed in this study could be used quickly and efficiently by usability practitioners to evaluate the mobile phone UI in the middle of the mobile phone development process.", "title": "" }, { "docid": "35ae4e59fd277d57c2746dfccf9b26b0", "text": "In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wised saliency maps from the superpixel-based background and foreground saliency estimations. Experiment results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.", "title": "" }, { "docid": "cd3d9bb066729fc7107c0fef89f664fe", "text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies I and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for disposition.il variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the robbers cave model). found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.", "title": "" }, { "docid": "f04682957e97b8ccb4f40bf07dde2310", "text": "This paper introduces a dataset gathered entirely in urban scenarios with a car equipped with one stereo camera and five laser scanners, among other sensors. One distinctive feature of the present dataset is the existence of high-resolution stereo images grabbed at high rate (20 fps) during a 36.8 km trajectory, which allows the benchmarking of a variety of computer vision techniques. We describe the employed sensors and highlight some applications which could be benchmarked with the presented work. 
Both plain text and binary files are provided, as well as open source tools for working with the binary versions. The dataset is available for download in http://www.mrpt.org/MalagaUrbanDataset.", "title": "" }, { "docid": "644d2fcc7f2514252c2b9da01bb1ef42", "text": "We now described an interesting application of SVD to text do cuments. Suppose we represent documents as a bag of words, soXij is the number of times word j occurs in document i, for j = 1 : W andi = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a g iven word, we can use standard search procedures, but this can get confuse d by ynonomy (different words with the same meaning) andpolysemy (same word with different meanings). An alternative approa ch is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, whereK is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrie val performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/ vectors: 1", "title": "" }, { "docid": "e289d20455fd856ce4cf72589b3e206b", "text": "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of `model adaptation'. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the `transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field1.", "title": "" } ]
scidocsrr
8fc6f52f98b9361f63ac81179118573b
Lehmer’s ENIAC computation
[ { "docid": "386bcf00ecc6ff1e21a8b06632cdf77e", "text": "With an interactive simulation of the ENIAC, users can wire complex configurations of the machine's modules. The simulation, written in Java, can be started from an Internet site. The simulation has been tested with a 6-meter-long data wall, which provides the closest available approximation to the look and feel of programming this historical computer.", "title": "" } ]
[ { "docid": "bda4bdc27e9ea401abb214c3fb7c9813", "text": "Lipedema is a common, but often underdiagnosed masquerading disease of obesity, which almost exclusively affects females. There are many debates regarding the diagnosis as well as the treatment strategies of the disease. The clinical diagnosis is relatively simple, however, knowledge regarding the pathomechanism is less than limited and curative therapy does not exist at all demanding an urgent need for extensive research. According to our hypothesis, lipedema is an estrogen-regulated polygenetic disease, which manifests in parallel with feminine hormonal changes and leads to vasculo- and lymphangiopathy. Inflammation of the peripheral nerves and sympathetic innervation abnormalities of the subcutaneous adipose tissue also involving estrogen may be responsible for neuropathy. Adipocyte hyperproliferation is likely to be a secondary phenomenon maintaining a vicious cycle. Herein, the relevant articles are reviewed from 1913 until now and discussed in context of the most likely mechanisms leading to the disease, which could serve as a starting point for further research.", "title": "" }, { "docid": "ca927f5557a6a5713e9313848fbbc5b1", "text": "A wide band CMOS LC-tank voltage controlled oscillator (VCO) with small VCO gain (KVCO) variation was developed. For small KVCO variation, serial capacitor bank was added to the LC-tank with parallel capacitor array. Implemented in a 0.18 mum CMOS RF technology, the proposed VCO can be tuned from 4.39 GHz to 5.26 GHz with the VCO gain variation less than 9.56%. While consuming 3.5 mA from a 1.8 V supply, the VCO has -113.65 dBc/Hz phase noise at 1 MHz offset from the carrier.", "title": "" }, { "docid": "b22b0a553971d9d81a8196f40f97255c", "text": "Latent fingerprints are routinely found at crime scenes due to the inadvertent contact of the criminals' finger tips with various objects. As such, they have been used as crucial evidence for identifying and convicting criminals by law enforcement agencies. However, compared to plain and rolled prints, latent fingerprints usually have poor quality of ridge impressions with small fingerprint area, and contain large overlap between the foreground area (friction ridge pattern) and structured or random noise in the background. Accordingly, latent fingerprint segmentation is a difficult problem. In this paper, we propose a latent fingerprint segmentation algorithm whose goal is to separate the fingerprint region (region of interest) from background. Our algorithm utilizes both ridge orientation and frequency features. The orientation tensor is used to obtain the symmetric patterns of fingerprint ridge orientation, and local Fourier analysis method is used to estimate the local ridge frequency of the latent fingerprint. Candidate fingerprint (foreground) regions are obtained for each feature type; an intersection of regions from orientation and frequency features localizes the true latent fingerprint regions. To verify the viability of the proposed segmentation algorithm, we evaluated the segmentation results in two aspects: a comparison with the ground truth foreground and matching performance based on segmented region.", "title": "" }, { "docid": "362b1a5119733eba058d1faab2d23ebf", "text": "§ Mission and structure of the project. § Overview of the Stone Man version of the Guide to the SWEBOK. § Status and development process of the Guide. 
§ Applications of the Guide in the fields of education, human resource management, professional development and licensing and certification. § Class exercise in applying the Guide to defining the competencies needed to support software life cycle process deployment. § Strategy for uptake and promotion of the Guide. § Discussion of promotion, trial usage and experimentation. Workshop Leaders:", "title": "" }, { "docid": "81e0cc5f85857542c039b0c5fe80e010", "text": "This paper proposes a pitch estimation algorithm that is based on optimal harmonic model fitting. The algorithm operates directly on the time-domain signal and has a relatively simple mathematical background. To increase its efficiency and accuracy, the algorithm is applied in combination with an autocorrelation-based initialization phase. For testing purposes we compare its performance on pitch-annotated corpora with several conventional time-domain pitch estimation algorithms, and also with a recently proposed one. The results show that even the autocorrelation-based first phase significantly outperforms the traditional methods, and also slightly the recently proposed yin algorithm. After applying the second phase – the harmonic approximation step – the amount of errors can be further reduced by about 20% relative to the error obtained in the first phase.", "title": "" }, { "docid": "8583702b48549c5bbf1553fa0e39a882", "text": "A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "ae7bcfb547c4dcb1f30cfc48dd1d494f", "text": "Recently, authority ranking has received increasing interests in both academia and industry, and it is applicable to many problems such as discovering influential nodes and building recommendation systems. Various graph-based ranking approaches like PageRank have been used to rank authors and papers separately in homogeneous networks. In this paper, we take venue information into consideration and propose a novel graph-based ranking framework, Tri-Rank, to co-rank authors, papers and venues simultaneously in heterogeneous networks. This approach is a flexible framework and it ranks authors, papers and venues iteratively in a mutually reinforcing way to achieve a more synthetic, fair ranking result. 
We conduct extensive experiments using the data collected from ACM Digital Library. The experimental results show that Tri-Rank is more effective and efficient than the state-of-the-art baselines including PageRank, HITS and Co-Rank in ranking authors. The papers and venues ranked by Tri-Rank also demonstrate that Tri-Rank is rational.", "title": "" }, { "docid": "7d232cd3fd69bbe33add101551dfdf25", "text": "The vector space model is one of the classical and widely applied information retrieval models to rank the web page based on similarity values. The retrieval operations consist of cosine similarity function to compute the similarity values between a given query and the set of documents retrieved and then rank the documents according to the relevance. In this paper, we are presenting different approaches of vector space model to compute similarity values of hits from search engine for given queries based on terms weight. In order to achieve the goal of an effective evaluation algorithm, our work intends to extensive analysis of the main aspects of Vector space model, its approaches and provides a comprehensive comparison for Term-Count", "title": "" }, { "docid": "161fab4195de0d0358de9bd74f3c0805", "text": "Working with sensitive data is often a balancing act between privacy and integrity concerns. Consider, for instance, a medical researcher who has analyzed a patient database to judge the effectiveness of a new treatment and would now like to publish her findings. On the one hand, the patients may be concerned that the researcher's results contain too much information and accidentally leak some private fact about themselves; on the other hand, the readers of the published study may be concerned that the results contain too little information, limiting their ability to detect errors in the calculations or flaws in the methodology.\n This paper presents VerDP, a system for private data analysis that provides both strong integrity and strong differential privacy guarantees. VerDP accepts queries that are written in a special query language, and it processes them only if a) it can certify them as differentially private, and if b) it can prove the integrity of the result in zero knowledge. Our experimental evaluation shows that VerDP can successfully process several different queries from the differential privacy literature, and that the cost of generating and verifying the proofs is practical: for example, a histogram query over a 63,488-entry data set resulted in a 20 kB proof that took 32 EC2 instances less than two hours to generate, and that could be verified on a single machine in about one second.", "title": "" }, { "docid": "9c300cdde4964fc126e7e8af5882747e", "text": "BACKGROUND\nThe purpose of this qualitative study was to investigate advanced cancer patients' perspectives on the importance, feasibility, teaching methods, and issues associated with training healthcare providers in compassionate care.\n\n\nMETHODS\nThis study utilized grounded theory, a qualitative research method, to develop an empirical understanding of compassion education rooted in direct patient reports. Audio-recorded semi-structured interviews were conducted to obtain an in-depth understanding of compassion training from the perspectives of hospitalized advanced cancer patients (n = 53). 
Data were analyzed in accordance with grounded theory to determine the key elements of the underlying theory.\n\n\nRESULTS\nThree overarching categories and associated themes emerged from the data: compassion aptitude, cultivating compassion, and training methods. Participants spoke of compassion as an innate quality embedded in the character of learners prior to their healthcare training, which could be nurtured through experiential learning and reflective practices. Patients felt that the innate qualities that learners possessed at baseline were further fashioned by personal and practice experiences, and vocational motivators. Participants also provided recommendations for compassion training, including developing an interpersonal relationship with patients, seeing the patient as a person, and developing a human connection. Teaching methods that patients suggested in compassion training included patient-centered communication, self-reflection exercises, and compassionate role modeling.\n\n\nCONCLUSIONS\nThis study provides insight on compassion training for both current and future healthcare providers, from the perspectives of the end recipients of healthcare provider training - patients. Developing a theoretical base for patient centred, evidence-informed, compassion training is a crucial initial step toward the further development of this core healthcare competency.", "title": "" }, { "docid": "99b2cf752848a5b787b378719dc934f1", "text": "This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.", "title": "" }, { "docid": "a8fdd94eea9b888f3c936c69598d2ad2", "text": "To reduce the high failure rate of software projects, managers need better tools to assess and manage software project risk. In order to create such tools, however, information systems researchers must first develop a better understanding of the dimensions of software project risk and how they can affect project performance. Progress in this area has been hindered by: (1) a lack of validated instruments for measuring software project risk that tap into the dimensions of risk that are seen as important by software project managers, and (2) a lack of theory to explain the linkages between various dimensions of software project risk and project performance. In this study, six dimensions of software project risk were identified and reliable and valid measures were developed for each. Guided by sociotechnical systems theory, an exploratory model was developed and tested. The results show that social subsystem risk influences technical subsystem risk, which, in turn, influences the level of project management risk, and ultimately, project performance. The implications of these findings for research and practice are discussed. Subject Areas: Sociotechnical Systems Theory, Software Project Risk, and Structural Equation Modeling. ∗The authors would like to thank the Project Management Institute’s Information Systems Special Interest Group (PMI-ISSIG) for supporting this research. We would also like to thank Georgia State University for their financial support through the PhD research grant program. The authors gratefully acknowledge Al Segars and Ed Rigdon for their insightful comments and assistance at various stages of this project. 
†Corresponding author.", "title": "" }, { "docid": "20deb56f6d004a8e33d1e1a4f579c1ba", "text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.", "title": "" }, { "docid": "3e9de22ac9f81cf3233950a0d72ef15a", "text": "Increasing of head rise (HR) and decreasing of head loss (HL), simultaneously, are important purpose in the design of different types of fans. Therefore, multi-objective optimization process is more applicable for the design of such turbo machines. In the present study, multi-objective optimization of Forward-Curved (FC) blades centrifugal fans is performed at three steps. At the first step, Head rise (HR) and the Head loss (HL) in a set of FC centrifugal fan is numerically investigated using commercial software NUMECA. Two meta-models based on the evolved group method of data handling (GMDH) type neural networks are obtained, at the second step, for modeling of HR and HL with respect to geometrical design variables. Finally, using obtained polynomial neural networks, multi-objective genetic algorithms are used for Pareto based optimization of FC centrifugal fans considering two conflicting objectives, HR and HL. It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of FC fans can be discovered by Pareto based multi-objective optimization of the obtained polynomial meta-models representing their HR and HL characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modeling and the Pareto optimization approach.", "title": "" }, { "docid": "641f8ac3567d543dd5df40a21629fbd7", "text": "Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. 
This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.", "title": "" }, { "docid": "ce8cabea6fff858da1fb9894860f7c2d", "text": "This thesis investigates artificial agents learning to make strategic decisions in imperfect-information games. In particular, we introduce a novel approach to reinforcement learning from self-play. We introduce Smooth UCT, which combines the game-theoretic notion of fictitious play with Monte Carlo Tree Search (MCTS). Smooth UCT outperformed a classic MCTS method in several imperfect-information poker games and won three silver medals in the 2014 Annual Computer Poker Competition. We develop Extensive-Form Fictitious Play (XFP) that is entirely implemented in sequential strategies, thus extending this prominent game-theoretic model of learning to sequential games. XFP provides a principled foundation for self-play reinforcement learning in imperfect-information games. We introduce Fictitious Self-Play (FSP), a class of sample-based reinforcement learning algorithms that approximate XFP. We instantiate FSP with neuralnetwork function approximation and deep learning techniques, producing Neural FSP (NFSP). We demonstrate that (approximate) Nash equilibria and their representations (abstractions) can be learned using NFSP end to end, i.e. interfacing with the raw inputs and outputs of the domain. NFSP approached the performance of state-of-the-art, superhuman algorithms in Limit Texas Hold’em an imperfect-information game at the absolute limit of tractability using massive computational resources. This is the first time that any reinforcement learning algorithm, learning solely from game outcomes without prior domain knowledge, achieved such a feat.", "title": "" }, { "docid": "7daf4d9d3204cdaf9a1f28a29335802d", "text": "Hole mobility and velocity are extracted from scaled strained-Si0.45Ge0.55 channel p-MOSFETs on insulator. Devices have been fabricated with sub-100-nm gate lengths, demonstrating hole mobility and velocity enhancements in strained- Si0.45Ge0.55 channel devices relative to Si. The effective hole mobility is extracted utilizing the dR/dL method. A hole mobility enhancement is observed relative to Si hole universal mobility for short-channel devices with gate lengths ranging from 65 to 150 nm. Hole velocities extracted using several different methods are compared. The hole velocity of strained-SiGe p-MOSFETs is enhanced over comparable Si control devices. The hole velocity enhancements extracted are on the order of 30%. Ballistic velocity simulations suggest that the addition of (110) uniaxial compressive strain to Si0.45Ge0.55 can result in a more substantial increase in velocity relative to relaxed Si.", "title": "" }, { "docid": "8c4540f3724dab3a173e94bdba7b0999", "text": "The significant growth of the Internet of Things (IoT) is revolutionizing the way people live by transforming everyday Internet-enabled objects into an interconnected ecosystem of digital and personal information accessible anytime and anywhere. 
As more objects become Internet-enabled, the security and privacy of the personal information generated, processed and stored by IoT devices become complex and challenging to manage. This paper details the current security and privacy challenges presented by the increasing use of the IoT. Furthermore, investigate and analyze the limitations of the existing solutions with regard to addressing security and privacy challenges in IoT and propose a possible solution to address these challenges. The results of this proposed solution could be implemented during the IoT design, building, testing and deployment phases in the real-life environments to minimize the security and privacy challenges associated with IoT.", "title": "" } ]
scidocsrr
154f8e8e4ee64ce4143eeda45cd842ba
Who are the Devils Wearing Prada in New York City?
[ { "docid": "b17fdc300edc22ab855d4c29588731b2", "text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.", "title": "" } ]
[ { "docid": "8dfa68e87eee41dbef8e137b860e19cc", "text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "title": "" }, { "docid": "d142ad76c2c5bb1565ef539188ce7d43", "text": "The recent discovery of new classes of small RNAs has opened unknown territories to explore new regulations of physiopathological events. We have recently demonstrated that RNY (or Y RNA)-derived small RNAs (referred to as s-RNYs) are an independent class of clinical biomarkers to detect coronary artery lesions and are associated with atherosclerosis burden. Here, we have studied the role of s-RNYs in human and mouse monocytes/macrophages and have shown that in lipid-laden monocytes/macrophages s-RNY expression is timely correlated to the activation of both NF-κB and caspase 3-dependent cell death pathways. Loss- or gain-of-function experiments demonstrated that s-RNYs activate caspase 3 and NF-κB signaling pathways ultimately promoting cell death and inflammatory responses. As, in atherosclerosis, Ro60-associated s-RNYs generated by apoptotic macrophages are released in the blood of patients, we have investigated the extracellular function of the s-RNY/Ro60 complex. Our data demonstrated that s-RNY/Ro60 complex induces caspase 3-dependent cell death and NF-κB-dependent inflammation, when added to the medium of cultured monocytes/macrophages. Finally, we have shown that s-RNY function is mediated by Toll-like receptor 7 (TLR7). Indeed using chloroquine, which disrupts signaling of endosome-localized TLRs 3, 7, 8 and 9 or the more specific TLR7/9 antagonist, the phosphorothioated oligonucleotide IRS954, we blocked the effect of either intracellular or extracellular s-RNYs. These results position s-RNYs as relevant novel functional molecules that impacts on macrophage physiopathology, indicating their potential role as mediators of inflammatory diseases, such as atherosclerosis.", "title": "" }, { "docid": "7b6482f295304b2a7a4c6082d0300dc9", "text": "In this paper we proposed SVM algorithm for MNIST dataset with fringe and its complementary version, inverse fringe as feature for SVM. MNIST data-set is consists of 60000 examples of training set and 10000 examples of test set. 
In our experiments, we first used the fringe distance map as the feature and found that the accuracy of the system is 99.99% on the training data and 97.14% on the test data; using the inverse fringe distance map as the feature, the accuracy is 99.92% on the training data and 97.72% on the test data; and using the combination of the two features, the accuracy is 100% on the training data and 97.55% on the test data.", "title": "" }, { "docid": "d21213e0dbef657d5e7ec8689fe427ed", "text": "Cutaneous infections due to Listeria monocytogenes are rare. Typically, infections manifest as nonpainful, nonpruritic, self-limited, localized, papulopustular or vesiculopustular eruptions in healthy persons. Most cases follow direct inoculation of the skin in veterinarians or farmers who have exposure to animal products of conception. Less commonly, skin lesions may arise from hematogenous dissemination in compromised hosts with invasive disease. Here, we report the first case in a gardener that occurred following exposure to soil and vegetation.", "title": "" }, { "docid": "cb4bf3bc76586e455dc863bc1ca2800e", "text": "Client-side apps (e.g., mobile or in-browser) need cloud data to be available in a local cache, for both reads and updates. For optimal user experience and developer support, the cache should be consistent and fault-tolerant. In order to scale to high numbers of unreliable and resource-poor clients, and a large database, the system needs to use resources sparingly. The SwiftCloud distributed object database is the first to provide fast reads and writes via a causally-consistent client-side local cache backed by the cloud. It is thrifty in resources and scales well, thanks to consistent versioning provided by the cloud, using small and bounded metadata. It remains available during faults, switching to a different data centre when the current one is not responsive, while maintaining its consistency guarantees. This paper presents the SwiftCloud algorithms, design, and experimental evaluation. It shows that client-side apps enjoy the high performance and availability, under the same guarantees as a remote cloud data store, at a small cost.", "title": "" }, { "docid": "d676598b1afe341079b4705284d6a911", "text": "Quality of underwater image is poor due to the environment of water medium. The physical property of water medium causes attenuation of light travelling through the water medium, resulting in low contrast, blur, inhomogeneous lighting, and color diminishing of the underwater images. This paper extends the methods of enhancing the quality of underwater image. The proposed method consists of two stages. At the first stage, the contrast correction technique is applied to the image, where the image is applied with the modified Von Kreis hypothesis and stretching the image into two different intensity images at the average value with respect to the Rayleigh distribution. At the second stage, the color correction technique is applied to the image where the image is first converted into hue-saturation-value (HSV) color model. The modification of the color component increases the image color performance. 
Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction.", "title": "" }, { "docid": "24625cbc472bf376b44ac6e962696d0b", "text": "Although deep neural networks have made tremendous progress in the area of multimedia representation, training neural models requires a large amount of data and time. It is well known that utilizing trained models as initial weights often achieves lower training error than neural networks that are not pre-trained. A fine-tuning step helps to both reduce the computational cost and improve the performance. Therefore, sharing trained models has been very important for the rapid progress of research and development. In addition, trained models could be important assets for the owner(s) who trained them; hence, we regard trained models as intellectual property. In this paper, we propose a digital watermarking technology for ownership authorization of deep neural networks. First, we formulate a new problem: embedding watermarks into deep neural networks. We also define requirements, embedding situations, and attack types on watermarking in deep neural networks. Second, we propose a general framework for embedding a watermark in model parameters, using a parameter regularizer. Our approach does not impair the performance of networks into which a watermark is placed because the watermark is embedded while training the host network. Finally, we perform comprehensive experiments to reveal the potential of watermarking deep neural networks as the basis of this new research effort. We show that our framework can embed a watermark during the training of a deep neural network from scratch, and during fine-tuning and distilling, without impairing its performance. The embedded watermark does not disappear even after fine-tuning or parameter pruning; the watermark remains complete even after 65% of parameters are pruned.", "title": "" }, { "docid": "c99fd51e8577a5300389c565aebebdb3", "text": "Face Detection and Recognition is an important area in the field of substantiation. Maintenance of records of students along with monitoring of class attendance is an area of administration that requires significant amount of time and efforts for management. Automated Attendance Management System performs the daily activities of attendance analysis, for which face recognition is an important aspect. The prevalent techniques and methodologies for detecting and recognizing faces by using feature extraction tools like mean, standard deviation etc fail to overcome issues such as scaling, pose, illumination, variations. The proposed system provides features such as detection of faces, extraction of the features, detection of extracted features, and analysis of student’s attendance. The proposed system integrates techniques such as Principal Component Analysis (PCA) for feature extraction and voila-jones for face detection &Euclidian distance classifier. Faces are recognized using PCA, using the database that contains images of students and is used to recognize student using the captured image. Better accuracy is attained in results and the system takes into account the changes that occurs in the face over the period of time.", "title": "" }, { "docid": "7e10aa210d6985d757a21b8b6c49ae53", "text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. 
Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t", "title": "" }, { "docid": "0e482ebd5fa8f8f3fc67b01e9e6ee4bc", "text": "Lung cancer is one of the most deadly diseases. It has a high death rate and its incidence rate has been increasing all over the world. Lung cancer appears as a solitary nodule in chest x-ray radiograph (CXR). Therefore, lung nodule detection in CXR could have a significant impact on early detection of lung cancer. Radiologists define a lung nodule in CXR as “solitary white nodule-like blob.” However, the solitary feature has not been employed for lung nodule detection before. In this paper, a solitary feature-based lung nodule detection method was proposed. We employed stationary wavelet transform and convergence index filter to extract the texture features and used AdaBoost to generate white nodule-likeness map. A solitary feature was defined to evaluate the isolation degree of candidates. Both the isolation degree and the white nodule likeness were used as final evaluation of lung nodule candidates. The proposed method shows better performance and robustness than those reported in previous research. More than 80% and 93% of lung nodules in the lung field in the Japanese Society of Radiological Technology (JSRT) database were detected when the false positives per image were two and five, respectively. The proposed approach has the potential of being used in clinical practice.", "title": "" }, { "docid": "61309b5f8943f3728f714cd40f260731", "text": "Article history: Received 4 January 2011 Received in revised form 1 August 2011 Accepted 13 August 2011 Available online 15 September 2011 Advertising media are a means of communication that creates different marketing and communication results among consumers. Over the years, newspaper, magazine, TV, and radio have provided a one-way media where information is broadcast and communicated. Due to the widespread application of the Internet, advertising has entered into an interactive communications mode. 
In the advent of 3G broadband mobile communication systems and smartphone devices, consumers' preferences can be pre-identified and advertising messages can therefore be delivered to consumers in a multimedia format at the right time and at the right place with the right message. In light of this new advertisement possibility, designing personalized mobile advertising to meet consumers' needs becomes an important issue. This research uses the fuzzy Delphi method to identify the key personalized attributes in a personalized mobile advertising message for different products. Results of the study identify six important design attributes for personalized advertisements: price, preference, promotion, interest, brand, and type of mobile device. As personalized mobile advertising becomes more integrated in people's daily activities, its pros and cons and social impact are also discussed. The research result can serve as a guideline for the key parties in mobile marketing industry to facilitate the development of the industry and ensure that advertising resources are properly used. © 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "115028551249d3cb1accbd0841b9930a", "text": "To study the Lombard reflex, more realistic databases representing real-world conditions need to be recorded and analyzed. In this paper we 1) summarize a procedure to record Lombard data which provides a good approximation of realistic conditions, 2) present an analysis per class of sounds for duration and energy of words recorded while subjects are listening to noise through open-ear headphones a) when speakers are in communication with a recognition device and b) when reading a list, and 3) report on the influence of speaking style on speakerdependent and speaker-independent experiments. This paper extends a previous study aimed at analyzing the influence of the communication factor on the Lombard reflex. We also show evidence that it is difficult to separate the speaker from the environment stressor (in this case the noise) when studying the Lombard reflex. The main conclusion of our pilot study is that the communication factor should not be neglected because it strongly influences the Lombard reflex.", "title": "" }, { "docid": "fb915584f23482986e672b1a38993ca1", "text": "We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms—including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.", "title": "" }, { "docid": "843e7bfe22d8b93852374dde8715ca42", "text": "In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. 
We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets.", "title": "" }, { "docid": "aca800983a0e24aa663c09cccb91f02a", "text": "A multiple model adaptive estimator (MMAE) [1, 2, 6, 8, 9, 11] consists of a bank of parallel Kalman filters, each with a different model, and a hypothesis testing algorithm as shown in Fig. 1. Each of the internal models of the Kalman filters can be represented by a discrete value of a parameter vector (ak; k= 1,2, : : : ,K). The Kalman filters are provided a measurement vector (z) and the input vector (u), and produce a state estimate (x̂k) and a residual (rk). The hypothesis testing algorithm uses the residuals to compute conditional probabilities (pk) of the various hypotheses that are modeled in the Kalman filters, conditioned on the history of measurements received up to that time, and to compute an estimate of the true parameter vector (â). The conventional MMAE computes conditional probabilities (pk) in a manner that exploits three of four characteristics of Kalman filter residuals that are based on a correctly modeled hypothesis—that they should be Gaussian, zero-mean, and of computable covariance—but does not exploit the fact that they should also be white. The algorithm developed herein addresses this directly, yielding a complement to the conventional MMAE. One application of MMAE is flight control sensor/actuator failure detection and identification, where each Kalman filter has a different failure status model (ak) that it uses to form the state estimate (x̂k) and the residual (rk). The hypothesis testing algorithm assigns conditional probabilities (pk) to each of the hypotheses that were used to form the Kalman filter models. These conditional probabilities indicate the relative correctness of the various filter models, and can be used to select the best estimate of the true system failure status, weight the individual state estimates appropriately, and form a probability-weighted average state estimate (x̂MMAE). A primary objection to implementing an MMAE-based (or other) failure detection algorithm is the need to dither the system constantly to enhance failure identifiability. 
The MMAE compares the magnitudes of the residuals (appropriately scaled to account for various uncertainties and noises) from the various filters and chooses the hypothesis that corresponds to the residual that has a history of having smallest (scaled) magnitude. Large residuals must be produced by the filters with models that are incorrect to be able to discount these incorrect hypotheses. The residual is the difference between the measurement of the system output and the filter’s prediction of what that measurement should be, based on the filter-assumed system model. Therefore, to produce the needed large residuals in the incorrect filters, we need to produce a history of sufficiently large system outputs, so we need to dither the system constantly and thereby", "title": "" }, { "docid": "c5cfe386f6561eab1003d5572443612e", "text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.", "title": "" }, { "docid": "c5d06fe50c16278943fe1df7ad8be888", "text": "Current main memory organizations in embedded and mobile application systems are DRAM dominated. The ever-increasing gap between today's processor and memory speeds makes the DRAM subsystem design a major aspect of computer system design. However, the limitations to DRAM scaling and other challenges like refresh provide undesired trade-offs between performance, energy and area to be made by architecture designers. Several emerging NVM options are being explored to at least partly remedy this but today it is very hard to assess the viability of these proposals because the simulations are not fully based on realistic assumptions on the NVM memory technologies and on the system architecture level. In this paper, we propose to use realistic, calibrated STT-MRAM models and a well calibrated cross-layer simulation and exploration framework, named SEAT, to better consider technologies aspects and architecture constraints. We will focus on general purpose/mobile SoC multi-core architectures. We will highlight results for a number of relevant benchmarks, representatives of numerous applications based on actual system architecture. 
The most energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 27% at the cost of 2x the area and the least energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 8% at the around the same area or lesser when compared to DRAM.", "title": "" }, { "docid": "eb6f055399614a4e0876ffefae8d6a28", "text": "For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict if a given query-template protein pair belongs to the same structural fold. The input used stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6% and for Top 5 is 91.2%, 76.5%, and 60.7% at family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed the comparable results at the level of family and superfamily, compared to ensemble DN-Fold. Finally, we extended the binary classification problem of fold recognition to real-value regression task, which also show a promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.", "title": "" }, { "docid": "7a430880e5274fbb9d8cf4085920a5b6", "text": "Human beings are biologically adapted for culture in ways that other primates are not. The difference can be clearly seen when the social learning skills of humans and their nearest primate relatives are systematically compared. The human adaptation for culture begins to make itself manifest in human ontogeny at around 1 year of age as human infants come to understand other persons as intentional agents like the self and so engage in joint attentional interactions with them. This understanding then enables young children (a) to employ some uniquely powerful forms of cultural learning to acquire the accumulated wisdom of their cultures, especially as embodied in language, and also (b) to comprehend their worlds in some uniquely powerful ways involving perspectivally based symbolic representations. Until fairly recently, the study of children's cognitive development was dominated by the theory of Jean Piaget. Piaget's theory was detailed , elaborate, comprehensive, and, in many important respects, wrong. In attempting to fill the theoretical vacuum created by Piaget's demise, developmental psychologists have sorted themselves into two main groups. In the first group are those theorists who emphasize biology. These neo-nativists believe that organic evolution has provided human beings with some specific domains of knowledge of the world and its workings and that this knowledge is best characterized as \" innate. \" Such domains include, for example , mathematics, language, biology , and psychology. In the other group are theorists who have focused on the cultural dimension of human cognitive development. 
These cultural psychologists begin with the fact that human children grow into cognitively competent adults in the context of a structured social world full of material and symbolic artifacts such as tools and language, structured social interactions such as rituals and games, and cultural institutions such as families and religions. The claim is that the cultural context is not just a facilitator or motivator for cognitive development, but rather a unique \"ontogenetic niche\" (i.e., a unique context for development) that actually structures human cognition in fundamental ways. There are many thoughtful scientists in each of these theoretical camps. This suggests the possibility that each has identified some aspects of the overall theory that will be needed to go beyond Piaget and incorporate adequately both the cultural and the biological dimensions of human cognitive development. What is needed to achieve this aim, in my opinion, is (a) an evolutionary approach to the human …", "title": "" } ]
scidocsrr
8eaf4f6e40e4a0c9585c8d572cd77814
A Horizontal Fragmentation Algorithm for the Fact Relation in a Distributed Data Warehouse
[ { "docid": "cd892dec53069137c1c2cfe565375c62", "text": "Optimal application performance on a Distributed Object Based System (DOBS) requires class fragmentation and the development of allocation schemes to place fragments at distributed sites so data transfer is minimized. Fragmentation enhances application performance by reducing the amount of irrelevant data accessed and the amount of data transferred unnecessarily between distributed sites. Algorithms for effecting horizontal and vertical fragmentation ofrelations exist, but fragmentation techniques for class objects in a distributed object based system are yet to appear in the literature. This paper first reviews a taxonomy of the fragmentation problem in a distributed object base. The paper then contributes by presenting a comprehensive set of algorithms for horizontally fragmenting the four realizable class models on the taxonomy. The fundamental approach is top-down, where the entity of fragmentation is the class object. Our approach consists of first generating primary horizontal fragments of a class based on only applications accessing this class, and secondly generating derived horizontal fragments of the class arising from primary fragments of its subclasses, its complex attributes (contained classes), and/or its complex methods classes. Finally, we combine the sets of primary and derived fragments of each class to produce the best possible fragments. Thus, these algorithms account for inheritance and class composition hierarchies as well as method nesting among objects, and are shown to be polynomial time.", "title": "" } ]
[ { "docid": "d1114f1ced731a700d40dd97fe62b82b", "text": "Agricultural sector is playing vital role in Indian economy, in which irrigation mechanism is of key concern. Our paper aims to find the exact field condition and to control the wastage of water in the field and to provide exact controlling of field by using the drip irrigation, atomizing the agricultural environment by using the components and building the necessary hardware. For the precisely monitoring and controlling of the agriculture filed, different types of sensors were used. To implement the proposed system ARM LPC2148 Microcontroller is used. The irrigation mechanism is monitored and controlled more efficiently by the proposed system, which is a real time feedback control system. GSM technology is used to inform the end user about the exact field condition. Actually this method of irrigation system has been proposed primarily to save resources, yield of crops and farm profitability.", "title": "" }, { "docid": "80c21770ada160225e17cb9673fff3b3", "text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.", "title": "" }, { "docid": "aed80386c32e16f70fff3cbc44b07d97", "text": "The vision for the \"Web of Things\" (WoT) aims at bringing physical objects of the world into the World Wide Web. The Web is constantly evolving and has changed over the last couple of decades and the changes have spurted new areas of growth. The primary focus of the WoT is to bridge the gap between physical and digital worlds over a common and widely used platform, which is the Web. Everyday physical \"things\", which are not Web-enabled, and have limited or zero computing capability, can be accommodated within the Web. As a step towards this direction, this work focuses on the specification of a thing, its descriptors and functions that could participate in the process of its discovery and operations. Besides, in this model for the WoT, we also propose a semantic Web-based architecture to integrate these things as Web resources to further demystify the realization of the WoT vision.", "title": "" }, { "docid": "c3c5931200ff752d8285cc1068e779ee", "text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. 
To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.", "title": "" }, { "docid": "812c41737bb2a311d45c5566f773a282", "text": "Acceleration, sprint and agility performance are crucial in sports like soccer. There are few studies regarding the effect of training on youth soccer players in agility performance and in sprint distances shorter than 30 meter. Therefore, the aim of the recent study was to examine the effect of a high-intensity sprint and plyometric training program on 13-year-old male soccer players. A training group of 14 adolescent male soccer players, mean age (±SD) 13.5 years (±0.24) followed an eight week intervention program for one hour per week, and a group of 12 adolescent male soccer players of corresponding age, mean age 13.5 years (±0.23) served as control a group. Preand post-tests assessed 10-m linear sprint, 20-m linear sprint and agility performance. Results showed a significant improvement in agility performance, pre 8.23 s (±0.34) to post 7.69 s (± 0.34) (p<0.01), and a significant improvement in 0-20m linear sprint, pre 3.54s (±0.17) to post 3.42s (±0.18) (p<0.05). In 0-10m sprint the participants also showed an improvement, pre 2.02s (±0.11) to post 1.96s (± 0.11), however this was not significant. The correlation between 10-m sprint and agility was r = 0.53 (p<0.01), and between 20-m linear sprint and agility performance, r = 0.67 (p<0.01). The major finding in the study is the significant improvement in agility performance and in 0-20 m linear sprint in the intervention group. These findings suggest that organizing the training sessions with short-burst high-intensity sprint and plyometric exercises interspersed with adequate recovery time, may result in improvements in both agility and in linear sprint performance in adolescent male soccer players. Another finding is the correlation between linear sprint and agility performance, indicating a difference when compared to adults. 4 | Mathisen: EFFECT OF HIGH-SPEED...", "title": "" }, { "docid": "ccff1c7fa149a033b49c3a6330d4e0f3", "text": "Stroke is the leading cause of permanent adult disability in the U.S., frequently resulting in chronic motor impairments. Rehabilitation of the upper limb, particularly the hand, is especially important as arm and hand deficits post-stroke limit the performance of activities of daily living and, subsequently, functional independence. Hand rehabilitation is challenging due to the complexity of motor control of the hand. New instrumentation is needed to facilitate examination of the hand. Thus, a novel actuated exoskeleton for the index finger, the FingerBot, was developed to permit the study of finger kinetics and kinematics under a variety of conditions. 
Two such novel environments, one applying a spring-like extension torque proportional to angular displacement at each finger joint and another applying a constant extension torque at each joint, were compared in 10 stroke survivors with the FingerBot. Subjects attempted to reach targets located throughout the finger workspace. The constant extension torque assistance resulted in a greater workspace area (p < 0.02) and a larger active range of motion for the metacarpophalangeal joint (p < 0.01) than the spring-like assistance. Additionally, accuracy in terms of reaching the target was greater with the constant extension assistance as compared to no assistance. The FingerBot can be a valuable tool in assessing various hand rehabilitation paradigms following stroke.", "title": "" }, { "docid": "177c5969917e04ea94773d1c545fae82", "text": "Attitudes toward global warming are influenced by various heuristics, which may distort policy away from what is optimal for the well-being of people. These possible distortions, or biases, include: a focus on harms that we cause, as opposed to those that we can remedy more easily; a feeling that those who cause a problem should fix it; a desire to undo a problem rather than compensate for its presence; parochial concern with one’s own group (nation); and neglect of risks that are not available. Although most of these biases tend to make us attend relatively too much to global warming, other biases, such as wishful thinking, cause us to attend too little. I discuss these possible effects and illustrate some of them with an experiment conducted on the World Wide Web.", "title": "" }, { "docid": "34382f9716058d727f467716350788a7", "text": "The structure of the brain and the nature of evolution suggest that, despite its uniqueness, language likely depends on brain systems that also subserve other functions. The declarative/procedural (DP) model claims that the mental lexicon of memorized word-specific knowledge depends on the largely temporal-lobe substrates of declarative memory, which underlies the storage and use of knowledge of facts and events. The mental grammar, which subserves the rule-governed combination of lexical items into complex representations, depends on a distinct neural system. This system, which is composed of a network of specific frontal, basal-ganglia, parietal and cerebellar structures, underlies procedural memory, which supports the learning and execution of motor and cognitive skills, especially those involving sequences. The functions of the two brain systems, together with their anatomical, physiological and biochemical substrates, lead to specific claims and predictions regarding their roles in language. These predictions are compared with those of other neurocognitive models of language. Empirical evidence is presented from neuroimaging studies of normal language processing, and from developmental and adult-onset disorders. It is argued that this evidence supports the DP model. It is additionally proposed that \"language\" disorders, such as specific language impairment and non-fluent and fluent aphasia, may be profitably viewed as impairments primarily affecting one or the other brain system. 
Overall, the data suggest a new neurocognitive framework for the study of lexicon and grammar.", "title": "" }, { "docid": "b741698d7e4d15cb7f4e203f2ddbce1d", "text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.", "title": "" }, { "docid": "8ba9439094fae89d6ff14d03476878b9", "text": "In this paper we present a framework for the real-time control of lightweight autonomous vehicles which comprehends a proposed hardand software design. The system can be used for many kinds of vehicles and offers high computing power and flexibility in respect of the control algorithms and additional application dependent tasks. It was originally developed to control a small quad-rotor UAV where stringent restrictions in weight and size of the hardware components exist, but has been transfered to a fixed-wing UAV and a ground vehicle for inand outdoor search and rescue missions. The modular structure and the use of a standard PC architecture at an early stage simplifies reuse of components and fast integration of new features. Figure 1: Quadrotor UAV controlled by the proposed system", "title": "" }, { "docid": "5f96b65c7facf35cd0b2e629a2e98662", "text": "Effectively evaluating visualization techniques is a difficult task often assessed through feedback from user studies and expert evaluations. This work presents an alternative approach to visualization evaluation in which brain activity is passively recorded using electroencephalography (EEG). These measurements are used to compare different visualization techniques in terms of the burden they place on a viewer’s cognitive resources. In this paper, EEG signals and response times are recorded while users interpret different representations of data distributions. This information is processed to provide insight into the cognitive load imposed on the viewer. This paper describes the design of the user study performed, the extraction of cognitive load measures from EEG data, and how those measures are used to quantitatively evaluate the effectiveness of visualizations.", "title": "" }, { "docid": "9ae370847ec965a3ce9c7636f8d6a726", "text": "In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. 
This information can then be used for medical diagnosis, therapy, and emergency services. Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and user-defined gestures with an accuracy of 97%. It can detect tremors above 2 Hz within 0.1 Hz.", "title": "" }, { "docid": "3d9e279afe4ba8beb1effd4f26550f67", "text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of a coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.", "title": "" }, { "docid": "97561632e9d87093a5de4f1e4b096df7", "text": "Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer that wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate. Guy Shani Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: [email protected] Asela Gunawardana Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: [email protected]", "title": "" }, { "docid": "5c469bbeb053c187c2d14fd9f27c4426", "text": "Fatigue damage increases with applied load cycles in a cumulative manner. Cumulative fatigue damage analysis plays a key role in life prediction of components and structures subjected to field load histories. Since the introduction of damage accumulation concept by Palmgren about 70 years ago and ‘linear damage rule’ by Miner about 50 years ago, the treatment of cumulative fatigue damage has received increasingly more attention. As a result, many damage models have been developed. 
Even though early theories on cumulative fatigue damage have been reviewed by several researchers, no comprehensive report has appeared recently to review the considerable efforts made since the late 1970s. This article provides a comprehensive review of cumulative fatigue damage theories for metals and their alloys, emphasizing the approaches developed between the early 1970s and the early 1990s. These theories are grouped into six categories: linear damage rules; nonlinear damage curve and two-stage linearization approaches; life curve modification methods; approaches based on crack growth concepts; continuum damage mechanics models; and energy-based theories.", "title": "" }, { "docid": "b0bcd65de1841474dba09e9b1b5c2763", "text": "Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.", "title": "" }, { "docid": "c31ffcb1514f437313c2f3f0abaf3a88", "text": "Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.", "title": "" }, { "docid": "2a68d57f8d59205122dd11461accecab", "text": "A resistive methanol sensor based on ZnO hexagonal nanorods having average diameter (60–70 nm) and average length of ~500 nm is reported in this paper. A low temperature chemical bath deposition technique is employed to deposit vertically aligned ZnO hexagonal nanorods using zinc acetate dihydrate and hexamethylenetetramine (HMT) precursors at 100°C on a SiO2 substrate having Sol-Gel grown ZnO seed layer. 
After structural (XRD, FESEM) and electrical (Hall effect) characterizations, four types of sensor structures, incorporating the effect of the catalytic metal electrode (Pd-Ag) and Pd nanoparticle sensitization, are fabricated and tested for sensing methanol vapor in the temperature range of 27°C–300°C. The as-deposited ZnO nanorods with Pd-Ag catalytic contact offered an appreciably high dynamic range (190–3040 ppm) at a moderately lower temperature (200°C) compared to the sensors with the noncatalytic electrode (Au). Surface modification of the nanorods by Pd nanoparticles offered faster response and recovery with increased response magnitude for both types of electrodes, but at the cost of a lower dynamic range (190–950 ppm). The possible sensing mechanism has also been discussed briefly.", "title": "" }, { "docid": "ef1f34e7bc08b78bfbf7317cd102c89e", "text": "Most modern trackers typically employ a bounding box given in the first frame to track visual objects, where their tracking results are often sensitive to the initialization. In this paper, we propose a new tracking method, Reliable Patch Trackers (RPT), which attempts to identify and exploit the reliable patches that can be tracked effectively through the whole tracking process. Specifically, we present a tracking reliability metric to measure how reliably a patch can be tracked, where a probability model is proposed to estimate the distribution of reliable patches under a sequential Monte Carlo framework. As the reliable patches are distributed over the image, we exploit the motion trajectories to distinguish them from the background. Therefore, the visual object can be defined as the clustering of homo-trajectory patches, where a Hough voting-like scheme is employed to estimate the target state. Encouraging experimental results on a large set of sequences showed that the proposed approach is very effective in comparison to the state-of-the-art trackers. The full source code of our implementation will be publicly available.", "title": "" }, { "docid": "90084e7b31e89f5eb169a0824dde993b", "text": "In this work, we present a novel way of using neural networks for graph-based dependency parsing, which fits the neural network into a simple probabilistic model and can be furthermore generalized to high-order parsing. Instead of the sparse features used in traditional methods, we utilize distributed dense feature representations for the neural network, which give better feature representations. The proposed parsers are evaluated on English and Chinese Penn Treebanks. Compared to existing work, our parsers give competitive performance with much more efficient inference.", "title": "" } ]
scidocsrr
0ce169d13f1650ed08cab1fe6935545e
Advancing the state of mobile cloud computing
[ { "docid": "a08aa88aa3b4249baddbd8843e5c9be3", "text": "We present the design, implementation, evaluation, and user ex periences of theCenceMe application, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace. We discuss the system challenges for the development of software on the Nokia N95 mobile phone. We present the design and tradeoffs of split-level classification, whereby personal sensing presence (e.g., walking, in conversation, at the gym) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference. We report performance measurements that characterize the computational requirements of the software and the energy consumption of the CenceMe phone client. We validate the system through a user study where twenty two people, including undergraduates, graduates and faculty, used CenceMe continuously over a three week period in a campus town. From this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system.", "title": "" } ]
[ { "docid": "6e30387a3706dea2b7d18668c08bb31b", "text": "The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic arkup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which akes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We ay that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog resents an elegant solution in which different strategies are combined together in a novel way. It makes use of the GATE NLP platform, string etric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. oreover it also includes a learning component, which ensures that the performance of the system improves over the time, in response to the articular community jargon used by end users. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "673d6aea6c9a3ebde1d8bf30be9a8804", "text": "FDTD numerical study compared to the results of measurement is reported for double-ridged horn antenna with sinusoidal profile of the ridge. Different transitions from coaxial to double-ridged waveguide were considered on the preliminary step of the study. Next, a suitable configuration for feeding the ridges of antenna was chosen. The sinusoidal ridge taper is described in the next part of the paper. Finally, the simulations results of complete antenna are presented. Theoretical characteristics of reflection and antenna patterns are compared to the results of measurements showing acceptable accordance.", "title": "" }, { "docid": "7e7651261be84e2e05cde0ac9df69e6d", "text": "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.", "title": "" }, { "docid": "de1c4c92e95320f5526c8af06acfadc0", "text": "Provides a method for automatic translation UML diagrams to Petri nets, which is to convert formats like structure .xmi and .cpn. We consider the transformation of the most frequently used items on activity diagram - state action, condition, fork and join based on rules transformation. These elements are shown in activity diagram and its corresponding Petri net. It is noted that in active diagram presence four types elements - state action, a pseudostate, final state and transition, in Petri nets involved three types elements - place, transition and arc. 
Discussed in detail the comparison of initial state, state action and final state of activity diagram and places Petri nets - element name and its properties.", "title": "" }, { "docid": "bbf764205f770481b787e76db5a3b614", "text": "A∗ is a popular path-finding algorithm, but it can only be applied to those domains where a good heuristic function is known. Inspired by recent methods combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to train a heuristic represented by a DNN and combine it with A∗ . This new algorithm which we call א∗ can be used efficiently in domains where the input to the heuristic could be processed by a neural network. We compare א∗ to N-Step Deep QLearning (DQN Mnih et al. 2013) in a driving simulation with pixel-based input, and demonstrate significantly better performance in this scenario.", "title": "" }, { "docid": "700191eaaaf0bdd293fc3bbd24467a32", "text": "SMART (Semantic web information Management with automated Reasoning Tool) is an open-source project, which aims to provide intuitive tools for life scientists for represent, integrate, manage and query heterogeneous and distributed biological knowledge. SMART was designed with interoperability and extensibility in mind and uses AJAX, SVG and JSF technologies, RDF, OWL, SPARQL semantic web languages, triple stores (i.e. Jena) and DL reasoners (i.e. Pellet) for the automated reasoning. Features include semantic query composition and validation using DL reasoners, a graphical representation of the query, a mapping of DL queries to SPARQL, and the retrieval of pre-computed inferences from an RDF triple store. With a use case scenario, we illustrate how a biological scientist can intuitively query the yeast knowledge base and navigate the results. Continued development of this web-based resource for the biological semantic web will enable new information retrieval opportunities for the life sciences.", "title": "" }, { "docid": "3394eb51b71e5def4e4637963da347ab", "text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.", "title": "" }, { "docid": "1523534d398b4900c90d94e3f1bee422", "text": "PURPOSE\nThe purpose of this pilot study was to examine the effectiveness of hippotherapy as an intervention for the treatment of postural instability in individuals with multiple sclerosis (MS).\n\n\nSUBJECTS\nA sample of convenience of 15 individuals with MS (24-72 years) were recruited from support groups and assessed for balance deficits.\n\n\nMETHODS\nThis study was a nonequivalent pretest-posttest comparison group design. Nine individuals (4 males, 5 females) received weekly hippotherapy intervention for 14 weeks. The other 6 individuals (2 males, 4 females) served as a comparison group. 
All participants were assessed with the Berg Balance Scale (BBS) and Tinetti Performance Oriented Mobility Assessment (POMA) at 0, 7, and 14 weeks.\n\n\nRESULTS\nThe group receiving hippotherapy showed statistically significant improvement from pretest (0 week) to posttest (14 week) on the BBS (mean increase 9.15 points (χ2(2) = 8.82, p = 0.012)) and POMA scores (mean increase 5.13 (χ2(2) = 10.38, p = 0.006)). The comparison group had no significant changes on the BBS (mean increase 0.73 (χ2(2) = 0.40, p = 0.819)) or POMA (mean decrease 0.13 (χ2(2) = 1.41, p = 0.494)). A statistically significant difference was also found between the groups' final BBS scores (treatment group median = 55.0, comparison group median 41.0), U = 7, r = -0.49.\n\n\nDISCUSSION\nHippotherapy shows promise for the treatment of balance disorders in persons with MS. Further research is needed to refine protocols and selection criteria.", "title": "" }, { "docid": "e65d14dc0777e4a14fea6d00f06d9bfc", "text": "A novel single-layer dual band-notched printed circle-like slot antenna for ultrawideband (UWB) applications is presented. The proposed antenna comprises a circle-like slot, a trident-shaped feed line, and two nested C-shaped stubs. By using a trident-shaped feed line, much wider impedance bandwidth is obtained. Due to inserting a pair of nested C-shaped stubs on the back surface of the substrate, two frequency band-notches of 5.1-6.2 (WLAN) and 3-3.8 GHz (WiMAX) are achieved. The nested stubs are connected to the tuning stub using two cylindrical via pins. The designed antenna has a total size of 26 × 30 mm2 and operates over the frequency band between 2.5 and 25 GHz. Throughout this letter, experimental results of the impedance bandwidth, gain, and radiation patterns are compared and discussed.", "title": "" }, { "docid": "cc9686bac7de957afe52906763799554", "text": "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evaluation.", "title": "" }, { "docid": "4264c3ed6ea24a896377a7efa2b425b0", "text": "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media.
For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation procedure to compare the performance of different methods. We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration.", "title": "" }, { "docid": "630c4e87333606c6c8e7345cb0865c64", "text": "MapReduce plays a critical role as a leading framework for big data analytics. In this paper, we consider a geodistributed cloud architecture that provides MapReduce services based on the big data collected from end users all over the world. Existing work handles MapReduce jobs by a traditional computation-centric approach that all input data distributed in multiple clouds are aggregated to a virtual cluster that resides in a single cloud.
Its poor efficiency and high cost for big data support motivate us to propose a novel data-centric architecture with three key techniques, namely, cross-cloud virtual cluster, data-centric job placement, and network coding based traffic routing. Our design leads to an optimization framework with the objective of minimizing both computation and transmission cost for running a set of MapReduce jobs in geo-distributed clouds. We further design a parallel algorithm by decomposing the original large-scale problem into several distributively solvable subproblems that are coordinated by a high-level master problem. Finally, we conduct real-world experiments and extensive simulations to show that our proposal significantly outperforms the existing works.", "title": "" }, { "docid": "ecf5be2966efe597978a25c72dc676e4", "text": "A compact ±45° dual-polarized magneto-electric (ME) dipole base station antenna is proposed for 2G/3G/LTE applications. The antenna is excited by two Γ-shaped probes placed at a convenient location and two orthogonally octagonal loop electric dipoles are employed to achieve a wide impedance bandwidth. A stable antenna gain and a stable radiation pattern are realized by using a rectangular box-shaped reflector instead of planar one. The antenna is prototype and measured. Measured results show overlapped impedance bandwidth is 58% with standing-wave ratio (SWR) ≤ 1.5 from 1.68 to 3.05 GHz, port-to-port isolation is large than 26 dB within the bandwidth, and stable antenna gains of 8.6 ± 0.8 dBi and 8.3 ± 0.6 dBi for port 1 and port 2, respectively. Nearly symmetrical radiation patterns with low back lobe radiation both in horizontal and vertical planes, and narrow beamwidth can be also obtained. Moreover, the size of the antenna is very compact, which is only 0.79λ0 × 0.79λ0 × 0.26λ0. The proposed antenna can be used for multiband base stations in next generation communication systems.", "title": "" }, { "docid": "a94f066ec5db089da7fd19ac30fe6ee3", "text": "Information Centric Networking (ICN) is a new networking paradigm in which the network provides users with content instead of communication channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the continuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which grounds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Although some details of our solution have been specifically designed for the CONET architecture, its general ideas and concepts are applicable to a class of recent ICN proposals, which follow the basic mode of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limitations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA European research project.
The current OFELIA testbed is based on OpenFlow 1.0 equipment from a variety of vendors, therefore we had to design the experiment taking into account the features that are currently available on off-the-shelf OpenFlow equipment.", "title": "" }, { "docid": "af49fef0867a951366cfb21288eeb3ed", "text": "As a discriminative method of one-shot learning, Siamese deep network allows recognizing an object from a single exemplar with the same class label. However, it does not take the advantage of the underlying structure and relationship among a multitude of instances since it only relies on pairs of instances for training. In this paper, we propose a quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation. We design four shared networks that receive multi-tuple of instances as inputs and are connected by a novel loss function consisting of pair-loss and triplet-loss. According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple. We show that this scheme improves the training performance and convergence speed. Furthermore, we introduce a new weighted pair loss for an additional acceleration of the convergence. We demonstrate promising results for model-free tracking-by-detection of objects from a single initial exemplar in the Visual Object Tracking benchmark.", "title": "" }, { "docid": "e6912f1b9e6060b452f2313766288e97", "text": "The air-core inductance of power transformers is measured using a nonideal low-power rectifier. Its dc output serves to drive the transformer into deep saturation, and its ripple provides low-amplitude variable excitation. The principal advantage of the proposed method is its simplicity. For validation, the experimental results are compared with 3-D finite-element simulations.", "title": "" }, { "docid": "a41c9650da7ca29a51d310cb4a3c814d", "text": "The analysis of resonant-type antennas based on the fundamental infinite wavelength supported by certain periodic structures is presented. Since the phase shift is zero for a unit-cell that supports an infinite wavelength, the physical size of the antenna can be arbitrary; the antenna's size is independent of the resonance phenomenon. The antenna's operational frequency depends only on its unit-cell and the antenna's physical size depends on the number of unit-cells. In particular, the unit-cell is based on the composite right/left-handed (CRLH) metamaterial transmission line (TL). It is shown that the CRLH TL is a general model for the required unit-cell, which includes a nonessential series capacitance for the generation of an infinite wavelength. The analysis and design of the required unit-cell is discussed based upon field distributions and dispersion diagrams. It is also shown that the supported infinite wavelength can be used to generate a monopolar radiation pattern. Infinite wavelength resonant antennas are realized with different number of unit-cells to demonstrate the infinite wavelength resonance", "title": "" }, { "docid": "86fca69ae48592e06109f7b05180db28", "text": "Background: The software development industry has been adopting agile methods instead of traditional software development methods because they are more flexible and can bring benefits such as handling requirements changes, productivity gains and business alignment.
Objective: This study seeks to evaluate, synthesize, and present aspects of research on agile methods tailoring including the method tailoring approaches adopted and the criteria used for agile practice selection. Method: The method adopted was a Systematic Literature Review (SLR) on studies published from 2002 to 2014. Results: 56 out of 783 papers have been identified as describing agile method tailoring approaches. These studies have been identified as case studies regarding the empirical research, as solution proposals regarding the research type, and as evaluation studies regarding the research validation type. Most of the papers used method engineering to implement tailoring and were not specific to any agile method on their scope. Conclusion: Most of agile methods tailoring research papers proposed or improved a technique, were implemented as case studies analyzing one case in details and validated their findings using evaluation. Method engineering was the base for tailoring, the approaches are independent of agile method and the main criteria used are internal environment and objectives variables. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "21eddfd81b640fc1810723e93f94ae5d", "text": "R. B. Gnanajothi, Topics in graph theory, Ph. D. thesis, Madurai Kamaraj University, India, 1991. E. M. Badr, On the Odd Gracefulness of Cyclic Snakes With Pendant Edges, International journal on applications of graph theory in wireless ad hoc networks and sensor networks (GRAPH-HOC) Vol. 4, No. 4, December 2012. E. M. Badr, M. I. Moussa & K. Kathiresan (2011): Crown graphs and subdivision of ladders are odd graceful, International Journal of Computer Mathematics, 88:17, 3570-3576. A. Rosa, On certain valuation of the vertices of a graph, Theory of Graphs (International Symposium, Rome, July 1966), Gordon and Breach, New York and Dunod Paris (1967) 349-355. A. Solairaju & P. Muruganantham, Even Vertex Gracefulness of Fan Graph,", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" } ]
scidocsrr
f96b809d0bef1e2640bcb9c2b9486305
Music segmentation and summarization based on self-similarity matrix
[ { "docid": "6327964ae4eb3410a1772edee4ff358d", "text": "We introduce a method for the automatic extraction of musical structures in popular music. The proposed algorithm uses non-negative matrix factorization to segment regions of acoustically similar frames in a self-similarity matrix of the audio data. We show that over the dimensions of the NMF decomposition, structural parts can easily be modeled. Based on that observation, we introduce a clustering algorithm that can explain the structure of the whole music piece. The preliminary evaluation we report in the the paper shows very encouraging results.", "title": "" } ]
[ { "docid": "328860ae6cccc7530de9aab8a1a58c5e", "text": "Electrochemical approaches have played crucial roles in bio sensing because of their Potential in achieving sensitive, specific and low-cost detection of biomolecules and other bio evidences. Engineering the electrochemical sensing interface with nanomaterials tends to new generations of label-free biosensors with improved performances in terms of sensitive area and response signals. Here we applied Silicon Nanowire (SiNW) array electrodes (in an integrated architecture of working, counter and reference electrodes) grown by low pressure chemical vapor deposition (LPCVD) system with VLS procedure to electrochemically diagnose the presence of breast cancer cells as well as their response to anticancer drugs. Mebendazole (MBZ), has been used as antitubulin drug. It perturbs the anodic/cathodic response of the cell covered biosensor by releasing Cytochrome C in cytoplasm. Reduction of cytochrome C would change the ionic state of the cells monitored by SiNW biosensor. By applying well direct bioelectrical contacts with cancer cells, SiNWs can detect minor signal transduction and bio recognition events, resulting in precise biosensing. Our device detected the trace of MBZ drugs (with the concentration of 2nM) on electrochemical activity MCF-7 cells. Also, experimented biological analysis such as confocal and Flowcytometry assays confirmed the electrochemical results.", "title": "" }, { "docid": "5c8ed4f3831ce864cbdaea07171b5a57", "text": "Hyper-beta-alaninemia is a rare metabolic condition that results in elevated plasma and urinary β-alanine levels and is characterized by neurotoxicity, hypotonia, and respiratory distress. It has been proposed that at least some of the symptoms are caused by oxidative stress; however, only limited information is available on the mechanism of reactive oxygen species generation. The present study examines the hypothesis that β-alanine reduces cellular levels of taurine, which are required for normal respiratory chain function; cellular taurine depletion is known to reduce respiratory function and elevate mitochondrial superoxide generation. To test the taurine hypothesis, isolated neonatal rat cardiomyocytes and mouse embryonic fibroblasts were incubated with medium lacking or containing β-alanine. β-alanine treatment led to mitochondrial superoxide accumulation in conjunction with a decrease in oxygen consumption. The defect in β-alanine-mediated respiratory function was detected in permeabilized cells exposed to glutamate/malate but not in cells utilizing succinate, suggesting that β-alanine leads to impaired complex I activity. Taurine treatment limited mitochondrial superoxide generation, supporting a role for taurine in maintaining complex I activity. Also affected by taurine is mitochondrial morphology, as β-alanine-treated fibroblasts undergo fragmentation, a sign of unhealthy mitochondria that is reversed by taurine treatment. If left unaltered, β-alanine-treated fibroblasts also undergo mitochondrial apoptosis, as evidenced by activation of caspases 3 and 9 and the initiation of the mitochondrial permeability transition. 
Together, these data show that β-alanine mediates changes that reduce ATP generation and enhance oxidative stress, factors that contribute to heart failure.", "title": "" }, { "docid": "6ac231de51b69685fcb45d4ef2b32051", "text": "This paper deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80-100-mm pipelines in an indoor pipeline environment. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the analysis of the four-bar mechanism supporting the treads, a closed-form kinematic approach, and an intuitive user interface. In addition, a new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes. Furthermore, an analysis method of selecting optimal compliance to assure functionality and cooperation is suggested. Simulation and experimental results are used throughout the paper to highlight algorithms and approaches.", "title": "" }, { "docid": "0ff159433ed8958109ba8006822a2d67", "text": "In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text summaries written by humans. We show that our technique has higher agreement with human judgment than pixel-based distance metrics. We also release text annotations and ground-truth text summaries for a number of publicly available video datasets, for use by the computer vision community.", "title": "" }, { "docid": "2bc0102fdc3a66ca5262bdaa90a94187", "text": "Visual localization enables autonomous vehicles to navigate in their surroundings and Augmented Reality applications to link virtual to real worlds. In order to be practically relevant, visual localization approaches need to be robust to a wide variety of viewing condition, including day-night changes, as well as weather and seasonal variations. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on the quality of 6 degree-of-freedom (6DOF) camera pose estimation through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions and propose promising avenues for future work. We will eventually make our two novel benchmarks publicly available.", "title": "" }, { "docid": "033fae2e8e219fb74ae8f39b5c176f25", "text": "Wireless Sensor Networks (WSNs) have become a leading solution in many important applications such as intrusion detection, target tracking, industrial automation, smart building and so on. Typically, a WSN consists of a large number of small, low-cost sensor nodes that are distributed in the target area for collecting data of interest. 
For a WSN to provide high throughput in an energy-efficient way, designing an efficient Medium Access Control (MAC) protocol is of paramount importance because the MAC layer coordinates nodes' access to the shared wireless medium. To show the evolution of WSN MAC protocols, this article surveys the latest progresses in WSN MAC protocol designs over the period 2002-2011. In the early development stages, designers were mostly concerned with energy efficiency because sensor nodes are usually limited in power supply. Recently, new protocols are being developed to provide multi-task support and efficient delivery of bursty traffic. Therefore, research attention has turned back to throughput and delay. This article details the evolution of WSN MAC protocols in four categories: asynchronous, synchronous, frame-slotted, and multichannel. These designs are evaluated in terms of energy efficiency, data delivery performance, and overhead needed to maintain a protocol's mechanisms. With extensive analysis of the protocols many future directions are stated at the end of this survey. The performance of different classes of protocols could be substantially improved in future designs by taking into consideration the recent advances in technologies and application demands.", "title": "" }, { "docid": "1d4c583da38709054140152fe328294c", "text": "This paper analyzes the assumptions of the decision making models in the context of artificial general intelligence (AGI). It is argued that the traditional approaches, exemplified by decision theory and reinforcement learning, are inappropriate for AGI, because their fundamental assumptions on available knowledge and resource cannot be satisfied here. The decision making process in the AGI system NARS is introduced and compared with the traditional approaches. It is concluded that realistic decision-making models must acknowledge the insufficiency of knowledge and resources, and make assumptions accordingly. 1 Formalizing decision-making An AGI system needs to make decisions from time to time. To achieve its goals, the system must execute certain operations, which are chosen from all possible operations, according to the system’s beliefs on the relations between the operations and the goals, as well as their applicability to the current situation. On this topic, the dominating normative model is decision theory [12, 3]. According to this model, “decision making” means to choose one action from a finite set of actions that is applicable at the current state. Each action leads to some consequent states according to a probability distribution, and each consequent state is associated with a utility value. The rational choice is the action that has the maximum expected utility (MEU). When the decision extends from single actions to action sequences, it is often formalized as a Markov decision process (MDP), where the utility function is replaced by a reward value at each state, and the optimal policy, as a collection of decisions, is the one that achieves the maximum expected total reward (usually with a discount for future rewards) in the process. In AI, the best-known approach toward solving this problem is reinforcement learning [4, 16], which uses various algorithms to approach the optimal policy. Decision theory and reinforcement learning have been widely considered as setting the theoretical foundation of AI research [11], and the recent progress in deep learning [9] is increasing the popularity of these models. 
In the current AGI research, an influential model in this tradition is AIXI [2], in which reinforcement learning is combined with Solomonoff induction [15] to provide the probability values according to algorithmic complexity of the hypotheses used in prediction. Every formal model is based on some fundamental assumptions to encapsulate certain beliefs about the process to be modeled, so as to provide a coherent foundation for the conclusions derived in the model, and also to set restrictions on the situations where the model can be legally applied. In the following, four major assumptions of the above models are summarized. The assumption on task: The task of “decision making” is to select the best action from all applicable actions at each state of the process. The assumption on belief: The selection is based on the system’s beliefs about the actions, represented as probability distributions among their consequent states. The assumption on desire: The selection is guided by the system’s desires measured by a (utility or reward) value function defined on states, and the best action is the one that with the maximum expectation. The assumption on budget: The system can afford the computational resources demanded by the selection algorithm. There are many situations where the above assumptions can be reasonably accepted, and the corresponding models have been successfully applied [11, 9]. However, there are reasons to argue that artificial general intelligence (AGI) is not such a field, and there are non-trivial issues on each of the four assumptions. Issues on task: For a general-purpose system, it is unrealistic to assume that at any state all the applicable actions are explicitly listed. Actually, in human decision making the evaluation-choice step is often far less significant than diagnosis or design [8]. Though in principle it is reasonable to assume the system’s actions are recursively composed of a set of basic operations, decision makings often do not happen at the level of basic operations, but at the level of composed actions, where there are usually infinite possibilities. So decision making is often not about selection, but selective composition. Issues on belief: For a given action, the system’s beliefs about its possible consequences are not necessarily specified as a probability distribution among following states. Actions often have unanticipated consequences, and even the beliefs about the anticipated consequences usually do not fully specify a “state” of the environment or the system itself. Furthermore, the system’s beliefs about the consequences may be implicitly inconsistent, so does not correspond to a probability distribution. Issues on desire: Since an AGI system typically has multiple goals with conflicting demands, usually no uniform value function can evaluate all actions with respect to all goals within limited time. Furthermore, the goals in an AGI system change over time, and it is unrealistic to expect such a function to be defined on all future states. How desirable a situation is should be taken as part of the problem to be solved, rather than as a given. Issues on budget: An AGI is often expected to handle unanticipated problems in real time with various time requirements. In such a situation, even if the decision-making algorithms are considered as of “tractable” computational complexity, they may still fail to satisfy the requirement on response time in the given situation.
None of the above issues is completely unknown, and various attempts have been proposed to extend the traditional models [13, 22, 1], though none of them has rejected the four assumptions altogether. Instead, a typical attitude is to take decision theory and reinforcement learning as idealized models for the actual AGI systems to approximate, as well as to be evaluated accordingly [6]. What this paper explores is the possibility of establishing normative models of decision making without accepting any of the above four assumptions. In the following, such a model is introduced, then compared with the traditional models. 2 Decision making in NARS The decision-making model to be introduced comes from the NARS project [17, 18, 20]. The objective of this project is to build an AGI in the framework of a reasoning system. Decision making is an important function of the system, though it is not carried out by a separate algorithm or module, but tightly interwoven with other functions, such as reasoning and learning. Limited by the paper length, the following description only briefly covers the aspects of NARS that are directly related to the current discussion. NARS is designed according to the theory that “intelligence” is the ability for a system to be adaptive while working with insufficient knowledge and resources, that is, the system must depend on finite processing capability, make real-time responses, open to unanticipated problems and events, and learn from its experience. Under this condition, it is impossible for the truth-value of beliefs of the system to be defined either in the model-theoretic style as the extent of agreement with the state of affairs, or in the proof-theoretic style as the extent of agreement with the given axioms. Instead, it is defined as the extent of agreement with the available evidence collected from the system’s experience. Formally, for a given statement S, the amount of its positive evidence and negative evidence are defined in an idealized situation and measured by amounts w+ and w−, respectively, and the total amount of evidence is w = w+ + w−. The truth-value of S is a pair of real numbers, 〈f, c〉, where f, frequency, is w+/w so in [0, 1], and c, confidence, is w/(w + 1) so in (0, 1). Therefore a belief has a form of “S〈f, c〉”. As the content of belief, statement S is a sentence in a formal language Narsese. Each statement expresses a relation among a few concepts. For the current discussion, it is enough to know that a statement may have various internal structures for different types of conceptual relation, and can contain other statements as components. In particular, implication statement P ⇒ Q and equivalence statement P ⇔ Q express “If P then Q” and “P if and only if Q”, respectively, where P and Q are statements themselves. As a reasoning system, NARS can carry out three types of inference tasks: Judgment. A judgment also has the form of “S〈f, c〉”, and represents a piece of new experience to be absorbed into the system’s beliefs. Besides adding it into memory, the system may also use it to revise or update the previous beliefs on statement S, as well as to derive new conclusions using various inference rules (including deduction, induction, abduction, analogy, etc.). Each rule uses a truth-value function to calculate the truth-value of the conclusion according to the evidence provided by the premises.
For example, the deduction rule can take P 〈f1, c1〉 and P ⇒ Q 〈f2, c2〉 to derive Q〈f, c〉, where 〈f, c〉 is calculated from 〈f1, c1〉 and 〈f2, c2〉 by the truth-value function for deduction. There is also a revision rule that merges distinct bodies of evidence on the same statement to produce more confident judgments. Question. A question has the form of “S?”, and represents a request for the system to find the truth-value of S according to its current beliefs. A question may contain variables to be instantiated. Besides looking in the memory for a matching belief, the system may also use the inference rules backwards to generate derived questions, whose answers will lead to answers of the original question. For example, from question Q? and belief P ⇒ Q 〈f, c〉, a new question P? can be proposed by the deduction rule. When there are multiple candidate answers, a choice rule ", "title": "" }, { "docid": "dc71729ebd3c2a66c73b16685c8d12af", "text": "A list of related materials, with annotations to guide further exploration of the article's ideas and applications 11 Further Reading A company's bid to rally an industry ecosystem around a new competitive view is an uncertain gambit. But the right strategic approaches and the availability of modern digital infrastructures improve the odds for success.", "title": "" }, { "docid": "df2b5f4edb9631b910da72ee3058fd68", "text": "A method to reduce peak electricity demand in building climate control by using real-time electricity pricing and applying model predictive control (MPC) is investigated. We propose to use a newly developed time-varying, hourly-based electricity tariff for end-consumers, that has been designed to truly reflect marginal costs of electricity provision, based on spot market prices as well as on electricity grid load levels, which is directly incorporated into the MPC cost function. Since this electricity tariff is only available for a limited time window into the future we use least-squares support vector machines for electricity tariff price forecasting and thus provide the MPC controller with the necessary estimated time-varying costs for the whole prediction horizon. In the given context, the hourly pricing provides an economic incentive for a building controller to react sensitively with respect to high spot market electricity prices and high grid loading, respectively. Within the proposed tariff regime, grid-friendly behaviour is rewarded. It can be shown that peak electricity demand of buildings can be significantly reduced. The here presented study is an example for the successful implementation of demand response (DR) in the field of building climate control.", "title": "" }, { "docid": "7c7bec32e3949f3a6c0e1109cacd80f5", "text": "Attackers can render distributed denial-of-service attacks more difficult to defend against by bouncing their flooding traffic off of reflectors; that is, by spoofing requests from the victim to a large set of Internet servers that will in turn send their combined replies to the victim. The resulting dilution of locality in the flooding stream complicates the victim's abilities both to isolate the attack traffic in order to block it, and to use traceback techniques for locating the source of streams of packets with spoofed source addresses, such as ITRACE [Be00a], probabilistic packet marking [SWKA00], [SP01], and SPIE [S+01]. 
We discuss a number of possible defenses against reflector attacks, finding that most prove impractical, and then assess the degree to which different forms of reflector traffic will have characteristic signatures that the victim can use to identify and filter out the attack traffic. Our analysis indicates that three types of reflectors pose particularly significant threats: DNS and Gnutella servers, and TCP-based servers (particularly Web servers) running on TCP implementations that suffer from predictable initial sequence numbers. We argue in conclusion in support of \"reverse ITRACE\" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.", "title": "" }, { "docid": "872bda80d61c5ef4f30f073a69076050", "text": "Given a terabyte click log, can we build an efficient and effective click model? It is commonly believed that web search click logs are a gold mine for search business, because they reflect users' preference over web documents presented by the search engine. Click models provide a principled approach to inferring user-perceived relevance of web documents, which can be leveraged in numerous applications in search businesses. Due to the huge volume of click data, scalability is a must.\n We present the click chain model (CCM), which is based on a solid, Bayesian framework. It is both scalable and incremental, perfectly meeting the computational challenges imposed by the voluminous click logs that constantly grow. We conduct an extensive experimental study on a data set containing 8.8 million query sessions obtained in July 2008 from a commercial search engine. CCM consistently outperforms two state-of-the-art competitors in a number of metrics, with over 9.7% better log-likelihood, over 6.2% better click perplexity and much more robust (up to 30%) prediction of the first and the last clicked position.", "title": "" }, { "docid": "16bf05d14d0f4bed68ecbf2fb60b2cc7", "text": "Amaç: Akıllı telefonlar iletişim amaçlı kullanımları yanında internet, fotoğraf makinesi, video-ses kayıt cihazı, navigasyon, müzik çalar gibi birçok özelliğin bir arada toplandığı günümüzün popüler teknolojik cihazlarıdır. Akıllı telefonların kullanımı hızla artmaktadır. Bu hızlı artış akıllı telefonlara bağımlılığı ve problemli kullanımı beraberinde getirmektedir. Bizim bildiğimiz kadarıyla Türkiye’de akıllı telefonlara bağımlılığı değerlendiren ölçek yoktur. Bu çalışmanın amacı Akıllı Telefon Bağımlılığı Ölçeği’nin Türkçe’ye uyarlanması, geçerlik ve güvenilirliğinin incelenmesidir. Yöntem: Çalışmanın örneklemini Süleyman Demirel Üniversitesi Tıp Fakültesi’nde eğitim gören ve akıllı telefon kullanıcısı olan 301 üniversite öğrencisi oluşturmuştur. Çalışmada veri toplama araçları olarak Akıllı Telefon Bağımlılığı Ölçeği, Bilgi Formu, İnternet Bağımlılığı Ölçeği ve Problemli Cep Telefonu Kullanımı Ölçeği kullanılmıştır. Ölçekler, tüm katılımcılara Bilgi Formu hep ilk sırada olacak şekilde karışık sırayla verilmiştir. Ölçeklerin doldurulması yaklaşık 20 dakika sürmüştür. Test-tekrar-test uygulaması rastgele belirlenmiş 30 öğrenci ile (rumuz yardımıyla) üç hafta sonra yapılmıştır. Ölçeğin faktör yapısı açıklayıcı faktör analizi ve varimaks rotasyonu ile incelenmiştir. Güvenilirlik analizi için iç tutarlılık, iki-yarım güvenilirlik ve test-tekrar test güvenilirlik analizleri uygulanmıştır. Ölçüt bağıntılı geçerlilik analizinde Pearson korelasyon analizi kullanılmıştır. 
Bulgular: Faktör Analizi yedi faktörlü bir yapı ortaya koymuş, maddelerin faktör yüklerinin 0,349-0,824 aralığında değiştiği belirlenmiştir. Ölçeğin Cronbach alfa iç tutarlılık katsayısı 0,947 bulunmuştur. Ölçeğin diğer ölçeklerle arasındaki korelasyonlar istatistiksel olarak anlamlı bulunmuştur. Test-tekrar test güvenilirliğinin yüksek olduğu (r=0,814) bulunmuştur. İki yarım güvenilirlik analizinde Guttman Splithalf katsayısı 0,893 olarak saptanmıştır. Kız öğrencilerde ölçek toplam puan ortalamasının erkeklerden istatistiksel olarak önemli düzeyde yüksek olduğu bulunmuştur (p=0,03). Yaş ile ölçek toplam puanı arasında anlamlı olmayan negatif ilişki saptanmıştır (r=-0.086, p=0,13). En yüksek ölçek puan ortalaması 16 saat üzeri kullananlarda gözlenmiş olup 4 saatten az kullananlardan istatistiksel olarak önemli derecede fazla bulunmuştur (p=0,01). Ölçek toplam puanı akıllı telefonu en çok kullanım amacına göre karşılaştırıldığında en yüksek ortalamanın oyun kategorisinde olduğu ancak internet (p=0,44) ve sosyal ağ (p=0,98) kategorilerinden farklı olmadığı, ayrıca telefon (p=0,02), SMS (p=0,02) ve diğer kullanım amacı (p=0,04) kategori ortalamalarından istatistiksel olarak önemli derecede fazla olduğu bulunmuştur. Akıllı telefon bağımlısı olduğunu düşünenlerin ve bu konuda emin olmayanların toplam ölçek puanları akıllı telefon bağımlısı olduğunu düşünmeyenlerin toplam ölçek puanlarından anlamlı şekilde yüksek bulunmuştur (p=0,01). Sonuç: Bu çalışmada, Akıllı telefon Bağımlılığı Ölçeği’nin Türkçe formunun akıllı telefon bağımlılığının değerlendirilmesinde geçerli ve güvenilir bir ölçüm aracı olduğu bulunmuştur.", "title": "" }, { "docid": "0db200113ef14c8e88a3388c595148a6", "text": "Entity disambiguation is the task of mapping ambiguous terms in natural-language text to its entities in a knowledge base. It finds its application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question & Answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust thereby refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows, that our approach achieves significantly (>5%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data set specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms.", "title": "" }, { "docid": "6fe71d8d45fa940f1a621bfb5b4e14cd", "text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. 
The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.", "title": "" }, { "docid": "1a98b0d00afd29474fb40b76ca2b0ce6", "text": "The intended readership of this volume is the full range of behavioral scientists, mental health professionals, and students aspiring to such roles who work with children. This includes psychologists (applied, clinical, counseling, developmental, school, including academics, researchers, and practitioners), family counselors, psychiatrists, social workers, psychiatric nurses, child protection workers, and any other mental health professionals who work with children, adolescents, and their families.", "title": "" }, { "docid": "9818399b4c119b58723c59e76bbfc1bd", "text": "Many vertex-centric graph algorithms can be expressed using asynchronous parallelism by relaxing certain read-after-write data dependences and allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, by converting synchronous algorithms into their asynchronous counterparts, algorithms can be made tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values causing an increase in the number of iterations required by the algorithms to converge. Although by using bounded staleness we can restrict the slowdown in the rate of convergence, this also restricts the ability to tolerate communication latency. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. This is achieved via a coordinated use of best effort refresh policy and bounded staleness. We demonstrate that for a range of asynchronous graph algorithms and PDE solvers, on an average, our approach outperforms algorithms based upon: prior relaxed memory models that allow stale values by at least 2.27x; and Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach frequently outperforms GraphLab, a popular distributed graph processing framework.", "title": "" }, { "docid": "7e0d65fee19baefe31a4e14bf25f42ee", "text": "This paper describes the process for documenting programs using Aspect-Oriented PHP through AOPHPdoc. We discuss some of the problems involved in documenting Aspect-Oriented programs, solutions to these problems, and the creation of documentation with AOPHPdoc. A survey of programmers found no preference for Javadoc-styled documentation over the colored-coded AOPHP documentation.", "title": "" }, { "docid": "ff5d3f4ef4431c7144c12f5da563e347", "text": "Ankle inversion-eversion compliance is an important feature of conventional prosthetic feet, and control of inversion, or roll, in robotic prostheses could improve balance for people with amputation. We designed a tethered ankle-foot prosthesis with two independently-actuated toes that are coordinated to provide plantarflexion and inversion-eversion torques. This configuration allows a simple lightweight structure with a total mass of 0.72 kg. 
Strain gages on the toes measure torque with less than 2.7% RMS error, while compliance in the Bowden cable tether provides series elasticity. Benchtop tests demonstrated a 90% rise time of less than 33 ms and peak torques of 180 N·m in plantarflexion and ±30 N·m in inversion-eversion. The phase-limited closedloop torque bandwidth is 20 Hz with a 90 N·m amplitude chirp in plantarflexion, and 24 Hz with a 20 N·m amplitude chirp in inversion-eversion. The system has low sensitivity to toe position disturbances at frequencies of up to 18 Hz. Walking trials with five values of constant inversion-eversion torque demonstrated RMS torque tracking errors of less than 3.7% in plantarflexion and less than 5.9% in inversion-eversion. These properties make the platform suitable for haptic rendering of virtual devices in experiments with humans, which may reveal strategies for improving balance or allow controlled comparisons of conventional prosthesis features. A similar morphology may be effective for autonomous devices.", "title": "" }, { "docid": "36c568dd8c860a44aa376db3319f09b9", "text": "Future autonomous vehicles and ADAS (Advanced Driver Assistance Systems) need real-time audio and video transmission together with control data traffic (CDT). Audio/video stream delay analysis has been largely investigated in AVB (Audio Video Bridging) context, but not yet with the presence of the CDT in the new TSN context. In this paper we present a local delay analysis of AVB frames under hierarchical scheduling of credit-based shaping and time-aware shaping on TSN switches. We present the effects of time aware shaping on AVB traffic, how it changes the relative order of transmission of frames leading to bursts and worst case scenarios for lower priority streams. We also show that these bursts are upper-bounded by the Credit-Bases Shaper, hence the worst-case transmissions delay of a given stream is also upper-bounded. We present the analysis to compute the worst case delay for a frame, as well as the feasibility condition necessary for the analysis to be applied. Our methods (analysis and simulation) are applied to an automotive use case, which is defined within the Eurostars RETINA project, and where both control data traffic and AVB traffic must be guaranteed.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. 
We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
scidocsrr
d666574dab00a7f6a9d30717ee302bd3
Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review
[ { "docid": "8f4a0c6252586fa01133f9f9f257ec87", "text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible crossvalidation system. Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.", "title": "" } ]
[ { "docid": "ff56bae298b25accf6cd8c2710160bad", "text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.", "title": "" }, { "docid": "cd71e990546785bd9ba0c89620beb8d2", "text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.", "title": "" }, { "docid": "f6e791e85d8570a9f10b45e8f028683d", "text": "We present a smartphone-based system for real-time tele-monitoring of physical activity in patients with chronic heart-failure (CHF). We recently completed a pilot study with 15 subjects to evaluate the feasibility of the proposed monitoring in the real world and examine its requirements, privacy implications, usability, and other challenges encountered by the participants and healthcare providers. Our tele-monitoring system was designed to assess patient activity via minute-by-minute energy expenditure (EE) estimated from accelerometry. In addition, we tracked relative user location via global positioning system (GPS) to track outdoors activity and measure walking distance. The system also administered daily surveys to inquire about vital signs and general cardiovascular symptoms. 
The collected data were securely transmitted to a central server where they were analyzed in real time and were accessible to the study medical staff to monitor patient health status and provide medical intervention if needed. Although the system was designed for tele-monitoring individuals with CHF, the challenges, privacy considerations, and lessons learned from this pilot study apply to other chronic health conditions, such as diabetes and hypertension, that would benefit from continuous monitoring through mobile-health (mHealth) technologies.", "title": "" }, { "docid": "64cefd949f61afe81fbbb9ca1159dd4a", "text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR", "title": "" }, { "docid": "1d949b64320fce803048b981ae32ce38", "text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.", "title": "" }, { "docid": "f61d5c1b0c17de6aab8a0eafedb46311", "text": "The use of social media creates the opportunity to turn organization-wide knowledge sharing in the workplace from an intermittent, centralized knowledge management process to a continuous online knowledge conversation of strangers, unexpected interpretations and re-uses, and dynamic emergence. 
We theorize four affordances of social media representing different ways to engage in this publicly visible knowledge conversations: metavoicing, triggered attending, network-informed associating, and generative role-taking. We further theorize mechanisms that affect how people engage in the knowledge conversation, finding that some mechanisms, when activated, will have positive effects on moving the knowledge conversation forward, but others will have adverse consequences not intended by the organization. These emergent tensions become the basis for the implications we draw.", "title": "" }, { "docid": "ea87bfc0d6086e367e8950b445529409", "text": " Queue stability (Chapter 2.1)  Scheduling for stability, capacity regions (Chapter 2.3)  Linear programs (Chapter 2.3, Chapter 3)  Energy optimality (Chapter 3.2)  Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6)  Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3)  Inequality constraints and virtual queues (Chapter 4.4)  Drift-plus-penalty algorithm (Chapter 4.5)  Performance and delay tradeoffs (Chapter 3.2, 4.5)  Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)", "title": "" }, { "docid": "b52322509c5bed43b0de04847dd947a9", "text": "Chapter 1 presented a description of the ECG in terms of its etiology and clinical features, and Chapter 2 an overview of the possible sources of error introduced in the hardware collection and data archiving stages. With this groundwork in mind, this chapter is intended to introduce the reader to the ECG using a signal processing approach. The ECG typically exhibits both persistent features (such as the average PQRS-T morphology and the short-term average heart rate, or average RR interval), and nonstationary features (such as the individual RR and QT intervals, and longterm heart rate trends). Since changes in the ECG are quasi-periodic (on a beatto-beat, daily, and perhaps even monthly basis), the frequency can be quantified in both statistical terms (mean, variance) and via spectral estimation methods. In essence, all these statistics quantify the power or degree to which an oscillation is present in a particular frequency band (or at a particular scale), often expressed as a ratio to power in another band. Even for scale-free approaches (such as wavelets), the process of feature extraction tends to have a bias for a particular scale which is appropriate for the particular data set being analyzed. ECG statistics can be evaluated directly on the ECG signal, or on features extracted from the ECG. The latter category can be broken down into either morphology-based features (such as ST level) or timing-based statistics (such as heart rate variability). Before discussing these derived statistics, an overview of the ECG itself is given.", "title": "" }, { "docid": "bc66c4c480569a21fdb593500c7e76cf", "text": "Smallholder subsistence agriculture in the rural Eastern Cape Province is recognised as one of the major contributors to food security among the resourced-poor household. However, subsistence agriculture is thought to be unsustainable in the ever changing social, economic and political environment, and climate. This has contributed greatly to stagnate and widespread poverty among smallholder farmers in the Eastern Cape. For a sustainable transition from subsistence to smallholder commercial farming, strategies like accumulated social capital through rural farmer groups/cooperatives have been employed by the government and NGOs. 
These strategies have yielded mixed results of failed and successful farmer groups/cooperatives. Therefore, this study was aimed at establishing the impact of social capital on farmers’ household commercialization level of maize in addition to farm/farmer characteristics. The findings of this study established that smallholders’ average household commercialization index (HCI) of maize was 45%. Household size, crop sales, source of irrigation water, and bonding social capital had a positive and significant impact on HCI of maize while off-farm incomes and social values had a negative and significant impact on the same. Thus, innovation, adoption and use of labour saving technology, improved access to irrigation water and farmers’ access to trainings in relation to strengthening group cohesion are crucial in promoting smallholder commercial farming of maize in the study area.", "title": "" }, { "docid": "10f46999738c0d47ed16326631086933", "text": "We describe JAX, a domain-specific tracing JIT compiler for generating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the system is fully compatible with Autograd, it allows forwardand reverse-mode automatic differentiation of Python functions to arbitrary order. Because JAX supports structured control flow, it can generate code for sophisticated machine learning algorithms while maintaining high performance. We show that by combining JAX with Autograd and Numpy we get an easily programmable and highly performant ML system that targets CPUs, GPUs, and TPUs, capable of scaling to multi-core Cloud TPUs.", "title": "" }, { "docid": "9332c32039cf782d19367a9515768e42", "text": "Maternal drug use during pregnancy is associated with fetal passive addiction and neonatal withdrawal syndrome. Cigarette smoking—highly prevalent during pregnancy—is associated with addiction and withdrawal syndrome in adults. We conducted a prospective, two-group parallel study on 17 consecutive newborns of heavy-smoking mothers and 16 newborns of nonsmoking, unexposed mothers (controls). Neurologic examinations were repeated at days 1, 2, and 5. Finnegan withdrawal score was assessed every 3 h during their first 4 d. Newborns of smoking mothers had significant levels of cotinine in the cord blood (85.8 ± 3.4 ng/mL), whereas none of the controls had detectable levels. Similar findings were observed with urinary cotinine concentrations in the newborns (483.1 ± 2.5 μg/g creatinine versus 43.6 ± 1.5 μg/g creatinine; p = 0.0001). Neurologic scores were significantly lower in newborns of smokers than in control infants at days 1 (22.3 ± 2.3 versus 26.5 ± 1.1; p = 0.0001), 2 (22.4 ± 3.3 versus 26.3 ± 1.6; p = 0.0002), and 5 (24.3 ± 2.1 versus 26.5 ± 1.5; p = 0.002). Neurologic scores improved significantly from day 1 to 5 in newborns of smokers (p = 0.05), reaching values closer to control infants. Withdrawal scores were higher in newborns of smokers than in control infants at days 1 (4.5 ± 1.1 versus 3.2 ± 1.4; p = 0.05), 2 (4.7 ± 1.7 versus 3.1 ± 1.1; p = 0.002), and 4 (4.7 ± 2.1 versus 2.9 ± 1.4; p = 0.007). Significant correlations were observed between markers of nicotine exposure and neurologic-and withdrawal scores. 
We conclude that withdrawal symptoms occur in newborns exposed to heavy maternal smoking during pregnancy.", "title": "" }, { "docid": "ec7f20169de673cc14b31e8516937df2", "text": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.", "title": "" }, { "docid": "e97c0bbb74534a16c41b4a717eed87d5", "text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.", "title": "" }, { "docid": "840a8befafbf6fc43d19b890431f3953", "text": "The prevalence of high hyperlipemia is increasing around the world. Our aims are to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver function and kidney function, and to develop a prediction model of TG, TC in overweight people. A total of 302 adult healthy subjects and 273 overweight subjects were enrolled in this study. The levels of fasting indexes of TG (fs-TG), TC (fs-TC), blood glucose, liver function, and kidney function were measured and analyzed by correlation analysis and multiple linear regression (MRL). The back propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed there was significant difference in biochemical indexes between healthy people and overweight people. The correlation analysis showed fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function; while fs-TC was correlated with age, indexes of liver function (P < 0.01). The MRL analysis indicated regression equations of fs-TG and fs-TC both had statistic significant (P < 0.01) when included independent indexes. The BP-ANN model of fs-TG reached training goal at 59 epoch, while fs-TC model achieved high prediction accuracy after training 1000 epoch. In conclusions, there was high relationship of fs-TG and fs-TC with weight, height, age, blood glucose, indexes of liver function and kidney function. Based on related variables, the indexes of fs-TG and fs-TC can be predicted by BP-ANN models in overweight people.", "title": "" }, { "docid": "98ca25396ccd0e7faf0d00b46a2ab470", "text": "Smart glasses, such as Google Glass, provide always-available displays not offered by console and mobile gaming devices, and could potentially offer a pervasive gaming experience. However, research on input for games on smart glasses has been constrained by the available sensors to date. To help inform design directions, this paper explores user-defined game input for smart glasses beyond the capabilities of current sensors, and focuses on the interaction in public settings. 
We conducted a user-defined input study with 24 participants, each performing 17 common game control tasks using 3 classes of interaction and 2 form factors of smart glasses, for a total of 2448 trials. Results show that users significantly preferred non-touch and non-handheld interaction over using handheld input devices, such as in-air gestures. Also, for touch input without handheld devices, users preferred interacting with their palms over wearable devices (51% vs 20%). In addition, users preferred interactions that are less noticeable due to concerns with social acceptance, and preferred in-air gestures in front of the torso rather than in front of the face (63% vs 37%).", "title": "" }, { "docid": "35dda21bd1f2c06a446773b0bfff2dd7", "text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.", "title": "" }, { "docid": "557864265ba9fe38bb4d9e4d70e40a06", "text": "Standard word embeddings lack the possibility to distinguish senses of a word by projecting them to exactly one vector. This has a negative effect particularly when computing similarity scores between words using standard vector-based similarity measures such as cosine similarity. We argue that minor senses play an important role in word similarity computations, hence we use an unsupervised sense inventory resource to retrofit monolingual word embeddings, producing sense-aware embeddings. 
Using retrofitted sense-aware embeddings, we show improved word similarity and relatedness results on multiple word embeddings and multiple established word similarity tasks, sometimes up to an impressive margin of +0.15 Spearman correlation score.", "title": "" }, { "docid": "39ebc7cc1a2cb50fb362804b6ae0f768", "text": "We model a dependency graph as a book, a particular kind of topological space, for semantic dependency parsing. The spine of the book is made up of a sequence of words, and each page contains a subset of noncrossing arcs. To build a semantic graph for a given sentence, we design new Maximum Subgraph algorithms to generate noncrossing graphs on each page, and a Lagrangian Relaxation-based algorithm to combine pages into a book. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our parser obtains comparable results with a state-of-the-art transition-based parser.", "title": "" }, { "docid": "824fbd2fe175b4b179226d249792b87a", "text": "While historically software validation focused on the functional requirements, recent approaches also encompass the validation of quality requirements; for example, system reliability, performance or usability. Application development for mobile platforms opens an additional area of qual i ty-power consumption. In PDAs or mobile phones, power consumption varies depending on the hardware resources used, making it possible to specify and validate correct or incorrect executions. Consider an application that downloads a video stream from the network and displays it on the mobile device's display. In the test scenario the viewing of the video is paused at a certain point. If the specification does not allow video prefetching, the user expects the network card activity to stop when video is paused. How can a test engineer check this expectation? Simply running a test suite or even tracing the software execution does not detect the network activity. However, the extraneous network activity can be detected by power measurements and power model application (Figure 1). Tools to find the power inconsistencies and to validate software from the energy point of view are needed.", "title": "" }, { "docid": "52d2004c762d4701ab275d9757c047fc", "text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.", "title": "" } ]
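The last passage above reports word-similarity results as Spearman correlation scores. Purely as an illustration of that evaluation protocol (not of the retrofitting procedure itself), the toy sketch below compares cosine similarities from a small made-up embedding table against made-up human ratings; every vector and rating in it is an assumption.

```python
# Illustrative only: scoring word embeddings on a similarity benchmark with
# Spearman correlation. Vectors and gold ratings below are toy assumptions.
import numpy as np
from scipy.stats import spearmanr

embeddings = {
    "car":   np.array([0.9, 0.1, 0.0]),
    "auto":  np.array([0.8, 0.2, 0.1]),
    "bank":  np.array([0.1, 0.9, 0.3]),
    "money": np.array([0.2, 0.8, 0.1]),
}
gold = [("car", "auto", 9.0), ("bank", "money", 7.5), ("car", "bank", 1.5)]

def cosine(u, v):
    # cosine similarity between two embedding vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in gold]
human_scores = [r for _, _, r in gold]
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.2f}")
```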
scidocsrr
32980997ad6f37a110ae57463c388881
Quantitative Analysis of the Full Bitcoin Transaction Graph
[ { "docid": "cdefeefa1b94254083eba499f6f502fb", "text": "problems To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \"problem\" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = G, u, v, k is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. Encodings If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4,...} as the strings {0, 1, 10, 11, 100,...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs-all can be encoded as binary strings. Thus, a computer algorithm that \"solves\" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T (n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T (n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i I is Q(i) {0, 1}, then the solution to the concreteproblem instance e(i) {0, 1}* is also Q(i). 
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary-a string of k 1's-then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ(k) = Θ(2^n), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our understanding of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out \"expensive\" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0, 1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x ∈ {0, 1}*, produces as output f(x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i ∈ I, we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows. Lemma 34.1. Let Q be an abstract decision problem on an instance set I, and let e1 and e2 be polynomially related encodings on I. Then, e1(Q) ∈ P if and only if e2(Q) ∈ P. Proof. We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(n^k) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n^c) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n^c), and therefore |e1(i)| = O(n^c), since the output of a serial computer cannot be longer than its running time. 
Solving the problem on e1(i) takes time O(|e1(i)|^k) = O(n^(ck)), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its \"complexity,\" that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a \"standard\" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, ⟨G⟩ denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference. A formal-language framework. One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001,...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000,...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by L̄ = Σ* − L. The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 ∈ L1 and x2 ∈ L2}. The closure or Kleene star of a language L is the language L* = {ε} ∪ L ∪ L^2 ∪ L^3 ∪ ···, where L^k is the language obtained by concatenating L to itself k times.", "title": "" } ]
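To make the excerpt's unary-versus-binary argument concrete, here is a small worked sketch (ours, not the textbook's): an algorithm doing Θ(k) work is linear in the length of a unary encoding of k, but grows like 2^n in the length n of a binary encoding. The specific values of k are arbitrary.

```python
# Illustration of the encoding argument in the excerpt above: Theta(k) work is
# polynomial in |unary(k)| = k but exponential in |binary(k)| = floor(lg k) + 1.
def theta_k_work(k: int) -> int:
    steps = 0
    for _ in range(k):   # the algorithm's Theta(k) loop
        steps += 1
    return steps

for k in (8, 64, 1024):
    unary_len = k                # a string of k ones
    binary_len = k.bit_length()  # floor(lg k) + 1
    steps = theta_k_work(k)
    print(f"k={k}: steps={steps}, |unary|={unary_len}, "
          f"|binary|={binary_len}, 2**|binary|={2 ** binary_len}")
```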
[ { "docid": "13774d2655f2f0ac575e11991eae0972", "text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.", "title": "" }, { "docid": "64389907530dd26392e037f1ab2d1da5", "text": "Most current license plate (LP) detection and recognition approaches are evaluated on a small and usually unrepresentative dataset since there are no publicly available large diverse datasets. In this paper, we introduce CCPD, a large and comprehensive LP dataset. All images are taken manually by workers of a roadside parking management company and are annotated carefully. To our best knowledge, CCPD is the largest publicly available LP dataset to date with over 250k unique car images, and the only one provides vertices location annotations. With CCPD, we present a novel network model which can predict the bounding box and recognize the corresponding LP number simultaneously with high speed and accuracy. Through comparative experiments, we demonstrate our model outperforms current object detection and recognition approaches in both accuracy and speed. In real-world applications, our model recognizes LP numbers directly from relatively high-resolution images at over 61 fps and 98.5% accuracy.", "title": "" }, { "docid": "7645c6a0089ab537cb3f0f82743ce452", "text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. 
We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.", "title": "" }, { "docid": "9175794d83b5f110fb9f08dc25a264b8", "text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.", "title": "" }, { "docid": "0c24b767705b3a88acf9fe128c0e3477", "text": "The studied camera is basically just a line of pixel sensors, which can be rotated on a full circle, describing a cylindrical surface this way. During a rotation we take individual shots, line by line. All these line images define a panoramic image on a cylindrical surface. This camera architecture (in contrast to the plane segment of the pinhole camera) comes with new challenges, and this report is about a classification of different models of such cameras and their calibration. Acknowledgment. The authors acknowledge comments, collaboration or support by various students and colleagues at CITR Auckland and DLR Berlin-Adlershof. report1_HWK.tex; 22/03/2006; 9:47; p.1", "title": "" }, { "docid": "c42aaf64a6da2792575793a034820dcb", "text": "Psychologists and psychiatrists commonly rely on self-reports or interviews to diagnose or treat behavioral addictions. The present study introduces a novel source of data: recordings of the actual problem behavior under investigation. A total of N = 58 participants were asked to fill in a questionnaire measuring problematic mobile phone behavior featuring several questions on weekly phone usage. After filling in the questionnaire, all participants received an application to be installed on their smartphones, which recorded their phone usage for five weeks. The analyses revealed that weekly phone usage in hours was overestimated; in contrast, numbers of call and text message related variables were underestimated. Importantly, several associations between actual usage and being addicted to mobile phones could be derived exclusively from the recorded behavior, but not from self-report variables. The study demonstrates the potential benefit to include methods of psychoinformatics in the diagnosis and treatment of problematic mobile phone use.", "title": "" }, { "docid": "f6383e814999744b24e6a1ce6507e47b", "text": "We propose a new approach, CCRBoost, to identify the hierarchical structure of spatio-temporal patterns at different resolution levels and subsequently construct a predictive model based on the identified structure. To accomplish this, we first obtain indicators within different spatio-temporal spaces from the raw data. 
A distributed spatio-temporal pattern (DSTP) is extracted from a distribution, which consists of the locations with similar indicators from the same time period, generated by multi-clustering. Next, we use a greedy searching and pruning algorithm to combine the DSTPs in order to form an ensemble spatio-temporal pattern (ESTP). An ESTP can represent the spatio-temporal pattern of various regularities or a non-stationary pattern. To consider all the possible scenarios of a real-world ST pattern, we then build a model with layers of weighted ESTPs. By evaluating all the indicators of one location, this model can predict whether a target event will occur at this location. In the case study of predicting crime events, our results indicate that the predictive model can achieve 80 percent accuracy in predicting residential burglary, which is better than other methods.", "title": "" }, { "docid": "cc6cf6557a8be12d8d3a4550163ac0a9", "text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.", "title": "" }, { "docid": "8bbf5cc2424e0365d6968c4c465fe5f7", "text": "We describe a method for assigning English tense and aspect in a system that realizes surface text for symbolically encoded narratives. Our testbed is an encoding interface in which propositions that are attached to a timeline must be realized from several temporal viewpoints. This involves a mapping from a semantic encoding of time to a set of tense/aspect permutations. The encoding tool realizes each permutation to give a readable, precise description of the narrative so that users can check whether they have correctly encoded actions and statives in the formal representation. Our method selects tenses and aspects for individual event intervals as well as subintervals (with multiple reference points), quoted and unquoted speech (which reassign the temporal focus), and modal events such as conditionals.", "title": "" }, { "docid": "e0cf83bcc9830f2a94af4822576e4167", "text": "Multiple kernel learning (MKL) optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels are missing, which is common in practical applications. This paper proposes an absent MKL (AMKL) algorithm to address this issue. Different from existing approaches where missing channels are firstly imputed and then a standard MKL algorithm is deployed on the imputed data, our algorithm directly classifies each sample with its observed channels. In specific, we define a margin for each sample in its own relevant space, which corresponds to the observed channels of that sample. The proposed AMKL algorithm then maximizes the minimum of all sample-based margins, and this leads to a difficult optimization problem. We show that this problem can be reformulated as a convex one by applying the representer theorem. This makes it readily be solved via existing convex optimization packages. Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. 
As observed, our algorithm achieves superior performance and the improvement is more significant with the increasing missing ratio. Disciplines Engineering | Science and Technology Studies Publication Details Liu, X., Wang, L., Yin, J., Dou, Y. & Zhang, J. (2015). Absent multiple kernel learning. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 2807-2813). United States: IEEE. This conference paper is available at Research Online: http://ro.uow.edu.au/eispapers/5373 Absent Multiple Kernel Learning Xinwang Liu School of Computer National University of Defense Technology Changsha, China, 410073 Lei Wang School of Computer Science and Software Engineering University of Wollongong NSW, Australia, 2522 Jianping Yin, Yong Dou School of Computer National University of Defense Technology Changsha, China, 410073 Jian Zhang Faculty of Engineering and Information Technology University of Technology Sydney NSW, Australia, 2007", "title": "" }, { "docid": "4b049e3fee1adfba2956cb9111a38bd2", "text": "This paper presents an optimization based algorithm for underwater image de-hazing problem. Underwater image de-hazing is the most prominent area in research. Underwater images are corrupted due to absorption and scattering. With the effect of that, underwater images have the limitation of low visibility, low color and poor natural appearance. To avoid the mentioned problems, Enhanced fuzzy intensification method is proposed. For each color channel, enhanced fuzzy membership function is derived. Second, the correction of fuzzy based pixel intensification is carried out for each channel to remove haze and to enhance visibility and color. The post processing of fuzzy histogram equalization is implemented for red channel alone when the captured image is having highest value of red channel pixel values. The proposed method provides better results in terms maximum entropy and PSNR with minimum MSE with very minimum computational time compared to existing methodologies.", "title": "" }, { "docid": "616354e134820867698abd3257606e62", "text": "Supplementary to the description of diseases at symptom level, the International Classification of Functioning, Disability and Health (ICF), edited by the WHO, for the first time enables a systematic description also at the level of disabilities and impairments. The Mini-ICF-Rating for Mental Disorders (Mini-ICF-P) is a short observer rating instrument for the assessment of disabilities, especially with regard to occupational functioning. The Mini-ICF-P was first evaluated empirically in 125 patients of a Department of Behavioural Medicine and Psychosomatics. Parallel-test reliability was r = 0.59. Correlates were found with cognitive and motivational variables and duration of sick leave from work. In summary, the Mini-ICF-P is a quick and practicable instrument.", "title": "" }, { "docid": "03e1ede18dcc78409337faf265940a4d", "text": "Epidermal thickness and its relationship to age, gender, skin type, pigmentation, blood content, smoking habits and body site is important in dermatologic research and was investigated in this study. Biopsies from three different body sites of 71 human volunteers were obtained, and thickness of the stratum corneum and cellular epidermis was measured microscopically using a preparation technique preventing tissue damage. Multiple regressions analysis was used to evaluate the effect of the various factors independently of each other. 
Mean (SD) thickness of the stratum corneum was 18.3 (4.9) microm at the dorsal aspect of the forearm, 11.0 (2.2) microm at the shoulder and 14.9 (3.4) microm at the buttock. Corresponding values for the cellular epidermis were 56.6 (11.5) microm, 70.3 (13.6) microm and 81.5 (15.7) microm, respectively. Body site largely explains the variation in epidermal thickness, but also a significant individual variation was observed. Thickness of the stratum corneum correlated positively to pigmentation (p = 0.0008) and negatively to the number of years of smoking (p < 0.0001). Thickness of the cellular epidermis correlated positively to blood content (P = 0.028) and was greater in males than in females (P < 0.0001). Epidermal thickness was not correlated to age or skin type.", "title": "" }, { "docid": "c399b42e2c7307a5b3c081e34535033d", "text": "The Internet of Things (IoT) plays an ever-increasing role in enabling smart city applications. An ontology-based semantic approach can help improve interoperability between a variety of IoT-generated as well as complementary data needed to drive these applications. While multiple ontology catalogs exist, using them for IoT and smart city applications require significant amount of work. In this paper, we demonstrate how can ontology catalogs be more effectively used to design and develop smart city applications? We consider four ontology catalogs that are relevant for IoT and smart cities: 1) READY4SmartCities; 2) linked open vocabulary (LOV); 3) OpenSensingCity (OSC); and 4) LOVs for IoT (LOV4IoT). To support semantic interoperability with the reuse of ontology-based smart city applications, we present a methodology to enrich ontology catalogs with those ontologies. Our methodology is generic enough to be applied to any other domains as is demonstrated by its adoption by OSC and LOV4IoT ontology catalogs. Researchers and developers have completed a survey-based evaluation of the LOV4IoT catalog. The usefulness of ontology catalogs ascertained through this evaluation has encouraged their ongoing growth and maintenance. The quality of IoT and smart city ontologies have been evaluated to improve the ontology catalog quality. We also share the lessons learned regarding ontology best practices and provide suggestions for ontology improvements with a set of software tools.", "title": "" }, { "docid": "19e2eaf78ec2723289e162503453b368", "text": "Printing sensors and electronics over flexible substrates are an area of significant interest due to low-cost fabrication and possibility of obtaining multifunctional electronics over large areas. Over the years, a number of printing technologies have been developed to pattern a wide range of electronic materials on diverse substrates. As further expansion of printed technologies is expected in future for sensors and electronics, it is opportune to review the common features, the complementarities, and the challenges associated with various printing technologies. This paper presents a comprehensive review of various printing technologies, commonly used substrates and electronic materials. Various solution/dry printing and contact/noncontact printing technologies have been assessed on the basis of technological, materials, and process-related developments in the field. Critical challenges in various printing techniques and potential research directions have been highlighted. 
Possibilities of merging various printing methodologies have been explored to extend the lab developed standalone systems to high-speed roll-to-roll production lines for system level integration.", "title": "" }, { "docid": "9a9d4d1d482333734d9b0efe87d1e53e", "text": "Following acute therapeutic interventions, the majority of stroke survivors are left with a poorly functioning hemiparetic hand. Rehabilitation robotics has shown promise in providing patients with intensive therapy leading to functional gains. Because of the hand's crucial role in performing activities of daily living, attention to hand therapy has recently increased. This paper introduces a newly developed Hand Exoskeleton Rehabilitation Robot (HEXORR). This device has been designed to provide full range of motion (ROM) for all of the hand's digits. The thumb actuator allows for variable thumb plane of motion to incorporate different degrees of extension/flexion and abduction/adduction. Compensation algorithms have been developed to improve the exoskeleton's backdrivability by counteracting gravity, stiction and kinetic friction. We have also designed a force assistance mode that provides extension assistance based on each individual's needs. A pilot study was conducted on 9 unimpaired and 5 chronic stroke subjects to investigate the device's ability to allow physiologically accurate hand movements throughout the full ROM. The study also tested the efficacy of the force assistance mode with the goal of increasing stroke subjects' active ROM while still requiring active extension torque on the part of the subject. For 12 of the hand digits'15 joints in neurologically normal subjects, there were no significant ROM differences (P > 0.05) between active movements performed inside and outside of HEXORR. Interjoint coordination was examined in the 1st and 3rd digits, and no differences were found between inside and outside of the device (P > 0.05). Stroke subjects were capable of performing free hand movements inside of the exoskeleton and the force assistance mode was successful in increasing active ROM by 43 ± 5% (P < 0.001) and 24 ± 6% (P = 0.041) for the fingers and thumb, respectively. Our pilot study shows that this device is capable of moving the hand's digits through nearly the entire ROM with physiologically accurate trajectories. Stroke subjects received the device intervention well and device impedance was minimized so that subjects could freely extend and flex their digits inside of HEXORR. Our active force-assisted condition was successful in increasing the subjects' ROM while promoting active participation.", "title": "" }, { "docid": "64d9f6973697749b6e2fa330101cbc77", "text": "Evidence is presented that recognition judgments are based on an assessment of familiarity, as is described by signal detection theory, but that a separate recollection process also contributes to performance. In 3 receiver-operating characteristics (ROC) experiments, the process dissociation procedure was used to examine the contribution of these processes to recognition memory. In Experiments 1 and 2, reducing the length of the study list increased the intercept (d') but decreased the slope of the ROC and increased the probability of recollection but left familiarity relatively unaffected. In Experiment 3, increasing study time increased the intercept but left the slope of the ROC unaffected and increased both recollection and familiarity. 
In all 3 experiments, judgments based on familiarity produced a symmetrical ROC (slope = 1), but recollection introduced a skew such that the slope of the ROC decreased.", "title": "" }, { "docid": "2950e3c1347c4adeeb2582046cbea4b8", "text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.", "title": "" }, { "docid": "3566e18518d80b2431c4fba34f790a82", "text": "The aim of this paper is to present a nonlinear dynamic model for Voltage Source Converter-based HVDC (VSC-HVDC) links that can be used for dynamic studies. It includes the main physical elements and is controlled by PI controllers with antiwindup. A linear control model is derived for efficient tuning of the controllers of the nonlinear dynamic model. The nonlinear dynamic model is then tuned according to the performance of an ABB HVDC Light model.", "title": "" }, { "docid": "f6227013273d148321cab1eef83c40e5", "text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. 
The challenges and future directions of 5G wireless security are finally summarized.", "title": "" } ]
scidocsrr
0a8ffc3e525a9e15863c7e0d84c7a2d0
SPECTRAL BASIS NEURAL NETWORKS FOR REAL-TIME TRAVEL TIME FORECASTING
[ { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "8b1b0ee79538a1f445636b0798a0c7ca", "text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.", "title": "" } ]
[ { "docid": "b01b7d382f534812f07faaaa1442b3f9", "text": "In this paper, we first establish new relationships in matrix forms among discrete Fourier transform (DFT), generalized DFT (GDFT), and various types of discrete cosine transform (DCT) and discrete sine transform (DST) matrices. Two new independent tridiagonal commuting matrices for each of DCT and DST matrices of types I, IV, V, and VIII are then derived from the existing commuting matrices of DFT and GDFT. With these new commuting matrices, the orthonormal sets of Hermite-like eigenvectors for DCT and DST matrices can be determined and the discrete fractional cosine transform (DFRCT) and the discrete fractional sine transform (DFRST) are defined. The relationships among the discrete fractional Fourier transform (DFRFT), fractional GDFT, and various types of DFRCT and DFRST are developed to reduce computations for DFRFT and fractional GDFT.", "title": "" }, { "docid": "d60fb42ca7082289c907c0e2e2c343fc", "text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to", "title": "" }, { "docid": "7380419cc9c5eac99e8d46e73df78285", "text": "This paper discusses the classification of books purely based on cover image and title, without prior knowledge or context of author and origin. Several methods were implemented to assess the ability to distinguish books based on only these two characteristics. First we used a color-based distribution approach. Then we implemented transfer learning with convolutional neural networks on the cover image along with natural language processing on the title text. We found that image and text modalities yielded similar accuracy which indicate that we have reached a certain threshold in distinguishing between the genres that we have defined. This was confirmed by the accuracy being quite close to the human oracle accuracy.", "title": "" }, { "docid": "793d41551a918a113f52481ff3df087e", "text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.", "title": "" }, { "docid": "8c0d117602ecadee24215f5529e527c6", "text": "We present the first open-set language identification experiments using one-class classification models. We first highlight the shortcomings of traditional feature extraction methods and propose a hashing-based feature vectorization approach as a solution. 
Using a dataset of 10 languages from different writing systems, we train a One-Class Support Vector Machine using only a monolingual corpus for each language. Each model is evaluated against a test set of data from all 10 languages and we achieve an average F-score of 0.99, demonstrating the effectiveness of this approach for open-set language identification.", "title": "" }, { "docid": "478aa46b9dafbc111c1ff2cdb03a5a77", "text": "This paper presents results from recent work using structured light laser profile imaging to create high resolution bathymetric maps of underwater archaeological sites. Documenting the texture and structure of submerged sites is a difficult task and many applicable acoustic and photographic mapping techniques have recently emerged. This effort was completed to evaluate laser profile imaging in comparison to stereo imaging and high frequency multibeam mapping. A ROV mounted camera and inclined 532 nm sheet laser were used to create profiles of the bottom that were then merged into maps using platform navigation data. These initial results show very promising resolution in comparison to multibeam and stereo reconstructions, particularly in low contrast scenes. At the test sites shown here there were no significant complications related to scattering or attenuation of the laser sheet by the water. The resulting terrain was gridded at 0.25 cm and shows overall centimeter level definition. The largest source of error was related to the calibration of the laser and camera geometry. Results from three small areas show the highest resolution 3D models of a submerged archaeological site to date and demonstrate that laser imaging will be a viable method for accurate three dimensional site mapping and documentation.", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "d362b36e0c971c43856a07b7af9055f3", "text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954, Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R.
and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,", "title": "" }, { "docid": "47ac4b546fe75f2556a879d6188d4440", "text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.", "title": "" }, { "docid": "587f1510411636090bc192b1b9219b58", "text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valance and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. 
Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.", "title": "" }, { "docid": "cdf2235bea299131929700406792452c", "text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be attributed to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Hough-like voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.", "title": "" }, { "docid": "e33d34d0fbc19dbee009134368e40758", "text": "Quantum metrology exploits quantum phenomena to improve the measurement sensitivity. Theoretical analysis shows that quantum measurement can break through the standard quantum limits and reach super sensitivity level. Quantum radar systems based on quantum measurement can fulfill not only conventional target detection and recognition tasks but are also capable of detecting and identifying RF stealth platforms and weapons systems. The theoretical basis, classification, and physical realization of quantum radar are discussed comprehensively in this paper. The technology state and open questions of quantum radars are reviewed at the end.", "title": "" }, { "docid": "06b4bfebe295e3dceadef1a842b2e898", "text": "Constant changes in the economic environment, where globalization and the development of the knowledge economy act as drivers, are systematically pushing companies towards the challenge of accessing external markets. Web localization constitutes a new field of study and professional intervention. From the translation perspective, localization equates to the website being adjusted to the typological, discursive and genre conventions of the target culture, adapting that website to a different language and culture. This entails much more than simply translating the content of the pages. The content of a webpage is made up of text, images and other multimedia elements, all of which have to be translated and subjected to cultural adaptation. A case study has been carried out to analyze the current presence of localization within Spanish SMEs from the chemical sector.
Two types of indicator have been established for evaluating the sample: indicators for evaluating company websites (with a Likert scale from 0–4) and indicators for evaluating web localization (0–2 scale). The results show overall website quality is acceptable (2.5 points out of 4). The higher rating has been obtained by the system quality (with 2.9), followed by information quality (2.7 points) and, lastly, service quality (1.9 points). In the web localization evaluation, the contact information aspects obtain 1.4 points, the visual aspect 1.04, and the navigation aspect was the worse considered (0.37). These types of analysis facilitate the establishment of practical recommendations aimed at SMEs in order to increase their international presence through the localization of their websites.", "title": "" }, { "docid": "3cae5c0440536b95cf1d0273071ad046", "text": "Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system.\n This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.", "title": "" }, { "docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9", "text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.", "title": "" }, { "docid": "cf506587f2699d88e4a2e0be36ccac41", "text": "A complete list of the titles in this series appears at the end of this volume. 
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.", "title": "" }, { "docid": "89c85642fc2e0b1f10c9a13b19f1d833", "text": "Many current successful Person Re-Identification(ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced. Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CHHK-03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or reranking. For example, it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.", "title": "" }, { "docid": "fee96195e50e7418b5d63f8e6bd07907", "text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. 
The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.", "title": "" }, { "docid": "704d729295cddd358eba5eefdf0bdee4", "text": "Remarkable advances in instrument technology, automation and computer science have greatly simplified many aspects of previously tedious tasks in laboratory diagnostics, creating a greater volume of routine work, and significantly improving the quality of results of laboratory testing. Following the development and successful implementation of high-quality analytical standards, analytical errors are no longer the main factor influencing the reliability and clinical utilization of laboratory diagnostics. Therefore, additional sources of variation in the entire laboratory testing process should become the focus for further and necessary quality improvements. Errors occurring within the extra-analytical phases are still the prevailing source of concern. Accordingly, lack of standardized procedures for sample collection, including patient preparation, specimen acquisition, handling and storage, account for up to 93% of the errors currently encountered within the entire diagnostic process. The profound awareness that complete elimination of laboratory testing errors is unrealistic, especially those relating to extra-analytical phases that are harder to control, highlights the importance of good laboratory practice and compliance with the new accreditation standards, which encompass the adoption of suitable strategies for error prevention, tracking and reduction, including process redesign, the use of extra-analytical specifications and improved communication among caregivers.", "title": "" }, { "docid": "e05b1b6e1ca160b06e36b784df30b312", "text": "The vision of the MDSD is an era of software engineering where modelling completely replaces programming i.e. the systems are entirely generated from high-level models, each one specifying a different view of the same system. The MDSD can be seen as the new generation of visual programming languages which provides methods and tools to streamline the process of software engineering. Productivity of the development process is significantly improved by the MDSD approach and it also increases the quality of the resulting software system. The MDSD is particularly suited for those software applications which require highly specialized technical knowledge due to the involvement of complex technologies and the large number of complex and unmanageable standards. In this paper, an overview of the MDSD is presented; the working styles and the main concepts are illustrated in detail.", "title": "" } ]
scidocsrr
54739b925463523a5fa7e2294e6749a3
Ten years of a model of aesthetic appreciation and aesthetic judgments : The aesthetic episode - Developments and challenges in empirical aesthetics.
[ { "docid": "78c3573511176ba63e2cf727e09c7eb4", "text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "57fbb5bf0e7fe4b8be21fae87f027572", "text": "Android and iOS devices are leading the mobile device market. While various user experiences have been reported from the general user community about their differences, such as battery lifetime, display, and touchpad control, few in-depth reports can be found about their comparative performance when receiving the increasingly popular Internet streaming services. Today, video traffic starts to dominate the Internet mobile data traffic. In this work, focusing on Internet streaming accesses, we set to analyze and compare the performance when Android and iOS devices are accessing Internet streaming services. Starting from the analysis of a server-side workload collected from a top mobile streaming service provider, we find Android and iOS use different approaches to request media content, leading to different amounts of received traffic on Android and iOS devices when a same video clip is accessed. Further studies on the client side show that different data requesting approaches (standard HTTP request vs. HTTP range request) and different buffer management methods (static vs. dynamic) are used in Android and iOS mediaplayers, and their interplay has led to our observations. Our empirical results and analysis provide some insights for the current Android and iOS users, streaming service providers, and mobile mediaplayer developers.", "title": "" }, { "docid": "85f67ab0e1adad72bbe6417d67fd4c81", "text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. 
Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.", "title": "" }, { "docid": "619c905f7ef5fa0314177b109e0ec0e6", "text": "The aim of this review is to systematically summarise qualitative evidence about work-based learning in health care organisations as experienced by nursing staff. Work-based learning is understood as informal learning that occurs inside the work community in the interaction between employees. Studies for this review were searched for in the CINAHL, PubMed, Scopus and ABI Inform ProQuest databases for the period 2000-2015. Nine original studies met the inclusion criteria. After the critical appraisal by two researchers, all nine studies were selected for the review. The findings of the original studies were aggregated, and four statements were prepared, to be utilised in clinical work and decision-making. The statements concerned the following issues: (1) the culture of the work community; (2) the physical structures, spaces and duties of the work unit; (3) management; and (4) interpersonal relations. Understanding the nurses' experiences of work-based learning and factors behind these experiences provides an opportunity to influence the challenges of learning in the demanding context of health care organisations.", "title": "" }, { "docid": "d135e72c317ea28a64a187b17541f773", "text": "Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.", "title": "" }, { "docid": "689f7aad97d36f71e43e843a331fcf5d", "text": "Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. 
1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996). These novel alternatives to Kohonen-like approaches for topographic feature extraction possess several interesting properties. For instance, the NEUROSCALE architecture has the empirically observed property that the generalisation performance does not seem to depend critically on model order complexity, contrary to intuition based upon knowledge of its supervised counterparts. This paper presents evidence for their 'self-regularising' behaviour and provides an explanation in terms of the curvature of the trained models. We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe and Tipping, 1996). We seek a dimension-reducing, topographic transformation of data for the purposes of visualisation and analysis. By 'topographic', we imply that the geometric structure of the data be optimally preserved in the transformation, and the embodiment of this constraint is that the inter-point distances in the feature space should correspond as closely as possible to those distances in the data space. The implementation of this principle by a neural network is very simple. A Radial Basis Function (RBF) neural network is utilised to predict the coordinates of the data point in the transformed feature space. The locations of the feature points are indirectly determined by adjusting the weights of the network. The transformation is determined by optimising the network parameters in order to minimise a suitable error measure that embodies the topographic principle. The specific details of this alternative approach are as follows. Given an m-dimensional input space of N data points x_q, an n-dimensional feature space of points y_q is generated such that the relative positions of the feature space points minimise the error, or 'STRESS', term: E = Σ_p Σ_{q>p} (d*_{qp} − d_{qp})^2, (1) where the d*_{qp} are the inter-point Euclidean distances in the data space: d*_{qp} = √((x_q − x_p)^T (x_q − x_p)), and the d_{qp} are the corresponding distances in the feature space: d_{qp} = √((y_q − y_p)^T (y_q − y_p)). The points y are generated by the RBF, given the data points as input. That is, y_q = f(x_q; W), where f is the nonlinear transformation effected by the RBF with parameters (weights and any kernel smoothing factors) W. The distances in the feature space may thus be given by d_{qp} = || f(x_q) − f(x_p) || and so more explicitly by", "title": "" }, { "docid": "5e240ad1d257a90c0ca414ce8e7e0949", "text": "Improving Cloud Security using Secure Enclaves by Jethro Gideon Beekman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor David Wagner, Chair Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. This dissertation describes the unalterable secure service concept for trustworthy cloud computing.
Secure services are a powerful abstraction that enables viewing the cloud as a true extension of local computing resources. Secure services combine the security benefits one gets locally with the manageability and availability of the distributed cloud. Secure services are implemented using secure enclaves. Remote attestation of the server is used to obtain guarantees about the programming of the service. This dissertation addresses concerns related to using secure enclaves such as providing data freshness and distributing identity information. Certificate Transparency is augmented to distribute information about which services exist and what they do. All combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.", "title": "" }, { "docid": "4d040791f63af5e2ff13ff2b705dc376", "text": "The frequency and severity of forest fires, coupled with changes in spatial and temporal precipitation and temperature patterns, are likely to severely affect the characteristics of forest and permafrost patterns in boreal eco-regions. Forest fires, however, are also an ecological factor in how forest ecosystems form and function, as they affect the rate and characteristics of tree recruitment. A better understanding of fire regimes and forest recovery patterns in different environmental and climatic conditions will improve the management of sustainable forests by facilitating the process of forest resilience. Remote sensing has been identified as an effective tool for preventing and monitoring forest fires, as well as being a potential tool for understanding how forest ecosystems respond to them. However, a number of challenges remain before remote sensing practitioners will be able to better understand the effects of forest fires and how vegetation responds afterward. This article attempts to provide a comprehensive review of current research with respect to remotely sensed data and methods used to model post-fire effects and forest recovery patterns in boreal forest regions. The review reveals that remote sensing-based monitoring of post-fire effects and forest recovery patterns in boreal forest regions is not only limited by the gaps in both field data and remotely sensed data, but also the complexity of far-northern fire regimes, climatic conditions and environmental conditions. We expect that the integration of different remotely sensed data coupled with field campaigns can provide an important data source to support the monitoring of post-fire effects and forest recovery patterns. Additionally, the variation and stratification of pre- and post-fire vegetation and environmental conditions should be considered to achieve a reasonable, operational model for monitoring post-fire effects and forest patterns in boreal regions.", "title": "" }, { "docid": "807e008d5c7339706f8cfe71e9ced7ba", "text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations.
The objectives of the research were: to investigate the extent of the usage of customer- and market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future", "title": "" }, { "docid": "4ed74450320dfef4156013292c1d2cbb", "text": "This paper describes the decisions by which the Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online “computing research repository” (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so CoRR's eventual success could revolutionize computer science publishing. But several serious challenges remain: some journals forbid online preprints, the CoRR user interface is cumbersome, submissions are only self-indexed (no professional library staff manages the archive), and long-term funding is uncertain.", "title": "" }, { "docid": "0105070bd23400083850627b1603af0b", "text": "This research covers an endeavor by the author on the usage of automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor requiring minimal effort framework for exploration purposes in the zone of robot route. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filter to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping bundle was utilized as a premise for a map era and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results completed with the multipurpose robot in a counterfeit and regular environment represents the preferences of the proposed strategy. From experiments, it is found that Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder.
An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.", "title": "" }, { "docid": "e3299737a0fb3cd3c9433f462565b278", "text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.", "title": "" }, { "docid": "c87cc578b4a74bae4ea1e0d0d68a6038", "text": "Human-Computer Interaction (HCI) exists ubiquitously in our daily lives. It is usually achieved by using a physical controller such as a mouse, keyboard or touch screen. It hinders Natural User Interface (NUI) as there is a strong barrier between the user and computer. There are various hand tracking systems available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex background and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. It enables intuitive HCI and interactive motion gaming. We also developed sample applications that can utilize the inputs from the hand tracking system. 
Our results show that an intuitive HCI and motion gaming system can be achieved with minimum hardware requirements.", "title": "" }, { "docid": "505aff71acf5469dc718b8168de3e311", "text": "We propose two suffix array inspired full-text indexes. One, called SAhash, augments the suffix array with a hash table to speed up pattern searches due to significantly narrowed search interval before the binary search phase. The other, called FBCSA, is a compact data structure, similar to Mäkinen’s compact suffix array, but working on fixed sized blocks. Experiments on the Pizza & Chili 200MB datasets show that SA-hash is about 2–3 times faster in pattern searches (counts) than the standard suffix array, for the price of requiring 0.2n− 1.1n bytes of extra space, where n is the text length, and setting a minimum pattern length. FBCSA is relatively fast in single cell accesses (a few times faster than related indexes at about the same or better compression), but not competitive if many consecutive cells are to be extracted. Still, for the task of extracting, e.g., 10 successive cells its time-space relation remains attractive.", "title": "" }, { "docid": "efd2843175ad0b860ad1607f337addc5", "text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.", "title": "" }, { "docid": "ab15d55e8308843c526aed0c32db1cb2", "text": "ix Chapter 1: Introduction 1 1.1 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Human-Robot Communication . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Chapter 2: Background and Related Work 11 2.1 Manual Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Task-Level Robot Control . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Learning from Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 13 2.3.1 Demonstration Approaches . . . . . . . . . . . . . . . . . . . . . 14 2.3.2 Policy Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Chapter 3: Learning from Demonstration 19 3.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.2 Role of the Instructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3 Role of the Student . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.4 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.4.1 Human-Robot Communication . . . . . . . . . . . . . . . . . . . 24 3.4.2 System Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 
28 3.5 Learning a Task Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30", "title": "" }, { "docid": "e5eb79b313dad91de1144cd0098cde15", "text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.", "title": "" }, { "docid": "f18833c40f6b15bb588eec3bbe52cdd4", "text": "Presented here is a cladistic analysis of the South American and some North American Camelidae. This analysis shows that Camelini and Lamini are monophyletic groups, as are the genera Palaeolama and Vicugna, while Hemiauchenia and Lama are paraphyletic. Some aspects of the migration and distribution of South American camelids are also discussed, confirming in part the propositions of other authors. According to the cladistic analysis and previous propositions, it is possible to infer that two Camelidae migration events occurred in America. In the first one, Hemiauchenia arrived in South America and, this was related to the speciation processes that originated Lama and Vicugna. In the second event, Palaeolama migrated from North America to the northern portion of South America. It is evident that there is a need for larger studies about fossil Camelidae, mainly regarding older ages and from the South American austral region. This is important to better undertand the geographic and temporal distribution of Camelidae and, thus, the biogeographic aspects after the Great American Biotic Interchange.", "title": "" }, { "docid": "de061c5692bf11876c03b9b5e7c944a0", "text": "The purpose of this article is to summarize several change theories and assumptions about the nature of change. The author shows how successful change can be encouraged and facilitated for long-term success. The article compares the characteristics of Lewin’s Three-Step Change Theory, Lippitt’s Phases of Change Theory, Prochaska and DiClemente’s Change Theory, Social Cognitive Theory, and the Theory of Reasoned Action and Planned Behavior to one another. Leading industry experts will need to continually review and provide new information relative to the change process and to our evolving society and culture. here are many change theories and some of the most widely recognized are briefly summarized in this article. The theories serve as a testimony to the fact that change is a real phenomenon. 
It can be observed and analyzed through various steps or phases. The theories have been conceptualized to answer the question, “How does successful change happen?” Lewin’s Three-Step Change Theory Kurt Lewin (1951) introduced the three-step change model. This social scientist views behavior as a dynamic balance of forces working in opposing directions. Driving forces facilitate change because they push employees in the desired direction. Restraining forces hinder change because they push employees in the opposite direction. Therefore, these forces must be analyzed and Lewin’s three-step model can help shift the balance in the direction of the planned change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). According to Lewin, the first step in the process of changing behavior is to unfreeze the existing situation or status quo. The status quo is considered the equilibrium state. Unfreezing is necessary to overcome the strains of individual resistance and group conformity. Unfreezing can be achieved by the use of three methods. First, increase the driving forces that direct behavior away from the existing situation or status quo. Second, decrease the restraining forces that negatively affect the movement from the existing equilibrium. Third, find a combination of the two methods listed above. Some activities that can assist in the unfreezing step include: motivate participants by preparing them for change, build trust and recognition for the need to change, and actively participate in recognizing problems and brainstorming solutions within a group (Robbins 564-65). Lewin’s second step in the process of changing behavior is movement. In this step, it is necessary to move the target system to a new level of equilibrium. Three actions that can assist in the movement step include: persuading employees to agree that the status quo is not beneficial to them and encouraging them to view the problem from a fresh perspective, work together on a quest for new, relevant information, and connect the views of the group to well-respected, powerful leaders that also support the change (http://www.csupomona.edu/~jvgrizzell/best_practices/bctheory.html). The third step of Lewin’s three-step change model is refreezing. This step needs to take place after the change has been implemented in order for it to be sustained or “stick” over time. It is highly likely that the change will be short lived and the employees will revert to their old equilibrium (behaviors) if this step is not taken. It is the actual integration of the new values into the community values and traditions. The purpose of refreezing is to stabilize the new equilibrium resulting from the change by balancing both the driving and restraining forces. One action that can be used to implement Lewin’s third step is to reinforce new patterns and institutionalize them through formal and informal mechanisms including policies and procedures (Robbins 564-65). Therefore, Lewin’s model illustrates the effects of forces that either promote or inhibit change. Specifically, driving forces promote change while restraining forces oppose change. Hence, change will occur when the combined strength of one force is greater than the combined strength of the opposing set of forces (Robbins 564-65).
Lippitt’s Phases of Change Theory Lippitt, Watson, and Westley (1958) extend Lewin’s Three-Step Change Theory. Lippitt, Watson, and Westley created a seven-step theory that focuses more on the role and responsibility of the change agent than on the evolution of the change itself. Information is continuously exchanged throughout the process. The seven steps are:", "title": "" }, { "docid": "bda04f2eaee74979d7684681041e19bd", "text": "In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player, 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. (2005), which was the first use of Upper Confidence Bounds (UCBs) for Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.", "title": "" } ]
scidocsrr
c07d4a86e4df42f37ddcc115c4eac8f2
NaCl on 8-Bit AVR Microcontrollers
[ { "docid": "7c93ceb1f71e5ac65c2c0d22f8a36afe", "text": "NEON is a vector instruction set included in a large fraction of new ARM-based tablets and smartphones. This paper shows that NEON supports high-security cryptography at surprisingly high speeds; normally data arrives at lower speeds, giving the CPU time to handle tasks other than cryptography. In particular, this paper explains how to use a single 800MHz Cortex A8 core to compute the existing NaCl suite of high-security cryptographic primitives at the following speeds: 5.60 cycles per byte (1.14 Gbps) to encrypt using a shared secret key, 2.30 cycles per byte (2.78 Gbps) to authenticate using a shared secret key, 527102 cycles (1517/second) to compute a shared secret key for a new public key, 650102 cycles (1230/second) to verify a signature, and 368212 cycles (2172/second) to sign a message. These speeds make no use of secret branches and no use of secret memory addresses.", "title": "" } ]
[ { "docid": "630e8f538d566af9375c231dd5195a99", "text": "The investigation of the human microbiome is the most rapidly expanding field in biomedicine. Early studies were undertaken to better understand the role of microbiota in carbohydrate digestion and utilization. These processes include polysaccharide degradation, glycan transport, glycolysis, and short-chain fatty acid production. Recent research has demonstrated that the intricate axis between gut microbiota and the host metabolism is much more complex. Gut microbiota—depending on their composition—have disease-promoting effects but can also possess protective properties. This review focuses on disorders of metabolic syndrome, with special regard to obesity as a prequel to type 2 diabetes, type 2 diabetes itself, and type 1 diabetes. In all these conditions, differences in the composition of the gut microbiota in comparison to healthy people have been reported. Mechanisms of the interaction between microbiota and host that have been characterized thus far include an increase in energy harvest, modulation of free fatty acids—especially butyrate—of bile acids, lipopolysaccharides, gamma-aminobutyric acid (GABA), an impact on toll-like receptors, the endocannabinoid system and “metabolic endotoxinemia” as well as “metabolic infection.” This review will also address the influence of already established therapies for metabolic syndrome and diabetes on the microbiota and the present state of attempts to alter the gut microbiota as a therapeutic strategy.", "title": "" }, { "docid": "6d2667dd550e14d4d46b24d9c8580106", "text": "Deficits in gratification delay are associated with a broad range of public health problems, such as obesity, risky sexual behavior, and substance abuse. However, 6 decades of research on the construct has progressed less quickly than might be hoped, largely because of measurement issues. Although past research has implicated 5 domains of delay behavior, involving food, physical pleasures, social interactions, money, and achievement, no published measure to date has tapped all 5 components of the content domain. Existing measures have been criticized for limitations related to efficiency, reliability, and construct validity. Using an innovative Internet-mediated approach to survey construction, we developed the 35-item 5-factor Delaying Gratification Inventory (DGI). Evidence from 4 studies and a large, diverse sample of respondents (N = 10,741) provided support for the psychometric properties of the measure. Specifically, scores on the DGI demonstrated strong internal consistency and test-retest reliability for the 35-item composite, each of the 5 domains, and a 10-item short form. The 5-factor structure fit the data well and had good measurement invariance across subgroups. Construct validity was supported by correlations with scores on closely related self-control measures, behavioral ratings, Big Five personality trait measures, and measures of adjustment and psychopathology, including those on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. DGI scores also showed incremental validity in accounting for well-being and health-related variables. 
The present investigation holds implications for improving public health, accelerating future research on gratification delay, and facilitating survey construction research more generally by demonstrating the suitability of an Internet-mediated strategy.", "title": "" }, { "docid": "6cf4297e4c87f8e55d59867ac137e56d", "text": "We present a novel approach to RTE that exploits a structure-oriented sentence representation followed by a similarity function. The structural features are automatically acquired from tree skeletons that are extracted and generalized from dependency trees. Our method makes use of a limited size of training data without any external knowledge bases (e.g. WordNet) or handcrafted inference rules. We have achieved an accuracy of 71.1% on the RTE-3 development set performing a 10-fold cross validation and 66.9% on the RTE-3 test data.", "title": "" }, { "docid": "49ff711b6c91c9ec42e16ce2f3bb435b", "text": "In this letter, a wideband three-section branch-line hybrid with harmonic suppression is designed using a novel transmission line model. The proposed topology is constructed using a coupled line, two series transmission lines, and open-ended stubs. The required design equations are obtained by applying even- and odd-mode analysis. To support these equations, a three-section branch-line hybrid working at 0.9 GHz is fabricated and tested. The physical area of the prototype is reduced by 87.7% of the conventional hybrid and the fractional bandwidth is greater than 52%. In addition, the proposed technique can eliminate second harmonic by a level better than 15 dB.", "title": "" }, { "docid": "96704e139fd4d72cb64b0acbfb887475", "text": "Project Failure is the major problem undergoing nowadays as seen by software project managers. Imprecision of the estimation is the reason for this problem. As software grew in size and importance it also grew in complexity, making it very difficult to accurately predict the cost of software development. This was the dilemma in past years. The greatest pitfall of software industry was the fast changing nature of software development which has made it difficult to develop parametric models that yield high accuracy for software development in all domains. Development of useful models that accurately predict the cost of developing a software product. It is a very important objective of software industry. In this paper, several existing methods for software cost estimation are illustrated and their aspects will be discussed. This paper summarizes several classes of software cost estimation models and techniques. To achieve all these goals we implement the simulators. No single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" }, { "docid": "4d3de2d03431e8f06a5b8b31a784ecaa", "text": "For medical students, virtual patient dialogue systems can provide useful training opportunities without the cost of employing actors to portray standardized patients. This work utilizes wordand character-based convolutional neural networks (CNNs) for question identification in a virtual patient dialogue system, outperforming a strong wordand characterbased logistic regression baseline. 
While the CNNs perform well given sufficient training data, the best system performance is ultimately achieved by combining CNNs with a hand-crafted pattern matching system that is robust to label sparsity, providing a 10% boost in system accuracy and an error reduction of 47% as compared to the pattern-matching system alone.", "title": "" }, { "docid": "52f912cd5a8def1122d7ce6ba7f47271", "text": "System event logs have been frequently used as a valuable resource in data-driven approaches to enhance system health and stability. A typical procedure in system log analytics is to first parse unstructured logs, and then apply data analysis on the resulting structured data. Previous work on parsing system event logs focused on offline, batch processing of raw log files. But increasingly, applications demand online monitoring and processing. We propose an online streaming method Spell, which utilizes a longest common subsequence based approach, to parse system event logs. We show how to dynamically extract log patterns from incoming logs and how to maintain a set of discovered message types in streaming fashion. Evaluation results on large real system logs demonstrate that even compared with the offline alternatives, Spell shows its superiority in terms of both efficiency and effectiveness.", "title": "" }, { "docid": "320c5bf641fa348cd1c8fb806558fe68", "text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of off-chip capacitor, thus the board space and external pins are reduced. By utilizing dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) on LDO structure, the proposed LDO provides high stability during line and load regulation without off-chip load capacitor. The overshot voltage has been limited within 550 mV and settling time is less than 50 mus when load current reducing from 100 mA to 1 mA. By using 30 nA reference current, the quiescent current is 3.3 muA. The experiment results agree with the simulation results. The proposed design is implemented by CSMC 0.5 mum mixed-signal process.", "title": "" }, { "docid": "a0ca6986d59905cea49ed28fa378c69e", "text": "The epidemic of type 2 diabetes and impaired glucose tolerance is one of the main causes of morbidity and mortality worldwide. In both disorders, tissues such as muscle, fat and liver become less responsive or resistant to insulin. This state is also linked to other common health problems, such as obesity, polycystic ovarian disease, hyperlipidaemia, hypertension and atherosclerosis. The pathophysiology of insulin resistance involves a complex network of signalling pathways, activated by the insulin receptor, which regulates intermediary metabolism and its organization in cells. But recent studies have shown that numerous other hormones and signalling events attenuate insulin action, and are important in type 2 diabetes.", "title": "" }, { "docid": "ca64effff681149682be21b512f0e3c9", "text": "In this paper, a grip-force control of an elastic object is proposed based on a visual slip margin feedback. When an elastic object is pressed and slid slightly on a rigid plate, a partial slip, called \"incipient slip\" occurs on the contact surface. The slip margin between an elastic object and a rigid plate is estimated based on the analytic solution of Hertzian contact model. A 1 DOF gripper consists of a camera and a force sensor is developed. 
The slip margin can be estimated from the tangential force measured by a force sensor, the deformation of the elastic object and the radius on the contact area both measured by a camera. In the proposed method, the friction coefficient is not explicitly needed. The grip force is controlled by a direct feedback of the estimated slip margin, whose stability is analytically guaranteed. As a result, the slip margin is maintained to a desired value without occurring the gross slip against a disturbance load force to the object.", "title": "" }, { "docid": "4d1f7ca631304e03b720c501d7e9a227", "text": "Due to the open and distributed characteristics of web service, its access control becomes a challenging problem which has not been addressed properly. In this paper, we show how semantic web technologies can be used to build a flexible access control system for web service. We follow the Role-based Access Control model and extend it with credential attributes. The access control model is represented by a semantic ontology, and specific semantic rules are constructed to implement such as dynamic roles assignment, separation of duty constraints and roles hierarchy reasoning, etc. These semantic rules can be verified and executed automatically by the reasoning engine, which can simplify the definition and enhance the interoperability of the access control policies. The basic access control architecture based on the semantic proposal for web service is presented. Finally, a prototype of the system is implemented to validate the proposal.", "title": "" }, { "docid": "0bd30308a11711f1dc71b8ff8ae8e80c", "text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. 
To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.", "title": "" }, { "docid": "53acdb714d51d9eca25f1e635f781afa", "text": "Research in several areas provides scientific guidance for use of graphical encoding to convey information in an information visualization display. By graphical encoding we mean the use of visual display elements such as icon color, shape, size, or position to convey information about objects represented by the icons. Literature offers inconclusive and often conflicting viewpoints, including the suggestion that the effectiveness of a graphical encoding depends on the type of data represented. Our empirical study suggests that the nature of the users’ perceptual task is more indicative of the effectiveness of a graphical encoding than the type of data represented. 1. Overview of Perceptual Issues In producing a design to visualize search results for a digital library called Envision [12, 13, 19], we found that choosing graphical devices and document attributes to be encoded with each graphical device is a surprisingly difficult task. By graphical devices we mean those visual display elements (e.g., icon color hue, color saturation, flash rate, shape, size, alphanumeric identifiers, position, etc.) used to convey encoded information. Providing access to graphically encoded information requires attention to a range of human cognitive activities, explored by researchers under at least three rubrics: psychophysics of visual search and identification tasks, graphical perception, and graphical language development. Research in these areas provides scientific guidance for design and evaluation of graphical encoding that might otherwise be reduced to opinion and personal taste. Because of space limits, we discuss here only a small portion of the research on graphical encoding that has been conducted. Additional information is in [20]. Ware [29] provides a broader review of perceptual issues pertaining to information visualization. Especially useful for designers are rankings by effectiveness of various graphical devices in communicating different types of data (e.g., nominal, ordinal, or quantitative). Christ [6] provides such rankings in the context of visual search and identification tasks and provides some empirical evidence to support his findings. Mackinlay [17] suggests rankings of graphical devices for conveying nominal, ordinal, and quantitative data in the context of graphical language design, but these rankings have not been empirically validated [personal communication]. Cleveland and McGill [8, 9] have empirically validated their ranking of graphical devices for quantitative data. The rankings suggested by Christ, Mackinlay, and Cleveland and McGill are not the same, while other literature offers more conflicting viewpoints, suggesting the need for further research. 1.1 Visual Search and Identification Tasks Psychophysics is a branch of psychology concerned with the \"relationship between characteristics of physical stimuli and the psychological experience they produce\" [28]. Studies in the psychophysics of visual search and identification tasks have roots in signal detection theory pertaining to air traffic control, process control, and cockpit displays. 
These studies suggest rankings of graphical devices [6, 7] described later in this paper and point out significant perceptual interactions among graphical devices used in multidimensional displays. Visual search tasks require visual scanning to locate one or more targets [6, 7, 31]. With a scatterplotlike display (sometimes known as a starfield display [1]), users perform a visual search task when they scan the display to determine the presence of one or more symbols meeting some specific criterion and to locate those symbols if present. For identification tasks, users go beyond visual search to report semantic data about symbols of interest, typically by answering true/false questions or by noting facts about encoded data [6, 7]. Measures of display effectiveness for visual search and identification tasks include time, accuracy, and cognitive workload. A more thorough introduction to signal detection theory may be found in Wickens’ book [31]. Issues involved in studies that influenced the Envision design are complex and findings are sometimes contradictory. Following is a representative overview, but many imProceedings of the IEEE Symposium on Information Visualization 2002 (InfoVis’02) 1522-404X/02 $17.00 © 2002 IEEE portant details are necessarily omitted due to space limitations. 1.1.1 Unidimensional Displays. For unidimensional displays — those involving a single graphical code — Christ’s [6, 7] meta-analysis of 42 prior studies suggests the following ranking of graphical devices by effectiveness: color, size, brightness or alphanumeric, and shape. Other studies confirm that color is the most effective graphical device for reducing display search time [7, 14, 25] but find it followed by shape and then letters or digits [7]. Benefits of color-coding increase for high-density displays [15, 16], but using shapes too similar to one another actually increases search time [22]. For identification tasks measuring accuracy with unidimensional displays, Christ’s work [6, 7] suggests the following ranking of graphical devices by effectiveness: alphanumeric, color, brightness, size, and shape. In a later study, Christ found that digits gave the most accurate results but that color, letters, and familiar geometric shapes all produced equal results with experienced subjects [7]. However, Jubis [14] found that shape codes yielded faster mean reaction times than color codes, while Kopala [15] found no significant difference among codes for identification tasks. 1.1.2 Multidimensional Displays. For multidimensional displays — those using multiple graphical devices combined in one visual object to encode several pieces of information — codes may be either redundant or non-redundant. A redundant code using color and shape to encode the same information yields average search speeds even faster than non-redundant color or shape encoding [7]. Used redundantly with other codes, color yields faster results than shape, and either color or shape is superior as a redundant code to both letters and digits [7]. Jubis [14] confirms that a redundant code involving both color and shape is superior to shape coding but is approximately equal to non-redundant color-coding. For difficult tasks, using redundant color-coding may significantly reduce reaction time and increase accuracy [15]. Benefits of redundant color-coding increase as displays become more cluttered or complex [15]. 1.1.3 Interactions Among Graphical Devices . 
Significant interactions among graphical devices complicate design for multidimensional displays. Color-coding interferes with all achromatic codes, reducing accuracy by as much as 43% [6]. Indeed, Luder [16] suggests that color has such cognitive dominance that it should only be used to encode the most important data and in situations where dependence on color-coding does not increase risk. While we found no supporting empirical evidence, we believe size and shape interact, causing the shape of very small objects to be perceived less accurately. 1.1.4 Ranges of Graphical Devices. The number of instances of each graphical device (e.g., how many colors or shapes are used in the code) is significant because it limits the range or number of values encoded using that device [3]. The conservative recommendation is to use only five or six distinct colors or shapes [3, 7, 27, 31]. However, some research suggests that 10 [3] to 18 [24] colors may be used for search tasks. 1.1.5 Integration vs. Non-integration Tasks. Later research has focused on how humans extract information from a multidimensional display to perform both integration and non-integration tasks [4, 26, 27]. An integration task uses information encoded non-redundantly with two or more graphical devices to reach a single decision or action, while a non-integration task bases decisions or actions on information encoded in only one graphical device. Studies [4, 30] provide evidence that object displays, in which multiple visual attributes of a single object present information about multiple characteristics, facilitate integration tasks, especially where multiple graphical encodings all convey information relevant to the task at hand. However, object displays hinder non-integration tasks, as additional effort is required to filter out unwanted information communicated by the objects. 1.2 Graphical Perception Graphical perception is “the visual decoding of the quantitative and qualitative information encoded on graphs,” where visual decoding means “instantaneous perception of the visual field that comes without apparent mental effort” [9, p. 828]. Cleveland and McGill studied the perception of quantitative data such as “numerical values of a variable...that are not highly discrete...” [9, p. 828]. They have identified and empirically validated a ranking of graphical devices for displaying quantitative data, ordered as follows from most to least accurately perceived [9, p. 830]: Position along a common scale; Position on identical but non-aligned scales; Length; Angle or Slope; Area; Volume, Density, and/or Color saturation; Color hue. 1.3 Graphical Language Development Graphical language development is based on the assertion that graphical devices communicate information equivalent to sentences [17] and thus call for attention to appropriate use of each graphical device. In his discussion of graphical languages, Mackinlay [17] suggests three different rankings of the effectiveness of various graphical devices in communicating quantitative (numerical), ordinal (ranked), and nominal (non-ordinal textual) data about objects. Although based on psychophysical and graphical perception research, Mackinlay's rankings have not been experimentally validated [personal communication]. 
1.4 Observations on Prior Research These studies make it clear that no single graphical device works equally well for all users, nor does an", "title": "" }, { "docid": "17253a37e4f26cb6dabf1e1eb4e9a878", "text": "The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.", "title": "" }, { "docid": "fb4fcc4d5380c4123b24467c1ca2a8e3", "text": "Deep neural networks are traditionally trained using humandesigned stochastic optimization algorithms, such as SGD and Adam. Recently, the approach of learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully utilize the experience in human-designed optimizers, therefore have limitation in generalization ability. In this paper, a new optimizer, dubbed as HyperAdam, is proposed that combines the idea of “learning to optimize” and traditional Adam optimizer. Given a network for training, its parameter update in each iteration generated by HyperAdam is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are adaptively learned depending on the task. HyperAdam is modeled as a recurrent neural network with AdamCell, WeightCell and StateCell. 
It is justified to be state-of-the-art for various network training, such as multilayer perceptron, CNN and LSTM.", "title": "" }, { "docid": "cdb7380ca1a4b5a8059e3e4adc6b7ea2", "text": "In this paper, tunable microstrip bandpass filters with two adjustable transmission poles and compensable coupling are proposed. The fundamental structure is based on a half-wavelength (λ/2) resonator with a center-tapped open-stub. Microwave varactors placed at various internal nodes separately adjust the filter's center frequency and bandwidth over a wide tuning range. The constant absolute bandwidth is achieved at different center frequencies by maintaining the distance between the in-band transmission poles. Meanwhile, the coupling strength could be compensable by tuning varactors that are side and embedding loaded in the parallel coupled microstrip lines (PCMLs). As a demonstrator, a second-order filter with seven tuning varactors is implemented and verified. A frequency range of 0.58-0.91 GHz with a 1-dB bandwidth tuning from 115 to 315 MHz (i.e., 12.6%-54.3% fractional bandwidth) is demonstrated. Specifically, the return loss of passbands with different operating center frequencies can be achieved with same level, i.e., about 13.1 and 11.6 dB for narrow and wide passband responses, respectively. To further verify the etch-tolerance characteristics of the proposed prototype filter, another second-order filter with nine tuning varactors is proposed and fabricated. The measured results exhibit that the tunable fitler with the embedded varactor-loaded PCML has less sensitivity to fabrication tolerances. Meanwhile, the passband return loss can be achieved with same level of 20 dB for narrow and wide passband responses, respectively.", "title": "" }, { "docid": "24e943940f1bd1328dba1de2e15d3137", "text": "The use of external databases to generate training data, also known as Distant Supervision, has become an effective way to train supervised relation extractors but this approach inherently suffers from noise. In this paper we propose a method for noise reduction in distantly supervised training data, using a discriminative classifier and semantic similarity between the contexts of the training examples. We describe an active learning strategy which exploits hierarchical clustering of the candidate training samples. To further improve the effectiveness of this approach, we study the use of several methods for dimensionality reduction of the training samples. We find that semantic clustering of training data combined with cluster-based active learning allows filtering the training data, hence facilitating the creation of a clean training set for relation extraction, at a reduced manual labeling cost.", "title": "" }, { "docid": "ebb4c6a7f74ca3cede615542bcb0b11b", "text": "The proposed system of the digitally emulated current mode control for a DC-DC boost converter using the FPGA is implemented by the emulation technique to generate PWM pulse. A reasonable A/D converter with a few MSPS conversion rate is good enough to control the DC-DC converter with 100 kHz switching frequency. It is found the experimental data show the good static and dynamic-response characteristics, which means that the proposed system can be integrated into one chip digital IC for power-source-control with reasonable price.", "title": "" }, { "docid": "01a4b2be52e379db6ace7fa8ed501805", "text": "The goal of our work is to complete the depth channel of an RGB-D image. 
Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.", "title": "" } ]
scidocsrr
7f94a0e839dbdd0cb698f1f04f9f83c1
Design for 5G Mobile Network Architecture
[ { "docid": "4412bca4e9165545e4179d261828c85c", "text": "Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-real-time services. On the other side, there are many wireless technologies that have proven to be important, with the most important ones being 802.11 Wireless Local Area Networks (WLAN) and 802.16 Wireless Metropolitan Area Networks (WMAN), as well as ad-hoc Wireless Personal Area Network (WPAN) and wireless networks for digital TV and radio broadcast. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The paper also proposes intelligent Internet phone concept where the mobile phone can choose the best connections by selected constraints and dynamically change them during a single end-to-end connection. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.", "title": "" } ]
[ { "docid": "bda4bdc27e9ea401abb214c3fb7c9813", "text": "Lipedema is a common, but often underdiagnosed masquerading disease of obesity, which almost exclusively affects females. There are many debates regarding the diagnosis as well as the treatment strategies of the disease. The clinical diagnosis is relatively simple, however, knowledge regarding the pathomechanism is less than limited and curative therapy does not exist at all demanding an urgent need for extensive research. According to our hypothesis, lipedema is an estrogen-regulated polygenetic disease, which manifests in parallel with feminine hormonal changes and leads to vasculo- and lymphangiopathy. Inflammation of the peripheral nerves and sympathetic innervation abnormalities of the subcutaneous adipose tissue also involving estrogen may be responsible for neuropathy. Adipocyte hyperproliferation is likely to be a secondary phenomenon maintaining a vicious cycle. Herein, the relevant articles are reviewed from 1913 until now and discussed in context of the most likely mechanisms leading to the disease, which could serve as a starting point for further research.", "title": "" }, { "docid": "a727d28ed4153d9d9744b3e2b5e47251", "text": "Darts is enjoyed both as a pub game and as a professional competitive activity.Yet most players aim for the highest scoring region of the board, regardless of their level of skill. By modelling a dart throw as a two-dimensional Gaussian random variable, we show that this is not always the optimal strategy.We develop a method, using the EM algorithm, for a player to obtain a personalized heat map, where the bright regions correspond to the aiming locations with high (expected) pay-offs. This method does not depend in any way on our Gaussian assumption, and we discuss alternative models as well.", "title": "" }, { "docid": "9a4fc12448d166f3a292bfdf6977745d", "text": "Enabled by the rapid development of virtual reality hardware and software, 360-degree video content has proliferated. From the network perspective, 360-degree video transmission imposes significant challenges because it consumes 4 6χ the bandwidth of a regular video with the same resolution. To address these challenges, in this paper, we propose a motion-prediction-based transmission mechanism that matches network video transmission to viewer needs. Ideally, if viewer motion is perfectly known in advance, we could reduce bandwidth consumption by 80%. Practically, however, to guarantee the quality of viewing experience, we have to address the random nature of viewer motion. Based on our experimental study of viewer motion (comprising 16 video clips and over 150 subjects), we found the viewer motion can be well predicted in 100∼500ms. We propose a machine learning mechanism that predicts not only viewer motion but also prediction deviation itself. The latter is important because it provides valuable input on the amount of redundancy to be transmitted. Based on such predictions, we propose a targeted transmission mechanism that minimizes overall bandwidth consumption while providing probabilistic performance guarantees. Real-data-based evaluations show that the proposed scheme significantly reduces bandwidth consumption while minimizing performance degradation, typically a 45% bandwidth reduction with less than 0.1% failure ratio.", "title": "" }, { "docid": "850e9c1beae0635e629fbb44bda14dc7", "text": "Power law distribution seems to be an important characteristic of web graphs. 
Several existing web graph models generate power law graphs by adding new vertices and non-uniform edge connectivities to existing graphs. Researchers have conjectured that preferential connectivity and incremental growth are both required for the power law distribution. In this paper, we propose a different web graph model with power law distribution that does not require incremental growth. We also provide a comparison of our model with several others in their ability to predict web graph clustering behavior.", "title": "" }, { "docid": "e7664a3c413f86792b98912a0241a6ac", "text": "Seq2seq learning has produced promising results on summarization. However, in many cases, system summaries still struggle to keep the meaning of the original intact. They may miss out important words or relations that play critical roles in the syntactic structure of source sentences. In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to summary sentence. The approach naturally combines source dependency structure with the copy mechanism of an abstractive sentence summarizer. Experimental results demonstrate the effectiveness of incorporating source-side syntactic information in the system, and our proposed approach compares favorably to state-of-the-art methods.", "title": "" }, { "docid": "55658c75bcc3a12c1b3f276050f28355", "text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC <; 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. 
In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "a871176628b28af28f630c447236a2d9", "text": "More than 70 years ago, the filamentous ascomycete Trichoderma reesei was isolated on the Solomon Islands due to its ability to degrade and thrive on cellulose containing fabrics. This trait that relies on its secreted cellulases is nowadays exploited by several industries. Most prominently in biorefineries which use T. reesei enzymes to saccharify lignocellulose from renewable plant biomass in order to produce biobased fuels and chemicals. In this review we summarize important milestones of the development of T. reesei as the leading production host for biorefinery enzymes, and discuss emerging trends in strain engineering. Trichoderma reesei has very recently also been proposed as a consolidated bioprocessing organism capable of direct conversion of biopolymeric substrates to desired products. We therefore cover this topic by reviewing novel approaches in metabolic engineering of T. reesei.", "title": "" }, { "docid": "101ecfb3d6a20393d147cd2061414369", "text": "In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.", "title": "" }, { "docid": "988c161ceae388f5dbcdcc575a9fa465", "text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. 
Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.", "title": "" }, { "docid": "0c420c064519e15e071660c750c0b7e3", "text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.", "title": "" }, { "docid": "22b1974fa802c9ea224e6b0b6f98cedb", "text": "This paper presents a human-inspired control approach to bipedal robotic walking: utilizing human data and output functions that appear to be intrinsic to human walking in order to formally design controllers that provably result in stable robotic walking. Beginning with human walking data, outputs-or functions of the kinematics-are determined that result in a low-dimensional representation of human locomotion. These same outputs can be considered on a robot, and human-inspired control is used to drive the outputs of the robot to the outputs of the human. The main results of this paper are that, in the case of both under and full actuation, the parameters of this controller can be determined through a human-inspired optimization problem that provides the best fit of the human data while simultaneously provably guaranteeing stable robotic walking for which the initial condition can be computed in closed form. These formal results are demonstrated in simulation by considering two bipedal robots-an underactuated 2-D bipedal robot, AMBER, and fully actuated 3-D bipedal robot, NAO-for which stable robotic walking is automatically obtained using only human data. Moreover, in both cases, these simulated walking gaits are realized experimentally to obtain human-inspired bipedal walking on the actual robots.", "title": "" }, { "docid": "f409eace05cd617355440509da50d685", "text": "Social media platforms encourage people to share diverse aspects of their daily life. 
Among these, shared health related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the english language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.", "title": "" }, { "docid": "16ce10ae21b7ef66746937ba6c9bf321", "text": "Recent years, deep learning is increasingly prevalent in the field of Software Engineering (SE). However, many open issues still remain to be investigated. How do researchers integrate deep learning into SE problems? Which SE phases are facilitated by deep learning? Do practitioners benefit from deep learning? The answers help practitioners and researchers develop practical deep learning models for SE tasks. To answer these questions, we conduct a bibliography analysis on 98 research papers in SE that use deep learning techniques. We find that 41 SE tasks in all SE phases have been facilitated by deep learning integrated solutions. In which, 84.7% papers only use standard deep learning models and their variants to solve SE problems. The practicability becomes a concern in utilizing deep learning techniques. How to improve the effectiveness, efficiency, understandability, and testability of deep learning based solutions may attract more SE researchers in the future. Introduction Driven by the success of deep learning in data mining and pattern recognition, recent years have witnessed an increasing trend for industrial practitioners and academic researchers to integrate deep learning into SE tasks [1]-[3]. For typical SE tasks, deep learning helps SE participators extract requirements from natural language text [1], generate source code [2], predict defects in software [3], etc. As an initial statistics of research papers in SE in this study, deep learning has achieved competitive performance against previous algorithms on about 40 SE tasks. There are at least 98 research papers published or accepted in 66 venues, integrating deep learning into SE tasks. 
Despite the encouraging amount of papers and venues, there exists little overview analysis on deep learning in SE, e.g., the common way to integrate deep learning into SE, the SE phases facilitated by deep learning, the interests of SE practitioners on deep learning, etc. Understanding these questions is important. On the one hand, it helps practitioners and researchers get an overview understanding of deep learning in SE. On the other hand, practitioners and researchers can develop more practical deep learning models according to the analysis. For this purpose, this study conducts a bibliography analysis on research papers in the field of SE that use deep learning techniques. In contrast to literature reviews,", "title": "" }, { "docid": "986279f6f47189a6d069c0336fa4ba94", "text": "Compared to the traditional single-phase-shift control, dual-phase-shift (DPS) control can greatly improve the performance of the isolated bidirectional dual-active-bridge dc-dc converter (IBDC). This letter points out some wrong knowledge about transmission power of IBDC under DPS control in the earlier studies. On this basis, this letter gives the detailed theoretical and experimental analyses of the transmission power of IBDC under DPS control. And the experimental results showed agreement with theoretical analysis.", "title": "" }, { "docid": "19792ab5db07cd1e6cdde79854ba8cb7", "text": "Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute of a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.", "title": "" }, { "docid": "220a0be60be41705a95908df8180cf95", "text": "Since the introduction of the first power module by Semikron in 1975, many innovations have been made to improve the thermal, electrical, and mechanical performance of power modules. These innovations in packaging technology focus on the enhancement of the heat dissipation and thermal cycling capability of the modules. 
Thermal cycles, caused by varying load and environmental operating conditions, induce high mechanical stress in the interconnection layers of the power module due to the different coefficients of thermal expansion (CTE), leading to fatigue and growth of microcracks in the bonding materials. As a result, the lifetime of power modules can be severely limited in practical applications. Furthermore, to reduce the size and weight of converters, the semiconductors are being operated at higher junction temperatures. Higher temperatures are especially of great interest for use of wide-?bandgap materials, such as SiC and GaN, because these materials leverage their material characteristics, particularly at higher temperatures. To satisfy these tightened requirements, on the one hand, conventional power modules, i.e., direct bonded Cu (DBC)-based systems with bond wire contacts, have been further improved. On the other hand, alternative packaging techniques, e.g., chip embedding into printed circuit boards (PCBs) and power module packaging based on the selective laser melting (SLM) technique, have been developed, which might constitute an alternative to conventional power modules in certain applications.", "title": "" }, { "docid": "06f1c7daafcf59a8eb2ddf430d0d7f18", "text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. 
Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.", "title": "" }, { "docid": "deb3ac73ec2e8587371c6078dc4b2205", "text": "Natural antimicrobials as well as essential oils (EOs) have gained interest to inhibit pathogenic microorganisms and to control food borne diseases. Campylobacter spp. are one of the most common causative agents of gastroenteritis. In this study, cardamom, cumin, and dill weed EOs were evaluated for their antibacterial activities against Campylobacter jejuni and Campylobacter coli by using agar-well diffusion and broth microdilution methods, along with the mechanisms of antimicrobial action. Chemical compositions of EOs were also tested by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The results showed that cardamom and dill weed EOs possess greater antimicrobial activity than cumin with larger inhibition zones and lower minimum inhibitory concentrations. The permeability of cell membrane and cell membrane integrity were evaluated by determining relative electric conductivity and release of cell constituents into supernatant at 260 nm, respectively. Moreover, effect of EOs on the cell membrane of Campylobacter spp. was also investigated by measuring extracellular ATP concentration. Increase of relative electric conductivity, extracellular ATP concentration, and cell constituents' release after treatment with EOs demonstrated that tested EOs affected the membrane integrity of Campylobacter spp. The results supported high efficiency of cardamom, cumin, and dill weed EOs to inhibit Campylobacter spp. by impairing the bacterial cell membrane.", "title": "" } ]
scidocsrr
2255e1fb003f3cc7b3e6c8030276c8f9
Non-contact video-based pulse rate measurement on a mobile service robot
[ { "docid": "2531d8d05d262c544a25dbffb7b43d67", "text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.", "title": "" } ]
[ { "docid": "44672e9dc60639488800ad4ae952f272", "text": "The GPS technology and new forms of urban geography have changed the paradigm for mobile services. As such, the abundant availability of GPS traces has enabled new ways of doing taxi business. Indeed, recent efforts have been made on developing mobile recommender systems for taxi drivers using Taxi GPS traces. These systems can recommend a sequence of pick-up points for the purpose of maximizing the probability of identifying a customer with the shortest driving distance. However, in the real world, the income of taxi drivers is strongly correlated with the effective driving hours. In other words, it is more critical for taxi drivers to know the actual driving routes to minimize the driving time before finding a customer. To this end, in this paper, we propose to develop a cost-effective recommender system for taxi drivers. The design goal is to maximize their profits when following the recommended routes for finding passengers. Specifically, we first design a net profit objective function for evaluating the potential profits of the driving routes. Then, we develop a graph representation of road networks by mining the historical taxi GPS traces and provide a Brute-Force strategy to generate optimal driving route for recommendation. However, a critical challenge along this line is the high computational cost of the graph based approach. Therefore, we develop a novel recursion strategy based on the special form of the net profit function for searching optimal candidate routes efficiently. Particularly, instead of recommending a sequence of pick-up points and letting the driver decide how to get to those points, our recommender system is capable of providing an entire driving route, and the drivers are able to find a customer for the largest potential profit by following the recommendations. This makes our recommender system more practical and profitable than other existing recommender systems. Finally, we carry out extensive experiments on a real-world data set collected from the San Francisco Bay area and the experimental results clearly validate the effectiveness of the proposed recommender system.", "title": "" }, { "docid": "6224f4f3541e9cd340498e92a380ad3f", "text": "A personal story: From philosophy to software.", "title": "" }, { "docid": "da0de29348f5414f33bacad850fa79d1", "text": "This paper presents a construction algorithm for the short block irregular low-density parity-check (LDPC) codes. By applying a magic square theorem as a part of the matrix construction, a newly developed algorithm, the so-called Magic Square Based Algorithm (MSBA), is obtained. The modified array codes are focused on in this study since the reduction of 1s can lead to simple encoding and decoding schemes. Simulation results based on AWGN channels show that with the code rate of 0.8 and SNR 5 dB, the BER of 10 can be obtained whilst the number of decoding iteration is relatively low.", "title": "" }, { "docid": "d8272965f75b55bafb29c0eb4892f813", "text": "One expensive step when defining crowdsourcing tasks is to define the examples and control questions for instructing the crowd workers. In this paper, we introduce a self-training strategy for crowdsourcing. The main idea is to use an automatic classifier, trained on weakly supervised data, to select examples associated with high confidence. These are used by our automatic agent to explain the task to crowd workers with a question answering approach. 
We compared our relation extraction system trained with data annotated (i) with distant supervision and (ii) by workers instructed with our approach. The analysis shows that our method relatively improves the relation extraction system by about 11% in F1.", "title": "" }, { "docid": "2841406ba32b534bb85fb970f2a00e58", "text": "We present WHATSUP, a collaborative filtering system for disseminating news items in a large-scale dynamic setting with no central authority. WHATSUP constructs an implicit social network based on user profiles that express the opinions of users about the news items they receive (like-dislike). Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests. News items are disseminated through a novel heterogeneous gossip protocol that (1) biases the orientation of its targets towards those with similar interests, and (2) amplifies dissemination based on the level of interest in every news item. We report on an extensive evaluation of WHATSUP through (a) simulations, (b) a ModelNet emulation on a cluster, and (c) a PlanetLab deployment based on real datasets. We show that WHATSUP outperforms various alternatives in terms of accurate and complete delivery of relevant news items while preserving the fundamental advantages of standard gossip: namely, simplicity of deployment and robustness.", "title": "" }, { "docid": "ecc31d1d7616e014a3a032d14e149e9b", "text": "It has been proposed that sexual stimuli will be processed in a comparable manner to other evolutionarily meaningful stimuli (such as spiders or snakes) and therefore elicit an attentional bias and more attentional engagement (Spiering and Everaerd, In E. Janssen (Ed.), The psychophysiology of sex (pp. 166-183). Bloomington: Indiana University Press, 2007). To investigate early and late attentional processes while looking at sexual stimuli, heterosexual men (n = 12) viewed pairs of sexually preferred (images of women) and sexually non-preferred images (images of girls, boys or men), while eye movements were measured. Early attentional processing (initial orienting) was assessed by the number of first fixations and late attentional processing (maintenance of attention) was assessed by relative fixation time. Results showed that relative fixation time was significantly longer for sexually preferred stimuli than for sexually non-preferred stimuli. Furthermore, the first fixation was more often directed towards the preferred sexual stimulus, when simultaneously presented with a non-sexually preferred stimulus. Thus, the current study showed for the first time an attentional bias to sexually relevant stimuli when presented simultaneously with sexually irrelevant pictures. This finding, along with the discovery that heterosexual men maintained their attention to sexually relevant stimuli, highlights the importance of investigating early and late attentional processes while viewing sexual stimuli. Furthermore, the current study showed that sexually relevant stimuli are favored by the human attentional system.", "title": "" }, { "docid": "63dc375e505ceb5488a06306775969ba", "text": "N-Methyl-d-aspartate (NMDA) receptors belong to the family of ionotropic glutamate receptors, which mediate most excitatory synaptic transmission in mammalian brains. Calcium permeation triggered by activation of NMDA receptors is the pivotal event for initiation of neuronal plasticity. 
Here, we show the crystal structure of the intact heterotetrameric GluN1-GluN2B NMDA receptor ion channel at 4 angstroms. The NMDA receptors are arranged as a dimer of GluN1-GluN2B heterodimers with the twofold symmetry axis running through the entire molecule composed of an amino terminal domain (ATD), a ligand-binding domain (LBD), and a transmembrane domain (TMD). The ATD and LBD are much more highly packed in the NMDA receptors than non-NMDA receptors, which may explain why ATD regulates ion channel activity in NMDA receptors but not in non-NMDA receptors.", "title": "" }, { "docid": "6d227bbf8df90274f44a26d9c269c663", "text": "Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system is small, fast and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer-oriented newsgroups according to subject, achieving as high as an 80% correct classification rate. There are also several obvious directions for improving the system’s classification performance in those cases where it did not do as well. The system is based on calculating and comparing profiles of N-gram frequencies. First, we use the system to compute profiles on training set data that represent the various categories, e.g., language samples or newsgroup content samples. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document’s profile and each of the category profiles. The system selects the category whose profile has the smallest distance to the document’s profile. The profiles involved are quite small, typically 10K bytes for a category training set, and less than 4K bytes for an individual document. Using N-gram frequency profiles provides a simple and reliable way to categorize documents in a wide range of classification tasks.", "title": "" }, { "docid": "86c0547368eb9003beed2ba7eefc75a4", "text": "Electronic social media offers new opportunities for informal communication in written language, while at the same time, providing new datasets that allow researchers to document dialect variation from records of natural communication among millions of individuals. The unprecedented scale of this data enables the application of quantitative methods to automatically discover the lexical variables that distinguish the language of geographical areas such as cities. This can be paired with the segmentation of geographical space into dialect regions, within the context of a single joint statistical model — thus simultaneously identifying coherent dialect regions and the words that distinguish them. 
Finally, a diachronic analysis reveals rapid changes in the geographical distribution of these lexical features, suggesting that statistical analysis of social media may offer new insights on the diffusion of lexical change.", "title": "" }, { "docid": "149ffd270f39a330f4896c7d3aa290be", "text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "1b030e734e3ddfb5e612b1adc651b812", "text": "Clustering1is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purpose. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data set. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity based cluster validation in parallel and distributed manner using Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves properly cluster validation of large datasets.", "title": "" }, { "docid": "8d8e5c06269e366044f0e3d5c3be19d0", "text": "A social network (SN) is a network containing nodes – social entities (people or groups of people) and links between these nodes. Social networks are examples of more general concept of complex networks and SNs are usually free-scale and have power distribution of node degree. Overall, several types of social networks can be enumerated: (i) simple SNs, (ii) multi-layered SNs (with many links between a pair of nodes), (iii) bipartite or multi-modal, heterogeneous SNs (with two or many different types of nodes), (iv) multidimensional SNs (reflecting the data warehousing multidimensional modelling concept), and some more specific like (v) temporal SNs, (vi) large scale SNs, and (vii) virtual SNs. For all these social networks suitable analytical methods may be applied commonly called social network analysis (SNA). 
They cover in particular: appropriate structural measures, efficient algorithms for their calculation, statistics and data mining methods, e.g. extraction of social communities (clustering). Some types of social networks have their own measures and methods developed. Several real application domains of SNA may be distinguished: classification of nodes for the purpose of marketing, evaluation of organizational structure versus communication structures in companies, recommender systems for hidden knowledge acquisition and for user support in web 2.0, analysis of social groups on web forums and prediction of their evolution. The above SNA methods and applications will be discussed in some details. J. Pokorný, V. Snášel, K. Richta (Eds.): Dateso 2012, pp. 151–151, ISBN 978-80-7378-171-2.", "title": "" }, { "docid": "2f23d51ffd54a6502eea07883709d016", "text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.", "title": "" }, { "docid": "9998497c000fa194bf414604ff0d69b2", "text": "By embedding shorting vias, a dual-feed and dual-band L-probe patch antenna, with flexible frequency ratio and relatively small lateral size, is proposed. Dual resonant frequency bands are produced by two radiating patches located in different layers, with the lower patch supported by shorting vias. The measured impedance bandwidths, determined by 10 dB return loss, of the two operating bands reach 26.6% and 42.2%, respectively. Also the radiation patterns are stable over both operating bands. Simulation results are compared well with experiments. This antenna is highly suitable to be used as a base station antenna for multiband operation.", "title": "" }, { "docid": "c340cbb5f6b062caeed570dc2329e482", "text": "We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the \"address-event representation\" (AER). 
We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses characteristics, in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.", "title": "" }, { "docid": "0fca0826e166ddbd4c26fe16086ff7ec", "text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages, it can cause at the corners of the mouth and in gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolyin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.", "title": "" }, { "docid": "a5296748b0a93696e7b15f7db9d68384", "text": "Microscopic analysis of breast tissues is necessary for a definitive diagnosis of breast cancer which is the most common cancer among women. Pathology examination requires time consuming scanning through tissue images under different magnification levels to find clinical assessment clues to produce correct diagnoses. Advances in digital imaging techniques offers assessment of pathology images using computer vision and machine learning methods which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnifications using convolutional neural networks (CNNs). We propose two different architectures; single task CNN is used to predict malignancy and multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on BreaKHis dataset. Experimental results show that our magnification independent CNN approach improved the performance of magnification specific model. Our results in this limited set of training data are comparable with previous state-of-the-art results obtained by hand-crafted features. 
However, unlike previous methods, our approach has potential to directly benefit from additional training data, and such additional data could be captured with same or different magnification levels than previous data.", "title": "" }, { "docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3", "text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.", "title": "" }, { "docid": "904c8b4be916745c7d1f0777c2ae1062", "text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.", "title": "" }, { "docid": "8fd3c6231e8c8522157439edc7b7344f", "text": "We are implementing ADAPT, a cognitive architecture for a Pioneer mobile robot, to give the robot the full range of cognitive abilities including perception, use of natural language, learning and the ability to solve complex problems. Our perspective is that an architecture based on a unified theory of robot cognition has the best chance of attaining human-level performance. Existing work in cognitive modeling has accomplished much in the construction of such unified cognitive architectures in areas other than robotics; however, there are major respects in which these architectures are inadequate for robot cognition. 
This paper examines two major inadequacies of current cognitive architectures for robotics: the absence of support for true concurrency and for active", "title": "" } ]
scidocsrr
01a347689589ebb9a65937b2e7956c34
Dual Polarized Dual Antennas for 1.7–2.1 GHz LTE Base Stations
[ { "docid": "2cebd2fd12160d2a3a541989293f10be", "text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.", "title": "" } ]
[ { "docid": "ff36b5154e0b85faff09a5acbb39bb0a", "text": "During a frequent survey in the northwest Indian Himalayan region, a new species-Cordyceps macleodganensis-was encountered. This species is described on the basis of its macromorphological features, microscopic details, and internal transcribed spacer sequencing. This species showed only 90% resemblance to Cordyceps gracilis. The chemical composition of the mycelium showed protein (14.95 ± 0.2%) and carbohydrates (59.21 ± 3.8%) as the major nutrients. This species showed appreciable amounts of P-carotene, lycopene, phenolic compounds, polysaccharides, and flavonoids. Mycelial culture of this species showed higher effectiveness for ferric-reducing antioxidant power, DPPH radical scavenging activity, ferrous ion-chelating activity, and scavenging ability on superoxide anion-derived radicals, calculated by half-maximal effective concentrations.", "title": "" }, { "docid": "8eb96ae8116a16e24e6a3b60190cc632", "text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.", "title": "" }, { "docid": "3f06fc0b50a1de5efd7682b4ae9f5a46", "text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. 
The results show that our system produces more realistically proportioned line drawings.", "title": "" }, { "docid": "4353dc9fb9d8228d4d6c38d5f94ce068", "text": "In this paper we generalize the quantum algorithm for computing short discrete logarithms previously introduced by Eker̊a [2] so as to allow for various tradeoffs between the number of times that the algorithm need be executed on the one hand, and the complexity of the algorithm and the requirements it imposes on the quantum computer on the other hand. Furthermore, we describe applications of algorithms for computing short discrete logarithms. In particular, we show how other important problems such as those of factoring RSA integers and of finding the order of groups under side information may be recast as short discrete logarithm problems. This immediately gives rise to an algorithm for factoring RSA integers that is less complex than Shor’s general factoring algorithm in the sense that it imposes smaller requirements on the quantum computer. In both our algorithm and Shor’s algorithm, the main hurdle is to compute a modular exponentiation in superposition. When factoring an n bit integer, the exponent is of length 2n bits in Shor’s algorithm, compared to slightly more than n/2 bits in our algorithm.", "title": "" }, { "docid": "d2761d58c3197817be0fa89cf6da62fb", "text": "The proper restraint of the destructive potential of the immune system is essential for maintaining health. Regulatory T (Treg) cells ensure immune homeostasis through their defining ability to suppress the activation and function of other leukocytes. The expression of the transcription factor forkhead box protein P3 (FOXP3) is a well-recognized characteristic of Treg cells, and FOXP3 is centrally involved in the establishment and maintenance of the Treg cell phenotype. In this Review, we summarize how the expression and activity of FOXP3 are regulated across multiple layers by diverse factors. The therapeutic implications of these topics for cancer and autoimmunity are also discussed.", "title": "" }, { "docid": "79729b8f7532617015cbbdc15a876a5c", "text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.", "title": "" }, { "docid": "060ba80e2f3aeef5a3a8d69a14005645", "text": "This paper presents an application of dynamically driven recurrent networks (DDRNs) in online electric vehicle (EV) battery analysis. In this paper, a nonlinear autoregressive with exogenous inputs (NARX) architecture of the DDRN is designed for both state of charge (SOC) and state of health (SOH) estimation. Unlike other techniques, this estimation strategy is subject to the global feedback theorem (GFT) which increases both computational intelligence and robustness while maintaining reasonable simplicity. 
The proposed technique requires no model or knowledge of battery's internal parameters, but rather uses the battery's voltage, charge/discharge currents, and ambient temperature variations to accurately estimate battery's SOC and SOH simultaneously. The presented method is evaluated experimentally using two different batteries, namely lithium iron phosphate (LiFePO4) and lithium titanate (LTO), both subject to dynamic charge and discharge current profiles and change in ambient temperature. Results highlight the robustness of this method to battery's nonlinear dynamic nature, hysteresis, aging, dynamic current profile, and parametric uncertainties. The simplicity and robustness of this method make it suitable and effective for EVs’ battery management system (BMS).", "title": "" }, { "docid": "a26d98c1f9cb219f85153e04120053a7", "text": "The purpose of this paper is to examine the academic and athletic motivation and identify the factors that determine the academic performance among university students in the Emirates of Dubai. The study examined motivation based on a non-traditional measure adopting a scale to measure both academic as well as athletic motivation. Keywords-academic performance, academic motivation, athletic performance, university students, business management, academic achievement, career motivation, sports motivation", "title": "" }, { "docid": "19f96525e1e3dcc563a7b2138c8b1547", "text": "The state of the art in bidirectional search has changed significantly in a very short time period; we now can answer questions about unidirectional and bidirectional search that until very recently we were unable to answer. This paper is designed to provide an accessible overview of the recent research in bidirectional search in the context of the broader efforts over the last 50 years. We give particular attention to new theoretical results and the algorithms they inspire for optimal and near-optimal node expansions when finding a shortest path. Introduction and Overview Shortest path algorithms have a long history dating to Dijkstra’s algorithm (DA) (Dijkstra 1959). DA is the canonical example of a best-first search which prioritizes state expansions by their g-cost (distance from the start state). Historically, there were two enhancements to DA developed relatively quickly: bidirectional search and the use of heuristics. Nicholson (1966) suggested bidirectional search where the search proceeds from both the start and the goal simultaneously. In a two dimensional search space a search to radius r will visit approximately r^2 states. A bidirectional search will perform two searches of approximately (r/2)^2 states, a reduction of a factor of two. In exponential state spaces the reduction is from b^d to 2b^(d/2), an exponential gain in both memory and time. This is illustrated in Figure 1, where the large circle represents a unidirectional search towards the goal, while the smaller circles represent the two parts of a bidirectional search. Just two years later, DA was independently enhanced with admissible heuristics (distance estimates to the goal) that resulted in the A* algorithm (Hart, Nilsson, and Raphael 1968). A* is goal directed – the search is focused towards the goal by the heuristic. This significantly reduces the search effort required to find a path to the goal. 

The obvious challenge was whether these two enhancements could be effectively combined into bidirectional heuristic search (Bi-HS). Pohl (1969) first addressed this challenge showing that in practice unidirectional heuristic search (Uni-HS) seemed to beat out Bi-HS. Many Bi-HS algorithms were developed over the years (see a short survey below), but no such algorithm was shown to consistently outperform Uni-HS. Barker and Korf (2015) recently hypothesized that in most cases one should either use bidirectional brute-force search (Bi-BS) or Uni-HS (e.g. A*), but that Bi-HS is never the best approach. This work spurred further research into Bi-HS, and has led to new theoretical understanding of the nature of Bi-HS as well as new Bi-HS algorithms (e.g., MM, fMM and NBS described below) with strong theoretical guarantees. The purpose of this paper is to provide a high-level picture of this new line of work while placing it in the larger context of previous work on bidirectional search. While there are still many questions yet to answer, we have, for the first time, the full suite of analytic tools necessary to determine whether bidirectional search will be useful on a given problem instance. This is coupled with a Bi-HS algorithm that is guaranteed to expand no more than twice the minimum number of the necessary state expansions in practice. With these tools we can illustrate use-cases for bidirectional search and point to areas of future research. Terminology and Background We define a shortest-path problem as an n-tuple (start, goal, expF, expB, hF, hB), where the goal is to find the least-cost path between start and goal in a graph G. G is not provided a priori, but is provided implicitly through the expF and expB functions that can expand and return the forward (backwards) successors of any state. Bidirectional search algorithms interleave two separate searches, a search forward from start and a search backward from goal. We use fF, gF and hF to indicate f-, g-, and h-costs in the forward search and fB, gB and hB similarly in the backward search. Likewise, OpenF and OpenB store states generated in the forward and backward directions, respectively. Finally, gminF, gminB, fminF and fminB denote the minimal g- and f-values in OpenF and OpenB respectively. d(x, y) denotes the shortest distance between x and y. Front-to-end algorithms use two heuristic functions. The forward heuristic, hF, is forward admissible iff hF(u) ≤ d(u, goal) for all u in G and is forward consistent iff hF(u) ≤ d(u, u′) + hF(u′) for all u and u′ in G. The backward heuristic, hB, is backward admissible iff hB(v) ≤", "title": "" }, { "docid": "52a3cfb08e434560cd0638c682fca7de", "text": "This paper focuses on routing for vehicles getting access to infrastructure either directly or via multiple hops through other vehicles. We study Routing Protocol for Low power and lossy networks (RPL), a tree-based routing protocol designed for sensor networks. Many design elements from RPL are transferable to the vehicular environment. We provide a simulation performance study of RPL and RPL tuning in VANETs. 

More specifically, we seek to study the impact of RPL's various parameters and external factors (e.g., various timers and speeds) on its performance and obtain insights on RPL tuning for its use in VANETs.", "title": "" }, { "docid": "875e165e70000d15b11d724607be1917", "text": "Internet-based Chat environments such as Internet relay Chat and instant messaging pose a challenge for data mining and information retrieval systems due to the multi-threaded, overlapping nature of the dialog and the nonstandard usage of language. In this paper we present preliminary methods of topic detection and topic thread extraction that augment a typical TF-IDF-based vector space model approach with temporal relationship information between posts of the Chat dialog combined with WordNet hypernym augmentation. We show results that promise better performance than using only a TF-IDF bag-of-words vector space model.", "title": "" }, { "docid": "2049d654e8293ee3470834e3a9aeea5f", "text": "In this paper, we analyze the influence of Twitter users in sharing news articles that may affect the readers’ mood. We collected data of more than 2000 Twitter users who shared news articles from Corriere.it, a daily newspaper that provides mood metadata annotated by readers on a voluntary basis. We automatically annotated personality types and communication styles of Twitter users and analyzed the correlations between personality, communication style, Twitter metadata (such as followig and folllowers) and the type of mood associated to the articles they shared. We also run a feature selection task, to find the best predictors of positive and negative mood sharing, and a classification task. We automatically predicted positive and negative mood sharers with 61.7% F1-measure. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b633fbaab6e314535312709557ef1139", "text": "The purification of recombinant proteins by affinity chromatography is one of the most efficient strategies due to the high recovery yields and purity achieved. However, this is dependent on the availability of specific affinity adsorbents for each particular target protein. The diversity of proteins to be purified augments the complexity and number of specific affinity adsorbents needed, and therefore generic platforms for the purification of recombinant proteins are appealing strategies. This justifies why genetically encoded affinity tags became so popular for recombinant protein purification, as these systems only require specific ligands for the capture of the fusion protein through a pre-defined affinity tag tail. There is a wide range of available affinity pairs \"tag-ligand\" combining biological or structural affinity ligands with the respective binding tags. This review gives a general overview of the well-established \"tag-ligand\" systems available for fusion protein purification and also explores current unconventional strategies under development.", "title": "" }, { "docid": "ab4abd9033f87e08656f4363499bc09c", "text": "It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization. On the other hand, large minibatches are of great practical interest as they allow for a better exploitation of modern GPUs. Previous literature on the subject concentrated on how to adjust the main SGD parameters (in particular, the learning rate) when using large minibatches. 
In this work we introduce an additional feature, that we call minibatch persistency, that consists in reusing the same minibatch for K consecutive SGD iterations. The computational conjecture here is that a large minibatch contains a significant sample of the training set, so one can afford to slightly overfitting it without worsening generalization too much. The approach is intended to speedup SGD convergence, and also has the advantage of reducing the overhead related to data loading on the internal GPU memory. We present computational results on CIFAR-10 with an AlexNet architecture, showing that even small persistency values (K = 2 or 5) already lead to a significantly faster convergence and to a comparable (or even better) generalization than the standard “disposable minibatch” approach (K = 1), in particular when large minibatches are used. The lesson learned is that minibatch persistency can be a simple yet effective way to deal with large minibatches.", "title": "" }, { "docid": "20710cf5fac30800217c5b9568d3541a", "text": "BACKGROUND\nAcne scarring is treatable by a variety of modalities. Ablative carbon dioxide laser (ACL), while effective, is associated with undesirable side effect profiles. Newer modalities using the principles of fractional photothermolysis (FP) produce modest results than traditional carbon dioxide (CO(2)) lasers but with fewer side effects. A novel ablative CO(2) laser device use a technique called ablative fractional resurfacing (AFR), combines CO(2) ablation with a FP system. This study was conducted to compare the efficacy of Q-switched 1064-nm Nd: YAG laser and that of fractional CO(2) laser in the treatment of patients with moderate to severe acne scarring.\n\n\nMETHODS\nSixty four subjects with moderate to severe facial acne scars were divided randomly into two groups. Group A received Q-Switched 1064-nm Nd: YAG laser and group B received fractional CO(2) laser. Two groups underwent four session treatment with laser at one month intervals. Results were evaluated by patients based on subjective satisfaction and physicians' assessment and photo evaluation by two blinded dermatologists. Assessments were obtained at baseline and at three and six months after final treatment.\n\n\nRESULTS\nPost-treatment side effects were mild and transient in both groups. According to subjective satisfaction (p = 0.01) and physicians' assessment (p < 0.001), fractional CO(2) laser was significantly more effective than Q- Switched 1064- nm Nd: YAG laser.\n\n\nCONCLUSIONS\nFractional CO2 laser has the most significant effect on the improvement of atrophic facial acne scars, compared with Q-Switched 1064-nm Nd: YAG laser.", "title": "" }, { "docid": "7f05bd51c98140417ff73ec2d4420d6a", "text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. 
We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.", "title": "" }, { "docid": "b8ac61e2026f3dd7e775d440dcb43772", "text": "This paper presents a design methodology of a highly efficient power link based on Class-E driven, inductively coupled coil pair. An optimal power link design for retinal prosthesis and/or other implants must take into consideration the allowable safety limits of magnetic fields, which in turn govern the inductances of the primary and secondary coils. In retinal prosthesis, the optimal coil inductances have to deal with the constraints of the coil sizes, the tradeoffs between the losses, H-field limitation and dc supply voltage required by the Class-E driver. Our design procedure starts with the formation of equivalent circuits, followed by the analysis of the loss of the rectifier and coils and the H-field for induced voltage and current. Both linear and nonlinear models for the analysis are presented. Based on the procedure, an experimental power link is implemented with an overall efficiency of 67% at the optimal distance of 7 mm between the coils. In addition to the coil design methodology, we are also presenting a closed-loop control of Class-E amplifier for any duty cycle and any value of the systemQ.", "title": "" }, { "docid": "b53bd3f4a0d8933d9af0f5651a445800", "text": "Requirements for implemented system can be extracted and reused for a production of a new similar system. Extraction of common and variable features from requirements leverages the benefits of the software product lines engineering (SPLE). Although various approaches have been proposed in feature extractions from natural language (NL) requirements, no related literature review has been published to date for this topic. This paper provides a systematic literature review (SLR) of the state-of-the-art approaches in feature extractions from NL requirements for reuse in SPLE. We have included 13 studies in our synthesis of evidence and the results showed that hybrid natural language processing approaches were found to be in common for overall feature extraction process. A mixture of automated and semi-automated feature clustering approaches from data mining and information retrieval were also used to group common features, with only some approaches coming with support tools. However, most of the support tools proposed in the selected studies were not made available publicly and thus making it hard for practitioners’ adoption. As for the evaluation, this SLR reveals that not all studies employed software metrics as ways to validate experiments and case studies. Finally, the quality assessment conducted confirms that practitioners’ guidelines were absent in the selected studies. © 2015 Elsevier Inc. All rights reserved. c t t t r c S o r ( l w t r t", "title": "" }, { "docid": "90125582272e3f16a34d5d0c885f573a", "text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. 
Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.", "title": "" }, { "docid": "03614f11b2b6800384e229c37967030d", "text": "Data Analytics is widely used in many industries and organization to make a better Business decision. By applying analytics to the structured and unstructured data the enterprises brings a great change in their way of planning and decision making. Sentiment analysis (or) opinion mining plays a significant role in our daily decision making process. These decisions may range from purchasing a product such as mobile phone to reviewing the movie to making investments — all the decisions will have a huge impact on the daily life. Sentiment Analysis is dealing with various issues such as Polarity Shift, accuracy related issues, Binary Classification problem and Data sparsity problem. However various methods were introduced for performing sentiment analysis, still that are not efficient in extracting the sentiment features from the given content of text. Naive Bayes, Support Vector Machine, Maximum Entropy are the machine learning algorithms used for sentiment analysis which has only a limited sentiment classification category ranging between positive and negative. Especially supervised and unsupervised algorithms have only limited accuracy in handling polarity shift and binary classification problem. Even though the advancement in sentiment Analysis technique there are various issues still to be noticed and make the analysis not accurately and efficiently. So this paper presents the survey on various sentiment Analysis methodologies and approaches in detailed. This will be helpful to earn clear knowledge about sentiment analysis methodologies. At last the comparison is made between various paper's approach and issues addressed along with the metrics used.", "title": "" } ]
scidocsrr
1608d1659bce2829d1904ea73d1fdb06
A Fully Attention-Based Information Retriever
[ { "docid": "fe89c8a17676b7767cfa40e7822b8d25", "text": "Previous machine comprehension (MC) datasets are either too small to train endto-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.", "title": "" }, { "docid": "346349308d49ac2d3bb1cfa5cc1b429c", "text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "title": "" }, { "docid": "640e1bf49e1205077898eddcdcbc5906", "text": "Machine comprehension(MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of encoding layer, especially the embedding of syntactic information and name entity of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence, neither of them can handle the proper weight of the key words in query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network(MEMEN) for machine reading task. In the encoding layer, we employ classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. Experiments show that our model has competitive results both from the perspectives of precision and efficiency in Stanford Question Answering Dataset(SQuAD) among all published results and achieves the state-of-the-art results on TriviaQA dataset.", "title": "" } ]
[ { "docid": "757441e95be19ca4569c519fb35adfb7", "text": "Autonomous driving in public roads requires precise localization within the range of few centimeters. Even the best current precise localization system based on the Global Navigation Satellite System (GNSS) can not always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finder and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDARs) are very expensive sensors and stereo vision requires powerful dedicated hardware to process the cameras information. In this context, this article presents a low-cost architecture of sensors and data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings in the vehicle's backwards. This information is used to localize the vehicle in a map, that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system on a real autonomous driving situation.", "title": "" }, { "docid": "d6e565c0123049b9e11692b713674ccf", "text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.", "title": "" }, { "docid": "8210e2eec6a7a6905bdf57e685289d92", "text": "Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are getting prohibitively high. Despite that the existing Outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of results returned from the third party has yet to be addressed. Aiming at tackling the challenge above, we propose a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption. 
Our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis show that the proposed schemes are proven secure and practical.", "title": "" }, { "docid": "b0356ab3a4a3917386bfe928a68031f5", "text": "Even when Ss fail to recall a solicited target, they can provide feeling-of-knowing (FOK) judgments about its availability in memory. Most previous studies addressed the question of FOK accuracy, only a few examined how FOK itself is determined, and none asked how the processes assumed to underlie FOK also account for its accuracy. The present work examined all 3 questions within a unified model, with the aim of demystifying the FOK phenomenon. The model postulates that the computation of FOK is parasitic on the processes involved in attempting to retrieve the target, relying on the accessibility of pertinent information. It specifies the links between memory strength, accessibility of correct and incorrect information about the target, FOK judgments, and recognition memory. Evidence from 3 experiments is presented. The results challenge the view that FOK is based on a direct, privileged access to an internal monitor.", "title": "" }, { "docid": "6247c827c6fdbc976b900e69a9eb275c", "text": "Despite the fact that commercial computer systems have been in existence for almost three decades, many systems in the process of being implemented may be classed as failures. One of the factors frequently cited as important to successful system development is involving users in the design and implementation process. This paper reports the results of a field study, conducted on data from forty-two systems, that investigates the role of user involvement and factors affecting the employment of user involvement on the success of system development. Path analysis was used to investigate both the direct effects of the contingent variables on system success and the effect of user involvement as a mediating variable between the contingent variables and system success. The results show that high system complexity and constraints on the resources available for system development are associated with less successful systems.", "title": "" }, { "docid": "0a75a45141a7f870bba32bed890da782", "text": "Surveillance systems for public security are going beyond the conventional CCTV. A new generation of systems relies on image processing and computer vision techniques, deliver more ready-to-use information, and provide assistance for early detection of unusual events. Crowd density is a useful source of information because unusual crowdedness is often related to unusual events. Previous works on crowd density estimation either ignore perspective distortion or perform the correction based on incorrect formulation. Also there is no investigation on whether the geometric correction derived for the ground plane can be applied to human objects standing upright to the plane. This paper derives the relation for geometric correction for the ground plane and proves formally that it can be directly applied to all the foreground pixels. 
We also propose a very efficient implementation because it is important for a real-time application. Finally a time-adaptive criterion for unusual crowdedness detection is described.", "title": "" }, { "docid": "7e1608bfd1f0256d0873de4f54ce6bfb", "text": "A fully integrated system for the automatic detection and characterization of cracks in road flexible pavement surfaces, which does not require manually labeled samples, is proposed to minimize the human subjectivity resulting from traditional visual surveys. The first task addressed, i.e., crack detection, is based on a learning from samples paradigm, where a subset of the available image database is automatically selected and used for unsupervised training of the system. The system classifies nonoverlapping image blocks as either containing crack pixels or not. The second task deals with crack type characterization, for which another classification system is constructed, to characterize the detected cracks' connect components. Cracks are labeled according to the types defined in the Portuguese Distress Catalog, with each different crack present in a given image receiving the appropriate label. Moreover, a novel methodology for the assignment of crack severity levels is introduced, computing an estimate for the width of each detected crack. Experimental crack detection and characterization results are presented based on images captured during a visual road pavement surface survey over Portuguese roads, with promising results. This is shown by the quantitative evaluation methodology introduced for the evaluation of this type of system, including a comparison with human experts' manual labeling results.", "title": "" }, { "docid": "026feb0023df9676d0711b2bdb2823bf", "text": "Humans crave the company of others and suffer profoundly if temporarily isolated from society. Much of the brain must have evolved to deal with social communication and we are increasingly learning more about the neurophysiological basis of social cognition. Here, we explore some of the reasons why social cognitive neuroscience is captivating the interest of many researchers. We focus on its future, and what we believe are priority areas for further research.", "title": "" }, { "docid": "36f068b9579788741f23c459570694fe", "text": "One of the difficulties in learning Chinese characters is distinguishing similar characters. This can cause misunderstanding and miscommunication in daily life. Thus, it is important for students learning the Chinese language to be able to distinguish similar characters and understand their proper usage. In this paper, the authors propose a game style framework to train students to distinguish similar characters. A major component in this framework is the search for similar Chinese characters in the system. From the authors’ prior work, they find the similar characters by the radical information and stroke correspondence determination. This paper improves the stroke correspondence determination by using the attributed relational graph (ARG) matching algorithm that considers both the stroke and spatial relationship during matching. The experimental results show that the new proposed method is more accurate in finding similar Chinese characters. Additionally, the authors have implemented online educational games to train students to distinguish similar Chinese characters and made use of the improved matching method for creating the game content automatically. 
INTRODUCTION The evolution of computer technologies makes a big impact on traditional learning. Shih et al. (2007) and Chen et al. (2005) studied the impact of distant e-learning compared with traditional learning. Distant e-learning has many advantages over traditional learning such as no learning barrier in location, allowing more people to learn and providing an interactive learning environment. There is a great potential in adopting distant e-learning in areas with a sparse population. For example, in China, it is impractical to build schools in every village. As a result, some students have to spend a lot of time for travelling to school that may be quite far away from their home. If computers can be used for e-learning in this location, the students can save a lot of time for other learning activities. Moreover, there is a limit in the number of students whom a school can physically accommodate. Distant e-learning is a solution that gives the chance for more people to learn in their own pace without the physical limitation. In addition, distant e-learning allows certain levels of interactivity. The learners can get the immediate feedback from the e-learning system and enhance the efficiency in their learning. E-learning has been applied in different areas such as engineering by Sziebig (2008), maritime education by Jurian (2006), etc. Some researchers study the e-learning in Chinese handwriting education. Nowadays there exist many e-learning applications to help students learn their native or a foreign language. This paper is focused on the learning of the Chinese language. Some researchers (Tan, 2002; Teo et al., 2002) provide an interactive interface for students to practice Chinese character handwriting. These e-learning methods help students improve their handwriting skill by providing them a framework to repeat some handwriting exercises just like in the traditional learning. However, they have not considered how to maintain students’ motivation to complete the tasks. Green et al. (2007) suggested that game should be introduced for learning because games bring challenges to students, stimulate their curiosity, develop their creativity and let them have fun. One of the common problems in Chinese students’ handwriting is mixing up similar characters in the structure (e.g., 困, 因) or sound (e.g., 木, 目), and misusing them. Chinese characters are logographs and there are about 3000 commonly used characters. Learners have to memorize a lot of writing structures and their related meanings. It is difficult to distinguish similar Chinese characters with similar structure or sound even for people whose native language is Chinese. For training people in distinguishing similar characters, teachers often make some questions by presenting the similar characters and ask the students to find out the correct one under each case. There are some web-based games that aim to help students differentiate similar characters (The Academy of Chinese Studies & Erroneous Character Arena). These games work in a similar fashion in which they show a few choices of similar characters to the players and ask them to pick the correct one that should be used in a phrase.
These games suffer from the drawback that the question-answer set is limited thus players feel bored easily and there is little replay value. On the other hand, creating a large set of question-answer pairs is time consuming if it is done manually. It is beneficial to have a system to generate the choices automatically.", "title": "" }, { "docid": "8327d691bfde061e80782a038c7531cc", "text": "A novel wideband planar endfire circularly polarized (CP) antenna with wide 3 dB axial ratio (AR) beamwidth is presented. The operation principle of the proposed CP antenna is described at first through the combination of a planar magnetic dipole and a V-shape open loop. Then, it is demonstrated how its AR beamwidth can be precisely controlled with resorting to the shape and thickness of the open loop element. Finally, the theoretical design approach is numerically verified, and the results are then experimentally validated. It is observed that a 3 dB AR beamwidth is achieved in an extremely wide angular region up to 250° within the principal elevation plane over a frequency range of 2.41-2.51 GHz. The proposed antenna has exhibited a wide impedance bandwidth of 22.23% in fraction, a 3 dB AR bandwidth of no less than 8.00% in fraction, and a simple planar profile in geometry. In addition, its endfire beam is in parallel with its plane.", "title": "" }, { "docid": "a1415ce5c2d5c8669e67d73c12db4fa9", "text": "The IoT paradigm holds the promise to revolutionize the way we live and work by means of a wealth of new services, based on seamless interactions between a large amount of heterogeneous devices. After decades of conceptual inception of the IoT, in recent years a large variety of communication technologies has gradually emerged, reflecting a large diversity of application domains and of communication requirements. Such heterogeneity and fragmentation of the connectivity landscape is currently hampering the full realization of the IoT vision, by posing several complex integration challenges. In this context, the advent of 5G cellular systems, with the availability of a connectivity technology, which is at once truly ubiquitous, reliable, scalable, and cost-efficient, is considered as a potentially key driver for the yet-to emerge global IoT. In the present paper, we analyze in detail the potential of 5G technologies for the IoT, by considering both the technological and standardization aspects. We review the present-day IoT connectivity landscape, as well as the main 5G enablers for the IoT. Last but not least, we illustrate the massive business shifts that a tight link between IoT and 5G may cause in the operator and vendors ecosystem.", "title": "" }, { "docid": "8e5f2b976dfe8883e419fdc49bf53c78", "text": "This paper studies the object transfiguration problem in wild images. The generative network in classical GANs for object transfiguration often undertakes a dual responsibility: to detect the objects of interests and to convert the object from source domain to target domain. In contrast, we decompose the generative network into two separat networks, each of which is only dedicated to one particular sub-task. The attention network predicts spatial attention maps of images, and the transformation network focuses on translating objects. Attention maps produced by attention network are encouraged to be sparse, so that major attention can be paid to objects of interests. No matter before or after object transfiguration, attention maps should remain constant. 
In addition, learning attention network can receive more instructions, given the available segmentation annotations of images. Experimental results demonstrate the necessity of investigating attention in object transfiguration, and that the proposed algorithm can learn accurate attention to improve quality of generated images.", "title": "" }, { "docid": "a7e2538186ce04325d24842c72ff41c6", "text": "Omics refers to a field of study in biology such as genomics, proteomics, and metabolomics. Investigating fundamental biological problems based on omics data would increase our understanding of bio-systems as a whole. However, omics data is characterized with high-dimensionality and unbalance between features and samples, which poses big challenges for classical statistical analysis and machine learning methods. This paper studies a minimal-redundancy-maximal-relevance (MRMR) feature selection for omics data classification using three different relevance evaluation measures including mutual information (MI), correlation coefficient (CC), and maximal information coefficient (MIC). A linear forward search method is used to search the optimal feature subset. The experimental results on five real-world omics datasets indicate that MRMR feature selection with CC is more robust to obtain better (or competitive) classification accuracy than the other two measures.", "title": "" }, { "docid": "0c5c83cfb63b335b327f044973514d23", "text": "With the explosion of healthcare information, there has been a tremendous amount of heterogeneous textual medical knowledge (TMK), which plays an essential role in healthcare information systems. Existing works for integrating and utilizing the TMK mainly focus on straightforward connections establishment and pay less attention to make computers interpret and retrieve knowledge correctly and quickly. In this paper, we explore a novel model to organize and integrate the TMK into conceptual graphs. We then employ a framework to automatically retrieve knowledge in knowledge graphs with a high precision. In order to perform reasonable inference on knowledge graphs, we propose a contextual inference pruning algorithm to achieve efficient chain inference. Our algorithm achieves a better inference result with precision and recall of 92% and 96%, respectively, which can avoid most of the meaningless inferences. In addition, we implement two prototypes and provide services, and the results show our approach is practical and effective.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "7819d359e169ae18f9bb50f464e1233c", "text": "As large amount of data is generated in medical organizations (hospitals, medical centers) but this data is not properly used. 
There is a wealth of hidden information present in the datasets. The healthcare environment is still “information rich” but “knowledge poor”. There is a lack of effective analysis tools to discover hidden relationships and trends in data. Advanced data mining techniques can help remedy this situation. For this purpose we can use different data mining techniques. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today’s medical research particularly in Heart Disease Prediction. This research has developed a prototype Heart Disease Prediction System (HDPS) using data mining techniques namely, Decision Trees, Naïve Bayes and Neural Network. This Heart disease prediction system can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established.", "title": "" }, { "docid": "fc5a04c795fbfdd2b6b53836c9710e4d", "text": "In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.", "title": "" }, { "docid": "1c91ba0010bdc1763ad8feaaf0cac0e5", "text": "Machine-to-machine (M2M) solutions are rolling out worldwide and across all industries - possibly being a key enabler of applications and services covering a broad range of vertical markets (e.g., health-care, utilities, transport, education-research and development, logistics etc.). M2M - as a part of the Internet of Things (IoT) - is, at the present, one of the main drivers behind the growth in mobile subscribers, with all of the world's largest electronic communications operators now having several million M2M subscribers in their mobile networks. The M2M communication model is linked to the interaction between IoT endpoints, which represent - from the point of view of business process and end user applications - service end points that can be easily embedded in emerging service oriented enterprise systems and service delivery platforms. To capitalize on the projected expansion of the M2M market, both regulators and telecommunications operators (as well as service providers) will have to be agile and flexible. 
The challenge of embedding real world information into networks, services and applications through the convergence of domains like Technologies, Electronic Communications and Intelligence - enabled by context aware technologies, nanoelectronics, sensors and cloud computing - will bring about the development of novel services, innovative products, new interfaces and new applications.", "title": "" }, { "docid": "bab429bf74fe4ce3f387a716964a867f", "text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "title": "" }, { "docid": "73bbec41d27db7b660bd49f3a1046905", "text": "U sing robots in industrial welding operations is common but far from being a streamlined technological process. The problems are with the robots, still in their early design stages and difficult to use and program by regular operators; the welding process, which is complex and not really well known; and the human-machine interfaces, which are nonnatural and not really working. In this article, these problems are discussed, and a system designed with the double objective of serving R&D efforts on welding applications and to assist industrial partners working with welding setups is presented. The system is explained in some detail and demonstrated using two test cases that reproduce two situations common in industry: multilayer butt welding, used on big structures requiring very strong welds, and multipoint fillet welding, used, for example, on structural pieces in the construction industry.", "title": "" } ]
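The virtual adversarial training passage above describes the regularizer procedurally; the sketch below spells out one common way to compute it in PyTorch. This is not the authors' released code: the perturbation scales xi and eps, the single power iteration, and the batch-mean KL reduction are illustrative assumptions, and model is assumed to be any callable mapping a batch of inputs to class logits.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize each perturbation in the batch to unit L2 norm.
    d_flat = d.view(d.size(0), -1)
    return d / (d_flat.norm(dim=1).view(-1, *([1] * (d.dim() - 1))) + 1e-8)

def vat_loss(model, x, xi=1e-6, eps=2.5, n_power=1):
    """Virtual adversarial loss: KL between p(y|x) and p(y|x + r_adv), no labels needed."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)            # reference prediction (treated as constant)
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):                      # power iteration approximating the most
        d.requires_grad_()                        # sensitive (virtual adversarial) direction
        p_hat = F.log_softmax(model(x + xi * d), dim=1)
        adv_dist = F.kl_div(p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(adv_dist, d)[0]
        d = _l2_normalize(grad.detach())
    r_adv = eps * d                               # final virtual adversarial perturbation
    p_hat = F.log_softmax(model(x + r_adv), dim=1)
    return F.kl_div(p_hat, p, reduction="batchmean")
```

In semi-supervised use this term is simply added to the supervised loss on the labelled portion of each batch, since its computation never touches the labels.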
scidocsrr
4c3020ee8f4bcf2fbafb71a0f0a880be
Principled Uncertainty Estimation for Deep Neural Networks
[ { "docid": "c5efe5fe7c945e48f272496e7c92bb9c", "text": "Knowing when a classifier’s prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier’s predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier’s discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier’s confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.", "title": "" }, { "docid": "142b1f178ade5b7ff554eae9cad27f69", "text": "It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.", "title": "" }, { "docid": "3cdab5427efd08edc4f73266b7ed9176", "text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.", "title": "" } ]
[ { "docid": "36e531c34dd8f714f481c6ab9ed1a375", "text": "Generating informative responses in end-toend neural dialogue systems attracts a lot of attention in recent years. Various previous work leverages external knowledge and the dialogue contexts to generate such responses. Nevertheless, few has demonstrated their capability on incorporating the appropriate knowledge in response generation. Motivated by this, we propose a novel open-domain conversation generation model in this paper, which employs the posterior knowledge distribution to guide knowledge selection, therefore generating more appropriate and informative responses in conversations. To the best of our knowledge, we are the first one who utilize the posterior knowledge distribution to facilitate conversation generation. Our experiments on both automatic and human evaluation clearly verify the superior performance of our model over the state-of-the-art baselines.", "title": "" }, { "docid": "6b8942948b3f23971254ba7b90dac6f0", "text": "An important preprocess in computer-aided orthodontics is to segment teeth from the dental models accurately, which should involve manual interactions as few as possible. But fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe teeth malocclusion and crowding problems occur, which is a common occurrence in clinical cases. Most published methods in this area either are inaccurate or require lots of manual interactions. Motivated by the state-of-the-art general mesh segmentation methods that adopted the theory of harmonic field to detect partition boundaries, this paper proposes a novel, dental-targeted segmentation framework for dental meshes. With a specially designed weighting scheme and a strategy of a priori knowledge to guide the assignment of harmonic constraints, this method can identify teeth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically with robustness and efficiency.", "title": "" }, { "docid": "61940aa7a2454ca43612b7657733c9f5", "text": "One of the sources of difficulty in speech recognition and understanding is the variability due to alternate pronunciations of words. To address the issue we have investigated the use of multiple-pronunciation models (MPMs) in the decoding stage of a speaker-independent speech understanding system. In this paper we address three important issues regarding MPMs: (a) Model construction: How can MPMs be built from available data without human intervention? (b) Model embedding: How should MPM construction interact with the training of the sub-word unit models on which they are based? (c) Utility: Do they help in speech recognition? Automatic, data-driven MPM construction is accomplished using a structural HMM induction algorithm. The resulting MPMs are trained jointlywith a multi-layer perceptron functioningas a phonetic likelihood estimator. The experiments reported here demonstrate that MPMs can significantly improve speech recognition results over standard single pronunciation models.", "title": "" }, { "docid": "2bb0b89491015f124e4b244954508234", "text": "In recent years, deep neural networks have achieved significant success in Chinese word segmentation and many other natural language processing tasks. 
Most of these algorithms are end-to-end trainable systems and can effectively process and learn from large scale labeled datasets. However, these methods typically lack the capability of processing rare words and data whose domains are different from training data. Previous statistical methods have demonstrated that human knowledge can provide valuable information for handling rare cases and domain shifting problems. In this paper, we seek to address the problem of incorporating dictionaries into neural networks for the Chinese word segmentation task. Two different methods that extend the bi-directional long short-term memory neural network are proposed to perform the task. To evaluate the performance of the proposed methods, state-of-the-art supervised models based methods and domain adaptation approaches are compared with our methods on nine datasets from different domains. The experimental results demonstrate that the proposed methods can achieve better performance than other state-of-the-art neural network methods and domain adaptation approaches in most cases.", "title": "" }, { "docid": "9423dcfc04f57be48adddc88e40f1963", "text": "Presynaptic Ca(V)2.2 (N-type) calcium channels are subject to modulation by interaction with syntaxin 1 and by a syntaxin 1-sensitive Galpha(O) G-protein pathway. We used biochemical analysis of neuronal tissue lysates and a new quantitative test of colocalization by intensity correlation analysis at the giant calyx-type presynaptic terminal of the chick ciliary ganglion to explore the association of Ca(V)2.2 with syntaxin 1 and Galpha(O). Ca(V)2.2 could be localized by immunocytochemistry (antibody Ab571) in puncta on the release site aspect of the presynaptic terminal and close to synaptic vesicle clouds. Syntaxin 1 coimmunoprecipitated with Ca(V)2.2 from chick brain and chick ciliary ganglia and was widely distributed on the presynaptic terminal membrane. A fraction of the total syntaxin 1 colocalized with the Ca(V)2.2 puncta, whereas the bulk colocalized with MUNC18-1. Galpha(O,) whether in its trimeric or monomeric state, did not coimmunoprecipitate with Ca(V)2.2, MUNC18-1, or syntaxin 1. However, the G-protein exhibited a punctate staining on the calyx membrane with an intensity that varied in synchrony with that for both Ca channels and syntaxin 1 but only weakly with MUNC18-1. Thus, syntaxin 1 appears to be a component of two separate complexes at the presynaptic terminal, a minor one at the transmitter release site with Ca(V)2.2 and Galpha(O), as well as in large clusters remote from the release site with MUNC18-1. These syntaxin 1 protein complexes may play distinct roles in presynaptic biology.", "title": "" }, { "docid": "1d3192e66e042e67dabeae96ca345def", "text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. 
Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.", "title": "" }, { "docid": "b2e5a2395641c004bdc84964d2528b13", "text": "We propose a novel probabilistic model for visual question answering (Visual QA). The key idea is to infer two sets of embeddings: one for the image and the question jointly and the other for the answers. The learning objective is to learn the best parameterization of those embeddings such that the correct answer has higher likelihood among all possible answers. In contrast to several existing approaches of treating Visual QA as multi-way classification, the proposed approach takes the semantic relationships (as characterized by the embeddings) among answers into consideration, instead of viewing them as independent ordinal numbers. Thus, the learned embedded function can be used to embed unseen answers (in the training dataset). These properties make the approach particularly appealing for transfer learning for open-ended Visual QA, where the source dataset on which the model is learned has limited overlapping with the target dataset in the space of answers. We have also developed large-scale optimization techniques for applying the model to datasets with a large number of answers, where the challenge is to properly normalize the proposed probabilistic models. We validate our approach on several Visual QA datasets and investigate its utility for transferring models across datasets. The empirical results have shown that the approach performs well not only on in-domain learning but also on transfer learning.", "title": "" }, { "docid": "bb29a8e942c69cdb6634faa563cddb3a", "text": "Convolutional neural network (CNN) finds applications in a variety of computer vision applications ranging from object recognition and detection to scene understanding owing to its exceptional accuracy. There exist different algorithms for CNNs computation. In this paper, we explore conventional convolution algorithm with a faster algorithm using Winograd's minimal filtering theory for efficient FPGA implementation. Distinct from the conventional convolution algorithm, Winograd algorithm uses less computing resources but puts more pressure on the memory bandwidth. We first propose a fusion architecture that can fuse multiple layers naturally in CNNs, reusing the intermediate data. Based on this fusion architecture, we explore heterogeneous algorithms to maximize the throughput of a CNN. We design an optimal algorithm to determine the fusion and algorithm strategy for each layer. We also develop an automated toolchain to ease the mapping from Caffe model to FPGA bitstream using Vivado HLS. Experiments using widely used VGG and AlexNet demonstrate that our design achieves up to 1.99X performance speedup compared to the prior fusion-based FPGA accelerator for CNNs.", "title": "" }, { "docid": "3cf4ef33356720e55748c7f14383830d", "text": "Article history: Received 7 September 2015 Received in revised form 15 February 2016 Accepted 27 March 2016 Available online 14 April 2016 For many organizations, managing both economic and environmental performance has emerged as a key challenge. 
Further, with expanding globalization organizations are finding it more difficult to maintain adequate supplier relations to balance both economic and environmental performance initiatives. Drawing on transaction cost economics, this study examines how novel information technology like cloud computing can help firms not only maintain adequate supply chain collaboration, but also balance both economic and environmental performance. We analyze survey data from 247 IT and supply chain professionals using structural equation modeling and partial least squares to verify the robustness of our results. Our analyses yield several interesting findings. First, contrary to other studies we find that collaboration does not necessarily affect environmental performance and only partially mediates the relationship between cloud computing and economic performance. Secondly, the results of our survey provide evidence of the direct effect of cloud computing on both economic and environmental performance.", "title": "" }, { "docid": "13f7b5a92e830bff44c14c77056f9743", "text": "Many pneumatic energy sources are available for use in autonomous and wearable soft robotics, but it is often not obvious which options are most desirable or even how to compare them. To address this, we compare pneumatic energy sources and review their relative merits. We evaluate commercially available battery-based microcompressors (singly, in parallel, and in series) and cylinders of high-pressure fluid (air and carbon dioxide). We identify energy density (joules/gram) and flow capacity (liters/gram) normalized by the mass of the entire fuel system (versus net fuel mass) as key metrics for soft robotic power systems. We also review research projects using combustion (methane and butane) and monopropellant decomposition (hydrogen peroxide), citing theoretical and experimental values. Comparison factors including heat, effective energy density, and working pressure/flow rate are covered. We conclude by comparing the key metrics behind each technology. Battery-powered microcompressors provide relatively high capacity, but maximum pressure and flow rates are low. Cylinders of compressed fluid provide high pressures and flow rates, but their limited capacity leads to short operating times. While methane and butane possess the highest net fuel energy densities, they typically react at speeds and pressures too high for many soft robots and require extensive system-level development. Hydrogen peroxide decomposition requires not only few additional parts (no pump or ignition system) but also considerable system-level development. We anticipate that this study will provide a framework for configuring fuel systems in soft robotics.", "title": "" }, { "docid": "53e8333b3e4e9874449492852d948ea2", "text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches.
Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.", "title": "" }, { "docid": "aac3060a199b016e38be800c213c9dba", "text": "In this paper, we investigate the use of electroencephalograhic signals for the purpose of recognizing unspoken speech. The term unspoken speech refers to the process in which a subject imagines speaking a given word without moving any articulatory muscle or producing any audible sound. Early work by Wester (Wester, 2006) presented results which were initially interpreted to be related to brain activity patterns due to the imagination of pronouncing words. However, subsequent investigations lead to the hypothesis that the good recognition performance might instead have resulted from temporal correlated artifacts in the brainwaves since the words were presented in blocks. In order to further investigate this hypothesis, we run a study with 21 subjects, recording 16 EEG channels using a 128 cap montage. The vocabulary consists of 5 words, each of which is repeated 20 times during a recording session in order to train our HMM-based classifier. The words are presented in blockwise, sequential, and random order. We show that the block mode yields an average recognition rate of 45.50%, but it drops to chance level for all other modes. Our experiments suggest that temporal correlated artifacts were recognized instead of words in block recordings and back the above-mentioned hypothesis.", "title": "" }, { "docid": "6aa38687ebed443ea0068547d24acb6d", "text": "In this paper, a digital signal processing (DSP) software development process is described. It starts from the conceptual algorithm design and computer simulation using MATLAB, Simulink, or floating-point C programs. The finite-word-length analysis using MATLAB fixed-point functions or Simulink follows with fixed-point blockset. After verification of the algorithm, a fixed-point C program is developed for a specific fixed-point DSP processor. Software efficiency can be further improved by using mixed C-and-assembly programs, intrinsic functions, and optimized assembly routines in DSP libraries. This integrated software-development process enables students and engineers to understand and appreciate the important differences between floating-point simulations and fixed-point implementation considerations and applications.", "title": "" }, { "docid": "1ec395dbe807ff883dab413419ceef56", "text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. 
The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.", "title": "" }, { "docid": "62b4eb1d0db3cf02d1412fe99690ac61", "text": "In requirements engineering, there are several approaches for requirements modeling such as goal-oriented, aspect-driven, and system requirements modeling. In practice, companies often customize a given approach to their specific needs. Thus, we seek a solution that allows customization in a systematic way. In this paper, we propose a metamodel for requirements models (called core metamodel) and an approach for customizing this metamodel in order to support various requirements modeling approaches. The core metamodel represents the common concepts extracted from some prevalent approaches. We define the semantics of the concepts and the relations in the core metamodel. Based on this formalization, we can perform reasoning on requirements that may detect implicit relations and inconsistencies. Our approach for customization keeps the semantics of the core concepts intact and thus allows reuse of tools and reasoning over the customized metamodel. We illustrate the customization of our core metamodel with SysML concepts. As a case study, we apply the reasoning on requirements of an industrial mobile service application based on this customized core requirements metamodel.", "title": "" }, { "docid": "59bfb330b9ca7460280fecca78383857", "text": "Big data poses many facets and challenges when analyzing data, often described with the five big V’s of Volume, Variety, Velocity, Veracity, and Value. However, the most important V – Value can only be achieved when knowledge can be derived from the data. 
The volume of nowadays datasets make a manual investigation of all data records impossible and automated analysis techniques from data mining or machine learning often cannot be applied in a fully automated fashion to solve many real world analysis problems, and hence, need to be manually trained or adapted. Visual analytics aims to solve this problem with a “human-in-the-loop” approach that provides the analyst with a visual interface that tightly integrates automated analysis techniques with human interaction. However, a holistic understanding of these analytic processes is currently an under-explored research area. A major contribution of this dissertation is a conceptual model-driven approach to visual analytics that focuses on the human-machine interplay during knowledge generation. At its core, it presents the knowledge generation model which is subsequently specialized for human analytic behavior, visual interactive machine learning, and dimensionality reduction. These conceptual processes extend and combine existing conceptual works that aim to establish a theoretical foundation for visual analytics. In addition, this dissertation contributes novel methods to investigate and support human knowledge generation processes, such as semi-automation and recommendation, analytic behavior and trust building, or visual interaction with machine learning. These methods are investigated in close collaboration with real experts from different application domains (such as soccer analysis, linguistic intonation research, and criminal intelligence analysis) and hence, different data characteristics (geospatial movement, time series, and high-dimensional). The results demonstrate that this conceptual approach leads to novel, more tightly integrated, methods that support the analyst in knowledge generation. In a final broader discussion, this dissertation reflects the conceptual and methodological contributions and enumerates research areas at the intersection of data mining, machine learning, visualization, and human-computer interaction research, with the ultimate goal to make big data exploration more effective, efficient, and transparent.", "title": "" }, { "docid": "5ddcfa43a488ee92dbf13f0a91310d5a", "text": "We present in this chapter an overview of the Mumford and Shah model for image segmentation. We discuss its various formulations, some of its properties, the mathematical framework, and several approximations. We also present numerical algorithms and segmentation results using the Ambrosio–Tortorelli phase-field approximations on one hand, and using the level set formulations on the other hand. Several applications of the Mumford–Shah problem to image restoration are also presented. . Introduction: Description of theMumford and Shah Model An important problem in image analysis and computer vision is the segmentation one, that aims to partition a given image into its constituent objects, or to find boundaries of such objects. This chapter is devoted to the description, analysis, approximations, and applications of the classical Mumford and Shah functional proposed for image segmentation. In [–], David Mumford and Jayant Shah have formulated an energy minimization problem that allows to compute optimal piecewise-smooth or piecewise-constant approximations u of a given initial image g. 
Since then, their model has been analyzed and considered in depth by many authors, by studying properties of minimizers, approximations, and applications to image segmentation, image partition, image restoration, and more generally to image analysis and computer vision. We denote by Ω ⊂ Rd the image domain (an interval if d = , or a rectangle in the plane if d = ). More generally, we assume that Ω is open, bounded, and connected. Let g : Ω → R be a given gray-scale image (a signal in one dimension, a planar image in two dimensions, or a volumetric image in three dimensions). It is natural and without losing any generality to assume that g is a bounded function in Ω, g ∈ L(Ω). As formulated byMumford and Shah [], the segmentation problem in image analysis and computer vision consists in computing a decomposition Ω = Ω ∪Ω ∪ . . . ∪ Ωn ∪ K of the domain of the image g such that (a) The image g varies smoothly and/or slowly within each Ω i . (b) The image g varies discontinuously and/or rapidly across most of the boundary K between different Ω i . From the point of view of approximation theory, the segmentation problem may be restated as seeking ways to define and compute optimal approximations of a general function g(x) by piecewise-smooth functions u(x), i.e., functions u whose restrictions ui to the pieces Ω i of a decomposition of the domain Ω are continuous or differentiable.   Mumford and ShahModel and its Applications to Image Segmentation and Image Restoration In what follows, Ω i will be disjoint connected open subsets of a domain Ω, each one with a piecewise-smooth boundary, and K will be a closed set, as the union of boundaries of Ω i inside Ω, thus Ω = Ω ∪Ω ∪ . . . ∪ Ωn ∪ K, K = Ω ∩ (∂Ω ∪ . . . ∪ ∂Ωn). The functional E to be minimized for image segmentation is defined by [–], E(u,K) = μ ∫ Ω (u − g)dx + ∫ Ω/K ∣∇u∣dx + ∣K∣, (.) where u : Ω → R is continuous or even differentiable inside each Ω i (or u ∈ H(Ω i)) and may be discontinuous across K. Here, ∣K∣ stands for the total surface measure of the hypersurface K (the counting measure if d = , the length measure if d = , the area measure if d = ). Later, we will define ∣K∣ byHd−(K), the d −  dimensional Hausdorff measure in Rd . As explained by Mumford and Shah, dropping any of these three terms in (> .), inf E = : without the first, take u = , K = /; without the second, take u = g, K = /; without the third, take for example, in the discrete case K to be the boundary of all pixels of the image g, each Ω i be a pixel and u to be the average (value) of g over each pixel. The presence of all three terms leads to nontrivial solutions u, and an optimal pair (u,K) can be seen as a cartoon of the actual image g, providing a simplification of g. An important particular case is obtained when we restrict E to piecewise-constant functions u, i.e., u = constant ci on each open set Ω i . Multiplying E by μ−, we have μ−E(u,K) = ∑ i ∫ Ω i (g − ci)dx + ∣K∣, where  = /μ. It is easy to verify that this is minimized in the variables ci by setting ci = meanΩ i (g) = ∫Ω i g(x)dx ∣Ω i ∣ , where ∣Ω i ∣ denotes here the Lebesgue measure of Ω i (e.g., area if d = , volume if d = ), so it is sufficient to minimize E(K) = ∑ i ∫ Ω i (g −meanΩ i g) dx + ∣K∣. It is possible to interpret E as the limit functional of E as μ →  []. Finally, the Mumford and Shah model can also be seen as a deterministic refinement of Geman and Geman’s image restoration model []. 
. Background: The First Variation In order to better understand, analyze, and use the minimization problem (> .), it is useful to compute its first variation with respect to each of the unknowns. We first recall the definition of Sobolev functions u ∈ W^{1,2}(U) [], necessary to properly define a minimizer u when K is fixed. Definition 1 Let U ⊂ R^d be an open set. We denote by W^{1,2}(U) (or by H^1(U)) the set of functions u ∈ L^2(U) whose first-order distributional partial derivatives belong to L^2(U). This means that there are functions u_1, . . . , u_d ∈ L^2(U) such that ∫_U u(x) ∂φ/∂x_i(x) dx = − ∫_U u_i(x)φ(x) dx for 1 ≤ i ≤ d and for all functions φ ∈ C_c^∞(U). We may denote by ∂u/∂x_i the distributional derivative u_i of u and by ∇u = (∂u/∂x_1, . . . , ∂u/∂x_d) its distributional gradient. In what follows, we denote by |∇u|(x) the Euclidean norm of the gradient vector at x. H^1(U) = W^{1,2}(U) becomes a Banach space endowed with the norm ∥u∥_{W^{1,2}(U)} = [∫_U u^2 dx + ∑_{i=1}^{d} ∫_U (∂u/∂x_i)^2 dx]^{1/2}. .. Minimizing in u with K Fixed Let us assume first that K is fixed, as a closed subset of the open and bounded set Ω ⊂ R^d, and denote by E(u) = μ ∫_{Ω/K} (u − g)^2 dx + ∫_{Ω/K} |∇u|^2 dx, for u ∈ W^{1,2}(Ω/K), where Ω/K is open and bounded, and g ∈ L^2(Ω/K). We have the following classical results obtained as a consequence of the standard method of calculus of variations. Proposition 1 There is a unique minimizer of the problem inf_{u ∈ W^{1,2}(Ω/K)} E(u). (.) Proof [] First, we note that 0 ≤ inf E < +∞, since we can choose u ≡ 0 and E(u) = μ ∫_{Ω/K} g^2(x) dx < +∞. Thus, we can denote by m = inf_u E(u) and let {u_j}_{j≥1} ∈ W^{1,2}(Ω/K) be a minimizing sequence such that lim_{j→∞} E(u_j) = m. Recall that for u, v ∈ L^2,", "title": "" }, { "docid": "91f718a69532c4193d5e06bf1ea19fd3", "text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monte Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.", "title": "" }, { "docid": "bc6c7fcd98160c48cd3b72abff8fad02", "text": "A new concept of formality of linguistic expressions is introduced and argued to be the most important dimension of variation between styles or registers. Formality is subdivided into \"deep\" formality and \"surface\" formality. Deep formality is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. 
This is achieved by explicit and precise description of the elements of the context needed to disambiguate the expression. A formal style is characterized by detachment, accuracy, rigidity and heaviness; an informal style is more flexible, direct, implicit, and involved, but less informative. An empirical measure of formality, the F-score, is proposed, based on the frequencies of different word classes in the corpus. Nouns, adjectives, articles and prepositions are more frequent in formal styles; pronouns, adverbs, verbs and interjections are more frequent in informal styles. It is shown that this measure, though coarse-grained, adequately distinguishes more from less formal genres of language production, for some available corpora in Dutch, French, Italian, and English. A factor similar to the F-score automatically emerges as the most important one from factor analyses applied to extensive data in 7 different languages. Different situational and personality factors are examined which determine the degree of formality in linguistic expression. It is proposed that formality becomes larger when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated. Some empirical evidence and a preliminary theoretical explanation for these propositions is discussed. Short Abstract: The concept of \"deep\" formality is proposed as the most important dimension of variation between language registers or styles. It is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. An empirical measure, the F-score, is proposed, based on the frequencies of different word classes. This measure adequately distinguishes different genres of language production using data for Dutch, French, Italian, and English. Factor analyses applied to data in 7 different languages produce a similar factor as the most important one. Both the data and the theoretical model suggest that formality increases when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated.", "title": "" }, { "docid": "18d4a0b3b6eceb110b6eb13fde6981c7", "text": "We simulate the growth of a benign avascular tumour embedded in normal tissue, including cell sorting that occurs between tumour and normal cells, due to the variation of adhesion between diierent cell types. The simulation uses the Potts Model, an energy minimisation method. Trial random movements of cell walls are checked to see if they reduce the adhesion energy of the tissue. These trials are then accepted with Boltzmann weighted probability. The simulated tumour initially grows exponentially, then forms three concentric shells as the nutrient level supplied to the core by diiusion decreases: the outer shell consists of live proliferating cells, the middle of quiescent cells and the centre is a necrotic core, where the nutrient concentration is below the critical level that sustains life. The growth rate of the tumour decreases at the onset of shell formation in agreement with experimental observation. The tumour eventually approaches a steady state, where the increase in volume due to the growth of the proliferating cells equals the loss of volume due to the disintegration of cells in the necrotic core. The nal thickness of the shells also agrees with experiment.", "title": "" } ]
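As a rough illustration of how a formality score of this kind can be computed from part-of-speech frequencies, the sketch below adds the word classes the passage identifies as formal (nouns, adjectives, prepositions, articles) and subtracts the informal ones (pronouns, verbs, adverbs, interjections), with frequencies expressed as percentages of all tagged words. The exact weighting and the "+100, divide by 2" normalization follow the commonly cited form of the F-score and should be treated as an assumption here rather than a quotation from the paper.

```python
def f_score(pos_counts):
    """Formality F-score sketch from a dict of part-of-speech counts,
    e.g. {"noun": 120, "verb": 80, ...}; returns a value roughly in [0, 100]."""
    total = float(sum(pos_counts.values())) or 1.0
    pct = lambda tag: 100.0 * pos_counts.get(tag, 0) / total
    formal = pct("noun") + pct("adjective") + pct("preposition") + pct("article")
    informal = pct("pronoun") + pct("verb") + pct("adverb") + pct("interjection")
    return (formal - informal + 100.0) / 2.0

# quick check: a noun/preposition-heavy distribution scores higher than a verb/pronoun-heavy one
print(f_score({"noun": 30, "adjective": 12, "preposition": 14, "article": 10,
               "pronoun": 8, "verb": 18, "adverb": 5, "interjection": 1}))
```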
scidocsrr
d98f68cc59d1386a2b1207517090fc87
Improving Question Answering with External Knowledge
[ { "docid": "e79679c3ed82c1c7ab83cfc4d6e0280e", "text": "Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).", "title": "" }, { "docid": "d5d03cdfd3a6d6c2b670794d76e91c8e", "text": "We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/ ̃glai1/data/race/ and the code is available at https://github.com/ cheezer/RACE_AR_baselines.", "title": "" }, { "docid": "fe39547650623fbf86be3da46a6c5a8b", "text": "This paper describes our system for SemEval2018 Task 11: Machine Comprehension using Commonsense Knowledge (Ostermann et al., 2018b). We use Threeway Attentive Networks (TriAN) to model interactions between the passage, question and answers. To incorporate commonsense knowledge, we augment the input with relation embedding from the graph of general knowledge ConceptNet (Speer et al., 2017). As a result, our system achieves state-of-the-art performance with 83.95% accuracy on the official test data. Code is publicly available at https://github.com/ intfloat/commonsense-rc.", "title": "" }, { "docid": "8f3d86a21b8a19c7d3add744c2e5e202", "text": "Question answering (QA) systems are easily distracted by irrelevant or redundant words in questions, especially when faced with long or multi-sentence questions in difficult domains. This paper introduces and studies the notion of essential question terms with the goal of improving such QA solvers. 
We illustrate the importance of essential question terms by showing that humans’ ability to answer questions drops significantly when essential terms are eliminated from questions. We then develop a classifier that reliably (90% mean average precision) identifies and ranks essential terms in questions. Finally, we use the classifier to demonstrate that the notion of question term essentiality allows state-of-the-art QA solvers for elementary-level science questions to make better and more informed decisions, improving performance by up to 5%. We also introduce a new dataset of over 2,200 crowd-sourced essential terms annotated science questions.", "title": "" } ]
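For readers who want to experiment with the essential-term idea from the last passage above, here is a toy stand-in that ranks question terms by inverse document frequency over a reference corpus after removing stopwords, so rare content words float to the top. It is only an illustrative baseline under those assumptions, not the trained classifier the passage describes; the stopword list and the scoring are arbitrary choices.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "what", "which", "in", "for", "on", "does"}

def rank_question_terms(question, corpus):
    """Score each non-stopword question term by IDF over `corpus` (a list of
    documents) and return terms sorted from most to least 'essential'."""
    docs = [set(re.findall(r"[a-z]+", d.lower())) for d in corpus]
    df = Counter(term for doc in docs for term in doc)
    terms = [t for t in re.findall(r"[a-z]+", question.lower()) if t not in STOPWORDS]
    scores = {t: math.log((1 + len(docs)) / (1 + df[t])) for t in terms}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```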
[ { "docid": "69e87ea7f07f96088486b7dd9105841b", "text": "When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.", "title": "" }, { "docid": "432ff163e4dded948aa5a27aa440cd30", "text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.", "title": "" }, { "docid": "e37b3a68c850d1fb54c9030c22b5792f", "text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. 
This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "title": "" }, { "docid": "602ccb25257c6ce6c0bca2cb81c00628", "text": "The detection and tracking of moving vehicles is a necessity for collision-free navigation. In natural unstructured environments, motion-based detection is challenging due to low signal to noise ratio. This paper describes our approach for a 14 km/h fast autonomous outdoor robot that is equipped with a Velodyne HDL-64E S2 for environment perception. We extend existing work that has proven reliable in urban environments. To overcome the unavailability of road network information for background separation, we introduce a foreground model that incorporates geometric as well as temporal cues. Local shape estimates successfully guide vehicle localization. Extensive evaluation shows that the system works reliably and efficiently in various outdoor scenarios without any prior knowledge about the road network. Experiments with our own sensor as well as on publicly available data from the DARPA Urban Challenge revealed more than 96% correctly identified vehicles.", "title": "" }, { "docid": "8b7f931e800cd1ae810453ecbc35b225", "text": "In this paper we present empirical results from a study examining the effects of antenna diversity and placement on vehicle-to-vehicle link performance in vehicular ad hoc networks. The experiments use roof- and in-vehicle mounted omni-directional antennas and IEEE 802.11a radios operating in the 5 GHz band, which is of interest for planned inter-vehicular communication standards. Our main findings are two-fold. First, we show that radio reception performance is sensitive to antenna placement in the 5 Ghz band. Second, our results show that, surprisingly, a packet level selection diversity scheme using multiple antennas and radios, multi-radio packet selection (MRPS), improves performance not only in a fading channel but also in line-of-sight conditions. This is due to propagation being affected by car geometry, leading to the highly non-uniform antenna patterns. These patterns are very sensitive to the exact antenna position on the roof, for example at a transmit power of 40 mW the line-of-sight communication range varied between 50 and 250 m depending on the orientation of the cars. These findings have implications for vehicular MAC protocol design. Protocols may have to cope with an increased number of hidden nodes due to the directional antenna patterns. 
However, car makers can reduce these effects through careful antenna placement and diversity.", "title": "" }, { "docid": "fe48a551dfbe397b7bcf52e534dfcf00", "text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.", "title": "" }, { "docid": "73e4f93a46d8d66599aaaeaf71c8efe2", "text": "The galvanometer-based scanners (GS) are oscillatory optical systems utilized in high-end biomedical technologies. From a control point-of-view the GSs are mechatronic systems (mainly positioning servo-systems) built usually in a close loop structure and controlled by different control algorithms. The paper presents a Model based Predictive Control (MPC) solution for the mobile equipment (moving magnet and galvomirror) of a GS. The development of a high-performance control solution is based to a basic closed loop GS which consists of a PD-L1 controller and a servomotor. The mathematical model (MM) and the parameters of the basic construction are identified using a theoretical approach followed by an experimental identification. The equipment is used in our laboratory for better dynamical performances for biomedical imaging systems. The control solutions proposed are supported by simulations carried out in Matlab/Simulink.", "title": "" }, { "docid": "cb7dda8f4059e5a66e4a6e26fcda601e", "text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. 
Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.", "title": "" }, { "docid": "1ea21d88740aa6b2712205823f141e57", "text": "AIM\nOne of the critical aspects of esthetic dentistry is creating geometric or mathematical proportions to relate the successive widths of the anterior teeth. The golden proportion, the recurring esthetic dental (RED) proportion, and the golden percentage are theories introduced in this field. The aim of this study was to investigate the existence of the golden proportion, RED proportion, and the golden percentage between the widths of the maxillary anterior teeth in individuals with natural dentition.\n\n\nMETHODS AND MATERIALS\nStandardized frontal images of 376 dental student smiles were captured. The images were transferred to a personal computer, the widths of the maxillary anterior teeth were measured, and calculations were made according to each of the above mentioned theories. The data were statistically analyzed using paired student T-test (level of significance P<0.05).\n\n\nRESULTS\nThe golden proportion was found to be accurate between the width of the right central and lateral incisors in 31.3% of men and 27.1% of women. The values of the RED proportion were not constant, and the farther the one moves distally from the midline the higher the values. Furthermore, the results revealed the golden percentage was rather constant in terms of relative tooth width. The width of the central incisor represents 23%, the lateral incisor 15%, and the canine 12% of the width of the six maxillary anterior teeth as viewed from the front.\n\n\nCONCLUSIONS\nBoth the golden proportion and the RED proportion are unsuitable methods to relate the successive widths of the maxillary anterior teeth. However, the golden percentage theory seems to be applicable to relate the successive widths of the maxillary anterior teeth if percentages are adjusted taking into consideration the ethnicity of the population.", "title": "" }, { "docid": "543a0cd5ac9aae173a1af5c3215b002f", "text": "Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P ), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines.", "title": "" }, { "docid": "bbc984f02b81ee66d7dc617ed34a7e98", "text": "Packet losses are common in data center networks, may be caused by a variety of reasons (e.g., congestion, blackhole), and have significant impacts on application performance and network operations. 
Thus, it is important to provide fast detection of packet losses independent of their root causes. We also need to capture both the locations and packet header information of the lost packets to help diagnose and mitigate these losses. Unfortunately, existing monitoring tools that are generic in capturing all types of network events often fall short in capturing losses fast with enough details and low overhead. Due to the importance of loss in data centers, we propose a specific monitoring system designed for loss detection. We propose LossRadar, a system that can capture individual lost packets and their detailed information in the entire network on a fine time scale. Our extensive evaluation on prototypes and simulations demonstrates that LossRadar is easy to implement in hardware switches, achieves low memory and bandwidth overhead, while providing detailed information about individual lost packets. We also build a loss analysis tool that demonstrates the usefulness of LossRadar with a few example applications.", "title": "" }, { "docid": "ee532e8bb51a7b49506df59bd9ad3282", "text": "People learn from tests. Providing tests often enhances retention more than additional study opportunities, but is this testing effect mediated by processes related to retrieval that are fundamentally different from study processes? Some previous studies have reported that testing enhances retention relative to additional studying, but only after a relatively long retention interval. To the extent that this interaction with retention interval dissociates the effects of studying and testing, it may provide crucial evidence for different underlying processes. However, these findings can be questioned because of methodological differences between the study and the test conditions. In two experiments, we eliminated or minimized the confounds that rendered the previous findings equivocal and still obtained the critical interaction. Our results strengthen the evidence for the involvement of different processes underlying the effects of studying and testing, and support the hypothesis that the testing effect is grounded in retrieval-related processes.", "title": "" }, { "docid": "bff21b4a0bc4e7cc6918bc7f107a5ca5", "text": "This paper discusses driving system design based on traffic rules. This allows fully automated driving in an environment with human drivers, without necessarily changing equipment on other vehicles or infrastructure. It also facilitates cooperation between the driving system and the host driver during highly automated driving. The concept, referred to as legal safety, is illustrated for highly automated driving on highways with distance keeping, intelligent speed adaptation, and lane-changing functionalities. Requirements by legal safety on perception and control components are discussed. This paper presents the actual design of a legal safety decision component, which predicts object trajectories and calculates optimal subject trajectories. System implementation on automotive electronic control units and results on vehicle and simulator are discussed.", "title": "" }, { "docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. 
Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" }, { "docid": "16fec520bf539ab23a5164ffef5561b4", "text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" }, { "docid": "76502e21fbb777a3442928897ef271f0", "text": "Staphylococcus saprophyticus (S. saprophyticus) is a Gram-positive, coagulase-negative facultative bacterium belongs to Micrococcaceae family. It is a unique uropathogen associated with uncomplicated urinary tract infections (UTIs), especially cystitis in young women. Young women are very susceptible to colonize this organism in the urinary tracts and it is spread through sexual intercourse. S. saprophyticus is the second most common pathogen after Escherichia coli causing 10-20% of all UTIs in sexually active young women [13]. It contains the urease enzymes that hydrolyze the urea to produce ammonia. The urease activity is the main factor for UTIs infection. Apart from urease activity it has numerous transporter systems to adjust against change in pH, osmolarity, and concentration of urea in human urine [2]. After severe infections, it causes various complications such as native valve endocarditis [4], pyelonephritis, septicemia, [5], and nephrolithiasis [6]. About 150 million people are diagnosed with UTIs each year worldwide [7]. 
Several virulence factors includes due to the adherence to urothelial cells by release of lipoteichoic acid is a surface-associated adhesion amphiphile [8], a hemagglutinin that binds to fibronectin and hemagglutinates sheep erythrocytes [9], a hemolysin; and production of extracellular slime are responsible for resistance properties of S. saprophyticus [1]. Based on literature, S. saprophyticus strains are susceptible to vancomycin, rifampin, gentamicin and amoxicillin-clavulanic, while resistance to other antimicrobials such as erythromycin, clindamycin, fluoroquinolones, chloramphenicol, trimethoprim/sulfamethoxazole, oxacillin, and Abstract", "title": "" }, { "docid": "ceef658faa94ad655521ece5ac5cba1d", "text": "We propose learning a semantic visual feature representation by training a neural network supervised solely by point and object trajectories in video sequences. Currently, the predominant paradigm for learning visual features involves training deep convolutional networks on an image classification task using very large human-annotated datasets, e.g. ImageNet. Though effective as supervision, semantic image labels are costly to obtain. On the other hand, under high enough frame rates, frame-to-frame associations between the same 3D physical point or an object can be established automatically. By transitivity, such associations grouped into tracks can relate object/point appearance across large changes in pose, illumination and camera viewpoint, providing a rich source of invariance that can be used for training. We train a siamese network we call it AssociationNet to discriminate between correct and wrong associations between patches in different frames of a video sequence. We show that AssociationNet learns useful features when used as pretraining for object recognition in static images, and outperforms random weight initialization and alternative pretraining methods.", "title": "" }, { "docid": "d00957d93af7b2551073ba84b6c0f2a6", "text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn", "title": "" }, { "docid": "1c3cf3ccdb3b7129c330499ca909b193", "text": "Procedural methods for animating turbulent fluid are often preferred over simulation, both for speed and for the degree of animator control. 
We offer an extremely simple approach to efficiently generating turbulent velocity fields based on Perlin noise, with a formula that is exactly incompressible (necessary for the characteristic look of everyday fluids), exactly respects solid boundaries (not allowing fluid to flow through arbitrarily-specified surfaces), and whose amplitude can be modulated in space as desired. In addition, we demonstrate how to combine this with procedural primitives for flow around moving rigid objects, vortices, etc.", "title": "" } ]
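The incompressibility trick behind the final passage above can be shown in a few lines: in 2D, defining the velocity as v = (∂ψ/∂y, −∂ψ/∂x) for any smooth scalar potential ψ gives a field whose divergence is identically zero. The sketch below uses central finite differences and a simple analytic stand-in for the Perlin-noise potential; a real implementation would modulate ψ in space and ramp it down near solid boundaries, as the passage describes.

```python
import numpy as np

def curl_velocity(psi, x, y, eps=1e-4):
    """2D 'curl noise': v = (dpsi/dy, -dpsi/dx) is divergence-free for any
    smooth potential psi; derivatives taken by central finite differences."""
    dpsi_dy = (psi(x, y + eps) - psi(x, y - eps)) / (2.0 * eps)
    dpsi_dx = (psi(x + eps, y) - psi(x - eps, y)) / (2.0 * eps)
    return dpsi_dy, -dpsi_dx

# stand-in smooth potential; Perlin noise would be used in practice
psi = lambda x, y: np.sin(1.7 * x) * np.cos(2.3 * y)
u, v = curl_velocity(psi, 0.5, 1.0)
```

Because the velocity is obtained by differentiating the potential, the amplitude of ψ can be modulated per point before taking the curl without breaking the divergence-free property.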
scidocsrr
8a554c3c8fa54e27e80b2a2fb5b22d44
Near-Optimal Algorithms for the Assortment Planning Problem Under Dynamic Substitution and Stochastic Demand
[ { "docid": "650209a7310ce7506f6384ad42db44f3", "text": "In this paper, we examine the nature of optimal inventory policies in a system where a retailer manages substitutable products. We first consider a system with two products 1 and 2 whose total demand is D and individual demand proportions are p and (1-p). A fixed proportion of the unsatisfied customers for 1(2) will purchase item 2 (1), if it is available in inventory. For the single period case, we show that the optimal inventory levels of the two items can be computed easily and follow what we refer to as \"partially decoupled\" policies, i.e. base stock policies that are not state dependent, in certain critical regions of interest both when D is known and random. Furthermore, we show that such a partially decoupled base-stock policy is optimal even in a multi-period version of the problem for known D for a wide range of parameter values. Using a numerical study, we show that heuristics based on the de-coupled inventory policies perform well in conditions more general than the ones assumed to obtain the analytical results. The analytical and numerical results suggest that the approach presented here is most valuable in retail settings for product categories where there is a moderate level of substitution between items in the category, demand variation at the category level is not too high and service levels are high.", "title": "" } ]
[ { "docid": "6b7de13e2e413885e0142e3b6bf61dc9", "text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.", "title": "" }, { "docid": "13091eb3775715269b7bee838f0a6b00", "text": "Smartphones can now connect to a variety of external sensors over wired and wireless channels. However, ensuring proper device interaction can be burdensome, especially when a single application needs to integrate with a number of sensors using different communication channels and data formats. This paper presents a framework to simplify the interface between a variety of external sensors and consumer Android devices. The framework simplifies both application and driver development with abstractions that separate responsibilities between the user application, sensor framework, and device driver. These abstractions facilitate a componentized framework that allows developers to focus on writing minimal pieces of sensor-specific code enabling an ecosystem of reusable sensor drivers. The paper explores three alternative architectures for application-level drivers to understand trade-offs in performance, device portability, simplicity, and deployment ease. We explore these tradeoffs in the context of four sensing applications designed to support our work in the developing world. They highlight a range of sensor usage models for our application-level driver framework that vary data types, configuration methods, communication channels, and sampling rates to demonstrate the framework's effectiveness.", "title": "" }, { "docid": "ce63aad5288d118eb6ca9d99b96e9cac", "text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. 
After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.", "title": "" }, { "docid": "ee9730fa0fde945d70130bcf33960608", "text": "An operational definition offered in this paper posits learning as a multi-dimensional and multi-phase phenomenon occurring when individuals attempt to solve what they view as a problem. To model someone’s learning accordingly to the definition, it suffices to characterize a particular sequence of that person’s disequilibrium–equilibrium phases in terms of products of a particular mental act, the characteristics of the mental act inferred from the products, and intellectual and psychological needs that instigate or result from these phases. The definition is illustrated by analysis of change occurring in three thinking-aloud interviews with one middle-school teacher. The interviews were about the same task: “Make up a word problem whose solution may be found by computing 4/5 divided by 2/3.” © 2010 Elsevier Inc. All rights reserved. An operational definition is a showing of something—such as a variable, term, or object—in terms of the specific process or set of validation tests used to determine its presence and quantity. Properties described in this manner must be publicly accessible so that persons other than the definer can independently measure or test for them at will. An operational definition is generally designed to model a conceptual definition (Wikipedia)", "title": "" }, { "docid": "4d8f38413169a572c0087fd180a97e44", "text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{ON}}$ </tex-math></inline-formula>) of these negative capacitance CNFETs improves by <inline-formula> <tex-math notation=\"LaTeX\">$2.1\\times $ </tex-math></inline-formula> versus baseline CNFETs, (i.e., without negative capacitance) for the same OFF-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{OFF}}$ </tex-math></inline-formula>). 
This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.", "title": "" }, { "docid": "bf0471fc0c513e9771cbedecc39110e1", "text": "For emotion recognition, we selected pitch, log energy, formant, mel-band energies, and mel frequency cepstral coefficients (MFCCs) as the base features, and added velocity/acceleration of pitch and MFCCs to form feature streams. We extracted statistics used for discriminative classifiers, assuming that each stream is a one-dimensional signal. Extracted features were analyzed by using quadratic discriminant analysis (QDA) and support vector machine (SVM). Experimental results showed that pitch and energy were the most important factors. Using two different kinds of databases, we compared emotion recognition performance of various classifiers: SVM, linear discriminant analysis (LDA), QDA and hidden Markov model (HMM). With the text-independent SUSAS database, we achieved the best accuracy of 96.3% for stressed/neutral style classification and 70.1% for 4-class speaking style classification using Gaussian SVM, which is superior to the previous results. With the speaker-independent AIBO database, we achieved 42.3% accuracy for 5-class emotion recognition.", "title": "" }, { "docid": "7ad76f9f584b33ffd85b8e5c3bf50e92", "text": "Deep residual learning (ResNet) (He et al., 2016) is a new method for training very deep neural networks using identity mapping for shortcut connections. ResNet has won the ImageNet ILSVRC 2015 classification task, and achieved state-of-theart performances in many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well understood. In this paper, we design a novel convolutional neural network (CNN) with residual learning, and investigate its impacts on the task of distantly supervised noisy relation extraction. In contradictory to popular beliefs that ResNet only works well for very deep networks, we found that even with 9 layers of CNNs, using identity mapping could significantly improve the performance for distantly-supervised relation extraction.", "title": "" }, { "docid": "f058b13088ca0f38e350cb8c8ffb0c0f", "text": "In this paper, we propose a representation learning research framework for document-level sentiment analysis. Given a document as the input, document-level sentiment analysis aims to automatically classify its sentiment/opinion (such as thumbs up or thumbs down) based on the textural information. Despite the success of feature engineering in many previous studies, the hand-coded features do not well capture the semantics of texts. In this research, we argue that learning sentiment-specific semantic representations of documents is crucial for document-level sentiment analysis. We decompose the document semantics into four cascaded constitutes: (1) word representation, (2) sentence structure, (3) sentence composition and (4) document composition. Specifically, we learn sentiment-specific word representations, which simultaneously encode the contexts of words and the sentiment supervisions of texts into the continuous representation space. According to the principle of compositionality, we learn sentiment-specific sentence structures and sentence-level composition functions to produce the representation of each sentence based on the representations of the words it contains. 
The semantic representations of documents are obtained through document composition, which leverages the sentiment-sensitive discourse relations and sentence representations.", "title": "" }, { "docid": "eaf2a943ca3cf2b837eb5c1cae29a37a", "text": "The natural immune system is a subject of great research interest because of its powerful information processing capabilities. From an informationprocessing perspective, the immune system is a highly parallel system. It provides an excellent model of adaptive processes operating at the local level and of useful behavior emerging at the global level. Moreover, it uses learning, memory, and assodative retrieval to salve recognition and classification tasks. This chapter illustrates different immunological mechanisms and their relation to information processing, and provides an overview of the rapidly emerging field called Artificial Immune Systems. These techniques have been successfully used in pattern recognition, fault detection and diagnosis, computer security, and a variety of other applications.", "title": "" }, { "docid": "abe729a351eb9dbc1688abe5133b28c2", "text": "C. H. Tian B. K. Ray J. Lee R. Cao W. Ding This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a serviceanalysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.", "title": "" }, { "docid": "821b6ce6e6d51e9713bb44c4c9bf8cf0", "text": "Rapidly destructive arthritis (RDA) of the shoulder is a rare disease. Here, we report two cases, with different destruction patterns, which were most probably due to subchondral insufficiency fractures (SIFs). Case 1 involved a 77-year-old woman with right shoulder pain. Rapid destruction of both the humeral head and glenoid was seen within 1 month of the onset of shoulder pain. We diagnosed shoulder RDA and performed a hemiarthroplasty. Case 2 involved a 74-year-old woman with left shoulder pain. Humeral head collapse was seen within 5 months of pain onset, without glenoid destruction. Magnetic resonance imaging showed a bone marrow edema pattern with an associated subchondral low-intensity band, typical of SIF. Total shoulder arthroplasty was performed in this case. Shoulder RDA occurs as a result of SIF in elderly women; the progression of the joint destruction is more rapid in cases with SIFs of both the humeral head and the glenoid. Although shoulder RDA is rare, this disease should be included in the differential diagnosis of acute onset shoulder pain in elderly female patients with osteoporosis and persistent joint effusion.", "title": "" }, { "docid": "ece5b4cecc78b115d6e8824f91a45dc6", "text": "The ability to edit materials of objects in images is desirable by many content creators. However, this is an extremely challenging task as it requires to disentangle intrinsic physical properties of an image. 
We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task. Specifically, given a single image, the network first predicts intrinsic properties, i.e. shape, illumination, and material, which are then provided to a rendering layer. This layer performs in-network image synthesis, thereby enabling the network to understand the physics behind the image formation process. The proposed rendering layer is fully differentiable, supports both diffuse and specular materials, and thus can be applicable in a variety of problem settings. We demonstrate a rich set of visually plausible material editing examples and provide an extensive comparative study.", "title": "" }, { "docid": "0a2cba5e6d5b6b467e34e79ee099f509", "text": "Wearable devices are used in various applications to collect information including step information, sleeping cycles, workout statistics, and health-related information. Due to the nature and richness of the data collected by such devices, it is important to ensure the security of the collected data. This paper presents a new lightweight authentication scheme suitable for wearable device deployment. The scheme allows a user to mutually authenticate his/her wearable device(s) and the mobile terminal (e.g., Android and iOS device) and establish a session key among these devices (worn and carried by the same user) for secure communication between the wearable device and the mobile terminal. The security of the proposed scheme is then demonstrated through the broadly accepted real-or-random model, as well as using the popular formal security verification tool, known as the Automated validation of Internet security protocols and applications. Finally, we present a comparative summary of the proposed scheme in terms of the overheads such as computation and communication costs, security and functionality features of the proposed scheme and related schemes, and also the evaluation findings from the NS2 simulation.", "title": "" }, { "docid": "0f2023682deaf2eb70c7becd8b3375dd", "text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.", "title": "" }, { "docid": "78fa87e54c9f6c49101e0079013792e2", "text": "The NCSM Journal of Mathematics Education Leadership is published at least twice yearly, in the spring and fall. Permission to photocopy material from the NCSM Journal of Mathematics Education Leadership is granted for instructional use when the material is to be distributed free of charge (or at cost only), provided that it is duplicated with the full credit given to the authors of the materials and the NCSM Journal of Mathematics Education Leadership. 
This permission does not apply to copyrighted articles reprinted in the NCSM Journal of Mathematics Education Leadership. The editors of the NCSM Journal of Mathematics Education Leadership are interested in manuscripts that address concerns of leadership in mathematics rather than those of content or delivery. Editors are interested in publishing articles from a broad spectrum of formal and informal leaders who practice at local, regional, national, and international levels. Categories for submittal include: Note: The last two categories are intended for short pieces of 2 to 3 pages in length. Submittal of items should be done electronically to the Journal editor. Do not put any author identification in the body of the item being submitted, but do include author information as you would like to see it in the Journal. Items submitted for publication will be reviewed by two members of the NCSM Review Panel and one editor with comments and suggested revisions sent back to the author at least six weeks before publication. Final copy must be agreed to at least three weeks before publication. Cover image: A spiral vortex generated with fractal algorithms • Strengthening mathematics education leadership through the dissemination of knowledge related to research, issues, trends, programs, policy, and practice in mathematics education • Fostering inquiry into key challenges of mathematics education leadership • Raising awareness about key challenges of mathematics education leadership, in order to influence research, programs, policy, and practice • Engaging the attention and support of other education stakeholders, and business and government, in order to broaden as well as strengthen mathematics education leadership E arlier this year, NCSM released a new mission and vision statement. Our mission speaks to our commitment to \" support and sustain improved student achievement through the development of leadership skills and relationships among current and future mathematics leaders. \" Our vision statement challenges us as the leaders in mathematics education to collaborate with all stakeholders and develop leadership skills that will lead to improved …", "title": "" }, { "docid": "60b1f54b968127c1673fbaae5ae03463", "text": "The wireless networking environment presents formidable challenges to the study of broadcasting and multicasting problems. After addressing the characteristics of wireless networks that distinguish them from wired networks, we introduce and evaluate algorithms for tree construction in infrastructureless, all-wireless applications. The performance metric used to evaluate broadcast and multicast trees is energyefficiency. We develop the Broadcast Incremental Power Algorithm, and adapt it to multicast operation as well. This algorithm exploits the broadcast nature of the wireless communication environment, and addresses the need for energy-efficient operation. We demonstrate that our algorithm provides better performance than algorithms that have been developed for the link-based, wired environment.", "title": "" }, { "docid": "74953f4d53af99b937d8128b7ab8f64c", "text": "This paper presents a force-based control mode for a hand exoskeleton. This device has been developed with focus on support of the rehabilitation process after hand injuries or strokes. As the device is designed for the later use on patients, which have limited hand mobility, fast undesired movements have to be averted. 
Safety precautions in the hardware and software design of the system must be taken to ensure this. The construction allows controlling motions of the finger joints. However, due to friction in gears and mechanical construction, it is not possible to move finger joints within the construction without help of actuators. Therefore force sensors are integrated into the construction to sense force exchanged between human and exoskeleton. These allow the human to control the movements of the hand exoskeleton, which is useful to teach new trajectories or can be used for diagnostic purposes. The force control scheme presented in this paper uses the force sensor values to generate a trajectory which is executed by a position control loop based on sliding mode control", "title": "" }, { "docid": "5647fc18a3f5b319a2b4c16f7fea3d39", "text": "This paper presents an abstract view of mutation analysis. Mutation was originally thought of as making changes to program source, but similar kinds of changes have been applied to other artifacts, including program specifications, XML, and input languages. This paper argues that mutation analysis is actually a way to modify any software artifact based on its syntactic description, and is in the same family of test generation methods that create inputs from syntactic descriptions. The essential characteristic of mutation is that a syntactic description such as a grammar is used to create tests. We call this abstract view grammar-based testing, and view it as an interface, which mutation analysis implements. This shift in view allows mutation to be defined in a general way, yielding three benefits. First, it provides a simpler way to understand mutation. Second, it makes it easier to develop future applications of mutation analysis, such as finite state machines and use case collaboration diagrams. The third benefit, which due to space limitations is not explored in this paper, is ensuring that existing techniques are complete according to the criteria defined here.", "title": "" }, { "docid": "d1d5d161f342a30c9b811fc90df7345b", "text": "BACKGROUND\nNosocomial infections are widespread and are important contributors to morbidity and mortality. Prevalence studies are useful in revealing the prevalence of hospital-acquired infections.\n\n\nOBJECTIVES\nTo determine the bacterial pathogens associated with hospital acquired surgical site infection (SSI) and urinary tract infection (UTI) and assess their susceptibility patterns in patients admitted in Mekelle Hospital in Ethiopia.\n\n\nMETHODS\nFrom November 2005 to April 2006 a prospective cross sectional study was conducted at Mekelle Hospital, Tigray region, North Ethiopia. The study population comprised of a total of 246 informed and consented adult patients hospitalized for surgical (n = 212) and Gynecology and Obstetrics cases (n = 34).\n\n\nRESULTS\nOf the 246 admitted patients, 68 (27.6%) developed nosocomial infections (SSI and/or nosocomial UTI) based on the clinical evaluations, and positive wound and urine culture results. Gram negative bacteria were predominantly isolated with a rate of 18/34 (53%) and 34/41 (83%) from SSI and UTI respectively. 
Most of the isolates from UTI have high rates of resistance (> 80%) to the commonly used antibiotics such as ampicillin, amoxicillin, chloramphenicol, gentamicin, streptomycin, and trimethoprim-sulphamethoxazole; and in isolates from SSI to amoxicillin and trimethoprim-sulphamethoxazole.\n\n\nCONCLUSIONS\nThe results showed that the prevalence of HAIs (SSI and nosocomial UTI) in the Hospital is high when compared to previous Ethiopian and other studies despite the use of prophylactic antibiotics. The pathogens causing SSI and UTI are often resistant to commonly used antimicrobials. The findings underscore the need for an infection control system and surveillance program in the hospital and to monitor antimicrobial resistance patterns for the use of prophylactic and therapeutic antibiotics.", "title": "" }, { "docid": "226607ad7be61174871fcab384ac31b4", "text": "Traditional image stitching using parametric transforms, such as homography, only produces perceptually correct composites for planar scenes or parallax-free camera motion between source frames. This limits mosaicing to source images taken from the same physical location. In this paper, we introduce a smoothly varying affine stitching field which is flexible enough to handle parallax while retaining the good extrapolation and occlusion handling properties of parametric transforms. Our algorithm, which jointly estimates both the stitching field and correspondence, permits the stitching of general motion source images, provided the scenes do not contain abrupt protrusions.", "title": "" } ]
scidocsrr
912991cba9804e1d19cdac74ab16bdd1
Sliding-mode controller for four-wheel-steering vehicle: Trajectory-tracking problem
[ { "docid": "6fdee3d247a36bc7d298a7512a11118a", "text": "Fully automatic driving is emerging as the approach to dramatically improve efficiency (throughput per unit of space) while at the same time leading to the goal of zero accidents. This approach, based on fully automated vehicles, might improve the efficiency of road travel in terms of space and energy used, and in terms of service provided as well. For such automated operation, trajectory planning methods that produce smooth trajectories, with low level associated accelerations and jerk for providing human comfort, are required. This paper addresses this problem proposing a new approach that consists of introducing a velocity planning stage in the trajectory planner. Moreover, this paper presents the design and simulation evaluation of trajectory-tracking and path-following controllers for autonomous vehicles based on sliding mode control. A new design of sliding surface is proposed, such that lateral and angular errors are internally coupled with each other (in cartesian space) in a sliding surface leading to convergence of both variables.", "title": "" } ]
[ { "docid": "2c91e6ca6cf72279ad084c4a51b27b1c", "text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.", "title": "" }, { "docid": "46360fec3d7fa0adbe08bb4b5bb05847", "text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.", "title": "" }, { "docid": "363c1ecd086043311f16b53b20778d51", "text": "One recent development of cultural globalization emerges in the convergence of taste in media consumption within geo-cultural regions, such as Latin American telenovelas, South Asian Bollywood films and East Asian trendy dramas. Originating in Japan, the so-called trendy dramas (or idol dramas) have created a craze for Japanese commodities in its neighboring countries (Ko, 2004). Following this Japanese model, Korea has also developed as a stronghold of regional exports, ranging from TV programs, movies and pop music to food, fashion and tourism. The fondness for all things Japanese and Korean in East Asia has been vividly captured by such buzz phrases as Japan-mania (hari in Chinese) and the Korean wave (hallyu in Korean and hanliu in Chinese). These two phenomena underscore how popular culture helps polish the image of a nation and thus strengthens its economic competitiveness in the global market. Consequently, nationbranding has become incorporated into the project of nation-building in light of globalization. However, Japan’s cultural spread and Korea’s cultural expansion in East Asia are often analysed from angles that are polar opposites. Scholars suggest that Japan-mania is initiated by the ardent consumers of receiving countries (Nakano, 2002), while the Korea wave is facilitated by the Korean state in order to boost its culture industry (Ryoo, 2008). Such claims are legitimate but neglect the analogues of these two phenomena. 
This article examines the parallel paths through which Japan-mania and the Korean wave penetrate into people’s everyday practices in Taiwan – arguably one of the first countries to be swept by these two trends. My aim is to illuminate the processes in which nation-branding is not only promoted by a nation as an international marketing strategy, but also appropriated by a receiving country as a pattern of consumption. Three seemingly contradictory arguments explain why cultural products ‘sell’ across national borders: cultural transparency, cultural difference and hybridization. First, cultural exports targeting the global market are rarely culturally specific so that they allow worldwide audiences to ‘project [into them] indigenous values, beliefs, rites, and rituals’ Media, Culture & Society 33(1) 3 –18 © The Author(s) 2011 Reprints and permission: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0163443710379670 mcs.sagepub.com", "title": "" }, { "docid": "72a1798a864b4514d954e1e9b6089ad8", "text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.", "title": "" }, { "docid": "01edfc6eb157dc8cf2642f58cf3aba25", "text": "Understanding developmental processes, especially in non-model crop plants, is extremely important in order to unravel unique mechanisms regulating development. Chickpea (C. arietinum L.) seeds are especially valued for their high carbohydrate and protein content. Therefore, in order to elucidate the mechanisms underlying seed development in chickpea, deep sequencing of transcriptomes from four developmental stages was undertaken. In this study, next generation sequencing platform was utilized to sequence the transcriptome of four distinct stages of seed development in chickpea. About 1.3 million reads were generated which were assembled into 51,099 unigenes by merging the de novo and reference assemblies. Functional annotation of the unigenes was carried out using the Uniprot, COG and KEGG databases. RPKM based digital expression analysis revealed specific gene activities at different stages of development which was validated using Real time PCR analysis. More than 90% of the unigenes were found to be expressed in at least one of the four seed tissues. 
DEGseq was used to determine differentially expressed genes which revealed that only 6.75% of the unigenes were differentially expressed at various stages. Homology based comparison revealed 17.5% of the unigenes to be putatively seed specific. Transcription factors were predicted based on HMM profiles built using TF sequences from five legume plants and analyzed for their differential expression during progression of seed development. Expression analysis of genes involved in biosynthesis of important secondary metabolites suggested that chickpea seeds can serve as a good source of antioxidants. Since transcriptomes are a valuable source of molecular markers like simple sequence repeats (SSRs), about 12,000 SSRs were mined in chickpea seed transcriptome and few of them were validated. In conclusion, this study will serve as a valuable resource for improved chickpea breeding.", "title": "" }, { "docid": "b4978b2fbefc79fba6e69ad8fd55ebf9", "text": "This paper proposes an approach based on Least Squares Support Vector Machines (LS-SVMs) for solving second order partial differential equations (PDEs) with variable coefficients. Contrary to most existing techniques, the proposed method provides a closed form approximate solution. The optimal representation of the solution is obtained in the primal-dual setting. The model is built by incorporating the initial/boundary conditions as constraints of an optimization problem. The developed method is well suited for problems involving singular, variable and constant coefficients as well as problems with irregular geometrical domains. Numerical results for linear and nonlinear PDEs demonstrate the efficiency of the proposed method over existing methods.", "title": "" }, { "docid": "9c349ef0f3a48eaeaf678b8730d4b82c", "text": "This paper discusses the effectiveness of the EEG signal for human identification using four or fewer channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because the signal varies from person to person and is impossible to replicate and steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, the neural networks algorithm was used to classify the feature vectors. Results show that whether or not the subjects’ eyes were open is insignificant for a 4-channel biometrics system with a classification rate of 81%. However, for a 2-channel system, the P4 channel should not be included if data is acquired with the subjects’ eyes open. It was observed that for a 2-channel system using only the C3 and C4 channels, a classification rate of 71% was achieved. Keywords—Biometric, EEG, Wavelet Packet Decomposition, Neural Networks", "title": "" }, { "docid": "de2ed315762d3f0ac34fe0b77567b3a2", "text": "A study in vitro of specimens of human aortic and common carotid arteries was carried out to determine the feasibility of direct measurement (i.e., not from residual lumen) of arterial wall thickness with B mode real-time imaging. Measurements in vivo by the same technique were also obtained from common carotid arteries of 10 young normal male subjects. Aortic samples were classified as class A (relatively normal) or class B (with one or more atherosclerotic plaques).
In all class A and 85% of class B arterial samples a characteristic B mode image composed of two parallel echogenic lines separated by a hypoechoic space was found. The distance between the two lines (B mode image of intimal + medial thickness) was measured and correlated with the thickness of different combinations of tunicae evaluated by gross and microscopic examination. On the basis of these findings and the results of dissection experiments on the intima and adventitia we concluded that results of B mode imaging of intimal + medial thickness did not differ significantly from the intimal + medial thickness measured on pathologic examination. With respect to the accuracy of measurements obtained by B mode imaging as compared with pathologic findings, we found an error of less than 20% for measurements in 77% of normal and pathologic aortic walls. In addition, no significant difference was found between B mode-determined intimal + medial thickness in the common carotid arteries evaluated in vitro and that determined by this method in vivo in young subjects, indicating that B mode imaging represents a useful approach for the measurement of intimal + medial thickness of human arteries in vivo.", "title": "" }, { "docid": "67dedca1dbdf5845b32c74e17fc42eb6", "text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.", "title": "" }, { "docid": "fec3feb40d363535955a9ac4234c4126", "text": "This article presents metrics from two Hewlett-Packard (HP) reuse programs that document the improved quality, increased productivity, shortened time-to-market, and enhanced economics resulting from reuse. Work products are the products or by-products of the software-development process: for example, code, design, and test plans. Reuse is the use of these work products without modification in the development of other software. Leveraged reuse is modifying existing work products to meet specific system requirements. A producer is a creator of reusable work products, and the consumer is someone who uses them to create other software. Time-to-market is the time it takes to deliver a product from the time it is conceived. Experience with reuse has been largely positive. Because work products are used multiple times, the accumulated defect fixes result in a higher quality work product. Because the work products have already been created, tested, and documented, productivity increases because consumers of reusable work products need to do less work. However, increased productivity from reuse does not necessarily shorten time-to-market. To reduce time-to-market, reuse must be used effectively on the critical path of a development project. Finally, we have found that reuse allows an organization to use personnel more effectively because it leverages expertise. However, software reuse is not free. It requires resources to create and maintain reusable work products, a reuse library, and reuse tools. 
To help evaluate the costs and benefits of reuse, we have developed an economic analysis method, which we have applied to multiple reuse programs at HP.<<ETX>>", "title": "" }, { "docid": "bb13ad5b41abbf80f7e7c70a9098cd15", "text": "OBJECTIVE\nThis study assessed the psychological distress in Spanish college women and analyzed it in relation to sociodemographic and academic factors.\n\n\nPARTICIPANTS AND METHODS\nThe authors selected a stratified random sampling of 1,043 college women (average age of 22.2 years). Sociodemographic and academic information were collected, and psychological distress was assessed with the Symptom Checklist-90-Revised.\n\n\nRESULTS\nThis sample of college women scored the highest on the depression dimension and the lowest on the phobic anxiety dimension. The sample scored higher than women of the general population on the dimensions of obsessive-compulsive, interpersonal sensitivity, paranoid ideation, psychoticism, and on the Global Severity Index. Scores in the sample significantly differed based on age, relationship status, financial independence, year of study, and area of study.\n\n\nCONCLUSION\nThe results indicated an elevated level of psychological distress among college women, and therefore college health services need to devote more attention to their mental health.", "title": "" }, { "docid": "69d32f5e6a6612770cd50b20e5e7f802", "text": "In this paper we present an approach for efficiently retrieving the most similar image, based on point-to-point correspondences, within a sequence that has been acquired through continuous camera movement. Our approach is entailed to the use of standardized binary feature descriptors and exploits the temporal form of the input data to dynamically adapt the search structure. While being straightforward to implement, our method exhibits very fast response times and its Precision/Recall rates compete with state of the art approaches. Our claims are supported by multiple large scale experiments on publicly available datasets.", "title": "" }, { "docid": "6e00567c5c33d899af9b5a67e37711a3", "text": "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as computeor memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work. 1 Research Direction The ability to turn programmed functions or methods into ready-to-use cloud services is leading to a seemingly serverless development and deployment experience for application software engineers [1]. 
Without the necessity to allocate resources beforehand, prototyping new features and workflows becomes faster and more convenient to application service providers. These advantages have given boost to an industry trend consequently called Serverless Computing. The more precise, almost overlapping term in accordance with Everything-asa-Service (XaaS) cloud computing taxonomies is Function-as-a-Service (FaaS) [4]. In the FaaS layer, functions, either on the programming language level or as abstract concept around binary implementations, are executed synchronously or asynchronously through multi-protocol triggers. Function instances are provisioned on demand through coldstart or warmstart of the implementation in conjunction with an associated configuration in few milliseconds, elastically scaled as needed, and charged per invocation and per product of period of time and resource usage, leading to an almost perfect pay-as-you-go utility pricing model [11]. FaaS is gaining traction primarily in three areas. First, in Internet-of-Things applications where connected devices emit data sporadically. Second, for web applications with light-weight backend tasks. Third, as glue code between other cloud computing services. In contrast to the industrial popularity, no work is known to us which explores its potential for scientific and high-performance computing applications with more demanding execution requirements. From a cloud economics and strategy perspective, FaaS is a refinement of the platform layer (PaaS) with particular tools and interfaces. Yet from a software engineering and deployment perspective, functions are complementing other artefact types which are deployed into PaaS or underlying IaaS environments. Fig. 1 explains this positioning within the layered IaaS, PaaS and SaaS service classes, where the FaaS runtime itself is subsumed under runtime stacks. Performing experimental or computational science research with FaaS implies that the two roles shown, end user and application engineer, are adopted by a single researcher or a team of researchers, which is the setting for our research. Fig. 1. Positioning of FaaS in cloud application development The necessity to conduct research on FaaS for further application domains stems from the unique execution characteristics. Service instances are heuristically stateless, ephemeral, and furthermore limited in resource allotment and execution time. They are moreover isolated from each other and from the function management and control plane. In public commercial offerings, they are billed in subsecond intervals and terminated after few minutes, but as with any cloud application, private deployments are also possible. Hence, there is a trade-off between advantages and drawbacks which requires further analysis. For example, existing parallelisation frameworks cannot easily be used at runtime as function instances can only, in limited ways, invoke other functions without the ability to configure their settings. Instead, any such parallelisation needs to be performed before deployment with language-specific tools such as Pydron for Python [10] or Calvert’s compiler for Java [3]. For resourceand time-demanding applications, no special-purpose FaaS instances are offered by commercial cloud providers. This is a surprising observation given the multitude of options in other cloud compute services beyond general-purpose offerings, especially on the infrastructure level (IaaS). 
These include instance types optimised for data processing (with latest-generation processors and programmable GPUs), for memory allocation, and for non-volatile storage (with SSDs). Amazon Web Services (AWS) alone offers 57 different instance types. Our work is therefore concerned with the assessment of how current generic one-size-fits-all FaaS offerings handle scientific computing workloads, whether the proliferation of specialised FaaS instance types can be expected and how they would differ from commonly offered IaaS instance types. In this paper, we contribute specifically (i) a refined view on how software can be made fitting into special-purpose FaaS contexts with a high degree of automation through a process named FaaSification, and (ii) concepts and tools to execute such functions in constrained environments. In the remainder of the paper, we first present background information about FaaS runtimes, including our own prototypes which allow for providerindependent evaluations. Subsequently, we present four domain-specific scientific experiments conducted using FaaS to gain broad knowledge about resource requirements beyond general-purpose instances. We summarise the findings and reason about the implications for future scientific computing infrastructures. 2 Background on Function-as-a-Service 2.1 Programming Models and Runtimes The characteristics of function execution depend primarily on the FaaS runtime in use. There are broadly three categories of runtimes: 1. Proprietary commercial services, such as AWS Lambda, Google Cloud Functions, Azure Functions and Oracle Functions. 2. Open source alternatives with almost matching interfaces and functionality, such as Docker-LambCI, Effe, Google Cloud Functions Emulator and OpenLambda [6], some of which focus on local testing rather than operation. 3. Distinct open source implementations with unique designs, such as Apache OpenWhisk, Kubeless, IronFunctions and Fission, some of which are also available as commercial services, for instance IBM Bluemix OpenWhisk [5]. The uniqueness is a consequence of the integration with other cloud stacks (Kubernetes, OpenStack), the availability of web and command-line interfaces, the set of triggers and the level of isolation in multi-tenant operation scenarios, which is often achieved through containers. In addition, due to the often non-trivial configuration of these services, a number of mostly service-specific abstraction frameworks have become popular among developers, such as PyWren, Chalice, Zappa, Apex and the Serverless Framework [8]. The frameworks and runtimes differ in their support for programming languages, but also in the function signatures, parameters and return values. Hence, a comparison of the entire set of offerings requires a baseline. The research in this paper is congruously conducted with the mentioned commercial FaaS providers as well as with our open-source FaaS tool Snafu which allows for managing, executing and testing functions across provider-specific interfaces [14]. The service ecosystem relationship between Snafu and the commercial FaaS providers is shown in Fig. 2. Snafu is able to import services from three providers (AWS Lambda, IBM Bluemix OpenWhisk, Google Cloud Functions) and furthermore offers a compatible control plane to all three of them in its current implementation version. At its core, it contains a modular runtime environment with prototypical maturity for functions implemented in JavaScript, Java, Python and C. 
Most importantly, it enables repeatable research as it can be deployed as a container, in a virtual machine or on a bare metal workstation. Notably absent from the categories above are FaaS offerings in e-science infrastructures and research clouds, despite the programming model resembling widely used job submission systems. We expect our practical research contributions to overcome this restriction in a vendor-independent manner. Snafu, for instance, is already available as an alpha-version launch profile in the CloudLab testbed federated across several U.S. installations with a total capacity of almost 15000 cores [12], as well as in EGI’s federated cloud across Europe. Fig. 2. Snafu and its ecosystem and tooling Using Snafu, it is possible to adhere to the diverse programming conventions and execution conditions at commercial services while at the same time controlling and lifting the execution restrictions as necessary. In particular, it is possible to define memory-optimised, storage-optimised and compute-optimised execution profiles which serve to conduct the anticipated research on generic (general-purpose) versus specialised (special-purpose) cloud offerings for scientific computing. Snafu can execute in single process mode as well as in a loadbalancing setup where each request is forwarded by the master instance to a slave instance which in turn executes the function natively, through a languagespecific interpreter or through a container. Table 1 summarises the features of selected FaaS runtimes. Table 1. FaaS runtimes and their features Runtime Languages Programming model Import/Export AWS Lambda JavaScript, Python, Java, C# Lambda – Google Cloud Functions JavaScrip", "title": "" }, { "docid": "057621c670a9b7253ba829210c530dca", "text": "Actual challenges in production are individualization and short product lifecycles. To achieve this, the product development and the production planning must be accelerated. In some cases specialized production machines are engineered for automating production processes for a single product. Regarding the engineering of specialized production machines, there is often a sequential process starting with the mechanics, proceeding with the electrics and ending with the automation design. To accelerate this engineering process the different domains have to be parallelized as far as possible (Schlögl, 2008). Thereby the different domains start detailing in parallel after the definition of a common concept. The system integration follows the detailing with the objective to verify the system including the PLC-code. Regarding production machines, the system integration is done either by commissioning of the real machine or by validating the PLCcode against a model of the machine, so called virtual commissioning.", "title": "" }, { "docid": "a6499aad878777373006742778145ddb", "text": "The very term 'Biotechnology' elicits a range of emotions, from wonder and awe to downright fear and hostility. This is especially true among non-scientists, particularly in respect of agricultural and food biotechnology. These emotions indicate just how poorly understood agricultural biotechnology is and the need for accurate, dispassionate information in the public sphere to allow a rational public debate on the actual, as opposed to the perceived, risks and benefits of agricultural biotechnology. 
This review considers first the current state of public knowledge on agricultural biotechnology, and then explores some of the popular misperceptions and logical inconsistencies in both Europe and North America. I then consider the problem of widespread scientific illiteracy, and the role of the popular media in instilling and perpetuating misperceptions. The impact of inappropriate efforts to provide 'balance' in a news story, and of belief systems and faith also impinges on public scientific illiteracy. Getting away from the abstract, we explore a more concrete example of the contrasting approach to agricultural biotechnology adoption between Europe and North America, in considering divergent approaches to enabling coexistence in farming practices. I then question who benefits from agricultural biotechnology. Is it only the big companies, or is it society at large--and the environment--also deriving some benefit? Finally, a crucial aspect in such a technologically complex issue, ordinary and intelligent non-scientifically trained consumers cannot be expected to learn the intricacies of the technology to enable a personal choice to support or reject biotechnology products. The only reasonable and pragmatic alternative is to place trust in someone to provide honest advice. But who, working in the public interest, is best suited to provide informed and accessible, but objective, advice to wary consumers?", "title": "" }, { "docid": "b86ab15486581bbf8056e4f1d30eb4e5", "text": "Existing peer-to-peer publish-subscribe systems rely on structured-overlays and rendezvous nodes to store and relay group membership information. While conceptually simple, this design incurs the significant cost of creating and maintaining rigid-structures and introduces hotspots in the system at nodes that are neither publishers nor subscribers. In this paper, we introduce Quasar, a rendezvous-less probabilistic publish-subscribe system that caters to the specific needs of social networks. It is designed to handle social networks of many groups; on the order of the number of users in the system. It creates a routing infrastructure based on the proactive dissemination of highly aggregated routing vectors to provide anycast-like directed walks in the overlay. This primitive, when coupled with a novel mechanism for dynamically negating routes, enables scalable and efficient group-multicast that obviates the need for structure and rendezvous nodes. We examine the feasibility of this approach and show in a large-scale simulation that the system is scalable and efficient.", "title": "" }, { "docid": "e2f6cd2a6b40c498755e0daf98cead19", "text": "According to an estimate several billion smart devices will be connected to the Internet by year 2020. This exponential increase in devices is a challenge to the current Internet architecture, where connectivity is based on host-to-host communication. Information-Centric Networking is a novel networking paradigm in which data is addressed by its name instead of location. Several ICN architecture proposals have emerged from research communities to address challenges introduced by the current Internet Protocol (IP) regarding e.g. scalability. Content-Centric Networking (CCN) is one of the proposals. In this paper we present a way to use CCN in an Internet of Things (IoT) context. We quantify the benefits from hierarchical content naming, transparent in-network caching and other information-centric networking characteristics in a sensor environment. 
As a proof of concept we implemented a presentation bridge for a home automation system that provides services to the network through CCN.", "title": "" }, { "docid": "3a314a72ea2911844a5a3462d052f4e7", "text": "While increasing income inequality in China has been commented on and studied extensively, relatively little analysis is available on inequality in other dimensions of human development. Using data from different sources, this paper presents some basic facts on the evolution of spatial inequalities in education and healthcare in China over the long run. In the era of economic reforms, as the foundations of education and healthcare provision have changed, so has the distribution of illiteracy and infant mortality. Across provinces and within provinces, between rural and urban areas and within rural and urban areas, social inequalities have increased substantially since the reforms began.", "title": "" }, { "docid": "6d41b17506d0e8964f850c065b9286cb", "text": "Representation learning is a key issue for most Natural Language Processing (NLP) tasks. Most existing representation models either learn little structure information or just rely on pre-defined structures, leading to degradation of performance and generalization capability. This paper focuses on learning both local semantic and global structure representations for text classification. In detail, we propose a novel Sandwich Neural Network (SNN) to learn semantic and structure representations automatically without relying on parsers. More importantly, semantic and structure information contribute unequally to the text representation at corpus and instance level. To solve the fusion problem, we propose two strategies: Adaptive Learning Sandwich Neural Network (AL-SNN) and Self-Attention Sandwich Neural Network (SA-SNN). The former learns the weights at corpus level, and the latter further combines attention mechanism to assign the weights at instance level. Experimental results demonstrate that our approach achieves competitive performance on several text classification tasks, including sentiment analysis, question type classification and subjectivity classification. Specifically, the accuracies are MR (82.1%), SST-5 (50.4%), TREC (96%) and SUBJ (93.9%).", "title": "" }, { "docid": "06f1c7daafcf59a8eb2ddf430d0d7f18", "text": "OBJECTIVES\nWe aimed to evaluate the efficacy of reinforcing short-segment pedicle screw fixation with polymethyl methacrylate (PMMA) vertebroplasty in patients with thoracolumbar burst fractures.\n\n\nMETHODS\nWe enrolled 70 patients with thoracolumbar burst fractures for treatment with short-segment pedicle screw fixation. Fractures in Group A (n = 20) were reinforced with PMMA vertebroplasty during surgery. Group B patients (n = 50) were not treated with PMMA vertebroplasty. Kyphotic deformity, anterior vertebral height, instrument failure rates, and neurological function outcomes were compared between the two groups.\n\n\nRESULTS\nKyphosis correction was achieved in Group A (PMMA vertebroplasty) and Group B (Group A, 6.4 degrees; Group B, 5.4 degrees). At the end of the follow-up period, kyphosis correction was maintained in Group A but lost in Group B (Group A, 0.33-degree loss; Group B, 6.20-degree loss) (P = 0.0001). After surgery, greater anterior vertebral height was achieved in Group A than in Group B (Group A, 12.9%; Group B, 2.3%) (P < 0.001). During follow-up, anterior vertebral height was maintained only in Group A (Group A, 0.13 +/- 4.06%; Group B, -6.17 +/- 1.21%) (P < 0.001). 
Patients in both Groups A and B demonstrated good postoperative Denis Pain Scale grades (P1 and P2), but Group A had better results than Group B in terms of the control of severe and constant pain (P4 and P5) (P < 0.001). The Frankel Performance Scale scores increased by nearly 1 in both Groups A and B. Group B was subdivided into Group B1 and B2. Group B1 consisted of patients who experienced instrument failure, including screw pullout, breakage, disconnection, and dislodgement (n = 11). Group B2 comprised patients from Group B who did not experience instrument failure (n = 39). There were no instrument failures among patients in Group A. Preoperative kyphotic deformity was greater in Group B1 (23.5 +/- 7.9 degrees) than in Group B2 (16.8 +/- 8.40 degrees), P < 0.05. Severe and constant pain (P4 and P5) was noted in 36% of Group B1 patients (P < 0.001), and three of these patients required removal of their implants.\n\n\nCONCLUSION\nReinforcement of short-segment pedicle fixation with PMMA vertebroplasty for the treatment of patients with thoracolumbar burst fracture may achieve and maintain kyphosis correction, and it may also increase and maintain anterior vertebral height. Good Denis Pain Scale grades and improvement in Frankel Performance Scale scores were found in patients without instrument failure (Groups A and B2). Patients with greater preoperative kyphotic deformity had a higher risk of instrument failure if they did not undergo reinforcement with vertebroplasty. PMMA vertebroplasty offers immediate spinal stability in patients with thoracolumbar burst fractures, decreases the instrument failure rate, and provides better postoperative pain control than without vertebroplasty.", "title": "" } ]
scidocsrr
7441e5c76b17cf1f246c3efebf0dd644
PROBLEMS OF EMPLOYABILITY-A STUDY OF JOB – SKILL AND QUALIFICATION MISMATCH
[ { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "c08e9731b9a1135b7fb52548c5c6f77e", "text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.", "title": "" }, { "docid": "6b1e67c1768f9ec7a6ab95a9369b92d1", "text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.", "title": "" }, { "docid": "9c97a3ea2acfe09e3c60cbcfa35bab7d", "text": "In comparison with document summarization on the articles from social media and newswire, argumentative zoning (AZ) is an important task in scientific paper analysis. Traditional methodology to carry on this task relies on feature engineering from different levels. In this paper, three models of generating sentence vectors for the task of sentence classification were explored and compared. The proposed approach builds sentence representations using learned embeddings based on neural network. The learned word embeddings formed a feature space, to which the examined sentence is mapped to. Those features are input into the classifiers for supervised classification. Using 10-cross-validation scheme, evaluation was conducted on the Argumentative-Zoning (AZ) annotated articles. 
The results showed that simply averaging the word vectors in a sentence works better than the paragraph-to-vector algorithm, and that integrating specific cue words into the loss function of the neural network can improve the classification performance. In comparison with the hand-crafted features, the word2vec method won for most of the categories. However, the hand-crafted features showed their strength on classifying some of the categories.", "title": "" }, { "docid": "11e2ec2aab62ba8380e82a18d3fcb3d8", "text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.", "title": "" }, { "docid": "c38c2d8f7c21acc3fcb9b7d9ecc6d2d1", "text": "In this paper we propose a new technique for human identification using fusion of both face and speech, which can substantially improve the recognition rate compared to single-biometric identification for security system development. The proposed system uses principal component analysis (PCA) as the feature extraction technique, which calculates the eigenvectors and eigenvalues. These feature vectors are compared using a similarity measure such as the Mahalanobis distance for the decision making. The Mel-frequency cepstral coefficient (MFCC) feature extraction technique is used for speech recognition in our project. Cross correlation coefficients are considered as primary features. The Hidden Markov Model (HMM) is used to calculate the likelihoods of the MFCC-extracted features to make the decision about the spoken words.", "title": "" }, { "docid": "c8984cf950244f0d300c6446bcb07826", "text": "The grounded theory approach to doing qualitative research in nursing has become very popular in recent years. I confess to never really having understood Glaser and Strauss' original book: The Discovery of Grounded Theory. Since they wrote it, they have fallen out over what grounded theory might be and both produced their own versions of it. I welcomed, then, Kathy Charmaz's excellent and practical guide.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps, with the selection of which map to use being a latent variable for the current image-class pair. We train the model with a ranking-based objective function which penalizes incorrect rankings of the true class for a given image.
We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "abec336a59db9dd1fdea447c3c0ff3d3", "text": "Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and wellchosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple “filter normalization” method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.", "title": "" }, { "docid": "8c95392ab3cc23a7aa4f621f474d27ba", "text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.", "title": "" }, { "docid": "2062b94ee661e5e50cbaa1c952043114", "text": "The harsh operating environment of the automotive application makes the semi-permanent connector susceptible to intermittent high contact resistance which eventually leads to failure. Fretting corrosion is often the cause of these failures. However, laboratory testing of sample contact materials produce results that do not correlate with commercially tested connectors. A multicontact (M-C) reliability model is developed to bring together the fundamental studies and studies conducted on commercially available connector terminals. It is based on fundamental studies of the single contact interfaces and applied to commercial multicontact terminals. 
The model takes into consideration firstly, that a single contact interface may recover to low contact resistance after attaining a high value and secondly, that a terminal consists of more than one contact interface. For the connector to fail, all contact interfaces have to be in the failed state at the same time.", "title": "" }, { "docid": "d8a7ab2abff4c2e5bad845a334420fe6", "text": "Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [\"Consistent tone reproduction,\" in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk [\"Lightness perception in tone reproduction for high dynamic range images,\" in Proceedings of Eurographics (2005), p. 3] obtained the better results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.", "title": "" }, { "docid": "d0cdbd1137e9dca85d61b3d90789d030", "text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).", "title": "" }, { "docid": "79425b2b27a8f80d2c4012c76e6eb8f6", "text": "This paper examines previous Technology Acceptance Model (TAM)-related studies in order to provide an expanded model that explains consumers’ acceptance of online purchasing. Our model provides extensions to the original TAM by including constructs such as social influence and voluntariness; it also examines the impact of external variables including trust, privacy, risk, and e-loyalty. We surveyed consumers in the United States and Australia. 
Our findings suggest that our expanded model serves as a very good predictor of consumers’ online purchasing behaviors. The linear regression model shows a respectable amount of variance explained for Behavioral Intention (R^2 = .627). Suggestions are provided for the practitioner and ideas are presented for future research.", "title": "" }, { "docid": "b591b75b4653c01e3525a0889e7d9b90", "text": "The concept of isogeometric analysis is proposed. Basis functions generated from NURBS (Non-Uniform Rational B-Splines) are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element h- and p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is introduced. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD (Computer Aided Design) description. In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. A k-refinement strategy is shown to converge toward monotone solutions for advection–diffusion processes with sharp internal and boundary layers, a very surprising result. It is argued that isogeometric analysis is a viable alternative to standard, polynomial-based, finite element analysis and possesses several advantages. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b7c0864be28d70d49ae4a28fb7d78f04", "text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.", "title": "" }, { "docid": "dc883936f3cc19008983c9a5bb2883f3", "text": "Laparoscopic surgery provides patients with less painful surgery but is more demanding for the surgeon.
The increased technological complexity and sometimes poorly adapted equipment have led to increased complaints of surgeon fatigue and discomfort during laparoscopic surgery. Ergonomic integration and a suitable laparoscopic operating room environment are essential to improve efficiency, safety, and comfort for the operating team. Understanding ergonomics can not only make the surgeon's life in the operating room more comfortable but also reduce physical strain on the surgeon.", "title": "" }, { "docid": "e9b438cfe853e98f05b661f9149c0408", "text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.", "title": "" }, { "docid": "cf5829d1bfa1ae243bbf67776b53522d", "text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.", "title": "" }, { "docid": "018b25742275dd628c58208e5bd5a532", "text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS.
Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.", "title": "" }, { "docid": "6ef04225b5f505a48127594a12fef112", "text": "For differential operators of order 2, this paper presents a new method that combines generalized exponents to find those solutions that can be represented in terms of Bessel functions.", "title": "" } ]
scidocsrr
76a5a76952894002e0ee7e28cba3cdcf
Shall I Compare Thee to a Machine-Written Sonnet? An Approach to Algorithmic Sonnet Generation
[ { "docid": "112a1483acf7fae119036ea231fcbe85", "text": "Part of the long lasting cultural heritage of China is the classical ancient Chinese poems which follow strict formats and complicated linguistic rules. Automatic Chinese poetry composition by programs is considered as a challenging problem in computational linguistics and requires high Artificial Intelligence assistance, and has not been well addressed. In this paper, we formulate the poetry composition task as an optimization problem based on a generative summarization framework under several constraints. Given the user specified writing intents, the system retrieves candidate terms out of a large poem corpus, and then orders these terms to fit into poetry formats, satisfying tonal and rhythm requirements. The optimization process under constraints is conducted via iterative term substitutions till convergence, and outputs the subset with the highest utility as the generated poem. For experiments, we perform generation on large datasets of 61,960 classic poems from Tang and Song Dynasty of China. A comprehensive evaluation, using both human judgments and ROUGE scores, has demonstrated the effectiveness of our proposed approach.", "title": "" }, { "docid": "faa60bb1166c83893fabf82c815b4820", "text": "We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54% of the time. In addition, participants rated a machinegenerated poem to be the most human-like amongst all evaluated.", "title": "" }, { "docid": "dd4820b9c90ea6e6bb4e40566396c0d1", "text": "Vision is a common source of inspiration for poetry. The objects and the sentimental imprints that one perceives from an image may lead to various feelings depending on the reader. In this paper, we present a system of poetry generation from images to mimic the process. Given an image, we first extract a few keywords representing objects and sentiments perceived from the image. These keywords are then expanded to related ones based on their associations in human written poems. Finally, verses are generated gradually from the keywords using recurrent neural networks trained on existing poems. Our approach is evaluated by human assessors and compared to other generation baselines. The results show that our method can generate poems that are more artistic than the baseline methods. This is one of the few attempts to generate poetry from images. By deploying our proposed approach, XiaoIce has already generated more than 12 million poems for users since its release in July 2017. 
A book of its poems has been published by Cheers Publishing, which claimed that the book is the first-ever poetry collection written by an AI in human history.", "title": "" }, { "docid": "d3069dbe4da6057d15cc0f7f6e5244cc", "text": "We take the generation of Chinese classical poem lines as a sequence-to-sequence learning problem, and build a novel system based on the RNN Encoder-Decoder structure to generate quatrains (Jueju in Chinese), with a topic word as input. Our system can jointly learn semantic meaning within a single line, semantic relevance among lines in a poem, and the use of structural, rhythmical and tonal patterns, without utilizing any constraint templates. Experimental results show that our system outperforms other competitive systems. We also find that the attention mechanism can capture the word associations in Chinese classical poetry and inverting target lines in training can improve", "title": "" }, { "docid": "517ec608208a669872a1d11c1d7836a3", "text": "Hafez is an automatic poetry generation system that integrates a Recurrent Neural Network (RNN) with a Finite State Acceptor (FSA). It generates sonnets given arbitrary topics. Furthermore, Hafez enables users to revise and polish generated poems by adjusting various style configurations. Experiments demonstrate that such “polish” mechanisms consider the user’s intention and lead to a better poem. For evaluation, we build a web interface where users can rate the quality of each poem from 1 to 5 stars. We also speed up the whole system by a factor of 10, via vocabulary pruning and GPU computation, so that adequate feedback can be collected at a fast pace. Based on such feedback, the system learns to adjust its parameters to improve poetry quality.", "title": "" } ]
[ { "docid": "955858709f4f623fda7f271b90689fe4", "text": "Empirical studies of variations in debt ratios across firms have analyzed important determinants of capital structure using statistical models. Researchers, however, rarely employ nonlinear models to examine the determinants and make little effort to identify a superior prediction model among competing ones. This paper reviews the time-series cross-sectional (TSCS) regression and the predictive abilities of neural network (NN) utilizing panel data concerning debt ratio of high-tech industries in Taiwan. We built models with these two methods using the same set of measurements as determinants of debt ratio and compared the forecasting performance of five models, namely, three TSCS regression models and two NN models. Models built with neural network obtained the lowest mean square error and mean absolute error. These results reveal that the relationships between debt ratio and determinants are nonlinear and that NNs are more competent in modeling and forecasting the test panel data. We conclude that NN models can be used to solve panel data analysis and forecasting problems.", "title": "" }, { "docid": "81f2f2ecc3b408259c1d30e6dcde9ed8", "text": "A range of new datacenter switch designs combine wireless or optical circuit technologies with electrical packet switching to deliver higher performance at lower cost than traditional packet-switched networks. These \"hybrid\" networks schedule large traffic demands via a high-rate circuits and remaining traffic with a lower-rate, traditional packet-switches. Achieving high utilization requires an efficient scheduling algorithm that can compute proper circuit configurations and balance traffic across the switches. Recent proposals, however, provide no such algorithm and rely on an omniscient oracle to compute optimal switch configurations.\n Finding the right balance of circuit and packet switch use is difficult: circuits must be reconfigured to serve different demands, incurring non-trivial switching delay, while the packet switch is bandwidth constrained. Adapting existing crossbar scheduling algorithms proves challenging with these constraints. In this paper, we formalize the hybrid switching problem, explore the design space of scheduling algorithms, and provide insight on using such algorithms in practice. We propose a heuristic-based algorithm, Solstice that provides a 2.9× increase in circuit utilization over traditional scheduling algorithms, while being within 14% of optimal, at scale.", "title": "" }, { "docid": "522938687849ccc9da8310ac9d6bbf9e", "text": "Machine learning models, especially Deep Neural Networks, are vulnerable to adversarial examples—malicious inputs crafted by adding small noises to real examples, but fool the models. Adversarial examples transfer from one model to another, enabling black-box attacks to real-world applications. In this paper, we propose a strong attack algorithm named momentum iterative fast gradient sign method (MI-FGSM) to discover adversarial examples. MI-FGSM is an extension of iterative fast gradient sign method (I-FGSM) but improves the transferability significantly. Besides, we study how to attack an ensemble of models efficiently. Experiments demonstrate the effectiveness of the proposed algorithm. 
We hope that MI-FGSM can serve as a benchmark attack algorithm for evaluating the robustness of various models and defense methods.", "title": "" }, { "docid": "6b77d96528da3152fec757928b767d31", "text": "3D interfaces use motion sensing, physical input, and spatial interaction techniques to effectively control highly dynamic virtual content. Now, with the advent of the Nintendo Wii, Sony Move, and Microsoft Kinect, game developers and researchers must create compelling interface techniques and game-play mechanics that make use of these technologies. At the same time, it is becoming increasingly clear that emerging game technologies are not just going to change the way we play games, they are also going to change the way we make and view art, design new products, analyze scientific datasets, and more.\n This introduction to 3D spatial interfaces demystifies the workings of modern videogame motion controllers and provides an overview of how it is used to create 3D interfaces for tasks such as 2D and 3D navigation, object selection and manipulation, and gesture-based application control. Topics include the strengths and limitations of various motion-controller sensing technologies in today's peripherals, useful techniques for working with these devices, and current and future applications of these technologies to areas beyond games. The course presents valuable information on how to utilize existing 3D user-interface techniques with emerging technologies, how to develop interface techniques, and how to learn from the successes and failures of spatial interfaces created for a variety of application domains.", "title": "" }, { "docid": "96b1688b19bf71e8f1981d9abe52fc2c", "text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.", "title": "" }, { "docid": "35c18e570a6ab44090c1997e7fe9f1b4", "text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. 
Meanwhile, users can maintain full control over access to their uploaded files and data, by assigning fine-grained, attribute-based access privileges to selected files and data, while different users can have access to different parts of the system. This application allows clients to set privileges for different users to access their data.", "title": "" }, { "docid": "5b6a73103e7310de86c37185c729b8d9", "text": "Motion segmentation is currently an active area of research in computer vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as \"Which objects are moving?\", \"What is background?\", and \"How can we use motion of the camera to segment objects, whether they are static or moving?\" are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as well-defined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.", "title": "" }, { "docid": "869e01855c8cfb9dc3e64f7f3e73cd60", "text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "title": "" }, { "docid": "9a454ccc77edb739a327192dafd5d974", "text": "At the present time, due to the attractive features of cloud computing, massive amounts of data have been stored in the cloud. Though cloud-based services offer many benefits, privacy and security of the sensitive data is a big issue. These issues are resolved by storing sensitive data in encrypted form. Encrypted storage protects the data against unauthorized access, but it weakens some basic and important functionality like search operations on the data, i.e. searching for the required data on the encrypted data requires the data to be decrypted first and then searched, so this eventually slows down the process of searching. To address this, many encryption schemes have been proposed; however, all of the schemes handle exact query matching but not similarity matching. When a user uploads a file, features are extracted from each document.
When the user fires a query, trapdoor of that query is generated and search is performed by finding the correlation among documents stored on cloud and query keyword, using Locality Sensitive Hashing.", "title": "" }, { "docid": "1cdee228f9813e4f33df1706ec4e7876", "text": "Existing methods on sketch based image retrieval (SBIR) are usually based on the hand-crafted features whose ability of representation is limited. In this paper, we propose a sketch based image retrieval method via image-aided cross domain learning. First, the deep learning model is introduced to learn the discriminative features. However, it needs a large number of images to train the deep model, which is not suitable for the sketch images. Thus, we propose to extend the sketch training images via introducing the real images. Specifically, we initialize the deep models with extra image data, and then extract the generalized boundary from real images as the sketch approximation. The using of generalized boundary is under the assumption that their domain is similar with sketch domain. Finally, the neural network is fine-tuned with the sketch approximation data. Experimental results on Flicker15 show that the proposed method has a strong ability to link the associated image-sketch pairs and the results outperform state-of-the-arts methods.", "title": "" }, { "docid": "5267441df39432707e5c3a4616ba1413", "text": "Many investigators have detailed the soft tissue anatomy of the face. Despite the broad reference base, confusion remains about the consistent nature of the fascial anatomy of the craniofacial soft tissue envelope in relation to the muscular, neurovascular and specialised structures. This confusion is compounded by the lack of consistent terminology. This study presents a coherent account of the fascial planes of the temple and midface. Ten fresh cadaveric facial halves were dissected, in a level-by-level approach, to display the fascial anatomy of the midface and temporal region. The contralateral 10 facial halves were coronally sectioned through the zygomatic arch at a consistent point anterior to the tragus. These sections were histologically prepared to demonstrate the fascial anatomy en-bloc with the skeletal and specialised soft tissues. Three generic subcutaneous fascial layers consistently characterise the face and temporal regions, and remain in continuity across the zygomatic arch. These three layers are the superficial musculo-aponeurotic system (SMAS), the innominate fascia, and the muscular fasciae. The many inconsistent names previously given to these layers reflect their regional specialisation in the temple, zygomatic area, and midface. Appreciation of the consistency of these layers, which are in continuity with the layers of the scalp, greatly facilitates an understanding of applied craniofacial soft tissue anatomy.", "title": "" }, { "docid": "57c8b69c18b5b2c38552295f8e8789d5", "text": "In many safety-critical applications such as autonomous driving and surgical robots, it is desirable to obtain prediction uncertainties from object detection modules to help support safe decision-making. Specifically, such modules need to estimate the probability of each predicted object in a given region and the confidence interval for its bounding box. While recent Bayesian deep learning methods provide a principled way to estimate this uncertainty, the estimates for the bounding boxes obtained using these methods are uncalibrated. 
In this paper, we address this problem for the single-object localization task by adapting an existing technique for calibrating regression models. We show, experimentally, that the resulting calibrated model obtains more reliable uncertainty estimates.", "title": "" }, { "docid": "be866036f5ae430d6dd46cdd1d9319dd", "text": "In this contribution an integrated HPA-DPDT for next generation AESA TRMs is presented. The proposed circuit relies on a concurrent design technique merging switches and HPA matching network. Realized MMIC features a 3×5 mm² outline operating in the 6-18 GHz band with a typical output power of 2 W, an associated PAE of 13% and 3 dB insertion loss in RX mode.", "title": "" }, { "docid": "d8de391287150bf580c8d613000d5b84", "text": "3D integration consists of 3D IC packaging, 3D IC integration, and 3D Si integration. They are different and in general the TSV (through-silicon via) separates 3D IC packaging from 3D IC/Si integrations since the latter two use TSV but 3D IC packaging does not. TSV (with a new concept that every chip or interposer could have two surfaces with circuits) is the heart of 3D IC/Si integrations and is the focus of this investigation. The origin of 3D integration is presented. Also, the evolution, challenges, and outlook of 3D IC/Si integrations are discussed as well as their road maps are presented. Finally, a few generic, low-cost, and thermal-enhanced 3D IC integration system-in-packages (SiPs) with various passive TSV interposers are proposed.", "title": "" }, { "docid": "2ca0c604b449e1495bd57d96381e0e1f", "text": "The dataflow program graph execution model, or dataflow for short, is an alternative to the stored-program (von Neumann) execution model. Because it relies on a graph representation of programs, the strengths of the dataflow model are very much the complements of those of the stored-program one. In the last thirty or so years since it was proposed, the dataflow model of computation has been used and developed in very many areas of computing research: from programming languages to processor design, and from signal processing to reconfigurable computing. This paper is a review of the current state-of-the-art in the applications of the dataflow model of computation. It focuses on three areas: multithreaded computing, signal processing and reconfigurable computing. © 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "2c2be931e456761824920fcc9e4666ec", "text": "The resource description framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for these kinds of graphs which includes the notion of temporal entailment and a syntax to incorporate this framework into standard RDF graphs, using the RDF vocabulary plus temporal labels. We give a characterization of temporal entailment in terms of RDF entailment and show that the former does not yield extra asymptotic complexity with respect to nontemporal RDF graphs. We also discuss temporal RDF graphs with anonymous timestamps, providing a theoretical framework for the study of temporal anonymity.
Finally, we sketch a temporal query language for RDF, along with complexity results for query evaluation that show that the time dimension preserves the tractability of answers", "title": "" }, { "docid": "b9c40aa4c8ac9d4b6cbfb2411c542998", "text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.", "title": "" }, { "docid": "6902e1604957fa21adbe90674bf5488d", "text": "State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. 
First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.", "title": "" }, { "docid": "b853f492667d4275295c0228566f4479", "text": "This study reports spore germination, early gametophyte development and change in the reproductive phase of Drynaria fortunei, a medicinal fern, in response to changes in pH and light spectra. Germination of D. fortunei spores occurred on a wide range of pH from 3.7 to 9.7. The highest germination (63.3%) occurred on ½ strength Murashige and Skoog basal medium supplemented with 2% sucrose at pH 7.7 under white light condition. Among the different light spectra tested, red, far-red, blue, and white light resulted in 71.3, 42.3, 52.7, and 71.0% spore germination, respectively. There were no morphological differences among gametophytes grown under white and blue light. Elongated or filamentous but multiseriate gametophytes developed under red light, whereas under far-red light gametophytes grew as uniseriate filaments consisting of mostly elongated cells. Different light spectra influenced development of antheridia and archegonia in the gametophytes. Gametophytes gave rise to new gametophytes and developed antheridia and archegonia after they were transferred to culture flasks. After these gametophytes were transferred to plastic tray cells with potting mix of tree fern trunk fiber mix (TFTF mix) and peatmoss the highest number of sporophytes was found. Sporophytes grown in pots developed rhizomes.", "title": "" } ]
scidocsrr
e16fc85996177c379524f95d71de188d
Explainable Agency for Intelligent Autonomous Systems
[ { "docid": "bb47e6b493a204a9e0fbe97aa14fec06", "text": "Intelligent artificial agents need to be able to explain and justify their actions. They must therefore understand the rationales for their own actions. This paper describes a technique for acquiring this understanding, implemented in a multimedia explanation system. The system determines the motivation for a decision by recalling the situation in which the decision was made, and replaying the decision under variants of the original situation. Through experimentation the agent is able to discover what factors led to the decisions, and what alternatives might have been chosen had the situation been slightly different. The agent learns to recognize similar situations where the same decision would be made for the same reasons. This approach is implemented in an artificial fighter pilot that can explain the motivations for its actions, situation assessments,", "title": "" } ]
[ { "docid": "c721d86b755ade46a4919cb283f21341", "text": "We propose a novel network-based approach for location estimation in social media that integrates evidence of the social tie strength between users for improved location estimation. Concretely, we propose a location estimator -- FriendlyLocation -- that leverages the relationship between the strength of the tie between a pair of users, and the distance between the pair. Based on an examination of over 100 million geo-encoded tweets and 73 million Twitter user profiles, we identify several factors such as the number of followers and how the users interact that can strongly reveal the distance between a pair of users. We use these factors to train a decision tree to distinguish between pairs of users who are likely to live nearby and pairs of users who are likely to live in different areas. We use the results of this decision tree as the input to a maximum likelihood estimator to predict a user's location. We find that this proposed method significantly improves the results of location estimation relative to a state-of-the-art technique. Our system reduces the average error distance for 80% of Twitter users from 40 miles to 21 miles using only information from the user's friends and friends-of-friends, which has great significance for augmenting traditional social media and enriching location-based services with more refined and accurate location estimates.", "title": "" }, { "docid": "dd1fd4f509e385ea8086a45a4379a8b5", "text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.", "title": "" }, { "docid": "a52d0679863b148b4fd6e112cd8b5596", "text": "Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space – or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. 
We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.", "title": "" }, { "docid": "5629a9cf39611bed79ce76e661dba2fe", "text": "We investigate aspects of interoperability between a broad range of common annotation schemes for syntacto-semantic dependencies. With the practical goal of making the LinGO Redwoods Treebank accessible to broader usage, we contrast seven distinct annotation schemes of functor–argument structure, both in terms of syntactic and semantic relations. Drawing examples from a multi-annotated gold standard, we show how abstractly similar information can take quite different forms across frameworks. We further seek to shed light on the representational ‘distance’ between pure bilexical dependencies, on the one hand, and full-blown logical-form propositional semantics, on the other hand. Furthermore, we propose a fully automated conversion procedure from (logical-form) meaning representation to bilexical semantic dependencies.†", "title": "" }, { "docid": "37825cd0f6ae399204a392e3b32a667b", "text": "Abduction is inference to the best explanation. Abduction has long been studied intensively in a wide range of contexts, from artificial intelligence research to cognitive science. While recent advances in large-scale knowledge acquisition warrant applying abduction with large knowledge bases to real-life problems, as of yet no existing approach to abduction has achieved both the efficiency and formal expressiveness necessary to be a practical solution for large-scale reasoning on real-life problems. The contributions of our work are the following: (i) we reformulate abduction as an Integer Linear Programming (ILP) optimization problem, providing full support for first-order predicate logic (FOPL); (ii) we employ Cutting Plane Inference, which is an iterative optimization strategy developed in Operations Research for making abductive reasoning in full-fledged FOPL tractable, showing its efficiency on a real-life dataset; (iii) the abductive inference engine presented in this paper is made publicly available.", "title": "" }, { "docid": "fb3002fff98d4645188910989638af69", "text": "Stress is important in substance use disorders (SUDs). Mindfulness training (MT) has shown promise for stress-related maladies. No studies have compared MT to empirically validated treatments for SUDs. The goals of this study were to assess MT compared to cognitive behavioral therapy (CBT) in substance use and treatment acceptability, and specificity of MT compared to CBT in targeting stress reactivity. Thirty-six individuals with alcohol and/or cocaine use disorders were randomly assigned to receive group MT or CBT in an outpatient setting. Drug use was assessed weekly. After treatment, responses to personalized stress provocation were measured. Fourteen individuals completed treatment. There were no differences in treatment satisfaction or drug use between groups. The laboratory paradigm suggested reduced psychological and physiological indices of stress during provocation in MT compared to CBT. 
This pilot study provides evidence of the feasibility of MT in treating SUDs and suggests that MT may be efficacious in targeting stress.", "title": "" }, { "docid": "969ba9848fa6d02f74dabbce2f1fe3ab", "text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.", "title": "" }, { "docid": "bd973cb5d5343293c9c68646dfbb1005", "text": "The betweenness metric has always been intriguing and used in many analyses. Yet, it is one of the most computationally expensive kernels in graph mining. For that reason, making betweenness centrality computations faster is an important and well-studied problem. In this work, we propose the framework, BADIOS, which compresses a network and shatters it into pieces so that the centrality computation can be handled independently for each piece. Although BADIOS is designed and tuned for betweenness centrality, it can easily be adapted for other centrality metrics. Experimental results show that the proposed techniques can be a great arsenal to reduce the centrality computation time for various types and sizes of networks. In particular, it reduces the computation time of a 4.6 million edges graph from more than 5 days to less than 16 hours.", "title": "" }, { "docid": "203e785e24430d4b0c9c1c1b13d2a254", "text": "The impact of cardiovascular disease was compared in non-diabetics and diabetics in the Framingham cohort. In the first 20 years of the study about 6% of the women and 8% of the men were diagnosed as diabetics. The incidence of cardiovascular disease among diabetic men was twice that among non-diabetic men. Among diabetic women the incidence of cardiovascular disease was three times that among non-diabetic women. Judging from a comparison of standardized coefficients for the regression of incidence of cardiovascular disease on specified risk factors, there is no indication that the relationship of risk factors to the subsequent development of cardiovascular disease is different for diabetics and non-diabetics. 
This study suggests that the role of diabetes as a cardiovascular risk factor does not derive from an altered ability to contend with known risk factors.", "title": "" }, { "docid": "05237a9da2d94be2b85011ec2af972ba", "text": "BACKGROUND\nStrong evidence shows that physical inactivity increases the risk of many adverse health conditions, including major non-communicable diseases such as coronary heart disease, type 2 diabetes, and breast and colon cancers, and shortens life expectancy. Because much of the world's population is inactive, this link presents a major public health issue. We aimed to quantify the effect of physical inactivity on these major non-communicable diseases by estimating how much disease could be averted if inactive people were to become active and to estimate gain in life expectancy at the population level.\n\n\nMETHODS\nFor our analysis of burden of disease, we calculated population attributable fractions (PAFs) associated with physical inactivity using conservative assumptions for each of the major non-communicable diseases, by country, to estimate how much disease could be averted if physical inactivity were eliminated. We used life-table analysis to estimate gains in life expectancy of the population.\n\n\nFINDINGS\nWorldwide, we estimate that physical inactivity causes 6% (ranging from 3·2% in southeast Asia to 7·8% in the eastern Mediterranean region) of the burden of disease from coronary heart disease, 7% (3·9-9·6) of type 2 diabetes, 10% (5·6-14·1) of breast cancer, and 10% (5·7-13·8) of colon cancer. Inactivity causes 9% (range 5·1-12·5) of premature mortality, or more than 5·3 million of the 57 million deaths that occurred worldwide in 2008. If inactivity were not eliminated, but decreased instead by 10% or 25%, more than 533 000 and more than 1·3 million deaths, respectively, could be averted every year. We estimated that elimination of physical inactivity would increase the life expectancy of the world's population by 0·68 (range 0·41-0·95) years.\n\n\nINTERPRETATION\nPhysical inactivity has a major health effect worldwide. Decrease in or removal of this unhealthy behaviour could improve health substantially.\n\n\nFUNDING\nNone.", "title": "" }, { "docid": "e37276916d5f8682b104448489efbfc6", "text": "With the spurt in usage of smart devices, large amounts of unstructured text are generated by numerous social media tools. This text is often filled with stylistic or linguistic variations, making text analytics using traditional machine learning tools less effective. One specific problem in the Indian context is dealing with the large number of languages used by social media users in their roman form. As part of the FIRE-2015 shared task on mixed script information retrieval, we address the problem of word level language identification. Our approach consists of a two stage algorithm for language identification. First level classification is done using sentence level character n-grams and the second level consists of a word level character n-gram based classifier. This approach effectively captures the linguistic mode of the author in a social texting environment. The overall weighted F-Score for the run submitted to the FIRE shared task is 0.7692. The sentence level classification algorithm which is used in achieving this result has an accuracy of 0.6887. We could improve the accuracy of the sentence level classifier further by 1.6% using additional social media text crawled from other sources.
Naive Bayes classifier showed largest improvement (5.5%) in accuracy level by the addition of supplementary tuples. We also observed that using semi-supervised learning algorithm such as Expectation Maximization with Naive Bayes, the accuracy could be improved to 0.7977.", "title": "" }, { "docid": "1b22f8af2314c5fb9a2a8218e6ba5c54", "text": "It has been well known that neural networks with rectified linear hidden units (ReLU) as activation functions are positively scale invariant, which results in severe redundancy in their weight space (i.e., many ReLU networks with different weights are actually equivalent). In this paper, we formally characterize this redundancy/equivalence using the language of quotient space and discuss its negative impact on the optimization of ReLU neural networks. Specifically, we show that all equivalent ReLU networks correspond to the same vector in the quotient space, and each such vector can be characterized by the so-called skeleton paths in the ReLU networks. With this, we prove that the dimensionality of the quotient space is #weight−#(hidden nodes), indicating that the redundancy of the weight space is huge. In this paper, we propose to optimize ReLU neural networks directly in the quotient space, instead of the original weight space. We represent the loss function in the quotient space and design a new stochastic gradient descent algorithm to iteratively learn the model, which we call Quotient stochastic gradient descent (abbreviated as Quotient SGD). We also develop efficient tricks to ensure that the implementation of Quotient SGD almost requires no extra computations as compared to standard SGD. According to the experiments on benchmark datasets, our proposed Quotient SGD can significantly improve the accuracy of the learned model.", "title": "" }, { "docid": "4ad3c199ad1ba51372e9f314fc1158be", "text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using flip-chip bonder.", "title": "" }, { "docid": "36d0c6ba49223becc0e28c4b197b17a3", "text": "Wastewater treatment plants (WWTPs) have been identified as potential sources of antibiotic resistance genes (ARGs) but the effects of tertiary wastewater treatment processes on ARGs have not been well characterized. Therefore, the objective of this study was to determine the fate of ARGs throughout a tertiary-stage WWTP. Two ARGs, sul1 and bla, were quantified via quantitative polymerase chain reaction (qPCR) in solids and dissolved fractions of raw sewage, activated sludge, secondary effluent and tertiary effluent from a full-scale WWTP. Tertiary media filtration and chlorine disinfection were studied further with the use of a pilot-scale media filter. 
Results showed that both genes were reduced at each successive stage of treatment in the dissolved fraction. The solids-associated ARGs increased during activated sludge stage and were reduced in each subsequent stage. Overall reductions were approximately four log10 with the tertiary media filtration and disinfection providing the largest decrease. The majority of ARGs were solids-associated except for in the tertiary effluent. There was no evidence for positive selection of ARGs during treatment. The removal of ARGs by chlorine was improved by filtration compared to unfiltered, chlorinated secondary effluent. This study demonstrates that tertiary-stage WWTPs with disinfection can provide superior removal of ARGs compared to secondary treatment alone.", "title": "" }, { "docid": "a29a61f5ad2e4b44e8e3d11b471a0f06", "text": "To ascertain by MRI the presence of filler injected into facial soft tissue and characterize complications by contrast enhancement. Nineteen volunteers without complications were initially investigated to study the MRI features of facial fillers. We then studied another 26 patients with clinically diagnosed filler-related complications using contrast-enhanced MRI. TSE-T1-weighted, TSE-T2-weighted, fat-saturated TSE-T2-weighted, and TIRM axial and coronal scans were performed in all patients, and contrast-enhanced fat-suppressed TSE-T1-weighted scans were performed in complicated patients, who were then treated with antibiotics. Patients with soft-tissue enhancement and those without enhancement but who did not respond to therapy underwent skin biopsy. Fisher’s exact test was used for statistical analysis. MRI identified and quantified the extent of fillers. Contrast enhancement was detected in 9/26 patients, and skin biopsy consistently showed inflammatory granulomatous reaction, whereas in 5/17 patients without contrast enhancement, biopsy showed no granulomas. Fisher’s exact test showed significant correlation (p < 0.001) between subcutaneous contrast enhancement and granulomatous reaction. Cervical lymph node enlargement (longitudinal axis >10 mm) was found in 16 complicated patients (65 %; levels IA/IB/IIA/IIB). MRI is a useful non-invasive tool for anatomical localization of facial dermal filler; IV gadolinium administration is advised in complicated cases for characterization of granulomatous reaction. • MRI is a non-invasive tool for facial dermal filler detection and localization. • MRI-criteria to evaluate complicated/non-complicated cases after facial dermal filler injections are defined. • Contrast-enhanced MRI detects subcutaneous inflammatory granulomatous reaction due to dermal filler. • 65 % patients with filler-related complications showed lymph-node enlargement versus 31.5 % without complications. • Lymph node enlargement involved cervical levels (IA/IB/IIA/IIB) that drained treated facial areas.", "title": "" }, { "docid": "49a87829a12168de2be2ee32a23ddeb7", "text": "Crowdsourcing emerged with the development of Web 2.0 technologies as a distributed online practice that harnesses the collective aptitudes and skills of the crowd in order to reach specific goals. The success of crowdsourcing systems is influenced by the users’ levels of participation and interactions on the platform. Therefore, there is a need for the incorporation of appropriate incentive mechanisms that would lead to sustained user engagement and quality contributions. 
Accordingly, the aim of the particular paper is threefold: first, to provide an overview of user motives and incentives, second, to present the corresponding incentive mechanisms used to trigger these motives, alongside with some indicative examples of successful crowdsourcing platforms that incorporate these incentive mechanisms, and third, to provide recommendations on their careful design in order to cater to the context and goal of the platform.", "title": "" }, { "docid": "8109594325601247cdb253dbb76b9592", "text": "Disturbance compensation is one of the major problems in control system design. Due to external disturbance or model uncertainty that can be treated as disturbance, all control systems are subject to disturbances. When it comes to networked control systems, not only disturbances but also time delay is inevitable where controllers are remotely connected to plants through communication network. Hence, simultaneous compensation for disturbance and time delay is important. Prior work includes a various combinations of smith predictor, internal model control, and disturbance observer tailored to simultaneous compensation of both time delay and disturbance. In particular, simplified internal model control simultaneously compensates for time delay and disturbances. But simplified internal model control is not applicable to the plants that have two poles at the origin. We propose a modified simplified internal model control augmented with disturbance observer which simultaneously compensates time delay and disturbances for the plants with two poles at the origin. Simulation results are provided.", "title": "" }, { "docid": "c894deedbdbd6aee3cf3955d1c463577", "text": "Vast collections of documents available in image format need to be indexed for information retrieval purposes. In this framework, word spotting is an alternative solution to optical character recognition (OCR), which is rather inefficient for recognizing text of degraded quality and unknown fonts usually appearing in printed text, or writing style variations in handwritten documents. Over the past decade there has been a growing interest in addressing document indexing using word spotting which is reflected by the continuously increasing number of approaches. However, there exist very few comprehensive studies which analyze the various aspects of a word spotting system. This work aims to review the recent approaches as well as fill the gaps in several topics with respect to the related works. The nature of texts and inherent challenges addressed by word spotting methods are thoroughly examined. After presenting the core steps which compose a word spotting system, we investigate the use of retrieval enhancement techniques based on relevance feedback which improve the retrieved results. Finally, we present the datasets which are widely used for word spotting, we describe the evaluation standards and measures applied for performance assessment and discuss the results achieved by the state of the art. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "889e3d786a27a3e75972573e30f02e9b", "text": "We present part of a vision system for blind and visually impaired people. It detects obstacles on sidewalks and provides guidance to avoid them. Obstacles are trees, light poles, trash cans, holes, branches, stones and other objects at a distance of 3 to 5 meters from the camera position. 
The system first detects the sidewalk borders, using edge information in combination with a tracking mask, to obtain straight lines with their slopes and the vanishing point. Once the borders are found, a rectangular window is defined within which two obstacle detection methods are applied. The first determines the variation of the maxima and minima of the gray levels of the pixels. The second uses the binary edge image and searches in the vertical and horizontal histograms for discrepancies of the number of edge points. Together, these methods allow to detect possible obstacles with their position and size, such that the user can be alerted and informed about the best way to avoid them. The system works in realtime and complements normal navigation with the cane.", "title": "" }, { "docid": "c12e906e6841753657ffe7630145708b", "text": "We present here a complete dynamic model of a lithium ion battery that is suitable for virtual-prototyping of portable battery-powered systems. The model accounts for nonlinear equilibrium potentials, rate- and temperature-dependencies, thermal effects and response to transient power demand. The model is based on publicly available data such as the manufacturers’ data sheets. The Sony US18650 is used as an example. The model output agrees both with manufacturer’s data and with experimental results. The model can be easily modified to fit data from different batteries and can be extended for wide dynamic ranges of different temperatures and current rates.", "title": "" } ]
scidocsrr
f172512c8d31844ec68149e88c094982
Cellulose chemical markers in transformer oil insulation Part 1: Temperature correction factors
[ { "docid": "7a4f42c389dbca2f3c13469204a22edd", "text": "This article attempts to capture and summarize the known technical information and recommendations for analysis of furan test results. It will also provide the technical basis for continued gathering and evaluation of furan data for liquid power transformers, and provide a recommended structure for collecting that data.", "title": "" } ]
[ { "docid": "94f39416ba9918e664fb1cd48732e3ae", "text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.", "title": "" }, { "docid": "1274656b97db1f736944c174a174925d", "text": "In full-duplex systems, due to the strong self-interference signal, system nonlinearities become a significant limiting factor that bounds the possible cancellable self-interference power. In this paper, a self-interference cancellation scheme for full-duplex orthogonal frequency division multiplexing systems is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the distortion caused by the transmitter and receiver nonlinearities. An iterative technique is used to jointly estimate the self-interference channel and the nonlinearity coefficients required to suppress the distortion signal. The performance is numerically investigated showing that the proposed scheme achieves a performance that is less than 0.5dB off the performance of a linear full-duplex system.", "title": "" }, { "docid": "fac92316ce84b0c10b0bef2827d78b03", "text": "Background: High rates of teacher turnover likely mean greater school instability, disruption of curricular cohesiveness, and a continual need to hire inexperienced teachers, who typically are less effective, as replacements for teachers who leave. Unfortunately, research consistently finds that teachers who work in schools with large numbers of poor students and students of color feel less satisfied and are more likely to turn over, meaning that turnover is concentrated in the very schools that would benefit most from a stable staff of experienced teachers. Despite the potential challenge that this turnover disparity poses for equity of educational opportunity and student performance gaps across schools, little research has examined the reasons for elevated teacher turnover in schools with large numbers of traditionally disadvantaged students. Purpose: This study hypothesizes that school working conditions help explain both teacher satisfaction and turnover. 
In particular, it focuses on the role effective principals in retaining teachers, particularly in disadvantaged schools with the greatest staffing challenges. Research Design: The study conducts quantitative analysis of national data from the 2003-04 Schools and Staffing Survey and 2004-05 Teacher Follow-up Survey. Regression analyses combat the potential for bias from omitted variables by utilizing an extensive set of control variables and employing a school district fixed effects approach that implicitly makes comparisons among principals and teachers within the same local context. Conclusions: Descriptive analyses confirm that observable measures of teachers‘ work environments, including ratings of the effectiveness of the principal, generally are lower in schools with large numbers of disadvantaged students. Regression results show that principal effectiveness is associated with greater teacher satisfaction and a lower probability that the teacher leaves the school within a year. Moreover, the positive impacts of principal effectiveness on these teacher outcomes are even greater in disadvantaged schools. These findings suggest that policies focused on getting the best principals into the most challenging school environments may be effective strategies for lowering perpetually high teacher turnover rates in those schools.", "title": "" }, { "docid": "9955e99d9eba166458f5551551ab05e3", "text": "Every day, millions of tons of temperature sensitive goods are produced, transported, stored or distributed worldwide. For all these products the control of temperature is essential. The term “cold chain” describes the series of interdependent equipment and processes employed to ensure the temperature preservation of perishables and other temperaturecontrolled products from the production to the consumption end in a safe, wholesome, and good quality state (Zhang, 2007). In other words, it is a supply chain of temperature sensitive products. So temperature-control is the key point in cold chain operation and the most important factor when prolonging the practical shelf life of produce. Thus, the major challenge is to ensure a continuous ‘cold chain’ from producer to consumer in order to guaranty prime condition of goods (Ruiz-Garcia et al., 2007).These products can be perishable items like fruit, vegetables, flowers, fish, meat and dairy products or medical products like drugs, blood, vaccines, organs, plasma and tissues. All of them can have their properties affected by temperature changes. Also some chemicals and electronic components like microchips are temperature sensitive.", "title": "" }, { "docid": "e948583ef067952fa8c968de5e5ae643", "text": "A key problem in learning representations of multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter. Distinguishing individual objects in a scene would allow unsupervised learning of multiple objects from unlabeled images. There is psychophysical and neurophysiological evidence that the brain employs visual attention to select relevant parts of the image and to serialize the perception of individual objects. We propose a method for the selection of salient regions likely to contain objects, based on bottom-up visual attention. 
By comparing the performance of David Lowe s recognition algorithm with and without attention, we demonstrate in our experiments that the proposed approach can enable one-shot learning of multiple objects from complex scenes, and that it can strongly improve learning and recognition performance in the presence of large amounts of clutter. 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a55422a96369797c7d42cb77dc99c6dc", "text": "In order to store massive image data in real-time system, a high performance Serial Advanced Technology Attachment[1] (SATA) controller is proposed in this paper. RocketIO GTX transceiver[2] realizes physical layer of SATA protocol. Link layer and transport layers are implemented in VHDL with programmable logic resources. Application layer is developed on POWERPC440 embedded in Xilinx Virtex-5 FPGA. The whole SATA protocol implement in a platform FPGA has better features in expansibility, scalability, improvability and in-system programmability comparing with realizing it using Application Specific Integrated Circuit (ASIC). The experiment results shown that the controller works accurately and stably and the maximal sustained orderly data transfer rate up to 110 MB/s when connect to SATA hard disk. The high performance of the host SATA controller makes it possible that cheap SATA hard disk instead expensive Small Computer System Interface (SCSI) hard disk in some application. The controller is very suited for high speed mass data storage in embedded system.", "title": "" }, { "docid": "df63ca9286b2fc520d6be36edb7afaef", "text": "To analyse the accuracy of dual-energy contrast-enhanced spectral mammography in dense breasts in comparison with contrast-enhanced subtracted mammography (CESM) and conventional mammography (Mx). CESM cases of dense breasts with histological proof were evaluated in the present study. Four radiologists with varying experience in mammography interpretation blindly read Mx first, followed by CESM. The diagnostic profiles, consistency and learning curve were analysed statistically. One hundred lesions (28 benign and 72 breast malignancies) in 89 females were analysed. Use of CESM improved the cancer diagnosis by 21.2 % in sensitivity (71.5 % to 92.7 %), by 16.1 % in specificity (51.8 % to 67.9 %) and by 19.8 % in accuracy (65.9 % to 85.8 %) compared with Mx. The interobserver diagnostic consistency was markedly higher using CESM than using Mx alone (0.6235 vs. 0.3869 using the kappa ratio). The probability of a correct prediction was elevated from 80 % to 90 % after 75 consecutive case readings. CESM provided additional information with consistent improvement of the cancer diagnosis in dense breasts compared to Mx alone. The prediction of the diagnosis could be improved by the interpretation of a significant number of cases in the presence of 6 % benign contrast enhancement in this study. • DE-CESM improves the cancer diagnosis in dense breasts compared with mammography. • DE-CESM shows greater consistency than mammography alone by interobserver blind reading. • Diagnostic improvement of DE-CESM is independent of the mammographic reading experience.", "title": "" }, { "docid": "0169f6c2eee1710d2ccd1403116da68f", "text": "A resonant snubber is described for voltage-source inverters, current-source inverters, and self-commutated frequency changers. The main self-turn-off devices have shunt capacitors directly across them. 
The lossless resonant snubber described avoids trapping energy in a converter circuit where high dynamic stresses at both turn-on and turn-off are normally encountered. This is achieved by providing a temporary parallel path through a small ordinary thyristor (or other device operating in a similar mode) to take over the high-stress turn-on duty from the main gate turn-off (GTO) or power transistor, in a manner that leaves no energy trapped after switching.", "title": "" }, { "docid": "dc323eabca83c4e9381539832dbb7f63", "text": "We present the main freight transportation planning and management issues, briefly review the associated literature, describe a number of major developments, and identify trends and challenges. In order to keep the length of the paper within reasonable limits, we focus on long-haul, intercity, freight transportation. Optimization-based operations research methodologies are privileged. The paper starts with an overview of freight transportation systems and planning issues and continues with models which attempt to analyze multimodal, multicommodity transportation systems at the regional, national or global level. We then review location and network design formulations which are often associated with the long-term evolution of transportation systems and also appear prominently when service design issues are considered as described later on. Operational models and methods, particularly those aimed at the allocation and repositioning of resources such as empty vehicles, are then described. To conclude, we identify a number of interesting problems and challenges.", "title": "" }, { "docid": "7ac2f63821256491f45e2a9666333853", "text": "Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracy-based performance assessment, many researchers have taken to report Precision-Recall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a Precision-Recall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ. We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.", "title": "" }, { "docid": "fdd14b086d77b95b7ca00ab744f39458", "text": "
While eWOM advertising has recently emerged as an effective marketing strategy among marketing practitioners, comparatively few studies have been conducted to examine the eWOM from the perspective of pass-along emails. Based on social capital theory and social cognitive theory, this paper develops a model involving social enablers and personal cognition factors to explore the eWOM behavior and its efficacy. Data collected from 347 email users have lent credit to the model proposed. Tested by LISREL 8.70, the results indicate that the factors such as message involvement, social interaction tie, affection outcome expectations and message passing self-efficacy exert significant influences on pass-along email intentions (PAEIs). The study result may well be useful to marketing practitioners who are considering email marketing, especially to those who are in the process of selecting key email users and/or designing product advertisements to heighten the eWOM effect. Crown Copyright 2008 Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a120d11f432017c3080bb4107dd7ea71", "text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.", "title": "" }, { "docid": "f0c8b45d2648de6825975cba4dd9d429", "text": "This work presents a safe navigation approach for a carlike robot. The approach relies on a global motion planning based on Velocity Vector Fields along with a Dynamic Window Approach for avoiding unmodeled obstacles. Basically, the vector field is associated with a kinematic, feedback-linearization controller whose outputs are validated, and eventually modified, by the Dynamic Window Approach. Experiments with a full-size autonomous car equipped with a stereo camera show that the vehicle was able to track the vector field and avoid obstacles in its way.", "title": "" }, { "docid": "6922a913c6ede96d5062f055b55377e7", "text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented.
In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.", "title": "" }, { "docid": "22654d2ed4c921c7bceb22ce9f9dc892", "text": "xv", "title": "" }, { "docid": "ddeb70a9abd07b113c8c7bfcf2f535b6", "text": "Implementation of authentic leadership can affect not only the nursing workforce and the profession but the healthcare delivery system and society as a whole. Creating a healthy work environment for nursing practice is crucial to maintain an adequate nursing workforce; the stressful nature of the profession often leads to burnout, disability, and high absenteeism and ultimately contributes to the escalating shortage of nurses. Leaders play a pivotal role in retention of nurses by shaping the healthcare practice environment to produce quality outcomes for staff nurses and patients. Few guidelines are available, however, for creating and sustaining the critical elements of a healthy work environment. In 2005, the American Association of Critical-Care Nurses released a landmark publication specifying 6 standards (skilled communication, true collaboration, effective decision making, appropriate staffing, meaningful recognition, and authentic leadership) necessary to establish and sustain healthy work environments in healthcare. Authentic leadership was described as the \"glue\" needed to hold together a healthy work environment. Now, the roles and relationships of authentic leaders in the healthy work environment are clarified as follows: An expanded definition of authentic leadership and its attributes (eg, genuineness, trustworthiness, reliability, compassion, and believability) is presented. Mechanisms by which authentic leaders can create healthy work environments for practice (eg, engaging employees in the work environment to promote positive behaviors) are described. A practical guide on how to become an authentic leader is advanced. A research agenda to advance the study of authentic leadership in nursing practice through collaboration between nursing and business is proposed.", "title": "" }, { "docid": "e72cfaa1d2781e7dda66625ce45bdebb", "text": "Providing appropriate methods to facilitate the analysis of time-oriented data is a key issue in many application domains. In this paper, we focus on the unique role of the parameter time in the context of visually driven data analysis. We will discuss three major aspects - visualization, analysis, and the user. It will be illustrated that it is necessary to consider the characteristics of time when generating visual representations. For that purpose, we take a look at different types of time and present visual examples. Integrating visual and analytical methods has become an increasingly important issue. Therefore, we present our experiences in temporal data abstraction, principal component analysis, and clustering of larger volumes of time-oriented data. The third main aspect we discuss is supporting user-centered visual analysis. 
We describe event-based visualization as a promising means to adapt the visualization pipeline to needs and tasks of users.", "title": "" }, { "docid": "7ebd355d65c8de8607da0363e8c86151", "text": "In this letter, we compare the scanning beams of two leaky-wave antennas (LWAs), respectively, loaded with capacitive and inductive radiation elements, which have not been fully discussed in previous publications. It is pointed out that an LWA with only one type of radiation element suffers from a significant gain fluctuation over its beam-scanning band. To remedy this problem, we propose an LWA alternately loaded with inductive and capacitive elements along the host transmission line. The proposed LWA is able to steer its beam continuously from backward to forward with constant gain. A microstrip-based LWA is designed on the basis of the proposed method, and the measurement of its fabricated prototype demonstrates and confirms the desired results. This design method can widely be used to obtain LWAs with constant gain based on a variety of TLs.", "title": "" }, { "docid": "32025802178ce122c288a558ba6572e4", "text": "Based on this literature review, early orthodontic treatment of unilateral posterior crossbites with mandibular shifts is recommended. Treatment success is high if it is started early. Evidence that crossbites are not self-correcting, have some association with temporomandibular disorders and cause skeletal, dental and muscle adaptation provides further rationale for early treatment. It can be difficult to treat unilateral crossbites in adults without a combination of orthodontics and surgery. The most appropriate timing of treatment occurs when the patient is in the late deciduous or early mixed dentition stage as expansion modalities are very successful in this age group and permanent incisors are given more space as a result of the expansion. Treatment of unilateral posterior crossbites generally involves symmetric expansion of the maxillary arch, removal of selective occlusal interferences and elimination of the mandibular functional shift. The general practitioner and pediatric dentist must be able to diagnose unilateral posterior crossbites successfully and provide treatment or referral to take advantage of the benefits of early treatment.", "title": "" }, { "docid": "dfcb51bd990cce7fb7abfe8802dc0c4e", "text": "In this paper, we describe the machine learning approach we used in the context of the Automatic Cephalometric X-Ray Landmark Detection Challenge. Our solution is based on the use of ensembles of Extremely Randomized Trees combined with simple pixel-based multi-resolution features. By carefully tuning method parameters with cross-validation, our approach could reach detection rates ≥ 90% at an accuracy of 2.5mm for 8 landmarks. Our experiments show however a high variability between the different landmarks, with some landmarks detected at a much lower rate than others.", "title": "" } ]
scidocsrr
c28b2ace3b64b6a709de7a3e3d48af26
McDRAM: Low Latency and Energy-Efficient Matrix Computations in DRAM
[ { "docid": "d716725f2a5d28667a0746b31669bbb7", "text": "This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.", "title": "" }, { "docid": "59ba2709e4f3653dcbd3a4c0126ceae1", "text": "Processing-in-memory (PIM) is a promising solution to address the \"memory wall\" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has showed its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM based main memory. In PRIME, a portion of ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable the morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves the performance by ~2360× and the energy consumption by ~895×, across the evaluated machine learning benchmarks.", "title": "" }, { "docid": "f53d8be1ec89cb8a323388496d45dcd0", "text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. 
A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.", "title": "" } ]
[ { "docid": "0ab46230770ad5977608ebb3257c0cc1", "text": "In this letter, we present a system capable of inferring intent from observed vehicles traversing an unsignalized intersection, a task critical for the safe driving of autonomous vehicles, and beneficial for advanced driver assistance systems. We present a prediction method based on recurrent neural networks that takes data from a Lidar-based tracking system similar to those expected in future smart vehicles. The model is validated on a roundabout, a popular style of unsignalized intersection in urban areas. We also present a very large naturalistic dataset recorded in a typical intersection during two days of operation. This comprehensive dataset is used to demonstrate the performance of the algorithm introduced in this letter. The system produces excellent results, giving a significant 1.3-s prediction window before any potential conflict occurs.", "title": "" }, { "docid": "285d1b4d5a38ecb2e6eb45fbebfa0d0e", "text": "As machine learning (ML) systems become democratized, it becomes increasingly important to help users easily debug their models. However, current data tools are still primitive when it comes to helping users trace model performance problems all the way to the data. We focus on the particular problem of slicing data to identify subsets of the validation data where the model performs poorly. This is an important problem in model validation because the overall model performance can fail to reflect that of the smaller subsets, and slicing allows users to analyze the model performance on a more granularlevel. Unlike general techniques (e.g., clustering) that can find arbitrary slices, our goal is to find interpretable slices (which are easier to take action compared to arbitrary subsets) that are problematic and large. We propose Slice Finder, which is an interactive framework for identifying such slices using statistical techniques. Applications include diagnosing model fairness and fraud detection, where identifying slices that are interpretable to humans is crucial.", "title": "" }, { "docid": "0780f9240aaaa6b45cf4edf1d0de15ec", "text": "Adaptive Case Management (ACM) is a new paradigm that facilitates the coordination of knowledge work through case handling. Current ACM systems, however, lack support of providing sophisticated user guidance for next step recommendations and predictions about the case future. In recent years, process mining research developed approaches to make recommendations and predictions based on event logs readily available in process-aware information systems. This paper builds upon those approaches and integrates them into an existing ACM solution. The research goal is to design and develop a prototype that gives next step recommendations and predictions based on process mining techniques in ACM systems. The models proposed, recommend actions that shorten the case running time, mitigate deadline transgressions, support case goals and have been used in former cases with similar properties. They further give case predictions about the remaining time, possible deadline violations, and whether the current case path supports given case goals. A final evaluation proves that the prototype is indeed capable of making proper recommendations and predictions. In addition, starting points for further improvement are discussed.", "title": "" }, { "docid": "571c7cb6e0670539a3effbdd65858d2a", "text": "When writing software, developers often employ abbreviations in identifier names. 
In fact, some abbreviations may never occur with the expanded word, or occur more often in the code. However, most existing program comprehension and search tools do little to address the problem of abbreviations, and therefore may miss meaningful pieces of code or relationships between software artifacts. In this paper, we present an automated approach to mining abbreviation expansions from source code to enhance software maintenance tools that utilize natural language information. Our scoped approach uses contextual information at the method, program, and general software level to automatically select the most appropriate expansion for a given abbreviation. We evaluated our approach on a set of 250 potential abbreviations and found that our scoped approach provides a 57% improvement in accuracy over the current state of the art.", "title": "" }, { "docid": "ec0f7117acc67ae85b381b1d5f2dc5fa", "text": "We propose a generalized focal loss function based on the Tversky index to address the issue of data imbalance in medical image segmentation. Compared to the commonly used Dice loss, our loss function achieves a better trade off between precision and recall when training on small structures such as lesions. To evaluate our loss function, we improve the attention U-Net model by incorporating an image pyramid to preserve contextual features. We experiment on the BUS 2017 dataset and ISIC 2018 dataset where lesions occupy 4.84% and 21.4% of the images area and improve segmentation accuracy when compared to the standard U-Net by 25.7% and 3.6%, respectively.", "title": "" }, { "docid": "fb173d15e079fcdf0cc222f558713f9c", "text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.", "title": "" }, { "docid": "51be236c79d1af7a2aff62a8049fba34", "text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. 
Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.", "title": "" }, { "docid": "136481c06ef00d0bd5bb7f45a8655c35", "text": "The spread of aggressive tweets, status and comments on social network are increasing gradually. People are using social media networks as a virtual platform to troll, objurgate, blaspheme and revile one another. These activities are spreading animosity in race-to-race, religion to religion etc. So, these comments should be identified and blocked on social networks. This work focuses on extracting comments from social networks and analyzes those comments whether they convey any blaspheme or revile in meaning. Comments are classified into three distinct classes; offensive, hate speech and neither. Document similarity analyses are done to identify the correlations among the documents. A well defined text pre-processing analysis is done to create an optimized word vector to train the classification model. Finally, the proposed model categorizes the comments into their respective classes with more than 93% accuracy.", "title": "" }, { "docid": "8047c0ba3b0a2838e7df95c8246863f4", "text": "Neurons in the ventral premotor cortex of the monkey encode the locations of visual, tactile, auditory and remembered stimuli. Some of these neurons encode the locations of stimuli with respect to the arm, and may be useful for guiding movements of the arm. Others encode the locations of stimuli with respect to the head, and may be useful for guiding movements of the head. We suggest that a general principle of sensory-motor integration is that the space surrounding the body is represented in body-part-centered coordinates. That is, there are multiple coordinate systems used to guide movement, each one attached to a different part of the body. This and other recent evidence from both monkeys and humans suggest that the formation of spatial maps in the brain and the guidance of limb and body movements do not proceed in separate stages but are closely integrated in both the parietal and frontal lobes.", "title": "" }, { "docid": "e8fb4848c8463bfcbe4a09dfeda52584", "text": "A highly efficient rectifier for wireless power transfer in biomedical implant applications is implemented using 0.18-m CMOS technology. The proposed rectifier with active nMOS and pMOS diodes employs a four-input common-gate-type capacitively cross-coupled latched comparator to control the reverse leakage current in order to maximize the power conversion efficiency (PCE) of the rectifier. The designed rectifier achieves a maximum measured PCE of 81.9% at 13.56 MHz under conditions of a low 1.5-Vpp RF input signal with a 1- k output load resistance and occupies 0.009 mm2 of core die area.", "title": "" }, { "docid": "dc34a320af0e7a104686a36f7a6101c3", "text": "In this paper, the proposed SIMO (Single input multiple outputs) DC-DC converter based on coupled inductor. The required controllable high DC voltage and intermediate DC voltage with high voltage gain from low input voltage sources, like renewable energy, can be achieved easily from the proposed converter. The high voltage DC bus can be used as the leading power for a DC load and intermediate voltage DC output terminals can charge supplementary power sources like battery modules. This converter operates simply with one power switch. 
It incorporates the techniques of voltage clamping (VC) and zero current switching (ZCS). The simulation result in PSIM software shows that the aims of high efficiency, high voltage gain, several output voltages with unlike levels, are achieved.", "title": "" }, { "docid": "a6e4a1912f2a0e58f97f4b5a5ab93dec", "text": "An adaptive fuzzy inference neural network (AFINN) is proposed in this paper. It has self-construction ability, parameter estimation ability and rule extraction ability. The structure of AFINN is formed by the following four phases: (1) initial rule creation, (2) selection of important input elements, (3) identification of the network structure and (4) parameter estimation using LMS (least-mean square) algorithm. When the number of input dimension is large, the conventional fuzzy systems often cannot handle the task correctly because the degree of each rule becomes too small. AFINN solves such a problem by modification of the learning and inference algorithm. 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "81b242e3c98eaa20e3be0a9777aa3455", "text": "Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.", "title": "" }, { "docid": "997603462c825d4a3d61683adc2003c6", "text": "A new feeding technique for a broadband circularly polarized aperture-coupled patch antenna is proposed operating at X-band. The stacked microstrip antennas are used for broad bandwidth and flat gain. These broadband antennas fed by slot-coupled quadrature hybrid have dual-offset feedlines for low cross-polarization level. The quadrature hybrid has multi-section with multi-layers. The grounds of coupler and antenna are connected by via. The simulated 10 dB return loss bandwidth is 35.5% from 8.1 to 11.6 GHz and the 3 dB axial ratio (AR) bandwidth is 35%.", "title": "" }, { "docid": "32bf3e0ce6f9bc8864bd905ffebcfcce", "text": "BACKGROUND AND PURPOSE\nTo improve the accuracy of early postonset prediction of motor recovery in the flaccid hemiplegic arm, the effects of change in motor function over time on the accuracy of prediction were evaluated, and a prediction model for the probability of regaining dexterity at 6 months was developed.\n\n\nMETHODS\nIn 102 stroke patients, dexterity and paresis were measured with the Action Research Arm Test, Motricity Index, and Fugl-Meyer motor evaluation. For model development, 23 candidate determinants were selected. 
Logistic regression analysis was used for prognostic factors and model development.\n\n\nRESULTS\nAt 6 months, some dexterity in the paretic arm was found in 38%, and complete functional recovery was seen in 11.6% of the patients. Total anterior circulation infarcts, right hemisphere strokes, homonymous hemianopia, visual gaze deficit, visual inattention, and paresis were statistically significant related to a poor arm function. Motricity Index leg scores of at least 25 points in the first week and Fugl-Meyer arm scores of 11 points in the second week increasing to 19 points in the fourth week raised the probability of developing some dexterity (Action Research Arm Test >or=10 points) from 74% (positive predictive value [PPV], 0.74; 95% confidence interval [CI], 0.63 to 0.86) to 94% (PPV, 0.83; 95% CI, 0.76 to 0.91) at 6 months. No change in probabilities of prediction dexterity was found after 4 weeks.\n\n\nCONCLUSIONS\nBased on the Fugl-Meyer scores of the flaccid arm, optimal prediction of arm function outcome at 6 months can be made within 4 weeks after onset. Lack of voluntary motor control of the leg in the first week with no emergence of arm synergies at 4 weeks is associated with poor outcome at 6 months.", "title": "" }, { "docid": "e8cd97674866f4ef6aa33445a5cebea8", "text": "The ever increasing popularity of social networks and the ever easier photo taking and sharing experience have led to unprecedented concerns on privacy infringement. Inspired by the fact that the Robot Exclusion Protocol, which regulates web crawlers' behavior according a per-site deployed robots.txt, and cooperative practices of major search service providers, have contributed to a healthy web search industry, in this paper, we propose Privacy Expressing and Respecting Protocol (PERP) that consists of a Privacy.tag -- a physical tag that enables a user to explicitly and flexibly express their privacy deal, and Privacy Respecting Sharing Protocol (PRSP) -- a protocol that empowers the photo service provider to exert privacy protection following users' policy expressions, to mitigate the public's privacy concern, and ultimately create a healthy photo-sharing ecosystem in the long run. We further design an exemplar Privacy.Tag using customized yet compatible QR-code, and implement the Protocol and study the technical feasibility of our proposal. Our evaluation results confirm that PERP and PRSP are indeed feasible and incur negligible computation overhead.", "title": "" }, { "docid": "8b3557219674c8441e63e9b0ab459c29", "text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. 
The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.", "title": "" }, { "docid": "836ac0267a67fd2e7657a5893975b023", "text": "Managing trust efficiently and effectively is critical to facilitating cooperation or collaboration and decision making tasks in tactical networks while meeting system goals such as reliability, availability, or scalability. Delay tolerant networks are often encountered in military network environments where end-to-end connectivity is not guaranteed due to frequent disconnection or delay. This work proposes a provenance-based trust framework for efficiency in resource consumption as well as effectiveness in trust evaluation. Provenance refers to the history of ownership of a valued object or information. We adopt the concept of provenance in that trustworthiness of an information provider affects that of information, and vice-versa. The proposed trust framework takes a data-driven approach to reduce resource consumption in the presence of selfish or malicious nodes. This work adopts a model-based method to evaluate the proposed trust framework using Stochastic Petri Nets. The results show that the proposed trust framework achieves desirable accuracy of trust evaluation of nodes compared with an existing scheme while consuming significantly less communication overhead.", "title": "" }, { "docid": "03966c28d31e1c45896eab46a1dcce57", "text": "For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly efficient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a finite distributive lattice.", "title": "" }, { "docid": "107133e9b114526ac100714599305c20", "text": "While clinical text NLP systems have become very effective in recognizing named entities in clinical text and mapping them to standardized terminologies in the normalization process, there remains a gap in the ability of extractors to combine entities together into a complete semantic representation of medical concepts that contain multiple attributes each of which has its own set of allowed named entities or values. Furthermore, additional domain knowledge may be required to determine the semantics of particular tokens in the text that take on special meanings in relation to this concept.
This research proposes an approach that provides ontological mappings of the surface forms of medical concepts that are of the UMLS semantic class signs/symptoms. The mappings are used to extract and encode the constituent set of named entities into interoperable semantic structures that can be linked to other structured and unstructured data for reuse in research and analysis.", "title": "" } ]
scidocsrr
bd350c8fef15aacdde0c205b4aab44c3
Learning Structured Semantic Embeddings for Visual Recognition
[ { "docid": "574a2a883f4b97793e5264b6f7beb073", "text": "We address the problem of weakly supervised object localization where only image-level annotations are available for training. Many existing approaches tackle this problem through object proposal mining. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model initialization and often converge to an undesirable local minimum. In this paper, we address this problem by progressive domain adaptation with two main steps: classification adaptation and detection adaptation. In classification adaptation, we transfer a pre-trained network to our multi-label classification task for recognizing the presence of a certain object in an image. In detection adaptation, we first use a mask-out strategy to collect class-specific object proposals and apply multiple instance learning to mine confident candidates. We then use these selected object proposals to fine-tune all the layers, resulting in a fully adapted detection network. We extensively evaluate the localization performance on the PASCAL VOC and ILSVRC datasets and demonstrate significant performance improvement over the state-of-the-art methods.", "title": "" }, { "docid": "df163d94fbf0414af1dde4a9e7fe7624", "text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.", "title": "" } ]
[ { "docid": "76fc5bf9bc5b5d6d19e30537ce0b173d", "text": "Data Stream Management Systems (DSMS) are crucial for modern high-volume/high-velocity data-driven applications, necessitating a distributed approach to processing them. In addition, data providers often require certain levels of confidentiality for their data, especially in cases of user-generated data, such as those coming out of physical activity/health tracking devices (i.e., our motivating application). This demonstration will showcase Synefo, an infrastructure that enables elastic scaling of DSMS operators, and CryptStream, a framework that provides confidentiality and access controls for data streams while allowing computation on untrusted servers, fused as CE-Storm. We will demonstrate both systems working in tandem and also visualize their behavior over time under different scenarios.", "title": "" }, { "docid": "c0c7752c6b9416e281c3649e70f9daae", "text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.", "title": "" }, { "docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa", "text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking", "title": "" }, { "docid": "f262ccb0c19c84b51d48eb735fdaa54e", "text": "The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. 
The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products was also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended.", "title": "" }, { "docid": "cd68bb753f25843b6706de39ffb3073d", "text": "The project aims to give convenience for searching best matched cuisine in Yelp review dataset, and also translate English reviews to German by attention neural machine translation. It’s a great fun to explore Natural Language Processing applications in search engine, besides improving the distributed performance for large dataset. Our project and demo focus on the high performance of retrieval valuable Yelp reviews and direct real-time hyperlinks to business’ homepage with query terms of cuisine; our report will focus on illustrating zigzag discovery path on \"the-state-of-the-art\" neural machine translation. We have spent much time on acquiring new NLP knowledge, learnt tf-seq2seq by TensorFlow, trained translationmodels on GPU servers, and translated the reviewing dataset from English to German.", "title": "" }, { "docid": "1e8e4364427d18406594af9ad3a73a28", "text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.", "title": "" }, { "docid": "3b4b1386322c820f15086e5953fa1ac4", "text": "A key goal in natural language generation (NLG) is to enable fast generation even with large vocabularies, grammars and worlds. In this work, we build upon a recently proposed NLG system, Sentence Tree Realization with UCT (STRUCT). 
We describe four enhancements to this system: (i) pruning the grammar based on the world and the communicative goal, (ii) intelligently caching and pruning the combinatorial space of semantic bindings, (iii) reusing the lookahead search tree at different search depths, and (iv) learning and using a search control heuristic. We evaluate the resulting system on three datasets of increasing size and complexity, the largest of which has a vocabulary of about 10K words, a grammar of about 32K lexicalized trees and a world with about 11K entities and 23K relations between them. Our results show that the system has a median generation time of 8.5s and finds the best sentence on average within 25s. These results are based on a sequential, interpreted implementation and are significantly better than the state of the art for planningbased NLG systems.", "title": "" }, { "docid": "c592f46ffd8286660b9e233127cefea7", "text": "According to literature, penetration pricing is the dominant pricing strategy for network effect markets. In this paper we show that diffusion of products in a network effect market does not only vary with the set of pricing strategies chosen by competing vendors but also strongly depends on the topological structure of the customers' network. This stresses the inappropriateness of classical \"installed base\" models (abstracting from this structure). Our simulations show that although competitive prices tend to be significantly higher in close topology markets, they lead to lower total profit and lower concentration of vendors' profit in these markets.", "title": "" }, { "docid": "17ec5256082713e85c819bb0a0dd3453", "text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.", "title": "" }, { "docid": "5f862920548a825a20c1a860c0ef20ca", "text": "A recommendation system tracks past actions of a group of users to make recommendations to individual members of the group. The growth of computer-mediated marketing and commerce has led to increased interest in such systems. We introduce a simple analytical framework for recommendation systems, including a basis for defining the utilit y of such a system. We perform probabilistic analyses of algorithmic methods within this framework. These analyses yield insights into how much utility can be derived from the memory of past actions and on how this memory can be exploited.", "title": "" }, { "docid": "c5113ff741d9e656689786db10484a07", "text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. 
Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.", "title": "" }, { "docid": "353761bae5088e8ee33025fc04695297", "text": " Land use can exert a powerful influence on ecological systems, yet our understanding of the natural and social factors that influence land use and land-cover change is incomplete. We studied land-cover change in an area of about 8800 km2 along the lower part of the Wisconsin River, a landscape largely dominated by agriculture. Our goals were (a) to quantify changes in land cover between 1938 and 1992, (b) to evaluate the influence of abiotic and socioeconomic variables on land cover in 1938 and 1992, and (c) to characterize the major processes of land-cover change between these two points in time. The results showed a general shift from agricultural land to forest. Cropland declined from covering 44% to 32% of the study area, while forests and grassland both increased (from 32% to 38% and from 10% to 14% respectively). Multiple linear regressions using three abiotic and two socioeconomic variables captured 6% to 36% of the variation in land-cover categories in 1938 and 9% to 46% of the variation in 1992. Including socioeconomic variables always increased model performance. Agricultural abandonment and a general decline in farming intensity were the most important processes of land-cover change among the processes considered. Areas characterized by the different processes of land-cover change differed in the abiotic and socioeconomic variables that had explanatory power and can be distinguished spatially. Understanding the dynamics of landscapes dominated by human impacts requires methods to incorporate socioeconomic variables and anthropogenic processes in the analyses. 
Our method of hypothesizing and testing major anthropogenic processes may be a useful tool for studying the dynamics of cultural landscapes.", "title": "" }, { "docid": "e17ad914854d148d5ca8000bdcab4298", "text": "BACKGROUND\nThe introduction of proton pump inhibitors (PPIs) into clinical practice has revolutionized the management of acid-related diseases. Studies in primary care and emergency settings suggest that PPIs are frequently prescribed for inappropriate indications or for indications where their use offers little benefit. Inappropriate PPI use is a matter of great concern, especially in the elderly, who are often affected by multiple comorbidities and are taking multiple medications, and are thus at an increased risk of long-term PPI-related adverse outcomes as well as drug-to-drug interactions. Herein, we aim to review the current literature on PPI use and develop a position paper addressing the benefits and potential harms of acid suppression with the purpose of providing evidence-based guidelines on the appropriate use of these medications.\n\n\nMETHODS\nThe topics, identified by a Scientific Committee, were assigned to experts selected by three Italian Scientific Societies, who independently performed a systematic search of the relevant literature using Medline/PubMed, Embase, and the Cochrane databases. Search outputs were distilled, paying more attention to systematic reviews and meta-analyses (where available) representing the best evidence. The draft prepared on each topic was circulated amongst all the members of the Scientific Committee. Each expert then provided her/his input to the writing, suggesting changes and the inclusion of new material and/or additional relevant references. The global recommendations were then thoroughly discussed in a specific meeting, refined with regard to both content and wording, and approved to obtain a summary of current evidence.\n\n\nRESULTS\nTwenty-five years after their introduction into clinical practice, PPIs remain the mainstay of the treatment of acid-related diseases, where their use in gastroesophageal reflux disease, eosinophilic esophagitis, Helicobacter pylori infection, peptic ulcer disease and bleeding as well as, and Zollinger-Ellison syndrome is appropriate. Prevention of gastroduodenal mucosal lesions (and symptoms) in patients taking non-steroidal anti-inflammatory drugs (NSAIDs) or antiplatelet therapies and carrying gastrointestinal risk factors also represents an appropriate indication. On the contrary, steroid use does not need any gastroprotection, unless combined with NSAID therapy. In dyspeptic patients with persisting symptoms, despite successful H. pylori eradication, short-term PPI treatment could be attempted. Finally, addition of PPIs to pancreatic enzyme replacement therapy in patients with refractory steatorrhea may be worthwhile.\n\n\nCONCLUSIONS\nOverall, PPIs are irreplaceable drugs in the management of acid-related diseases. However, PPI treatment, as any kind of drug therapy, is not without risk of adverse effects. The overall benefits of therapy and improvement in quality of life significantly outweigh potential harms in most patients, but those without clear clinical indication are only exposed to the risks of PPI prescription. Adhering with evidence-based guidelines represents the only rational approach to effective and safe PPI therapy. 
Please see related Commentary: doi: 10.1186/s12916-016-0724-1 .", "title": "" }, { "docid": "9961f44d4ab7d0a344811186c9234f2c", "text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.", "title": "" }, { "docid": "746bb0b7ed159fcfbe7940a33e6debf1", "text": "Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions, which can be modeled by the neural network, is weight clipping. Augmenting the loss by a regularization term that penalizes the deviation of the gradient norm of the critic (as a function of the network’s input) from one, was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.", "title": "" }, { "docid": "6f2dbfcce622454579c607bf7a8a2797", "text": "A new 3D graphics and multimedia hardware architecture, cod named Talisman, is described which exploits both spatial and temporal coherence to reduce the cost of high quality animatio Individually animated objects are rendered into independent image layers which are composited together at video refresh ra to create the final display. During the compositing process, a fu affine transformation is applied to the layers to allow translatio rotation, scaling and skew to be used to simulate 3D motion of objects, thus providing a multiplier on 3D rendering performan and exploiting temporal image coherence. Image compression broadly exploited for textures and image layers to reduce imag capacity and bandwidth requirements. Performance rivaling hi end 3D graphics workstations can be achieved at a cost point two to three hundred dollars.", "title": "" }, { "docid": "b0d456d92d3cb9d6e1fb5372f3819951", "text": "“Clothes make the man,” said Mark Twain. This article presents a survey of the literature on Artificial Intelligence applications to clothing fashion. 
An AIbased stylist model is proposed based on fundamental fashion theory and the early work of AI in fashion. This study examines three essential components of a complete styling task as well as previously launched applications and earlier research work. Additionally, the implementation and performance of Neural Networks, Genetic Algorithms, Support Vector Machines and other AI methods used in the fashion domain are discussed in detail. This article explores the focus of previous studies and provides a general overview of the usage of AI techniques in the fashion domain.", "title": "" }, { "docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc", "text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.", "title": "" }, { "docid": "4b74b9d4c4b38082f9f667e363f093b2", "text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. 
Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.", "title": "" }, { "docid": "281a9d0c9ad186c1aabde8c56c41cefa", "text": "Hardware manipulations pose a serious threat to numerous systems, ranging from a myriad of smart-X devices to military systems. In many attack scenarios an adversary merely has access to the low-level, potentially obfuscated gate-level netlist. In general, the attacker possesses minimal information and faces the costly and time-consuming task of reverse engineering the design to identify security-critical circuitry, followed by the insertion of a meaningful hardware Trojan. These challenges have been considered only in passing by the research community. The contribution of this work is threefold: First, we present HAL, a comprehensive reverse engineering and manipulation framework for gate-level netlists. HAL allows automating defensive design analysis (e.g., including arbitrary Trojan detection algorithms with minimal effort) as well as offensive reverse engineering and targeted logic insertion. Second, we present a novel static analysis Trojan detection technique ANGEL which considerably reduces the false-positive detection rate of the detection technique FANCI. Furthermore, we demonstrate that ANGEL is capable of automatically detecting Trojans obfuscated with DeTrust. Third, we demonstrate how a malicious party can semi-automatically inject hardware Trojans into third-party designs. We present reverse engineering algorithms to disarm and trick cryptographic self-tests, and subtly leak cryptographic keys without any a priori knowledge of the design’s internal workings.", "title": "" } ]
scidocsrr
60e0e6e522f0527526e74a0863aa0e0e
A Benchmark Dataset to Study the Representation of Food Images
[ { "docid": "d6d9cb649294de96ea2bfe18753559df", "text": "Since health care on foods is drawing people's attention recently, a system that can record everyday meals easily is being awaited. In this paper, we propose an automatic food image recognition system for recording people's eating habits. In the proposed system, we use the Multiple Kernel Learning (MKL) method to integrate several kinds of image features such as color, texture and SIFT adaptively. MKL enables to estimate optimal weights to combine image features for each category. In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 61.34% classification rate for 50 kinds of foods. To the best of our knowledge, this is the first report of a food image classification system which can be applied for practical use.", "title": "" } ]
[ { "docid": "46360fec3d7fa0adbe08bb4b5bb05847", "text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.", "title": "" }, { "docid": "36286c36dfd7451ecd297e2ebe445a35", "text": "Research on the \"dark side\" of organizational behavior has determined that employee sabotage is most often a reaction by disgruntled employees to perceived mistreatment. To date, however, most studies on employee retaliation have focused on intra-organizational sources of (in)justice. Results from this field study of customer service representatives (N = 358) showed that interpersonal injustice from customers relates positively to customer-directed sabotage over and above intra-organizational sources of fairness. Moreover, the association between unjust treatment and sabotage was moderated by 2 dimensions of moral identity (symbolization and internalization) in the form of a 3-way interaction. The relationship between injustice and sabotage was more pronounced for employees high (vs. low) in symbolization, but this moderation effect was weaker among employees who were high (vs. low) in internalization. Last, employee sabotage was negatively related to job performance ratings.", "title": "" }, { "docid": "6cbdb95791cc214a1b977e92e69904bb", "text": "We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses onpolicy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.", "title": "" }, { "docid": "218ddb719c00ea390d08b2d128481333", "text": "Teeth move through alveolar bone, whether through the normal process of tooth eruption or by strains generated by orthodontic appliances. Both eruption and orthodontics accomplish this feat through similar fundamental biological processes, osteoclastogenesis and osteogenesis, but there are differences that make their mechanisms unique. 
A better appreciation of the molecular and cellular events that regulate osteoclastogenesis and osteogenesis in eruption and orthodontics is not only central to our understanding of how these processes occur, but also is needed for ultimate development of the means to control them. Possible future studies in these areas are also discussed, with particular emphasis on translation of fundamental knowledge to improve dental treatments.", "title": "" }, { "docid": "36658d434d8c0ab11fe28323c971e13b", "text": "The aim of our research was to apply well-known data mining techniques (such as linear neural networks, multi-layered perceptrons, probabilistic neural networks, classification and regression trees, support vector machines and finally a hybrid decision tree – neural network approach) to the problem of predicting the quality of service in call centers; based on the performance data actually collected in a call center of a large insurance company. Our aim was two-fold. First, to compare the performance of models built using the abovementioned techniques and, second, to analyze the characteristics of the input sensitivity in order to better understand the relationship between the performance evaluation process and the actual performance and in this way help improve the performance of call centers. In this paper we summarize our findings.", "title": "" }, { "docid": "eef87d8905b621d2d0bb2b66108a56c1", "text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.", "title": "" }, { "docid": "f69113c023a9900be69fd6109c6d5d30", "text": "The IETF designed the Routing Protocol for Low power and Lossy Networks (RPL) as a candidate for use in constrained networks. Keeping in mind the different requirements of such networks, the protocol was designed to support multiple routing topologies, called DODAGs, constructed using different objective functions, so as to optimize routing based on divergent metrics. A DODAG versioning system is incorporated into RPL in order to ensure that the topology does not become stale and that loops are not formed over time. However, an attacker can exploit this versioning system to gain an advantage in the topology and also acquire children that would be forced to route packets via this node. In this paper we present a study of possible attacks that exploit the DODAG version system. 
The impact on overhead, delivery ratio, end-to-end delay, rank inconsistencies and loops is studied.", "title": "" }, { "docid": "0c49617f6070d73a75fd51fbb50b52dd", "text": "High-quality image inpainting methods based on nonlinear higher-order partial differential equations have been developed in the last few years. These methods are iterative by nature, with a time variable serving as iteration parameter. For reasons of stability a large number of iterations can be needed which results in a computational complexity that is often too large for interactive image manipulation. Based on a detailed analysis of stationary first order transport equations the current paper develops a fast noniterative method for image inpainting. It traverses the inpainting domain by the fast marching method just once while transporting, along the way, image values in a coherence direction robustly estimated by means of the structure tensor. Depending on a measure of coherence strength the method switches continuously between diffusion and directional transport. It satisfies a comparison principle. Experiments with the inpainting of gray tone and color images show that the novel algorithm meets the high level of quality of the methods of Bertalmio et al. (SIG-GRAPH ’00: Proc. 27th Conf. on Computer Graphics and Interactive Techniques, New Orleans, ACM Press/Addison-Wesley, New York, pp. 417–424, 2000), Masnou (IEEE Trans. Image Process. 11(2):68–76, 2002), and Tschumperlé (Int. J. Comput. Vis. 68(1):65–82, 2006), while being faster by at least an order of magnitude.", "title": "" }, { "docid": "6b52cc8055bd565e1f04095da8a7a5e9", "text": "This study examined the effect of lifelong bilingualism on maintaining cognitive functioning and delaying the onset of symptoms of dementia in old age. The sample was selected from the records of 228 patients referred to a Memory Clinic with cognitive complaints. The final sample consisted of 184 patients diagnosed with dementia, 51% of whom were bilingual. The bilinguals showed symptoms of dementia 4 years later than monolinguals, all other measures being equivalent. Additionally, the rate of decline in Mini-Mental State Examination (MMSE) scores over the 4 years subsequent to the diagnosis was the same for a subset of patients in the two groups, suggesting a shift in onset age with no change in rate of progression.", "title": "" }, { "docid": "654592f46fbc578c756cddf4887eafb6", "text": "We investigate the vulnerability of convolutional neural network (CNN) based face-recognition (FR) systems to presentation attacks (PA) performed using custom-made silicone masks. Previous works have studied the vulnerability of CNN-FR systems to 2D PAs such as print-attacks, or digitalvideo replay attacks, and to rigid 3D masks. This is the first study to consider PAs performed using custom-made flexible silicone masks. Before embarking on research on detecting a new variety of PA, it is important to estimate the seriousness of the threat posed by the type of PA. In this work we demonstrate that PAs using custom silicone masks do pose a serious threat to state-of-the-art FR systems. Using a new dataset based on six custom silicone masks, we show that the vulnerability of each FR system in this study is at least 10 times higher than its false match rate. 
We also propose a simple but effective presentation attack detection method, based on a low-cost thermal camera.", "title": "" }, { "docid": "9a7d21701b0c45bfe9d0ba7928266f50", "text": "Increase in demand of electricity for entire applications in any country, need to produce consistently with advanced protection system. Many special protection systems are available based on volume of power distributed and often the load changes without prediction required an advanced and special communication based systems to control the electrical parameters of the generation. Most of the existing systems are reliable on various applications but not perfect for electrical applications. Electrical environment will have lots of disturbance in nature, Due to natural disasters like storms, cyclones or heavy rains transmission and distribution lines may lead to damage. The electrical wire may cut and fall on ground, this leads to very harmful for human beings and may become fatal. So, a rigid, reliable and robust communications like GSM technology instead of many communication techniques used earlier. This enhances speed of communication with distance independenncy. This technology saves human life from this electrical danger by providing the fault detection and automatically stops the electricity to the damaged line and also conveys the message to the electricity board to clear the fault. An Embedded based hardware design is developed and must acquire data from electrical sensing system. A powerful GSM networking is designed to send data from a network to other network. Any change in parameters of transmission is sensed to protect the entire transmission and distribution.", "title": "" }, { "docid": "2f4a4c223c13c4a779ddb546b3e3518c", "text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.", "title": "" }, { "docid": "eb0da55555e816d706908e0695075dc5", "text": "With the fast progression of digital data exchange information security has become an important issue in data communication. Encryption algorithms play an important role in information security system. These algorithms use techniques to enhance the data confidentiality and privacy by making the information indecipherable which can be only be decoded or decrypted by party those possesses the associated key. 
But at the same time, these algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. So we need to evaluate the performance of different cryptographic algorithms to find out best algorithm to use in future. This paper provides evaluation of both symmetric (AES, DES, Blowfish) as well as asymmetric (RSA) cryptographic algorithms by taking different types of files like Binary, text and image files. A comparison has been conducted for these encryption algorithms using evaluation parameters such as encryption time, decryption time and throughput. Simulation results are given to demonstrate the effectiveness of each.", "title": "" }, { "docid": "c05a32fdc2344cb4a6831f5cc033820f", "text": "We have constructed a wave-front sensor to measure the irregular as well as the classical aberrations of the eye, providing a more complete description of the eye's aberrations than has previously been possible. We show that the wave-front sensor provides repeatable and accurate measurements of the eye's wave aberration. The modulation transfer function of the eye computed from the wave-front sensor is in fair, though not complete, agreement with that obtained under similar conditions on the same observers by use of the double-pass and the interferometric techniques. Irregular aberrations, i.e., those beyond defocus, astigmatism, coma, and spherical aberration, do not have a large effect on retinal image quality in normal eyes when the pupil is small (3 mm). However, they play a substantial role when the pupil is large (7.3-mm), reducing visual performance and the resolution of images of the living retina. Although the pattern of aberrations varies from subject to subject, aberrations, including irregular ones, are correlated in left and right eyes of the same subject, indicating that they are not random defects.", "title": "" }, { "docid": "7c8d1b0c77acb4fd6db6e7f887e66133", "text": "Subdural hematomas (SDH) in infants often result from nonaccidental head injury (NAHI), which is diagnosed based on the absence of history of trauma and the presence of associated lesions. When these are lacking, the possibility of spontaneous SDH in infant (SSDHI) is raised, but this entity is hotly debated; in particular, the lack of positive diagnostic criteria has hampered its recognition. The role of arachnoidomegaly, idiopathic macrocephaly, and dehydration in the pathogenesis of SSDHI is also much discussed. We decided to analyze apparent cases of SSDHI from our prospective databank. We selected cases of SDH in infants without systemic disease, history of trauma, and suspicion of NAHI. All cases had fundoscopy and were evaluated for possible NAHI. Head growth curves were reconstructed in order to differentiate idiopathic from symptomatic macrocrania. Sixteen patients, 14 males and two females, were diagnosed with SSDHI. Twelve patients had idiopathic macrocrania, seven of these being previously diagnosed with arachnoidomegaly on imaging. Five had risk factors for dehydration, including two with severe enteritis. Two patients had mild or moderate retinal hemorrhage, considered not indicative of NAHI. Thirteen patients underwent cerebrospinal fluid drainage. The outcome was favorable in almost all cases; one child has sequels, which were attributable to obstetrical difficulties. SSDHI exists but is rare and cannot be diagnosed unless NAHI has been questioned thoroughly. 
The absence of traumatic features is not sufficient, and positive elements like macrocrania, arachnoidomegaly, or severe dehydration are necessary for the diagnosis of SSDHI.", "title": "" }, { "docid": "83f6d9a404f5050b3b7eef68e1de6206", "text": "We propose a simple, yet effective approach for real-time hand pose estimation from single depth images using three-dimensional Convolutional Neural Networks (3D CNNs). Image based features extracted by 2D CNNs are not directly suitable for 3D hand pose estimation due to the lack of 3D spatial information. Our proposed 3D CNN taking a 3D volumetric representation of the hand depth image as input can capture the 3D spatial structure of the input and accurately regress full 3D hand pose in a single pass. In order to make the 3D CNN robust to variations in hand sizes and global orientations, we perform 3D data augmentation on the training data. Experiments show that our proposed 3D CNN based approach outperforms state-of-the-art methods on two challenging hand pose datasets, and is very efficient as our implementation runs at over 215 fps on a standard computer with a single GPU.", "title": "" }, { "docid": "633d32667221f53def4558db23a8b8af", "text": "In this paper we present, ARCTREES, a novel way of visualizing hierarchical and non-hierarchical relations within one interactive visualization. Such a visualization is challenging because it must display hierarchical information in a way that the user can keep his or her mental map of the data set and include relational information without causing misinterpretation. We propose a hierarchical view derived from traditional Treemaps and augment this view with an arc diagram to depict relations. In addition, we present interaction methods that allow the exploration of the data set using Focus+Context techniques for navigation. The development was motivated by a need for understanding relations in structured documents but it is also useful in many other application domains such as project management and calendars.", "title": "" }, { "docid": "b5f22614e5cd76a66b754fd79299493a", "text": "We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time \"twist\": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of \"big data\". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production today, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a \"big data\" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle \"big\" as well as \"fast\" data.", "title": "" }, { "docid": "f7e3d9070792af014b4b9ebaaf047e44", "text": "Machine Learning algorithms are increasingly being used in recent years due to their flexibility in model fitting and increased predictive performance. 
However, the complexity of the models makes them hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research in developing various approaches to understand the model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature–engineering property on simulated examples.", "title": "" }, { "docid": "ff5ced88aefa871760b1131d501f9f37", "text": "A number of applications have emerged over recent years that use datagram transport. These applications include real time video conferencing, Internet telephony, and online games such as Quake and StarCraft. These applications are all delay sensitive and use unreliable datagram transport. Applications that are based on reliable transport can be secured using TLS, but no compelling alternative exists for securing datagram based applications. In this paper we present DTLS, a datagram capable version of TLS. DTLS is extremely similar to TLS and therefore allows reuse of pre-existing protocol infrastructure. Our experimental results show that DTLS adds minimal overhead to a previously non-DTLS capable application.", "title": "" } ]
scidocsrr
c2ee1f1e8bc5b50cdb12761b88029339
Business Process Analytics
[ { "docid": "4ca4ccd53064c7a9189fef3e801612a0", "text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.", "title": "" } ]
[ { "docid": "1381104da316d0e1b66fce7f3b51a153", "text": "Automatic segmentation and quantification of skeletal structures has a variety of applications for biological research. Although solutions for good quality X-ray images of human skeletal structures are in existence in recent years, automatic solutions working on poor quality X-ray images of mice are rare. This paper proposes a fully automatic solution for spine segmentation and curvature quantification from X-ray images of mice. The proposed solution consists of three stages, namely preparation of the region of interest, spine segmentation, and spine curvature quantification, aiming to overcome technical difficulties in processing the X-ray images. We examined six different automatic measurements for quantifying the spine curvature through tests on a sample data set of 100 images. The experimental results show that some of the automatic measures are very close to and consistent with the best manual measurement results by annotators. The test results also demonstrate the effectiveness of the curvature quantification produced by the proposed solution in distinguishing abnormally shaped spines from the normal ones with accuracy up to 98.6%.", "title": "" }, { "docid": "e9db97070b87e567ff7904fe40f30086", "text": "OBJECTIVES\nCongenital adrenal hyperplasia (CAH) is a disease that occurs during fetal development and can lead to virilization in females or death in newborn males if not discovered early in life. Because of this there is a need to seek morphological markers in order to help diagnose the disease. In order to test the hypothesis that prenatal hormones can affect the sexual dimorphic pattern 2D:4D digit ratio in individual with CAH, the aim of this study was to compare the digit ratio in female and male patients with CAH and control subjects.\n\n\nMETHODS\nThe 2D:4D ratios in both hands of 40 patients (31 females-46, XX, and 9 males-46, XY) were compared with the measures of control individuals without CAH (100 males and 100 females).\n\n\nRESULTS\nFemales with CAH showed 2D:4D ratios typical of male controls (0.950 and 0.947) in both hands (P < 0.001). In CAH males the left hand 2D:4D ratio (0.983) was statistically different from that of male controls (P < 0.05).\n\n\nCONCLUSIONS\nThese finding support the idea that sexual dimorphism in skeletal development in early fetal life is associated with differences between the exposure to androgens in males and females, and significant differences associated with adrenal hyperplasia. Although the effects of prenatal androgens on skeletal developmental are supported by numerous studies, further investigation is yet required to clarify the disease and establish the digit ratio as a biomarker for CAH.", "title": "" }, { "docid": "1420f07e309c114dfc264797ab82ceec", "text": "Introduction: The knowledge of clinical spectrum and epidemiological profile of critically ill children plays a significant role in the planning of health policies that would mitigate various factors related to the evolution of diseases prevalent in these sectors. The data collected enable prospective comparisons to be made with benchmark standards including regional and international units for the continuous pursuit of providing essential health care and improving the quality of patient care. Purpose: To study the clinical spectrum and epidemiological profile of the critically ill children admitted to the pediatric intensive care unit at a tertiary care center in South India. 
Materials and Methods: Descriptive data were collected retrospectively from the Hospital medical records between 2013 and 2016. Results: A total of 1833 patients were analyzed during the 3-year period, of which 1166 (63.6%) were males and 667 (36.4%) were females. A mean duration of stay in pediatric intensive care unit (PICU) was 2.21 ± 1.90 days. Respiratory system was the most common system affected in our study 738 (40.2 %). Acute poisoning in children constituted 99 patients (5.4%). We observed a mortality rate of 1.96%, with no association with age or sex. The mortality rate was highest in infants below 1-year of age (50%). In our study, the leading systemic cause for both admission and death was the respiratory system. Conclusion: This study analyses the epidemiological pattern of patients admitted to PICU in South India. We would also like to emphasize on public health prevention strategies and community health education which needs to be reinforced, especially in remote places and in rural India. This, in turn, would help in decreasing the cases of unknown bites, scorpion sting, poisoning and arthropod-borne illnesses, which are more prevalent in this part of the country.", "title": "" }, { "docid": "46c2d96220d670115f9b4dba4e600ec8", "text": "The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platform in the context of big data analytics, specific implementation level details of the widely used k-means clustering algorithm on various platforms are also described in the form pseudocode.", "title": "" }, { "docid": "6a1a9c6cb2da06ee246af79fdeedbed9", "text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. 
This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review", "title": "" }, { "docid": "a112cd31e136054bdf9d34c82b960d95", "text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "b45bb513f7bd9de4941785490945d53e", "text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.", "title": "" }, { "docid": "8930924a223ef6a8d19e52ab5c6e7736", "text": "Modern perception systems are notoriously complex, featuring dozens of interacting parameters that must be tuned to achieve good performance. 
Conventional tuning approaches require expensive ground truth, while heuristic methods are difficult to generalize. In this work, we propose an introspective ground-truth-free approach to evaluating the performance of a generic perception system. By using the posterior distribution estimate generated by a Bayesian estimator, we show that the expected performance can be estimated efficiently and without ground truth. Our simulated and physical experiments in a demonstrative indoor ground robot state estimation application show that our approach can order parameters similarly to using a ground-truth system, and is able to accurately identify top-performing parameters in varying contexts. In contrast, baseline approaches that reason only about observation log-likelihood fail in the face of challenging perceptual phenomena.", "title": "" }, { "docid": "69bb10420be07fe9fb0fd372c606d04e", "text": "Contextual text mining is concerned with extracting topical themes from a text collection with context information (e.g., time and location) and comparing/analyzing the variations of themes over different contexts. Since the topics covered in a document are usually related to the context of the document, analyzing topical themes within context can potentially reveal many interesting theme patterns. In this paper, we generalize some of these models proposed in the previous work and we propose a new general probabilistic model for contextual text mining that can cover several existing models as special cases. Specifically, we extend the probabilistic latent semantic analysis (PLSA) model by introducing context variables to model the context of a document. The proposed mixture model, called contextual probabilistic latent semantic analysis (CPLSA) model, can be applied to many interesting mining tasks, such as temporal text mining, spatiotemporal text mining, author-topic analysis, and cross-collection comparative analysis. Empirical experiments show that the proposed mixture model can discover themes and their contextual variations effectively.", "title": "" }, { "docid": "242a2f64fc103af641320c1efe338412", "text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. 
We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.", "title": "" }, { "docid": "471e835e66b1bdfabd5de8a14914e9e6", "text": "Context. The theme of the 2003 annual meeting is \"accountability for educational quality\". The emphasis on accountability reflects the increasing need for educators, students and politicians to demonstrate the effectiveness of educational systems. As part of the growing emphasis on accountability, high stakes achievement tests have become increasingly important and a student's performance on such tests can have a significant impact on his or her access to future educational opportunities. At the same time, concern is growing that the use of high stakes achievement tests, such as the SATMath exam and others (e.g., the Massachusetts MCAS exam) simply exacerbates existing group differences, and puts female students and those from traditionally underrepresented minority groups at a disadvantage (Willingham & Cole, 1997). New approaches are required to help all students perform to the best of their ability on high stakes tests.", "title": "" }, { "docid": "c72a2e504934580f9542a62b7037cdd4", "text": "Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance.", "title": "" }, { "docid": "2258a0ba739557d489a796f050fad3e0", "text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. 
People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some additions to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by the CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on a Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Although the mentioned toolboxes are mainly for control systems, they can be \"abused\" for solutions of general problems related to fractional calculus as well.", "title": "" }, { "docid": "322fd3b0c6c833bac9598b510dc40b98", "text": "Quality assessment is an indispensable technique in a large body of media applications, i.e., photo retargeting, scenery rendering, and video summarization. In this paper, a fully automatic framework is proposed to mimic how humans subjectively perceive media quality. The key is a locality-preserved sparse encoding algorithm that accurately discovers human gaze shifting paths from each image or video clip. In particular, we first extract local image descriptors from each image/video, and subsequently project them into the so-called perceptual space. Then, a nonnegative matrix factorization (NMF) algorithm is proposed that represents each graphlet by a linear and sparse combination of the basis ones. Since each graphlet is visually/semantically similar to its neighbors, a locality-preserved constraint is encoded into the NMF algorithm. Mathematically, the saliency of each graphlet is quantified by the norm of its sparse codes. Afterward, we sequentially link them into a path to simulate human gaze allocation. Finally, a probabilistic quality model is learned based on such paths extracted from a collection of photos/videos, which are marked as high quality ones via multiple Flickr users. Comprehensive experiments have demonstrated that: 1) our quality model outperforms many of its competitors significantly, and 2) the learned paths are on average 89.5% consistent with real human gaze shifting paths.", "title": "" }, { "docid": "013f9499b9a3e1ffdd03aa4de48d233b", "text": "We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a \"sanitization\" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. 
The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a \"synthetic data set\" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role.\n For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.", "title": "" }, { "docid": "ec4dcce4f53e38909be438beeb62b1df", "text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.", "title": "" }, { "docid": "9c3172266da959ee3cf9e7316bbcba96", "text": "We propose a new research direction for eye-typing which is potentially much faster: dwell-free eye-typing. Dwell-free eye-typing is in principle possible because we can exploit the high redundancy of natural languages to allow users to simply look at or near their desired letters without stopping to dwell on each letter. As a first step we created a system that simulated a perfect recognizer for dwell-free eye-typing. We used this system to investigate how fast users can potentially write using a dwell-free eye-typing interface. We found that after 40 minutes of practice, users reached a mean entry rate of 46 wpm. This indicates that dwell-free eye-typing may be more than twice as fast as the current state-of-the-art methods for writing by gaze. A human performance model further demonstrates that it is highly unlikely traditional eye-typing systems will ever surpass our dwell-free eye-typing performance estimate.", "title": "" }, { "docid": "681aba7f37ae6807824c299454af5721", "text": "Due to their rapid growth and deployment, Internet of things (IoT) devices have become a central aspect of our daily lives. However, they tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can help us secure the IoT devices. However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors. This approach is vulnerable to adversarial attacks since all observations are assumed to be benign while training the anomaly detection model. 
In this paper, we propose CIoTA, a lightweight framework that utilizes the blockchain concept to perform distributed and collaborative anomaly detection for devices with limited resources. CIoTA uses blockchain to incrementally update a trusted anomaly detection model via self-attestation and consensus among IoT devices. We evaluate CIoTA on our own distributed IoT simulation platform, which consists of 48 Raspberry Pis, to demonstrate CIoTA’s ability to enhance the security of each device and the security of the network as a whole.", "title": "" }, { "docid": "7c482427e4f0305c32210093e803eb78", "text": "A healable transparent capacitive touch screen sensor has been fabricated based on a healable silver nanowire-polymer composite electrode. The composite electrode features a layer of silver nanowire percolation network embedded into the surface layer of a polymer substrate comprising an ultrathin soldering polymer layer to confine the nanowires to the surface of a healable Diels-Alder cycloaddition copolymer and to attain low contact resistance between the nanowires. The composite electrode has a figure-of-merit sheet resistance of 18 Ω/sq with 80% transmittance at 550 nm. A surface crack cut on the conductive surface with 18 Ω is healed by heating at 100 °C, and the sheet resistance recovers to 21 Ω in 6 min. A healable touch screen sensor with an array of 8×8 capacitive sensing points is prepared by stacking two composite films patterned with 8 rows and 8 columns of coupling electrodes at 90° angle. After deliberate damage, the coupling electrodes recover touch sensing function upon heating at 80 °C for 30 s. A capacitive touch screen based on Arduino is demonstrated capable of performing quick recovery from malfunction caused by a razor blade cutting. After four cycles of cutting and healing, the sensor array remains functional.", "title": "" }, { "docid": "d8127fc372994baee6fd8632d585a347", "text": "Dynamic query interfaces (DQIs) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new approach to DQI algorithms that works well with large databases.", "title": "" } ]
scidocsrr
1f18e5170c0de6160d9360e87e80eca2
MODEC: Multimodal Decomposable Models for Human Pose Estimation
[ { "docid": "ba085cc5591471b8a46e391edf2e78d4", "text": "Despite recent successes, pose estimators are still somewhat fragile, and they frequently rely on a precise knowledge of the location of the object. Unfortunately, articulated objects are also very difficult to detect. Knowledge about the articulated nature of these objects, however, can substantially contribute to the task of finding them in an image. It is somewhat surprising, that these two tasks are usually treated entirely separately. In this paper, we propose an Articulated Part-based Model (APM) for jointly detecting objects and estimating their poses. APM recursively represents an object as a collection of parts at multiple levels of detail, from coarse-to-fine, where parts at every level are connected to a coarser level through a parent-child relationship (Fig. 1(b)-Horizontal). Parts are further grouped into part-types (e.g., left-facing head, long stretching arm, etc) so as to model appearance variations (Fig. 1(b)-Vertical). By having the ability to share appearance models of part types and by decomposing complex poses into parent-child pairwise relationships, APM strikes a good balance between model complexity and model richness. Extensive quantitative and qualitative experiment results on public datasets show that APM outperforms state-of-the-art methods. We also show results on PASCAL 2007 - cats and dogs - two highly challenging articulated object categories.", "title": "" } ]
[ { "docid": "371ab49af58c0eb4dc55f3fdf1c741f0", "text": "Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.", "title": "" }, { "docid": "1047e89937593d2e08c5433652316d73", "text": "We describe a set of top-performing systems at the SemEval 2015 English Semantic Textual Similarity (STS) task. Given two English sentences, each system outputs the degree of their semantic similarity. Our unsupervised system, which is based on word alignments across the two input sentences, ranked 5th among 73 submitted system runs with a mean correlation of 79.19% with human annotations. We also submitted two runs of a supervised system which uses word alignments and similarities between compositional sentence vectors as its features. Our best supervised run ranked 1st with a mean correlation of 80.15%.", "title": "" }, { "docid": "82e170219f7fefdc2c36eb89e44fa0f5", "text": "The Internet of Things (IOT), the idea of getting real-world objects connected with each other, will change the ways we organize, obtain and consume information radically. Through sensor networks, agriculture can be connected to the IOT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of the connections, the agronomists will have better understanding of crop growth models and farming practices will be improved as well. This paper reports on the design of the sensor network when connecting agriculture to the IOT. Reliability, management, interoperability, low cost and commercialization are considered in the design. Finally, we share our experiences in both development and deployment.", "title": "" }, { "docid": "70df369be2c95afd04467cd291e60175", "text": "In this paper, we introduce two novel metric learning algorithms, χ-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. 
On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ-LMNN, obtain best results in 19 out of 20 learning settings.", "title": "" }, { "docid": "416f9184ae6b0c04803794b1ab2b8f50", "text": "Although hydrophilic small molecule drugs are widely used in the clinic, their rapid clearance, suboptimal biodistribution, low intracellular absorption and toxicity can limit their therapeutic efficacy. These drawbacks can potentially be overcome by loading the drug into delivery systems, particularly liposomes; however, low encapsulation efficiency usually results. Many strategies are available to improve both the drug encapsulation efficiency and delivery to the target site to reduce side effects. For encapsulation, passive and active strategies are available. Passive strategies encompass the proper selection of the composition of the formulation, zeta potential, particle size and preparation method. Moreover, many weak acids and bases, such as doxorubicin, can be actively loaded with high efficiency. It is highly desirable that once the drug is encapsulated, it should be released preferentially at the target site, resulting in an optimal therapeutic effect devoid of side effects. For this purpose, targeted and triggered delivery approaches are available. The rapidly increasing knowledge of the many overexpressed biochemical makers in pathological sites, reviewed herein, has enabled the development of liposomes decorated with ligands for cell-surface receptors and active delivery. Furthermore, many liposomal formulations have been designed to actively release their content in response to specific stimuli, such as a pH decrease, heat, external alternating magnetic field, ultrasound or light. More than half a century after the discovery of liposomes, some hydrophilic small molecule drugs loaded in liposomes with high encapsulation efficiency are available on the market. However, targeted liposomes or formulations able to deliver the drug after a stimulus are not yet a reality in the clinic and are still awaited.", "title": "" }, { "docid": "2d95b9919e1825ea46b5c5e6a545180c", "text": "Computed tomography (CT) generates a stack of cross-sectional images covering a region of the body. The visual assessment of these images for the identification of potential abnormalities is a challenging and time consuming task due to the large amount of information that needs to be processed. In this article we propose a deep artificial neural network architecture, ReCTnet, for the fully-automated detection of pulmonary nodules in CT scans. The architecture learns to distinguish nodules and normal structures at the pixel level and generates three-dimensional probability maps highlighting areas that are likely to harbour the objects of interest. Convolutional and recurrent layers are combined to learn expressive image representations exploiting the spatial dependencies across axial slices. We demonstrate that leveraging intra-slice dependencies substantially increases the sensitivity to detect pulmonary nodules without inflating the false positive rate. On the publicly available LIDC/IDRI dataset consisting of 1,018 annotated CT scans, ReCTnet reaches a detection sensitivity of 90.5% with an average of 4.5 false positives per scan. 
Comparisons with a competing multi-channel convolutional neural network for multislice segmentation and other published methodologies using the same dataset provide evidence that ReCTnet offers significant performance gains.", "title": "" }, { "docid": "96aa1f19a00226af7b5bbe0bb080582e", "text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.", "title": "" }, { "docid": "630c4e87333606c6c8e7345cb0865c64", "text": "MapReduce plays a critical role as a leading framework for big data analytics. In this paper, we consider a geo-distributed cloud architecture that provides MapReduce services based on the big data collected from end users all over the world. Existing work handles MapReduce jobs by a traditional computation-centric approach that all input data distributed in multiple clouds are aggregated to a virtual cluster that resides in a single cloud. 
Its poor efficiency and high cost for big data support motivate us to propose a novel data-centric architecture with three key techniques, namely, cross-cloud virtual cluster, data-centric job placement, and network coding based traffic routing. Our design leads to an optimization framework with the objective of minimizing both computation and transmission cost for running a set of MapReduce jobs in geo-distributed clouds. We further design a parallel algorithm by decomposing the original large-scale problem into several distributively solvable subproblems that are coordinated by a high-level master problem. Finally, we conduct real-world experiments and extensive simulations to show that our proposal significantly outperforms the existing works.", "title": "" }, { "docid": "3ea533be157b63e673f43205d195d13e", "text": "Recent work on fairness in machine learning has begun to be extended to recommender systems. While there is a tension between the goals of fairness and of personalization, there are contexts in which a global evaluations of outcomes is possible and where equity across such outcomes is a desirable goal. In this paper, we introduce the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the SLIM algorithm can be used to improve the balance of user neighborhoods, with the result of achieving greater outcome fairness in a real-world dataset with minimal loss in ranking performance.", "title": "" }, { "docid": "1a6e9229f6bc8f6dc0b9a027e1d26607", "text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.", "title": "" }, { "docid": "ce53aa803d587301a47166c483ecec34", "text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.", "title": "" }, { "docid": "6091748ab964ea58a06f9b8335f9829e", "text": "Apprenticeship is an inherently social learning method with a long history of helping novices become experts in fields as diverse as midwifery, construction, and law. 
At the center of apprenticeship is the concept of more experienced people assisting less experienced ones, providing structure and examples to support the attainment of goals. Traditionally apprenticeship has been associated with learning in the context of becoming skilled in a trade or craft—a task that typically requires both the acquisition of knowledge, concepts, and perhaps psychomotor skills and the development of the ability to apply the knowledge and skills in a context-appropriate manner—and far predates formal schooling as it is known today. In many nonindustrialized nations apprenticeship remains the predominant method of teaching and learning. However, the overall concept of learning from experts through social interactions is not one that should be relegated to vocational and trade-based training while K–12 and higher educational institutions seek to prepare students for operating in an information-based society. Apprenticeship as a method of teaching and learning is just as relevant within the cognitive and metacognitive domain as it is in the psychomotor domain. In the last 20 years, the recognition and popularity of facilitating learning of all types through social methods have grown tremendously. Educators and educational researchers have looked to informal learning settings, where such methods have been in continuous use, as a basis for creating more formal instructional methods and activities that take advantage of these social constructivist methods. Cognitive apprenticeship— essentially, the use of an apprentice model to support learning in the cognitive domain—is one such method that has gained respect and popularity throughout the 1990s and into the 2000s. Scaffolding, modeling, mentoring, and coaching are all methods of teaching and learning that draw on social constructivist learning theory. As such, they promote learning that occurs through social interactions involving negotiation of content, understanding, and learner needs, and all three generally are considered forms of cognitive apprenticeship (although certainly they are not the only methods). This chapter first explores prevailing definitions and underlying theories of these teaching and learning strategies and then reviews the state of research in these area.", "title": "" }, { "docid": "5c5e9a93b4838cbebd1d031a6d1038c4", "text": "Live migration of virtual machines (VMs) is key feature of virtualization that is extensively leveraged in IaaS cloud environments: it is the basic building block of several important features, such as load balancing, pro-active fault tolerance, power management, online maintenance, etc. While most live migration efforts concentrate on how to transfer the memory from source to destination during the migration process, comparatively little attention has been devoted to the transfer of storage. This problem is gaining increasing importance: due to performance reasons, virtual machines that run large-scale, data-intensive applications tend to rely on local storage, which poses a difficult challenge on live migration: it needs to handle storage transfer in addition to memory transfer. This paper proposes a memory migration independent approach that addresses this challenge. It relies on a hybrid active push / prioritized prefetch strategy, which makes it highly resilient to rapid changes of disk state exhibited by I/O intensive workloads. At the same time, it is minimally intrusive in order to ensure a maximum of portability with a wide range of hypervisors. 
Large scale experiments that involve multiple simultaneous migrations of both synthetic benchmarks and a real scientific application show improvements of up to 10x faster migration time, 10x less bandwidth consumption and 8x less performance degradation over state-of-art.", "title": "" }, { "docid": "26c003f70bbaade54b84dcb48d2a08c9", "text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.", "title": "" }, { "docid": "181a3d68fd5b5afc3527393fc3b276f9", "text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.", "title": "" }, { "docid": "1c1f5159ab51923fcc4fef2fad501159", "text": "This article assesses the consequences of poverty between a child's prenatal year and 5th birthday for several adult achievement, health, and behavior outcomes, measured as late as age 37. 
Using data from the Panel Study of Income Dynamics (1,589) and controlling for economic conditions in middle childhood and adolescence, as well as demographic conditions at the time of the birth, findings indicate statistically significant and, in some cases, quantitatively large detrimental effects of early poverty on a number of attainment-related outcomes (adult earnings and work hours). Early-childhood poverty was not associated with such behavioral measures as out-of-wedlock childbearing and arrests. Most of the adult earnings effects appear to operate through early poverty's association with adult work hours.", "title": "" }, { "docid": "3ae6cb348cff49851cf15036483e2117", "text": "Rate-Distortion Methods for Image and Video Compression: An. Or Laplacian p.d.f.s and optimal bit allocation techniques to ensure that bits.Rate-Distortion Methods for Image and Video Compression. Coding Parameters: chosen on input-by-input rampant caries pdf basis to optimize. In this article we provide an overview of rate-distortion R-D based optimization techniques and their practical application to image and video. Rate-distortion methods for image and video compression. Enter the password to open this PDF file.Bernd Girod: EE368b Image and Video Compression. Lower the bit-rate R by allowing some acceptable distortion. Consideration of a specific coding method. Bit-rate at least R.rate-distortion R-D based optimization techniques and their practical application to. Area of R-D optimized image and video coding see 1, 2 and many of the. Such Intra coding alone is in common use as ramones guitar tab pdf a video coding method today. MPEG-2: A step higher in bit rate, picture quality, and popularity.coding, rate distortion RD optimization, soft decision quantization SDQ. RD methods for video compression can be classified into two categories. Practical SDQ include without limitation SDQ in JPEG image coding and H. However, since we know that most lossy compression techniques operate on data. In image and video compression, the human perception models are less well. The conditional PDF QY Xy x that minimize rate for a given distortion D.The H. 264AVC video coding standard has been recently proposed by the Joint. MB which determine the overall rate and the distortion of the coded. Figure 2: The picture encoding process in the proposed method. Selection of λ and.fact, operational rate-distortion methods have come into wide use for image and video coders. In previous work, de Queiroz applied this technique to finding.", "title": "" }, { "docid": "fdc875181fe37e6b469d07e0e580fadb", "text": "Attention mechanism has recently attracted increasing attentions in the area of facial action unit (AU) detection. By finding the region of interest (ROI) of each AU with the attention mechanism, AU related local features can be captured. Most existing attention based AU detection works use prior knowledge to generate fixed attentions or refine the predefined attentions within a small range, which limits their capacity to model various AUs. In this paper, we propose a novel end-to-end weakly-supervised attention and relation learning framework for AU detection with only AU labels, which has not been explored before. In particular, multi-scale features shared by each AU are learned firstly, and then both channel-wise attentions and spatial attentions are learned to select and extract AU related local features. 
Moreover, pixel-level relations for AUs are further captured to refine spatial attentions so as to extract more relevant local features. Extensive experiments on BP4D and DISFA benchmarks demonstrate that our framework (i) outperforms the state-of-the-art methods for AU detection, and (ii) can find the ROI of each AU and capture the relations among AUs adaptively.", "title": "" }, { "docid": "8be921cfab4586b6a19262da9a1637de", "text": "Automatic segmentation of microscopy images is an important task in medical image processing and analysis. Nucleus detection is an important example of this task. Mask-RCNN is a recently proposed state-of-the-art algorithm for object detection, object localization, and object instance segmentation of natural images. In this paper we demonstrate that Mask-RCNN can be used to perform highly effective and efficient automatic segmentations of a wide range of microscopy images of cell nuclei, for a variety of cells acquired under a variety of conditions.", "title": "" }, { "docid": "37a47bd2561b534d5734d250d16ff1c2", "text": "Many chronic eye diseases can be conveniently investigated by observing structural changes in retinal blood vessel diameters. However, detecting changes in an accurate manner in the face of interfering pathologies is a challenging task. The task is generally performed through an automatic computerized process. The literature shows that powerful methods have already been proposed to identify vessels in retinal images. Though significant progress has been achieved toward methods to separate blood vessels from the uneven background, the methods still lack the necessary sensitivity to segment fine vessels. Recently, a multi-scale line-detector method proved its worth in segmenting thin vessels. This paper presents modifications to boost the sensitivity of this multi-scale line detector. First, a varying window size with line-detector mask is suggested to detect small vessels. Second, external orientations are fed to steer the multi-scale line detectors into alignment with flow directions. Third, optimal weights are suggested for weighted linear combinations of individual line-detector responses. Fourth, instead of using one global threshold, a hysteresis threshold is proposed to find a connected vessel tree. The overall impact of these modifications is a large improvement in the noise removal capability of the conventional multi-scale line-detector method while finding more of the thin vessels. The contrast-sensitive steps are validated using a publicly available database and show considerable promise for the suggested strategy.", "title": "" } ]
scidocsrr
06c54a722f4ecdb598abb4a60d3f0a74
Vocabulary Selection Strategies for Neural Machine Translation
[ { "docid": "4b983214cbc0bf42ee8d04ebf8a31fa8", "text": "We introduce BilBOWA (“Bilingual Bag-of-Words without Alignments”), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large datasets and does not require wordaligned training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient crosslingual feature learning. We show that bilingual embeddings learned using the proposed model outperforms state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on the WMT11 data. Our code will be made available as part of the open-source word2vec toolkit.", "title": "" } ]
[ { "docid": "0881f7ce5bc26fbf0dd718be3bf1ec79", "text": "The prophylactic administration of dimenhydrinate (Dramamine) is as effective as the use of ondansetron (Zofran) in preventing postoperative nausea and vomiting (PONV) in patients undergoing elective laparoscopic cholecystectomy. A prospective double-blind randomized study was performed in a tertiary care referral center. For this study, 128 American Society of Anesthesiology (ASA) physical statuses I, II, and III patients were randomly assigned to receive either ondansetron 4 mg intravenously (IV) at $17 per dose (group 1) or dimenhydrinate 50 mg IV at $2.50 per dose (group 2) before induction of anesthesia. The end points evaluated were frequency of PONV, need for rescue antiemetics, need for overnight hospitalization secondary to persistent nausea and vomiting, and frequency PONV 24 h after discharge. Chi-square tests and student’s t-test were used to determine the significance of differences among groups. Of the 128 patients enrolled in this study, 20 were excluded: 15 patients received an additional antiemetic preoperative; 4 were converted to open cholecystectomies; and 1 procedure was aborted due to carcinomatosis. Of the 108 remaining participants, 50 received ondansetron (group 1) and 58 received dimenhydrinate (group 2). Both groups were well matched for demographics including gender, ASA class, and history of motion sickness. The need for rescue antiemetics occurred in 34% of group 1 and 29% of Group 2 (p=0.376), postoperative vomiting in 6% of group 1 and 12% of group 2 (p=0.228), and postoperative nausea in 42% of group 1 and 34% of group 2 (p=0.422). One group 1 patient and two group 2 patients required overnight hospitalization for persistent nausea, a difference that was not significant. Rates of PONV 24 h after discharge were similar between groups 1 and 2 (10% vs 14%, p=0.397 and 2% vs 5%, p=0.375, respectively). Prophylactic administration of dimenhydrinate is as effective as the use of ondansetron in preventing PONV in patients undergoing elective laparoscopic cholecystectomy. Dimenhydrinate is the preferred drug because it is less expensive. With more than 500,000 laparoscopic cholecystectomies performed in the United States each year, the potential drug cost savings from the prophylactic administration of dimenhydrinate instead of ondansetron exceed $7.25 million per year.", "title": "" }, { "docid": "7487f889eae6a32fc1afab23e54de9b8", "text": "Although many researchers have investigated the use of different powertrain topologies, component sizes, and control strategies in fuel-cell vehicles, a detailed parametric study of the vehicle types must be conducted before a fair comparison of fuel-cell vehicle types can be performed. This paper compares the near-optimal configurations for three topologies of vehicles: fuel-cell-battery, fuel-cell-ultracapacitor, and fuel-cell-battery-ultracapacitor. The objective function includes performance, fuel economy, and powertrain cost. The vehicle models, including detailed dc/dc converter models, are programmed in Matlab/Simulink for the customized parametric study. A controller variable for each vehicle type is varied in the optimization.", "title": "" }, { "docid": "7f9a565c10fdee58cbe76b7e9351f037", "text": "The effects of iron substitution on the structural and magnetic properties of the GdCo(12-x)Fe(x)B6 (0 ≤ x ≤ 3) series of compounds have been studied. 
All of the compounds form in the rhombohedral SrNi12B6-type structure and exhibit ferrimagnetic behaviour below room temperature: T(C) decreases from 158 K for x = 0 to 93 K for x = 3. (155)Gd Mössbauer spectroscopy indicates that the easy magnetization axis changes from axial to basal-plane upon substitution of Fe for Co. This observation has been confirmed using neutron powder diffraction. The axial to basal-plane transition is remarkably sensitive to the Fe content and comparison with earlier (57)Fe-doping studies suggests that the boundary lies below x = 0.1.", "title": "" }, { "docid": "2a057079c544b97dded598b6f0d750ed", "text": "Introduction Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions:", "title": "" }, { "docid": "712098110f7713022e4664807ac106c7", "text": "Getting a machine to understand human narratives has been a classic challenge for NLP and AI. This paper proposes a new representation for the temporal structure of narratives. The representation is parsimonious, using temporal relations as surrogates for discourse relations. The narrative models, called Temporal Discourse Models, are treestructured, where nodes include abstract events interpreted as pairs of time points and where the dominance relation is expressed by temporal inclusion. Annotation examples and challenges are discussed, along with a report on progress to date in creating annotated corpora.", "title": "" }, { "docid": "95ee34da123289b9c538471844e39d8c", "text": "Population-level analyses often use average quantities to describe heterogeneous systems, particularly when variation does not arise from identifiable groups. A prominent example, central to our current understanding of epidemic spread, is the basic reproductive number, R0, which is defined as the mean number of infections caused by an infected individual in a susceptible population. Population estimates of R0 can obscure considerable individual variation in infectiousness, as highlighted during the global emergence of severe acute respiratory syndrome (SARS) by numerous ‘superspreading events’ in which certain individuals infected unusually large numbers of secondary cases. For diseases transmitted by non-sexual direct contacts, such as SARS or smallpox, individual variation is difficult to measure empirically, and thus its importance for outbreak dynamics has been unclear. Here we present an integrated theoretical and statistical analysis of the influence of individual variation in infectiousness on disease emergence. Using contact tracing data from eight directly transmitted diseases, we show that the distribution of individual infectiousness around R0 is often highly skewed. Model predictions accounting for this variation differ sharply from average-based approaches, with disease extinction more likely and outbreaks rarer but more explosive. Using these models, we explore implications for outbreak control, showing that individual-specific control measures outperform population-wide measures. 
Moreover, the dramatic improvements achieved through targeted control policies emphasize the need to identify predictive correlates of higher infectiousness. Our findings indicate that superspreading is a normal feature of disease spread, and to frame ongoing discussion we propose a rigorous definition for superspreading events and a method to predict their frequency.", "title": "" }, { "docid": "d6f1a0be144200bad7fe5fbc235254fd", "text": "When observing the actions of others, humans make inferences about why the others acted as they did, and what this implies about their view of the world. Humans also use the fact that their actions will be interpreted in this manner when observed by others, allowing them to act informatively and thereby communicate efficiently with others. Although learning algorithms have recently achieved superhuman performance in a number of two-player, zero-sum games, scalable multi-agent reinforcement learning algorithms that can discover effective strategies and conventions in complex, partially observable settings have proven elusive. We present the Bayesian action decoder (BAD), a new multi-agent learning method that uses an approximate Bayesian update to obtain a public belief that conditions on the actions taken by all agents in the environment. Together with the public belief, this Bayesian update effectively defines a new Markov decision process, the public belief MDP, in which the action space consists of deterministic partial policies, parameterised by neural networks, that can be sampled for a given public state. BAD exploits the fact that an agent acting only on this public belief state can still learn to use its private information if the action space is augmented to be over partial policies mapping private information into environment actions. The Bayesian update is also closely related to the theory of mind reasoning that humans carry out when observing others’ actions. We first validate BAD on a proof-of-principle two-step matrix game, where it outperforms policy gradient methods. We then evaluate BAD on the challenging, cooperative partial-information card game Hanabi, where in the two-player setting the method surpasses all previously published learning and hand-coded approaches, establishing a new state of the art.", "title": "" }, { "docid": "08d2f25f2dd1e8bd187da6950df2cdc7", "text": "Apache Flink is an open source system for expressive, declarative, fast, and efficient data analysis on both historical (batch) and real-time (streaming) data. Flink combines the scalability and programming flexibility of distributed MapReduce-like platforms with the efficiency, out-of-core execution, and query optimization capabilities found in parallel databases. At its core, Flink builds on a distributed dataflow runtime that unifies batch and incremental computations over a true-streaming pipelined execution. Its programming model allows for stateful, fault tolerant computations, flexible user-defined windowing semantics for streaming and unique support for iterations. Flink is converging into a use-case complete system for parallel data processing with a wide range of top level libraries ranging from machine learning through to graph processing. 
Apache Flink originates from the Stratosphere project led by TU Berlin and has led to various scientific papers (e.g., in VLDBJ, SIGMOD, (P)VLDB, ICDE, and HPDC). In this half-day tutorial we will introduce Apache Flink, and give a tutorial on its streaming capabilities using concrete examples of application scenarios, focusing on concepts such as stream windowing, and stateful operators.", "title": "" }, { "docid": "80e4d60d0687e44b027074c193fe2083", "text": "Sexual activity involves excitement with high arousal and pleasure as typical features of emotions. Brain activations specifically related to erotic feelings and those related to general emotional processing are therefore hard to disentangle. Using fMRI in 21 healthy subjects (11 males and 10 females), we investigated regions that show activations specifically related to the viewing of sexually intense pictures while controlling for general emotional arousal (GEA) or pleasure. Activations in the ventral striatum and hypothalamus were found to be modulated by the stimulus' specific sexual intensity (SSI) while activations in the anterior cingulate cortex were associated with an interaction between sexual intensity and emotional valence. In contrast, activation in other regions like the dorsomedial prefrontal cortex, the mediodorsal thalamus and the amygdala was associated only with a general emotional component during sexual arousal. No differences were found in these effects when comparing females and males. Our findings demonstrate for the first time neural differentiation between emotional and sexual components in the neural network underlying sexual arousal.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "5bd713c468f48313e42b399f441bb709", "text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. 
As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.", "title": "" }, { "docid": "ec4213b73e6685f097c48c00cd930182", "text": "The need for photonic components with linear electric field has increased for both analog and digital communication systems as these systems evolve toward faster speed, higher spectral efficiency (SE), and wider bandwidth environments. Here, we present a common linearization platform for electric field coming from any Mach-Zehnder (MZ)-based devices. We show how this common platform has applications and advantages as linear Frequency Discriminator (FD) device for phase-modulated direct-detection Microwave Photonic Link (MPLs), and as linear Electric Field Modulator (LOFM) for multilevel coherent transmitter in digital optical communication systems.", "title": "" }, { "docid": "4b54cf876d3ab7c7277605125055c6c3", "text": "We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the hard concrete distribution for the gates, which is obtained by “stretching” a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.", "title": "" }, { "docid": "132bb5b7024de19f4160664edca4b4f5", "text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. 
Generally firms pursue only one of the above generic strategies. However, some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in the short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.", "title": "" }, { "docid": "0b6167c7a42142a19fb94dcc4a96e4d7", "text": "AI Planning has been widely used for narrative generation and the control of virtual actors in interactive storytelling. Planning models for such dynamic environments must include alternative actions which enable deviation away from a baseline storyline in order to generate multiple story variants and to be able to respond to changes that might be made to the story world. However, the actual creation of these domain models has been a largely empirical process with a lack of principled approaches to the definition of alternative actions. Our work has addressed this problem and in the paper we present a novel automated method for the generation of interactive narrative domain models from existing non-interactive versions. Central to this is the use of actions that are contrary to those forming the baseline plot within a principled mechanism for their semi-automatic production. It is important that such newly created domain content should still be human-readable and to this end labels for new actions and predicates are generated automatically using antonyms selected from a range of on-line lexical resources. Our approach is fully implemented in a prototype system and its potential demonstrated via both formal experimental evaluation and user evaluation of the generated action labels.", "title": "" }, { "docid": "b8d63090ea7d3302c71879ea4d11fde5", "text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. 
We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.", "title": "" }, { "docid": "c6a649a1eed332be8fc39bfa238f4214", "text": "The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.", "title": "" }, { "docid": "2a09022e79f1d9b9eed405e0b92245f4", "text": "This paper considers a category of rogue access points (APs) that pretend to be legitimate APs to lure users to connect to them. We propose a practical timing based technique that allows the user to avoid connecting to rogue APs. Our method employs the round trip time between the user and the DNS server to independently determine whether an AP is legitimate or not without assistance from the WLAN operator. We implemented our detection technique on commercially available wireless cards to evaluate their performance.", "title": "" }, { "docid": "72c4ba6c7ffde3ad8c5aab9932aaa3fc", "text": "24 25 26 27 28 29 30 31 32 33 34 35 Article history: Received 22 June 2011 Received in revised form 5 September 2012 Accepted 20 September 2012 Available online xxxx", "title": "" } ]
scidocsrr
a92324172cfd09afa05ef9065dc06edc
The Utility of Hello Messages for Determining Link Connectivity
[ { "docid": "ef5f1aa863cc1df76b5dc057f407c473", "text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.", "title": "" } ]
[ { "docid": "30b1b4df0901ab61ab7e4cfb094589d1", "text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.", "title": "" }, { "docid": "701fb71923bb8a2fc90df725074f576b", "text": "Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash based signature schemes that have been proposed in the IETF as quantum secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. Our goal is to provide a clear understanding of the schemes’ similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to chose.", "title": "" }, { "docid": "56b42c551ad57c82ad15e6fc2e98f528", "text": "Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. 
For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). PGRD has few parameters, improves the reward", "title": "" }, { "docid": "09132f8695e6f8d32d95a37a2bac46ee", "text": "Social media has become one of the main channels for people to access and consume news, due to the rapidness and low cost of news dissemination on it. However, such properties of social media also make it a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which require an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper, we investigate if we could detect fake news in an unsupervised manner. We treat truths of news and users’ credibility as latent random variables, and exploit users’ engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users’ opinions, and the users’ credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users’ credibility without any labelled data. Experiment results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.", "title": "" }, { "docid": "e729d7b399b3a4d524297ae79b28f45d", "text": "The aim of this paper is to solve optimal design problems for industrial applications when the objective function value requires the evaluation of expensive simulation codes and its first derivatives are not available. 
In order to achieve this goal we propose two new algorithms that draw inspiration from two existing approaches: a filled function based algorithm and a Particle Swarm Optimization method. In order to test the efficiency of the two proposed algorithms, we perform a numerical comparison both with the methods we drew inspiration from, and with some standard Global Optimization algorithms that are currently adopted in industrial design optimization. Finally, a realistic ship design problem, namely the reduction of the amplitude of the heave motion of a ship advancing in head seas (a problem connected to both safety and comfort), is solved using the new codes and other global and local derivative-free optimization methods. All the numerical results show the effectiveness of the two new algorithms.", "title": "" }, { "docid": "e95649b06c70682ba4229cff11fefeaf", "text": "In this paper, we present Black SDN, a Software Defined Networking (SDN) architecture for secure Internet of Things (IoT) networking and communications. SDN architectures were developed to provide improved routing and networking performance for broadband networks by separating the control plane from the data plane. This basic SDN concept is amenable to IoT networks; however, the common SDN implementations designed for wired networks are not directly amenable to the distributed, ad hoc, low-power, mesh networks commonly found in IoT systems. SDN promises to improve the overall lifespan and performance of IoT networks. However, the SDN architecture changes the IoT network's communication patterns, allowing new types of attacks, and necessitating a new approach to securing the IoT network. Black SDN is a novel SDN-based secure networking architecture that secures both the meta-data and the payload within each layer of an IoT communication packet while utilizing the SDN centralized controller as a trusted third party for secure routing and optimized system performance management. We demonstrate through simulation the feasibility of Black SDN in networks where nodes are asleep most of their lives, and specifically examine a Black SDN IoT network based upon the IEEE 802.15.4 LR WPAN (Low Rate - Wireless Personal Area Network) protocol.", "title": "" }, { "docid": "01d74a3a50d1121646ddab3ea46b5681", "text": "Sleep quality is important, especially given the considerable number of sleep-related pathologies. The distribution of sleep stages is a highly effective and objective way of quantifying sleep quality. 
As a standard multi-channel recording used in the study of sleep, polysomnography (PSG) is a widely used diagnostic scheme in sleep medicine. However, the standard process of sleep clinical test, including PSG recording and manual scoring, is complex, uncomfortable, and time-consuming. This process is difficult to implement when taking the whole PSG measurements at home for general healthcare purposes. This work presents a novel sleep stage classification system, based on features from the two forehead EEG channels FP1 and FP2. By recording EEG from forehead, where there is no hair, the proposed system can monitor physiological changes during sleep in a more practical way than previous systems. Through a headband or self-adhesive technology, the necessary sensors can be applied easily by users at home. Analysis results demonstrate that classification performance of the proposed system overcomes the individual differences between different participants in terms of automatically classifying sleep stages. Additionally, the proposed sleep stage classification system can identify kernel sleep features extracted from forehead EEG, which are closely related with sleep clinician's expert knowledge. Moreover, forehead EEG features are classified into five sleep stages by using the relevance vector machine. In a leave-one-subject-out cross validation analysis, we found our system to correctly classify five sleep stages at an average accuracy of 76.7 ± 4.0 (SD) % [average kappa 0.68 ± 0.06 (SD)]. Importantly, the proposed sleep stage classification system using forehead EEG features is a viable alternative for measuring EEG signals at home easily and conveniently to evaluate sleep quality reliably, ultimately improving public healthcare.", "title": "" }, { "docid": "6b1dc94c4c70e1c78ea32a760b634387", "text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.", "title": "" }, { "docid": "a341bcf8efb975c078cc452e0eecc183", "text": "We show that, during inference with Convolutional Neural Networks (CNNs), more than 2× to 8× ineffectual work can be exposed if instead of targeting those weights and activations that are zero, we target different combinations of value stream properties. We demonstrate a practical application with Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity, per layer precision variability and dynamic fine-grain precision reduction for activations, and optionally the naturally occurring sparse effectual bit content of activations to improve performance and energy efficiency. 
TCL benefits both sparse and dense CNNs, natively supports both convolutional and fully-connected layers, and exploits properties of all activations to reduce storage, communication, and computation demands. While TCL does not require changes to the CNN to deliver benefits, it does reward any technique that would amplify any of the aforementioned weight and activation value properties. Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a variant of TCL improves performance by 5.05× and is 2.98× more energy efficient while requiring 22% more area.", "title": "" }, { "docid": "5700ba2411f9b4e4ed59c8c5839dc87d", "text": "Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g., MSR-RF: C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.", "title": "" }, { "docid": "081c350100f4db11818c75507f715cda", "text": "Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolution Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building mask as target outputs. Secondly, the generated predictions from FCN are viewed as unary terms for a Fully connected Conditional Random Fields (FCRF), which enables us to create a final binary building mask. 
A series of experiments demonstrates that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analyses show significant improvements over the multi-layer fully connected network from our previous work.", "title": "" }, { "docid": "051c530bf9d49bf1066ddf856488dff1", "text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability are continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, the latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.", "title": "" }, { "docid": "dce75562a7e8b02364d39fd7eb407748", "text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.", "title": "" }, { "docid": "9dde89f24f55602e21823620b49633dd", "text": "Darier's disease is a rare late-onset genetic disorder of keratinisation. Mosaic forms of the disease characterised by localised and unilateral keratotic papules carrying post-zygotic ATP2A2 mutation in affected areas have been documented. Segmental forms of Darier's disease are classified into two clinical subtypes: type 1 manifesting with distinct lesions on a background of normal appearing skin and type 2 with well-defined areas of Darier's disease occurring on a background of less severe non-mosaic phenotype. Herein we describe two cases of type 1 segmental Darier's disease with favourable response to topical retinoids.", "title": "" }, { "docid": "c0c064fdc011973848568f5b087ba20b", "text": "’InfoVis novices’ have been found to struggle with visual data exploration. 
A ’conversational interface’ which would take natural language inputs to visualization generation and modification, while maintaining a history of the requests, visualizations and findings of the user, has the potential to ameliorate many of these challenges. We present Articulate2, initial work toward a conversational interface to visual data exploration.", "title": "" }, { "docid": "0b024671e04090051292b5e76a4690ae", "text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.", "title": "" }, { "docid": "25828231caaf3288ed4fdb27df7f8740", "text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.", "title": "" }, { "docid": "2318fbd8ca703c0ff5254606b8dce442", "text": "Historically, the inspection and maintenance of high-voltage power lines have been performed by linemen using various traditional means. In recent years, the use of robots appeared as a new and complementary method of performing such tasks, as several initiatives have been explored around the world. Among them is the teleoperated robotic platform called LineScout Technology, developed by Hydro-Québec, which has the capacity to clear most obstacles found on the grid. Since its 2006 introduction in the operations, it is considered by many utilities as the pioneer project in the domain. This paper’s purpose is to present the mobile platform design and its main mechatronics subsystems to support a comprehensive description of the main functions and application modules it offers. This includes sensors and a compact modular arm equipped with tools to repair cables and broken conductor strands. This system has now been used on many occasions to assess the condition of power line infrastructure and some results are presented. Finally, future developments and potential technologies roadmap are briefly discussed.", "title": "" } ]
scidocsrr
3522f6f9a5740a1562e42366aa734fe0
Routing betweenness centrality
[ { "docid": "e054c2d3b52441eaf801e7d2dd54dce9", "text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "d041b33794a14d07b68b907d38f29181", "text": "This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called \"Constant Load\" and \"Constant Number of Records\", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.", "title": "" }, { "docid": "801a197f630189ab0a9b79d3cbfe904b", "text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.", "title": "" }, { "docid": "53aa1145047cc06a1c401b04896ff1b1", "text": "Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, there is a strong demand for the development of computer based image analysis systems. In this work, the focus is on the segmentation of the glomeruli constituting a highly relevant structure in renal histopathology, which has not been investigated before in combination with CNNs. We propose two different CNN cascades for segmentation applications with sparse objects. These approaches are applied to the problem of glomerulus segmentation and compared with conventional fully-convolutional networks. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained. Combined with qualitative and further object-level analyses the obtained results are assessed as excellent also compared to recent approaches. 
In conclusion, we can state that especially one of the proposed cascade networks proved to be a highly powerful tool for segmenting the renal glomeruli providing best segmentation accuracies and also keeping the computing time at a low level.", "title": "" }, { "docid": "e31fd6ce6b78a238548e802d21b05590", "text": "Machine learning techniques have long been used for various purposes in software engineering. This paper provides a brief overview of the state of the art and reports on a number of novel applications I was involved with in the area of software testing. Reflecting on this personal experience, I draw lessons learned and argue that more research should be performed in that direction as machine learning has the potential to significantly help in addressing some of the long-standing software testing problems.", "title": "" }, { "docid": "e2535e6887760b20a18c25385c2926ef", "text": "The rapid growth in demands for computing everywhere has made computer a pivotal component of human mankind daily lives. Whether we use the computers to gather information from the Web, to utilize them for entertainment purposes or to use them for running businesses, computers are noticeably becoming more widespread, mobile and smaller in size. What we often overlook and did not notice is the presence of those billions of small pervasive computing devices around us which provide the intelligence being integrated into the real world. These pervasive computing devices can help to solve some crucial problems in the activities of our daily lives. Take for examples, in the military application, a large quantity of the pervasive computing devices could be deployed over a battlefield to detect enemy intrusion instead of manually deploying the landmines for battlefield surveillance and intrusion detection Chong et al. (2003). Additionally, in structural health monitoring, these pervasive computing devices are also used to detect for any damage in buildings, bridges, ships and aircraft Kurata et al. (2006). To achieve this vision of pervasive computing, also known as ubiquitous computing, many computational devices are integrated in everyday objects and activities to enable better humancomputer interaction. These computational devices are generally equipped with sensing, processing and communicating abilities and these devices are known as wireless sensor nodes. When several wireless sensor nodes are meshed together, they form a network called the Wireless Sensor Network (WSN). Sensor nodes arranged in network form will definitely exhibit more and better characteristics than individual sensor nodes. WSN is one of the popular examples of ubiquitous computing as it represents a new generation of real-time embedded system which offers distinctly attractive enabling technologies for pervasive computing environments. Unlike the conventional networked systems like Wireless Local Area Network (WLAN) and Global System for Mobile communications (GSM), WSN promise to couple end users directly to sensor measurements and provide information that is precisely localized in time and/or space, according to the users’ needs or demands. In the Massachusetts Institute of Technology (MIT) technology review magazine of innovation published in February 2003 MIT (2003), the editors have identified Wireless Sensor Networks as the first of the top ten emerging technologies that will change the world. This explains why WSN has swiftly become a hot research topic in both academic and industry. 
2", "title": "" }, { "docid": "958fea977cf31ddabd291da68754367d", "text": "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.", "title": "" }, { "docid": "2e7ee3674bdd58967380a59d638b2b17", "text": "Media applications are characterized by large amounts of available parallelism, little data reuse, and a high computation to memory access ratio. While these characteristics are poorly matched to conventional microprocessor architectures, they are a good fit for modern VLSI technology with its high arithmetic capacity but limited global bandwidth. The stream programming model, in which an application is coded as streams of data records passing through computation kernels, exposes both parallelism and locality in media applications that can be exploited by VLSI architectures. The Imagine architecture supports the stream programming model by providing a bandwidth hierarchy tailored to the demands of media applications. Compared to a conventional scalar processor, Imagine reduces the global register and memory bandwidth required by typical applications by factors of 13 and 21 respectively. This bandwidth efficiency enables a single chip Imagine processor to achieve a peak performance of 16.2GFLOPS (single-precision floating point) and sustained performance of up to 8.5GFLOPS on media processing kernels.", "title": "" }, { "docid": "f54631ac73d42af0ccb2811d483fe8c2", "text": "Understanding large, structured documents like scholarly articles, requests for proposals or business reports is a complex and difficult task. It involves discovering a document’s overall purpose and subject(s), understanding the function and meaning of its sections and subsections, and extracting low level entities and facts about them. In this research, we present a deep learning based document ontology to capture the general purpose semantic structure and domain specific semantic concepts from a large number of academic articles and business documents. The ontology is able to describe different functional parts of a document, which can be used to enhance semantic indexing for a better understanding by human beings and machines. 
We evaluate our models through extensive experiments on datasets of scholarly articles from arXiv and Request for Proposal documents.", "title": "" }, { "docid": "3038ec4ac3d648a4ec052b8d7f854107", "text": "Anomalous data can negatively impact energy forecasting by causing model parameters to be incorrectly estimated. This paper presents two approaches for the detection and imputation of anomalies in time series data. Autoregressive with exogenous inputs (ARX) and artificial neural network (ANN) models are used to extract the characteristics of time series. Anomalies are detected by performing hypothesis testing on the extrema of the residuals, and the anomalous data points are imputed using the ARX and ANN models. Because the anomalies affect the model coefficients, the data cleaning process is performed iteratively. The models are re-learned on “cleaner” data after an anomaly is imputed. The anomalous data are reimputed to each iteration using the updated ARX and ANN models. The ARX and ANN data cleaning models are evaluated on natural gas time series data. This paper demonstrates that the proposed approaches are able to identify and impute anomalous data points. Forecasting models learned on the unclean data and the cleaned data are tested on an uncleaned out-of-sample dataset. The forecasting model learned on the cleaned data outperforms the model learned on the unclean data with 1.67% improvement in the mean absolute percentage errors and a 32.8% improvement in the root mean squared error. Existing challenges include correctly identifying specific types of anomalies such as negative flows.", "title": "" }, { "docid": "43685bd1927f309c8b9a5edf980ab53f", "text": "In this paper we propose a pipeline for accurate 3D reconstruction from multiple images that deals with some of the possible sources of inaccuracy present in the input data. Namely, we address the problem of inaccurate camera calibration by including a method [1] adjusting the camera parameters in a global structure-and-motion problem which is solved with a depth map representation that is suitable to large scenes. Secondly, we take the triangular mesh and calibration improved by the global method in the first phase to refine the surface both geometrically and radiometrically. Here we propose surface energy which combines photo consistency with contour matching and minimize it with a gradient method. Our main contribution lies in effective computation of the gradient that naturally balances weight between regularizing and data terms by employing scale space approach to find the correct local minimum. The results are demonstrated on standard high-resolution datasets and a complex outdoor scene.", "title": "" }, { "docid": "3eeacf0fb315910975e5ff0ffc4fe800", "text": "Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classification, and clustering. While search and classification are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. 
In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.", "title": "" }, { "docid": "772193675598233ba1ab60936b3091d4", "text": "The proposed quasiresonant control scheme can be widely used in a dc-dc flyback converter because it can achieve high efficiency with minimized external components. The proposed dynamic frequency selector improves conversion efficiency especially at light loads to meet the requirement of green power since the converter automatically switches to the discontinuous conduction mode for reducing the switching frequency and the switching power loss. Furthermore, low quiescent current can be guaranteed by the constant current startup circuit to further reduce power loss after the startup procedure. The test chip fabricated in VIS 0.5 μm 500 V UHV process occupies an active silicon area of 3.6 mm 2. The peak efficiency can achieve 92% at load of 80 W and 85% efficiency at light load of 5 W.", "title": "" }, { "docid": "2fa61482be37fd956e6eceb8e517411d", "text": "According to analysis reports on road accidents of recent years, it's renowned that the main cause of road accidents resulting in deaths, severe injuries and monetary losses, is due to a drowsy or a sleepy driver. Drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for long time period. An increase rate of roadside accidents caused due to drowsiness during driving indicates a need of a system that detects such state of a driver and alerts him prior to the occurrence of any accident. During the recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or the measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.", "title": "" }, { "docid": "2049ad444e14db330e2256ce412a19f8", "text": "1 of 11 08/06/07 18:23 Original: http://thebirdman.org/Index/Others/Others-Doc-Environment&Ecology/ +Doc-Environment&Ecology-FoodMatters/StimulatingPlantGrowthWithElectricity&Magnetism&Sound.htm 2007-08-06 Link here: http://blog.lege.net/content/StimulatingPlantGrowthWithElectricityMagnetismSound.html PDF \"printout\": http://blog.lege.net/content/StimulatingPlantGrowthWithElectricityMagnetismSound.pdf", "title": "" }, { "docid": "af08fa19de97eed61afd28893692e7ec", "text": "OpenACC is a new accelerator programming interface that provides a set of OpenMP-like loop directives for the programming of accelerators in an implicit and portable way. It allows the programmer to express the offloading of data and computations to accelerators, such that the porting process for legacy CPU-based applications can be significantly simplified. 
This paper focuses on the performance aspects of OpenACC using two micro benchmarks and one real-world computational fluid dynamics application. Both evaluations show that in general OpenACC performance is approximately 50\\% lower than CUDA. However, for some applications it can reach up to 98\\% with careful manual optimizations. The results also indicate several limitations of the OpenACC specification that hamper full use of the GPU hardware resources, resulting in a significant performance gap when compared to a fully tuned CUDA code. The lack of a programming interface for the shared memory in particular results in as much as three times lower performance.", "title": "" }, { "docid": "c4043bfa8cfd74f991ac13ce1edd5bf5", "text": "Citations between scientific papers and related bibliometric indices, such as the h-index for authors and the impact factor for journals, are being increasingly used – often in controversial ways – as quantitative tools for research evaluation. Yet, a fundamental research question remains still open: to which extent do quantitative metrics capture the significance of scientific works? We analyze the network of citations among the 449, 935 papers published by the American Physical Society (APS) journals between 1893 and 2009, and focus on the comparison of metrics built on the citation count with network-based metrics. We contrast five article-level metrics with respect to the rankings that they assign to a set of fundamental papers, called Milestone Letters, carefully selected by the APS editors for “making long-lived contributions to physics, either by announcing significant discoveries, or by initiating new areas of research”. A new metric, which combines PageRank centrality with the explicit requirement that paper score is not biased by paper age, is the best-performing metric overall in identifying the Milestone Letters. The lack of time bias in the new metric makes it also possible to use it to compare papers of different age on the same scale. We find that networkbased metrics identify the Milestone Letters better than metrics based on the citation count, which suggests that the structure of the citation network contains information that can be used to improve the ranking of scientific publications. The methods and results presented here are relevant for all evolving systems where network centrality metrics are applied, for example the World Wide Web and online social networks.", "title": "" }, { "docid": "54130e2dd3a202935facdad39c04d914", "text": "Cross modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problem. In this paper, we present an approach to bridge this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from visible to thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset (UND-X1). The presented approach improves the state-of-the-art by more than 10% in terms of Rank-1 identification and bridge the drop in performance due to the modality gap by more than 40%. The goal of training the deep network is to learn the projections that can be used to bring the two modalities together. 
Typically, this would mean regressing the representation from one modality towards the other. We construct a deep network comprising N + 1 layers with m^(k) units in the k-th layer, where k = 1, 2, ..., N. For an input x ∈ R^d, each layer will output a non-linear projection by using the learned projection matrix W and the non-linear activation function g(·). The output of the k-th hidden layer is h^(k) = g(W^(k) h^(k−1) + b^(k)), where W^(k) ∈ R^(m^(k) × m^(k−1)) is the projection matrix to be learned in that layer, b^(k) ∈ R^(m^(k)) is a bias vector and g : R^(m^(k)) → R^(m^(k)) is the non-linear activation function. Similarly, the output of the topmost hidden layer can be computed as:", "title": "" }, { "docid": "76e62af2971de3d11d684f1dd7100475", "text": "Recent advances in memory research suggest methods that can be applied to enhance educational practices. We outline four principles of memory improvement that have emerged from research: 1) process material actively, 2) practice retrieval, 3) use distributed practice, and 4) use metamemory. Our discussion of each principle describes current experimental research underlying the principle and explains how people can take advantage of the principle to improve their learning. The techniques that we suggest are designed to increase efficiency—that is, to allow a person to learn more, in the same unit of study time, than someone using less efficient memory strategies. A common thread uniting all four principles is that people learn best when they are active participants in their own learning.", "title": "" }, { "docid": "8eab9eab5b3d93e6688337128d647b06", "text": "Primary triple-negative breast cancers (TNBCs), a tumour type defined by lack of oestrogen receptor, progesterone receptor and ERBB2 gene amplification, represent approximately 16% of all breast cancers. Here we show in 104 TNBC cases that at the time of diagnosis these cancers exhibit a wide and continuous spectrum of genomic evolution, with some having only a handful of coding somatic aberrations in a few pathways, whereas others contain hundreds of coding somatic mutations. High-throughput RNA sequencing (RNA-seq) revealed that only approximately 36% of mutations are expressed. Using deep re-sequencing measurements of allelic abundance for 2,414 somatic mutations, we determine for the first time—to our knowledge—in an epithelial tumour subtype, the relative abundance of clonal frequencies among cases representative of the population. We show that TNBCs vary widely in their clonal frequencies at the time of diagnosis, with the basal subtype of TNBC showing more variation than non-basal TNBC. Although p53 (also known as TP53), PIK3CA and PTEN somatic mutations seem to be clonally dominant compared to other genes, in some tumours their clonal frequencies are incompatible with founder status. Mutations in cytoskeletal, cell shape and motility proteins occurred at lower clonal frequencies, suggesting that they occurred later during tumour progression. Taken together, our results show that understanding the biology and therapeutic responses of patients with TNBC will require the determination of individual tumour clonal genotypes.", "title": "" }, { "docid": "b8fcade88646ef6926e756f92064477b", "text": "We have developed a stencil routing algorithm for implementing a GPU accelerated A-Buffer, by using a multisample texture to store a vector of fragments per pixel. First, all the fragments are captured per pixel in rasterization order. Second, a fullscreen shader pass sorts the fragments using a bitonic sort. 
At this point, the sorted fragments can be blended arbitrarily to implement various types of algorithms such as order independent transparency or layered depth image generation. Since we handle only 8 fragments per pass, we developed a method for detecting overflow, so we can do additional passes to capture more fragments.", "title": "" } ]
scidocsrr
9ab20062b846a737c67c08bed9fe8e3c
Semantic Word Clusters Using Signed Spectral Clustering
[ { "docid": "37f0bea4c677cfb7b931ab174d4d20c7", "text": "A persistent problem of psychology has been how to deal conceptually with patterns of interdependent properties. This problem has been central, of course, in the theoretical treatment by Gestalt psychologists of phenomenal or neural configurations or fields (12, 13, 15). It has also been of concern to social psychologists and sociologists who attempt to employ concepts referring to social systems (18). Heider (19), reflecting the general field-theoretical approach, has considered certain aspects of cognitive fields which contain perceived people and impersonal objects or events. His analysis focuses upon what he calls the P-O-X unit of a cognitive field, consisting of P (one person), 0 (another person), and X (an impersonal entity). Each relation among the parts of the unit is conceived as interdependent with each other relation. Thus, for example, if P has a relation of affection for 0 and if 0 is seen as responsible for X, then there will be a tendency for P to like or approve of X. If the nature of X is such that it would \"normally\" be evaluated as bad, the whole P-O-X unit is placed in a state of imbalance, and pressures", "title": "" }, { "docid": "d46af3854769569a631fab2c3c7fa8f3", "text": "Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus – a word sense along with its synonyms and antonyms – is treated as a “document,” and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. We evaluate this procedure with the Graduate Record Examination questions of (Mohammed et al., 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure.", "title": "" } ]
[ { "docid": "0f11d0d1047a79ee63896f382ae03078", "text": "Much of the visual cortex is organized into visual field maps: nearby neurons have receptive fields at nearby locations in the image. Mammalian species generally have multiple visual field maps with each species having similar, but not identical, maps. The introduction of functional magnetic resonance imaging made it possible to identify visual field maps in human cortex, including several near (1) medial occipital (V1,V2,V3), (2) lateral occipital (LO-1,LO-2, hMT+), (3) ventral occipital (hV4, VO-1, VO-2), (4) dorsal occipital (V3A, V3B), and (5) posterior parietal cortex (IPS-0 to IPS-4). Evidence is accumulating for additional maps, including some in the frontal lobe. Cortical maps are arranged into clusters in which several maps have parallel eccentricity representations, while the angular representations within a cluster alternate in visual field sign. Visual field maps have been linked to functional and perceptual properties of the visual system at various spatial scales, ranging from the level of individual maps to map clusters to dorsal-ventral streams. We survey recent measurements of human visual field maps, describe hypotheses about the function and relationships between maps, and consider methods to improve map measurements and characterize the response properties of neurons comprising these maps.", "title": "" }, { "docid": "becda89fbb882f4da57a82441643bb99", "text": "During the nonbreeding season, adult Anna and black-chinned hummingbirds (Calypte anna and Archilochus alexandri) have lower defense costs and more exclusive territories than juveniles. Adult C. anna are victorious over juveniles in aggressive encounters, and tend to monopolize the most temporally predictable resources. Juveniles are more successful than adults at stealing food from territories (the primary alternative to territoriality), presumably because juveniles are less brightly colored. Juveniles have lighter wing disc loading than adults, and consequently should have lower rates of energy expenditure during flight. Reduced flight expenditures may be more important for juveniles because their foraging strategy requires large amounts of flight time. These results support the contention of the asymmetry hypothesis that dominance can result from a contested resource being more valuable to one contestant than to the other. Among juveniles, defence costs are also negatively correlated with age and coloration; amount of conspicucus coloration is negatively correlated with the number of bill striations, an inverse measure of age.", "title": "" }, { "docid": "eaf1c419853052202cb90246e48a3697", "text": "The objective of this document is to promote the use of dynamic daylight performance measures for sustainable building design. The paper initially explores the shortcomings of conventional, static daylight performance metrics which concentrate on individual sky conditions, such as the common daylight factor. It then provides a review of previously suggested dynamic daylight performance metrics, discussing the capability of these metrics to lead to superior daylighting designs and their accessibility to nonsimulation experts. Several example offices are examined to demonstrate the benefit of basing design decisions on dynamic performance metrics as opposed to the daylight factor. 
Keywords—–daylighting, dynamic, metrics, sustainable buildings", "title": "" }, { "docid": "7046221ad9045cb464f65666c7d1a44e", "text": "OBJECTIVES\nWe analyzed differences in pediatric elevated blood lead level incidence before and after Flint, Michigan, introduced a more corrosive water source into an aging water system without adequate corrosion control.\n\n\nMETHODS\nWe reviewed blood lead levels for children younger than 5 years before (2013) and after (2015) water source change in Greater Flint, Michigan. We assessed the percentage of elevated blood lead levels in both time periods, and identified geographical locations through spatial analysis.\n\n\nRESULTS\nIncidence of elevated blood lead levels increased from 2.4% to 4.9% (P < .05) after water source change, and neighborhoods with the highest water lead levels experienced a 6.6% increase. No significant change was seen outside the city. Geospatial analysis identified disadvantaged neighborhoods as having the greatest elevated blood lead level increases and informed response prioritization during the now-declared public health emergency.\n\n\nCONCLUSIONS\nThe percentage of children with elevated blood lead levels increased after water source change, particularly in socioeconomically disadvantaged neighborhoods. Water is a growing source of childhood lead exposure because of aging infrastructure.", "title": "" }, { "docid": "10b94bdea46ff663dd01291c5dac9e9f", "text": "The notion of an instance is ubiquitous in knowledge representations for domain modeling. Most languages used for domain modeling offer syntactic or semantic restrictions on specific language constructs that distinguish individuals and classes in the application domain. The use, however, of instances and classes to represent domain entities has been driven by concerns that range from the strictly practical (e.g. the exploitation of inheritance) to the vaguely philosophical (e.g. intuitive notions of intension and extension). We demonstrate the importance of establishing a clear ontological distinction between instances and classes, and then show modeling scenarios where a single object may best be viewed as a class and an instance. To avoid ambiguous interpretations of such objects, it is necessary to introduce separate universes of discourse in which the same object exists in different forms. We show that a limited facility to support this notion exists in modeling languages like Smalltalk and CLOS, and argue that a more general facility should be made explicit in modeling languages.", "title": "" }, { "docid": "b72f4554f2d7ac6c5a8000d36a099e67", "text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. 
To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.", "title": "" }, { "docid": "5d9b29c10d878d288a960ae793f2366e", "text": "We propose a new bandgap reference topology for supply voltages as low as one diode drop (~0.8V). In conventional low-voltage references, supply voltage is limited by the generated reference voltage. Also, the proposed topology generates the reference voltage at the output of the feedback amplifier. This eliminates the need for an additional output buffer, otherwise required in conventional topologies. With the bandgap core biased from the reference voltage, the new topology is also suitable for a low-voltage shunt reference. We fabricated a 1V, 0.35mV/degC reference occupying 0.013mm2 in a 90nm CMOS process", "title": "" }, { "docid": "de630d018f3ff24fad06976e8dc390fa", "text": "A critical first step in navigation of unmanned aerial vehicles is the detection of the horizon line. This information can be used for adjusting flight parameters, attitude estimation as well as obstacle detection and avoidance. In this paper, a fast and robust technique for precise detection of the horizon is presented. Our approach is to apply convolutional neural networks to the task, training them to detect the sky and ground regions as well as the horizon line in flight videos. Thorough experiments using large datasets illustrate the significance and accuracy of this technique for various types of terrain as well as seasonal conditions.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "ac1302f482309273d9e61fdf0f093e01", "text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone undersegmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets.", "title": "" }, { "docid": "af5cd4c5325db5f7d9131b7a7ba12ba5", "text": "Understanding unstructured text in e-commerce catalogs is important for product search and recommendations. In this paper, we tackle the product discovery problem for fashion e-commerce catalogs where each input listing text consists of descriptions of one or more products; each with its own set of attributes. 
For instance, [this RED printed short top paired with blue jeans makes you go green] contains two products: item top with attributes {pattern=printed, length=short, brand=RED} and item jeans with attributes {color=blue}. The task of product discovery is rendered quite challenging due to the complexity of fashion dictionary (e.g. RED is a brand or green is a metaphor) added to the difficulty of associating attributes to appropriate items (e.g. associating RED brand with item top). Beyond classical attribute extraction task, product discovery entails parsing multi-sentence listings to tag new items and attributes unknown to the underlying schema; at the same time, associating attributes to relevant items to form meaningful products. Towards solving this problem, we propose a novel composition of sequence labeling and multi-task learning as an end-to-end trainable deep neural architecture. We systematically evaluate our approach on one of the largest tagged datasets in e-commerce consisting of 25K listings labeled at word-level. Given 23 labels, we discover label-values with F1 score of 92.2%. To our knowledge, this is the first work to tackle product discovery and show effectiveness of neural architectures on a complex dataset that goes beyond popular datasets for POS tagging and NER.", "title": "" }, { "docid": "e1b69d4f2342a90b52215927f727421b", "text": "We present an inertial sensor based monitoring system for measuring upper limb movements in real time. The purpose of this study is to develop a motion tracking device that can be integrated within a home-based rehabilitation system for stroke patients. Human upper limbs are represented by a kinematic chain in which there are four joint variables to be considered: three for the shoulder joint and one for the elbow joint. Kinematic models are built to estimate upper limb motion in 3-D, based on the inertial measurements of the wrist motion. An efficient simulated annealing optimisation method is proposed to reduce errors in estimates. Experimental results demonstrate the proposed system has less than 5% errors in most motion manners, compared to a standard motion tracker.", "title": "" }, { "docid": "303098fa8e5ccd7cf50a955da7e47f2e", "text": "This paper describes the SALSA corpus, a large German corpus manually annotated with role-semantic information, based on the syntactically annotated TIGER newspaper corpus (Brants et al., 2002). The first release, comprising about 20,000 annotated predicate instances (about half the TIGER corpus), is scheduled for mid-2006. In this paper we discuss the frame-semantic annotation framework and its cross-lingual applicability, problems arising from exhaustive annotation, strategies for quality control, and possible applications.", "title": "" }, { "docid": "647ede4f066516a0343acef725e51d01", "text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. 
The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.", "title": "" }, { "docid": "ddc6a5e9f684fd13aec56dc48969abc2", "text": "During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.", "title": "" }, { "docid": "0830abcb23d763c1298bf4605f81eb72", "text": "A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGBD images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.", "title": "" }, { "docid": "27487316cbda79a378b706d19d53178f", "text": "Pallister-Killian syndrome (PKS) is a congenital disorder attributed to supernumerary isochromosome 12p mosaicism. Craniofacial dysmorphism, learning impairment and seizures are considered cardinal features. However, little is known regarding the seizure and epilepsy patterns in PKS. 
To better define the prevalence and spectrum of seizures in PKS, we studied 51 patients (39 male, 12 female; median age 4 years and 9 months; age range 7 months to 31 years) with confirmed 12p tetrasomy. Using a parent-based structured questionnaire, we collected data regarding seizure onset, frequency, timing, semiology, and medication therapy. Patients were recruited through our practice, at PKS Kids family events, and via the PKS Kids website. Epilepsy occurred in 27 (53%) with 23 (85%) of those with seizures having seizure onset prior to 3.5 years of age. Mean age at seizure onset was 2 years and 4 months. The most common seizure types were myoclonic (15/27, 56%), generalized convulsions (13/27, 48%), and clustered tonic spasms (similar to infantile spasms; 8/27, 30%). Thirteen of 27 patients with seizures (48%) had more than one seizure type with 26 out of 27 (96%) ever having taken antiepileptic medications. Nineteen of 27 (70%) continued to have seizures and 17/27 (63%) remained on antiepileptic medication. The most commonly used medications were: levetiracetam (10/27, 37%), valproic acid (10/27, 37%), and topiramate (9/27, 33%) with levetiracetam felt to be \"most helpful\" by parents (6/27, 22%). Further exploration of seizure timing, in-depth analysis of EEG recordings, and collection of MRI data to rule out confounding factors is warranted.", "title": "" }, { "docid": "ffc9a5b907f67e1cedd8f9ab0b45b869", "text": "In this brief, we study the design of a feedback and feedforward controller to compensate for creep, hysteresis, and vibration effects in an experimental piezoactuator system. First, we linearize the nonlinear dynamics of the piezoactuator by accounting for the hysteresis (as well as creep) using high-gain feedback control. Next, we model the linear vibrational dynamics and then invert the model to find a feedforward input to account vibration - this process is significantly easier than considering the complete nonlinear dynamics (which combines hysteresis and vibration effects). Afterwards, the feedforward input is augmented to the feedback-linearized system to achieve high-precision highspeed positioning. We apply the method to a piezoscanner used in an experimental atomic force microscope to demonstrate the method's effectiveness and we show significant reduction of both the maximum and root-mean-square tracking error. For example, high-gain feedback control compensates for hysteresis and creep effects, and in our case, it reduces the maximum error (compared to the uncompensated case) by over 90%. Then, at relatively high scan rates, the performance of the feedback controlled system can be improved by over 75% (i.e., reduction of maximum error) when the inversion-based feedforward input is integrated with the high-gain feedback controlled system.", "title": "" }, { "docid": "4023c95464a842277e4dc62b117de8d0", "text": "Many complex spike cells in the hippocampus of the freely moving rat have as their primary correlate the animal's location in an environment (place cells). In contrast, the hippocampal electroencephalograph theta pattern of rhythmical waves (7-12 Hz) is better correlated with a class of movements that change the rat's location in an environment. During movement through the place field, the complex spike cells often fire in a bursting pattern with an interburst frequency in the same range as the concurrent electroencephalograph theta. The present study examined the phase of the theta wave at which the place cells fired. 
It was found that firing consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle. This precession of the phase ranged from 100 degrees to 355 degrees in different cells. The effect appeared to be due to the fact that individual cells had a higher interburst rate than the theta frequency. The phase was highly correlated with spatial location and less well correlated with temporal aspects of behavior, such as the time after place field entry. These results have implications for several aspects of hippocampal function. First, by using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding. Second, the characteristics of the phase shift constrain the models that define the construction of place fields. Third, the results restrict the temporal and spatial circumstances under which synapses in the hippocampus could be modified.", "title": "" }, { "docid": "6bc31257bfbcc9531a3acf1ec738c790", "text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.", "title": "" } ]
scidocsrr
881dccbf7e1eb78c1904275e03904671
Multimodal Prediction of Affective Dimensions and Depression in Human-Computer Interactions
[ { "docid": "80bf80719a1751b16be2420635d34455", "text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.", "title": "" } ]
[ { "docid": "d12d5344268cd0f1ff05608009b88c2f", "text": "Guidelines, directives, and policy statements are usually presented in “linear” text form - word after word, page after page. However necessary, this practice impedes full understanding, obscures feedback dynamics, hides mutual dependencies and cascading effects and the like, — even when augmented with tables and diagrams. The net result is often a checklist response as an end in itself. All this creates barriers to intended realization of guidelines and undermines potential effectiveness. We present a solution strategy using text as “data”, transforming text into a structured model, and generate a network views of the text(s), that we then can use for vulnerability mapping, risk assessments and control point analysis. We apply this approach using two NIST reports on cybersecurity of smart grid, more than 600 pages of text. Here we provide a synopsis of approach, methods, and tools. (Elsewhere we consider (a) system-wide level, (b) aviation e-landscape, (c) electric vehicles, and (d) SCADA for smart grid).", "title": "" }, { "docid": "ca7e7fa988bf2ed1635e957ea6cd810d", "text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.", "title": "" }, { "docid": "64acb2d16c23f2f26140c0bce1785c9b", "text": "Physical forces of gravity, hemodynamic stresses, and movement play a critical role in tissue development. Yet, little is known about how cells convert these mechanical signals into a chemical response. This review attempts to place the potential molecular mediators of mechanotransduction (e.g. stretch-sensitive ion channels, signaling molecules, cytoskeleton, integrins) within the context of the structural complexity of living cells. The model presented relies on recent experimental findings, which suggests that cells use tensegrity architecture for their organization. Tensegrity predicts that cells are hard-wired to respond immediately to mechanical stresses transmitted over cell surface receptors that physically couple the cytoskeleton to extracellular matrix (e.g. integrins) or to other cells (cadherins, selectins, CAMs). Many signal transducing molecules that are activated by cell binding to growth factors and extracellular matrix associate with cytoskeletal scaffolds within focal adhesion complexes. 
Mechanical signals, therefore, may be integrated with other environmental signals and transduced into a biochemical response through force-dependent changes in scaffold geometry or molecular mechanics. Tensegrity also provides a mechanism to focus mechanical energy on molecular transducers and to orchestrate and tune the cellular response.", "title": "" }, { "docid": "9175794d83b5f110fb9f08dc25a264b8", "text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.", "title": "" }, { "docid": "18e75ca50be98af1d5a6a2fd22b610d3", "text": "We propose a new type of saliency—context-aware saliency—which aims at detecting the image regions that represent the scene. This definition differs from previous definitions whose goal is to either identify fixation points or detect the dominant object. In accordance with our saliency definition, we present a detection algorithm which is based on four principles observed in the psychological literature. The benefits of the proposed approach are evaluated in two applications where the context of the dominant objects is just as essential as the objects themselves. In image retargeting, we demonstrate that using our saliency prevents distortions in the important regions. In summarization, we show that our saliency helps to produce compact, appealing, and informative summaries.", "title": "" }, { "docid": "8bdc4b79e71f8bb9f001c99ec3b5e039", "text": "The \"tragedy of the commons\" metaphor helps explain why people overuse shared resources. However, the recent proliferation of intellectual property rights in biomedical research suggests a different tragedy, an \"anticommons\" in which people underuse scarce resources because too many owners can block each other. Privatization of biomedical research must be more carefully deployed to sustain both upstream research and downstream product development. Otherwise, more intellectual property rights may lead paradoxically to fewer useful products for improving human health.", "title": "" }, { "docid": "14e0664fcbc2e29778a1ccf8744f4ca5", "text": "Mobile offloading migrates heavy computation from mobile devices to cloud servers using one or more communication network channels. Communication interfaces vary in speed, energy consumption and degree of availability. We assume two interfaces: WiFi, which is fast with low energy demand but not always present and cellular, which is slightly slower, has higher energy consumption but is present at all times. We study two different communication strategies: one that selects the best available interface for each transmitted packet and the other multiplexes data across available communication channels. Since the latter may experience interrupts in the WiFi connection, packets can be delayed. 
We call it interrupted strategy as opposed to the uninterrupted strategy that transmits packets only over currently available networks. Two key concerns of mobile offloading are the energy use of the mobile terminal and the response time experienced by the user of the mobile device. In this context, we investigate three different metrics that express the energy-performance tradeoff, the known Energy-Response time Weighted Sum (EWRS), the Energy-Response time Product (ERP) and the Energy-Response time Weighted Product (ERWP) metric. We apply the metrics to the two different offloading strategies and find that the conclusions drawn from the analysis depend on the considered metric. In particular, while an additive metric is not normalised, which implies that the term using smaller scale is always favoured, the ERWP metric, which is new in this paper, allows to assign importance to both aspects without being misled by different scales. It combines the advantages of an additive metric and a product. The interrupted strategy can save energy especially if the focus in the tradeoff metric lies on the energy aspect. In general one can say that the uninterrupted strategy is faster, while the interrupted strategy uses less energy. A fast connection improves the response time much more than the fast repair of a failed connection. In conclusion, a short down-time of the transmission channel can mostly be tolerated.", "title": "" }, { "docid": "8b863cd49dfe5edc2d27a0e9e9db0429", "text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.", "title": "" }, { "docid": "5679a329a132125d697369ca4d39b93e", "text": "This paper proposes a method to explore the design space of FinFETs with double fin heights. Our study shows that if one fin height is sufficiently larger than the other and the greatest common divisor of their equivalent transistor widths is small, the fin height pair will incur less width quantization effect and lead to better area efficiency. We design a standard cell library based on this technology using a tailored FreePDK15. With respect to a standard cell library designed with FreePDK15, about 86% of the cells designed with FinFETs of double fin heights have a smaller delay and 54% of the cells take a smaller area. We also demonstrate the advantages of FinFETs with double fin heights through chip designs using our cell library.", "title": "" }, { "docid": "80a61f27dab6a8f71a5c27437254778b", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. 
We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "4e23bf1c89373abaf5dc096f76c893f3", "text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.", "title": "" }, { "docid": "7b627fa766382ead588c14e22541b766", "text": "This book highlights the importance of anchoring education in an evidence base derived from neuroscience. For far too long has the brain been neglected in discussions on education and often information about neuroscientific research is not easy to access. Our aim was to provide a source book that conveys the excitement of neuroscience research that is relevant to learning and education. This research has largely, but not exclusively, been carried out using neuroimaging methods in the past decade or so, ranging from investigations of brain structure and function in dyslexia and dyscalculia to investigations of the changes in the hippocampus of London taxi drivers. To speak to teachers who might not have scientific backgrounds, we have tried to use nontechnical language as far as possible and have provided an appendix illustrating the main methods and techniques currently used and a glossary, defining terms from Acetylcholine, Action Potentials and ADHD to White Matter, Word Form Area and Working Memory. We start with the idea that the brain has evolved to educate and to be educated, often instinctively and effortlessly. We believe that understanding the brain mechanisms that underlie learning and teaching could transform educational strategies and enable us to design educational programmes that optimize learning for people of all ages and of all needs. For this reason the first two-thirds of the book follows a developmental framework. The rest of the book focuses on learning in the brain at all ages. 
There is a vast amount of brain research of direct relevance to education practice and policy. And yet neuroscience has had little impact on education. This might in part be due to a lack of interaction between educators and brain scientists. This in turn might be because of difficulties of translating the neuroscience knowledge of how learning takes place in the brain into information of value to teachers. It is here where we try to fill a gap. Interdisciplinary dialogue needs a mediator to prevent one or other discipline dominating, and, notwithstanding John Bruer’s remarks that it is cognitive psychology that ‘bridges the gap’ between neuroscience and education (Bruer, 1997), we feel that now is the time to explore the implications of brain science itself for education.", "title": "" }, { "docid": "b866e7e4d8522d820bd4fccc1a8fb0c0", "text": "The domain of smart home environments is viewed as a key element of the future Internet, and many homes are becoming “smarter” by using Internet of Things (IoT) technology to improve home security, energy efficiency and comfort. At the same time, enforcing privacy in IoT environments has been identified as one of the main barriers for realizing the vision of the smart home. Based on the results of a risk analysis of a smart home automation system developed in collaboration with leading industrial actors, we outline the first steps towards a general model of privacy and security for smart homes. As such, it is envisioned as support for enforcing system security and enhancing user privacy, and it can thus help to further realize the potential in smart home environments.", "title": "" }, { "docid": "147e0eecf649f96209056112269c2a73", "text": "Due to the fast evolution of the information on the Internet, update summarization has received much attention in recent years. It is to summarize an evolutionary document collection at current time supposing the users have read some related previous documents. In this paper, we propose a graph-ranking-based method. It performs constrained reinforcements on a sentence graph, which unifies previous and current documents, to determine the salience of the sentences. The constraints ensure that the most salient sentences in current documents are updates to previous documents. Since this method is NP-hard, we then propose its approximate method, which is polynomial time solvable. Experiments on the TAC 2008 and 2009 benchmark data sets show the effectiveness and efficiency of our method.", "title": "" }, { "docid": "dd9b6b67f19622bfffbad427b93a1829", "text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when high-resolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the number of surveillance cameras in the city increases, the videos that are captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination condition, and diverse angles of view. Faces in these images are generally small in size. Several studies that addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. 
Later, a systematic analysis of the works on this topic is presented by category. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained low-resolution face recognition and compare them with the results that use synthetic low-resolution data. Finally, we summarize the general limitations and speculate on priorities for future efforts.", "title": "" }, { "docid": "9a12ec03e4521a33a7e76c0c538b6b43", "text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.", "title": "" }, { "docid": "e3eb4019846f9add4e464462e1065119", "text": "The internet – specifically its graphic interface, the world wide web – has had a major impact on all levels of (information) societies throughout the world. Specifically for journalism as it is practiced online, we can now identify the effect that this has had on the profession and its culture(s). This article defines four particular types of online journalism and discusses them in terms of key characteristics of online publishing – hypertextuality, interactivity, multimediality – and considers the current and potential impacts that these online journalisms can have on the ways in which one can define journalism as it functions in elective democracies worldwide. It is argued that the application of particular online characteristics not only has consequences for the type of journalism produced on the web, but that these characteristics and online journalisms indeed connect to broader and more profound changes and redefinitions of professional journalism and its (news) culture as a whole.", "title": "" }, { "docid": "f1e03d9f810409cd470ae65683553a0d", "text": "Emergency departments (ED) face significant challenges in delivering high quality and timely patient care on an ever-present background of increasing patient numbers and limited hospital resources. A mismatch between patient demand and the ED's capacity to deliver care often leads to poor patient flow and departmental crowding. These are associated with reduction in the quality of the care delivered and poor patient outcomes. A literature review was performed to identify evidence-based strategies to reduce the amount of time patients spend in the ED in order to improve patient flow and reduce crowding in the ED. The use of doctor triage, rapid assessment, streaming and the co-location of a primary care clinician in the ED have all been shown to improve patient flow. In addition, when used effectively point of care testing has been shown to reduce patient time in the ED. 
Patient flow and departmental crowding can be improved by implementing new patterns of working and introducing new technologies such as point of care testing in the ED.", "title": "" }, { "docid": "aaf8f3e2eaf6487b9284ed54803bd889", "text": "Intra- and subcorneal hematoma, a skin alteration seen palmar and plantar after trauma or physical exercise, can be challenging to distinguish from in situ or invasive acral lentiginous melanoma. Thus, careful examination including dermoscopic and histologic assessment may be necessary to make the correct diagnosis. We here present a case of a 67-year-old healthy female patient who presented with a pigmented plantar skin alteration. Differential diagnoses included benign skin lesions, for example, hematoma or melanocytic nevus, and also acral lentiginous melanoma or melanoma in situ. Since clinical and dermoscopic examinations did not rule out a malignant skin lesion, surgical excision was performed and confirmed an intracorneal hematoma. In summary, without adequate physical trigger, it may be clinically and dermoscopically challenging to make the correct diagnosis in pigmented palmar and plantar skin alterations. Thus, biopsy or surgical excision of the skin alteration may be necessary to rule out melanoma.", "title": "" } ]
scidocsrr
8c108461114f056041167732a0fced25
Evolving Deep Recurrent Neural Networks Using Ant Colony Optimization
[ { "docid": "83cace7cc84332bc30eeb6bc957ea899", "text": "Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing decision makers in many areas. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, using ANNs to model linear problems have yielded mixed results, and hence; it is not wise to apply ANNs blindly to any type of data. Autoregressive integrated moving average (ARIMA) models are one of the most popular linear models in time series forecasting, which have been widely applied in order to construct more accurate hybrid models during the past decade. Although, hybrid techniques, which decompose a time series into its linear and nonlinear components, have recently been shown to be successful for single models, these models have some disadvantages. In this paper, a novel hybridization of artificial neural networks and ARIMA model is proposed in order to overcome mentioned limitation of ANNs and yield more general and more accurate forecasting model than traditional hybrid ARIMA-ANNs models. In our proposed model, the unique advantages of ARIMA models in linear modeling are used in order to identify and magnify the existing linear structure in data, and then a neural network is used in order to determine a model to capture the underlying data generating process and predict, using preprocessed data. Empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy ybrid achieved by traditional h", "title": "" } ]
[ { "docid": "d3049fee1ed622515f5332bcfa3bdd7b", "text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.", "title": "" }, { "docid": "0c9fa24357cb09cea566b7b2493390c4", "text": "Conflict is a common phenomenon in interactions both between individuals, and between groups of individuals. As CSCW is concerned with the design of systems to support such interactions, an examination of conflict, and the various ways of dealing with it, would clearly be of benefit. This chapter surveys the literature that is most relevant to the CSCW community, covering many disciplines that have addressed particular aspects of conflict. The chapter is organised around a series of assertions, representing both commonly held beliefs about conflict, and hypotheses and theories drawn from the literature. In many cases no definitive statement can be made about the truth or falsity of an assertion: the empirical evidence both supporting and opposing is examined, and pointers are provided to further discussion in the literature. One advantage of organising the survey in this way is that it need not be read in order. Each assertion forms a self-contained essay, with cross-references to related assertions. Hence, treat the chapter as a resource to be dipped into rather than read in sequence. This introduction sets the scene by defining conflict, and providing a rationale for studying conflict in relation to CSCW. 
The assertions are presented in section 2, and form the main body of the chapter. Finally, section 3 relates the assertions to current work on CSCW systems.", "title": "" }, { "docid": "fde0f116dfc929bf756d80e2ce69b1c7", "text": "The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in a PSO algorithm developed by the authors at UCLA specifically for engineering optimizations (UCLA-PSO). Also discussed is recent progress in the development of the PSO and the special considerations needed for engineering implementation including suggestions for the selection of parameter values. Additionally, a study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques. These concepts are then integrated into a representative example of optimization of a profiled corrugated horn antenna.", "title": "" }, { "docid": "13daec7c27db2b174502d358b3c19f43", "text": "The QRS complex of the ECG signal is the reference point for the most ECG applications. In this paper, we aim to describe the design and the implementation of an embedded system for detection of the QRS complexes in real-time. The design is based on the notorious algorithm of Pan & Tompkins, with a novel simple idea for the decision stage of this algorithm. The implementation uses a circuit of the current trend, i.e. the FPGA, and it is developed with the Xilinx design tool, System Generator for DSP. In the authors’ view, the specific feature, i.e. authenticity and simplicity of the proposed model, is that the threshold value is updated from the previous smallest peak; in addition, the model is entirely designed simply with MCode blocks. The hardware design is tested with five 30 minutes data records obtained from the MIT-BIH Arrhythmia database. Its accuracy exceeds 96%, knowing that four records among the five represent the worst cases in the database. In terms of the resources utilization, our implementation occupies around 30% of the used FPGA device, namely the Xilinx Spartan 3E XC3S500.", "title": "" }, { "docid": "fa0c62b91643a45a5eff7c1b1fa918f1", "text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. 
Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.", "title": "" }, { "docid": "512d29a398f51041466884f4decec84a", "text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.2", "title": "" }, { "docid": "876e56a4c859e5fc7fa0038845317da4", "text": "The rise of Web 2.0 with its increasingly popular social sites like Twitter, Facebook, blogs and review sites has motivated people to express their opinions publicly and more frequently than ever before. This has fueled the emerging field known as sentiment analysis whose goal is to translate the vagaries of human emotion into hard data. LCI is a social channel analysis platform that taps into what is being said to understand the sentiment with the particular ability of doing so in near real-time. LCI integrates novel algorithms for sentiment analysis and a configurable dashboard with different kinds of charts including dynamic ones that change as new data is ingested. LCI has been researched and prototyped at HP Labs in close interaction with the Business Intelligence Solutions (BIS) Division and a few customers. This paper presents an overview of the architecture and some of its key components and algorithms, focusing in particular on how LCI deals with Twitter and illustrating its capabilities with selected use cases.", "title": "" }, { "docid": "cd5a267c1dac92e68ba677c4a2e06422", "text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. 
The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.", "title": "" }, { "docid": "51487a368a572dc415a5a4c0d4621d4b", "text": "Wireless sensor networks (WSNs) are an emerging technology for monitoring physical world. Different from the traditional wireless networks and ad hoc networks, the energy constraint of WSNs makes energy saving become the most important goal of various routing algorithms. For this purpose, a cluster based routing algorithm LEACH (low energy adaptive clustering hierarchy) has been proposed to organize a sensor network into a set of clusters so that the energy consumption can be evenly distributed among all the sensor nodes. Periodical cluster head voting in LEACH, however, consumes non-negligible energy and other resources. While another chain-based algorithm PEGASIS (powerefficient gathering in sensor information systems) can reduce such energy consumption, it causes a longer delay for data transmission. In this paper, we propose a routing algorithm called CCM (Chain-Cluster based Mixed routing), which makes full use of the advantages of LEACH and PEGASIS, and provide improved performance. It divides a WSN into a few chains and runs in two stages. In the first stage, sensor nodes in each chain transmit data to their own chain head node in parallel, using an improved chain routing protocol. In the second stage, all chain head nodes group as a cluster in a selforganized manner, where they transmit fused data to a voted cluster head using the cluster based routing. Experimental F. Tang (B) · M. Guo · Y. Ma Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China e-mail: [email protected] I. You School of Information Science, Korean Bible University, Seoul, South Korea F. Tang · S. Guo School of Computer Science and Engineering, The University of Aizu, Fukushima 965-8580, Japan results demonstrate that our CCM algorithm outperforms both LEACH and PEGASIS in terms of the product of consumed energy and delay, weighting the overall performance of both energy consumption and transmission delay.", "title": "" }, { "docid": "eccae386c0b8c053abda46537efbd792", "text": "Software Defined Networking (SDN) has recently emerged as a new network management platform. The centralized control architecture presents many new opportunities. Among the network management tasks, measurement is one of the most important and challenging one. Researchers have proposed many solutions to better utilize SDN for network measurement. Among them, how to detect Distributed Denial-of-Services (DDoS) quickly and precisely is a very challenging problem. In this paper, we propose methods to detect DDoS attacks leveraging on SDN's flow monitoring capability. Our methods utilize measurement resources available in the whole SDN network to adaptively balance the coverage and granularity of attack detection. Through simulations we demonstrate that our methods can quickly locate potential DDoS victims and attackers by using a constrained number of flow monitoring rules.", "title": "" }, { "docid": "27237bf03da7f6aea13c137668def5f0", "text": "In deep learning community, gradient based methods are typically employed to train the proposed models. These methods generally operate in a mini-batch training manner wherein a small fraction of the training data is invoked to compute an approximative gradient. 
It is reported that models trained with large batch are prone to generalize worse than those trained with small batch. Several inspiring works are conducted to figure out the underlying reason of this phenomenon, but almost all of them focus on classification tasks. In this paper, we investigate the influence of batch size on regression task. More specifically, we tested the generalizability of deep auto-encoder trained with varying batch size and checked some well-known measures relating to model generalization. Our experimental results lead to three conclusions. First, there exist no obvious generalization gap in regression model such as auto-encoders. Second, with a same train loss as target, small batch generally lead to solutions closer to the starting point than large batch. Third, spectral norm of weight matrices is closely related to generalizability of the model, but different layers contribute variously to the generalization performance.", "title": "" }, { "docid": "fc2a7c789f742dfed24599997845b604", "text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.", "title": "" }, { "docid": "3cc6d54cb7a8507473f623a149c3c64b", "text": "The measurement of loyalty is a topic of great interest for the marketing academic literature. The relation that loyalty has with the results of organizations has been tested by numerous studies and the search to retain profitable customers has become a maxim in firm management. Tourist destinations have not remained oblivious to this trend. However, the difficulty involved in measuring the loyalty of a tourist destination is a brake on its adoption by those in charge of destination management. The usefulness of measuring loyalty lies in being able to apply strategies which enable improving it, but that also impact on the enhancement of the organization’s results. The study of tourists’ loyalty to a destination is considered relevant for the literature and from the point of view of the management of the multiple actors involved in the tourist activity. Based on these considerations, this work proposes a synthetic indictor that allows the simple measurement of the tourist’s loyalty. To do so, we used as a starting point Best’s (2007) customer loyalty index adapted to the case of tourist destinations. We also employed a variable of results – the tourist’s overnight stays in the destination – to create a typology of customers according to their levels of loyalty and the number of their overnight stays. The data were obtained from a survey carried out with 2373 tourists of the city of Seville. 
In accordance with the results attained, the proposal of the synthetic indicator to measure tourist loyalty is viable, as it is a question of a simple index constructed from easily obtainable data. Furthermore, four groups of tourists have been identified, according to their degree of loyalty and profitability, using the number of overnight stays of the tourists in their visit to the destination. The study’s main contribution stems from the possibility of simply measuring loyalty and from establishing four profiles of tourists for which marketing strategies of differentiated relations can be put into practice and that contribute to the improvement of the destination’s results. © 2018 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/", "title": "" }, { "docid": "14b0f4542d34a114fd84f14d1f0b53e8", "text": "Selection the ideal mate is the most confusing process in the life of most people. To explore these issues to examine differences under graduates socio-economic status have on their preference of marriage partner selection in terms of their personality traits, socio-economic status and physical attractiveness. A total of 770 respondents participated in this study. The respondents were mainly college students studying in final year degree in professional and non professional courses. The result revealed that the respondents socio-economic status significantly influence preferences in marriage partners selection in terms of personality traits, socio-economic status and physical attractiveness.", "title": "" }, { "docid": "69b831bb25e5ad0f18054d533c313b53", "text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.", "title": "" }, { "docid": "148af36df5a403b33113ee5b9a7ad1d3", "text": "The experience of interacting with a robot has been shown to be very different in comparison to people’s interaction experience with other technologies and artifacts, and often has a strong social or emotional component – a fact that raises concerns related to evaluation. In this paper we outline how this difference is due in part to the general complexity of robots’ overall context of interaction, related to their dynamic presence in the real world and their tendency to invoke a sense of agency. 
A growing body of work in Human-Robot Interaction (HRI) focuses on exploring this overall context and tries to unpack what exactly is unique about interaction with robots, often through leveraging evaluation methods and frameworks designed for more-traditional HCI. We raise the concern that, due to these differences, HCI evaluation methods should be applied to HRI with care, and we present a survey of HCI evaluation techniques from the perspective of the unique challenges of robots. Further, we have developed a new set of tools to aid evaluators in targeting and unpacking the holistic human-robot interaction experience. Our technique surrounds the development of a map of interaction experience possibilities and, as part of this, we present a set of three perspectives for targeting specific components of interaction experience, and demonstrate how these tools can be practically used in evaluation.", "title": "" }, { "docid": "00639757a1a60fe8e56b868bd6e2ff62", "text": "Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated in <1:20,000 newborns. Despite its rarity, this lesion is important because it may associate with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. 
The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion.", "title": "" }, { "docid": "922c0a315751c90a11b018547f8027b2", "text": "We propose a model for the recently discovered Θ+ exotic KN resonance as a novel kind of a pentaquark with an unusual color structure: a 3c ud diquark, coupled to 3c uds̄ triquark in a relative P -wave. The state has J P = 1/2+, I = 0 and is an antidecuplet of SU(3)f . A rough mass estimate of this pentaquark is close to experiment.", "title": "" }, { "docid": "9b19f343a879430283881a69e3f9cb78", "text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.", "title": "" } ]
scidocsrr
0687cc1454d931b15022c0ad9fc1d8c1
Effort during visual search and counting: insights from pupillometry.
[ { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" } ]
[ { "docid": "ca26daaa9961f7ba2343ae84245c1181", "text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.", "title": "" }, { "docid": "3a71dd4c8d9e1cf89134141cfd97023e", "text": "We introduce a novel solid modeling framework taking advantage of the architecture of parallel computing onmodern graphics hardware. Solidmodels in this framework are represented by an extension of the ray representation — Layered Depth-Normal Images (LDNI), which inherits the good properties of Boolean simplicity, localization and domain decoupling. The defect of ray representation in computational intensity has been overcome by the newly developed parallel algorithms running on the graphics hardware equipped with Graphics Processing Unit (GPU). The LDNI for a solid model whose boundary is representedby a closedpolygonalmesh canbe generated efficientlywith thehelp of hardware accelerated sampling. The parallel algorithm for computing Boolean operations on two LDNI solids runs well on modern graphics hardware. A parallel algorithm is also introduced in this paper to convert LDNI solids to sharp-feature preserved polygonal mesh surfaces, which can be used in downstream applications (e.g., finite element analysis). Different from those GPU-based techniques for rendering CSG-tree of solid models Hable and Rossignac (2007, 2005) [1,2], we compute and store the shape of objects in solid modeling completely on graphics hardware. This greatly eliminates the communication bottleneck between the graphics memory and the main memory. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fab72d1223fa94e918952b8715e90d30", "text": "A novel wideband crossed dipole loaded with four parasitic elements is investigated in this letter. The printed crossed dipole is incorporated with a pair of vacant quarter rings to feed the antenna. The antenna is backed by a metallic plate to provide an unidirectional radiation pattern with a wide axial-ratio (AR) bandwidth. To verify the proposed design, a prototype is fabricated and measured. 
The final design with an overall size of $0.46\\ \\lambda_{0}\\times 0.46\\ \\lambda_{0}\\times 0.23\\ \\lambda_{0}$ ($\\lambda_{0}$ is the free-space wavelength of circularly polarized center frequency) yields a 10-dB impedance bandwidth of approximately 62.7% and a 3-dB AR bandwidth of approximately 47.2%. In addition, the proposed antenna has a stable broadside gain of 7.9 ± 0.5 dBi within passband.", "title": "" }, { "docid": "4f15ef7dc7405f22e1ca7ae24154f5ef", "text": "This position paper addresses current debates about data in general, and big data specifically, by examining the ethical issues arising from advances in knowledge production. Typically ethical issues such as privacy and data protection are discussed in the context of regulatory and policy debates. Here we argue that this overlooks a larger picture whereby human autonomy is undermined by the growth of scientific knowledge. To make this argument, we first offer definitions of data and big data, and then examine why the uses of data-driven analyses of human behaviour in particular have recently experienced rapid growth. Next, we distinguish between the contexts in which big data research is used, and argue that this research has quite different implications in the context of scientific as opposed to applied research. We conclude by pointing to the fact that big data analyses are both enabled and constrained by the nature of data sources available. Big data research will nevertheless inevitably become more pervasive, and this will require more awareness on the part of data scientists, policymakers and a wider public about its contexts and often unintended consequences.", "title": "" }, { "docid": "46b5082df5dfd63271ec942ce28285fa", "text": "The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points is a property of the problem, not the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.", "title": "" }, { "docid": "2ee5e5ecd9304066b12771f3349155f8", "text": "An intelligent wiper speed adjustment system can be found in most middle and upper class cars. A core piece of this gadget is the rain sensor on the windshield. With the upcoming number of cars being equipped with an in-vehicle camera for vision-based applications the call for integrating all sensors in the area of the rearview mirror into one device rises to reduce the number of parts and variants. 
In this paper, functionality of standard rain sensors and different vision-based approaches are explained and a novel rain sensing concept based on an automotive in-vehicle camera for Driver Assistance Systems (DAS) is developed to enhance applicability. Hereby, the region at the bottom of the field of view (FOV) of the imager is used to detect raindrops, while the upper part of the image is still usable for other vision-based applications. A simple algorithm is set up to keep the additional processing time low and to quantitatively gather the rain intensity. Mechanisms to avoid false activations of the wipers are introduced. First experimental experiences based on real scenarios show promising results.", "title": "" }, { "docid": "10b4d77741d40a410b30b0ba01fae67f", "text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: [email protected] (Oke). © 2006 AAEP.", "title": "" }, { "docid": "cab386acd4cf89803325e5d33a095a62", "text": "Dipyridamole is a widely prescribed drug in ischemic disorders, and it is here investigated for potential clinical use as a new treatment for breast cancer. Xenograft mice bearing triple-negative breast cancer 4T1-Luc or MDA-MB-231T cells were generated. In these in vivo models, dipyridamole effects were investigated for primary tumor growth, metastasis formation, cell cycle, apoptosis, signaling pathways, immune cell infiltration, and serum inflammatory cytokines levels. Dipyridamole significantly reduced primary tumor growth and metastasis formation by intraperitoneal administration. Treatment with 15 mg/kg/day dipyridamole reduced mean primary tumor size by 67.5 % (p = 0.0433), while treatment with 30 mg/kg/day dipyridamole resulted in an almost a total reduction in primary tumors (p = 0.0182). Experimental metastasis assays show dipyridamole reduces metastasis formation by 47.5 % in the MDA-MB-231T xenograft model (p = 0.0122), and by 50.26 % in the 4T1-Luc xenograft model (p = 0.0292). In vivo dipyridamole decreased activated β-catenin by 38.64 % (p < 0.0001), phospho-ERK1/2 by 25.05 % (p = 0.0129), phospho-p65 by 67.82 % (p < 0.0001) and doubled the expression of IkBα (p = 0.0019), thus revealing significant effects on Wnt, ERK1/2-MAPK and NF-kB pathways in both animal models. Moreover dipyridamole significantly decreased the infiltration of tumor-associated macrophages and myeloid-derived suppressor cells in primary tumors (p < 0.005), and the inflammatory cytokines levels in the sera of the treated mice. 
We suggest that when used at appropriate doses and with the correct mode of administration, dipyridamole is a promising agent for breast-cancer treatment, thus also implying its potential use in other cancers that show those highly activated pathways.", "title": "" }, { "docid": "2949a903b7ab1949b6aaad305c532f4b", "text": "This paper presents a semantics-based approach to Recommender Systems (RS), to exploit available contextual information about both the items to be recommended and the recommendation process, in an attempt to overcome some of the shortcomings of traditional RS implementations. An ontology is used as a backbone to the system, while multiple web services are orchestrated to compose a suitable recommendation model, matching the current recommendation context at run-time. To achieve such dynamic behaviour the proposed system tackles the recommendation problem by applying existing RS techniques on three different levels: the selection of appropriate sets of features, recommendation model and recommendable items.", "title": "" }, { "docid": "41076f408c1c00212106433b47582a43", "text": "Polyols such as mannitol, erythritol, sorbitol, and xylitol are naturally found in fruits and vegetables and are produced by certain bacteria, fungi, yeasts, and algae. These sugar alcohols are widely used in food and pharmaceutical industries and in medicine because of their interesting physicochemical properties. In the food industry, polyols are employed as natural sweeteners applicable in light and diabetic food products. In the last decade, biotechnological production of polyols by lactic acid bacteria (LAB) has been investigated as an alternative to their current industrial production. While heterofermentative LAB may naturally produce mannitol and erythritol under certain culture conditions, sorbitol and xylitol have been only synthesized through metabolic engineering processes. This review deals with the spontaneous formation of mannitol and erythritol in fermented foods and their biotechnological production by heterofermentative LAB and briefly presented the metabolic engineering processes applied for polyol formation.", "title": "" }, { "docid": "cc2822b15ccf29978252b688111d58cd", "text": "Today, even a moderately sized corporate intranet contains multiple firewalls and routers, which are all used to enforce various aspects of the global corporate security policy. Configuring these devices to work in unison is difficult, especially if they are made by different vendors. Even testing or reverse-engineering an existing configuration (say, when a new security administrator takes over) is hard. Firewall configuration files are written in low-level formalisms, whose readability is comparable to assembly code, and the global policy is spread over all the firewalls that are involved. To alleviate some of these difficulties, we designed and implemented a novel firewall analysis tool. Our software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Our tool uses a minimal description of the network topology, and directly parses the various vendor-specific lowlevel configuration files. It interacts with the user through a query-and-answer session, which is conducted at a much higher level of abstraction. A typical question our tool can answer is “from which machines can our DMZ be reached, and with which services?”. 
Thus, our tool complements existing vulnerability analysis tools, as it can be used before a policy is actually deployed, it operates on a more understandable level of abstraction, and it deals with all the firewalls at once.", "title": "" }, { "docid": "b08f67bc9b84088f8298b35e50d0b9c5", "text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.", "title": "" }, { "docid": "cceec94ed2462cd657be89033244bbf9", "text": "This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a posttest and pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.", "title": "" }, { "docid": "5efd5fb9caaeadb90a684d32491f0fec", "text": "The Model/View/Controller design pattern is very useful for architecting interactive software systems. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. Applying the Model/View/Controller design pattern to web-applications is therefore complicated by the fact that current technologies encourage developers to partition the application as early as in the design phase. Subsequent changes to that partitioning require considerable changes to the application's implementation despite the fact that the application logic has not changed. 
This paper introduces the concept of Flexible Web-Application Partitioning, a programming model and implementation infrastructure, that allows developers to apply the Model/View/Controller design pattern in a partition-independent manner: Applications are developed and tested in a single address-space; they can then be deployed to various client/server architectures without changing the application's source code. In addition, partitioning decisions can be changed without modifying the application.", "title": "" }, { "docid": "90b3e6aee6351b196445843ca8367a3b", "text": "Modeling how visual saliency guides the deployment of attention over visual scenes has attracted much interest recently — among both computer vision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in computer vision have mostly been focused on modeling bottom-up saliency. Strong influences on attention and eye movements, however, come from instantaneous task demands. Here, we propose models of top-down visual guidance considering task influences. The new models estimate the state of a human subject performing a task (here, playing video games), and map that state to an eye position. Factors influencing state come from scene gist, physical actions, events, and bottom-up saliency. Proposed models fall into two categories. In the first category, we use classical discriminative classifiers, including Regression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperform 15 competing bottom-up and top-down attention models in predicting future eye fixations on 18,000 and 75,00 video frames and eye movement samples from a driving and a flight combat video game, respectively. We further test and validate our approaches on 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction scores than reference models.", "title": "" }, { "docid": "2af5e18cfb6dadd4d5145a1fa63f0536", "text": "Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The sources of difficulties are, namely, the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated to the measurement process such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. In all topics, we describe the state-of-the-art, provide illustrative examples, and point to future challenges and research directions.", "title": "" }, { "docid": "356684bac2e5fecd903eb428dc5455f4", "text": "Social media expose millions of users every day to information campaigns - some emerging organically from grassroots activity, others sustained by advertising or other coordinated efforts. 
These campaigns contribute to the shaping of collective opinions. While most information campaigns are benign, some may be deployed for nefarious purposes, including terrorist propaganda, political astroturf, and financial market manipulation. It is therefore important to be able to detect whether a meme is being artificially promoted at the very moment it becomes wildly popular. This problem has important social implications and poses numerous technical challenges. As a first step, here we focus on discriminating between trending memes that are either organic or promoted by means of advertisement. The classification is not trivial: ads cause bursts of attention that can be easily mistaken for those of organic trends. We designed a machine learning framework to classify memes that have been labeled as trending on Twitter. After trending, we can rely on a large volume of activity data. Early detection, occurring immediately at trending time, is a more challenging problem due to the minimal volume of activity data that is available prior to trending. Our supervised learning framework exploits hundreds of time-varying features to capture changing network and diffusion patterns, content and sentiment information, timing signals, and user meta-data. We explore different methods for encoding feature time series. Using millions of tweets containing trending hashtags, we achieve 75% AUC score for early detection, increasing to above 95% after trending. We evaluate the robustness of the algorithms by introducing random temporal shifts on the trend time series. Feature selection analysis reveals that content cues provide consistently useful signals; user features are more informative for early detection, while network and timing features are more helpful once more data is available.", "title": "" }, { "docid": "a6fec60aeb6e5824ed07eaa3257969aa", "text": "What aspects of information assurance can be identified in Business-to-Consumer (B-toC) online transactions? The purpose of this research is to build a theoretical framework for studying information assurance based on a detailed analysis of academic literature for online exchanges in B-to-C electronic commerce. Further, a semantic network content analysis is conducted to analyze the representations of information assurance in B-to-C electronic commerce in the real online market place (transaction Web sites of selected Fortune 500 firms). The results show that the transaction websites focus on some perspectives and not on others. For example, we see an emphasis on the importance of technological and consumer behavioral elements of information assurance such as issues of online security and privacy. Further corporate practitioners place most emphasis on transaction-related information assurance issues. Interestingly, the product and institutional dimension of information assurance in online transaction websites are only", "title": "" }, { "docid": "fee4b80923ff9b6611e95836a90beb06", "text": "We present an annotation management system for relational databases. In this system, every piece of data in a relation is assumed to have zero or more annotations associated with it and annotations are propagated along, from the source to the output, as data is being transformed through a query. 
Such an annotation management system could be used for understanding the provenance (aka lineage) of data, who has seen or edited a piece of data or the quality of data, which are useful functionalities for applications that deal with integration of scientific and biological data. We present an extension, pSQL, of a fragment of SQL that has three different types of annotation propagation schemes, each useful for different purposes. The default scheme propagates annotations according to where data is copied from. The default-all scheme propagates annotations according to where data is copied from among all equivalent formulations of a given query. The custom scheme allows a user to specify how annotations should propagate. We present a storage scheme for the annotations and describe algorithms for translating a pSQL query under each propagation scheme into one or more SQL queries that would correctly retrieve the relevant annotations according to the specified propagation scheme. For the default-all scheme, we also show how we generate finitely many queries that can simulate the annotation propagation behavior of the set of all equivalent queries, which is possibly infinite. The algorithms are implemented and the feasibility of the system is demonstrated by a set of experiments that we have conducted.", "title": "" } ]
scidocsrr
494618e843cad4d38743b862d5b3d3a7
Measuring the Lifetime Value of Customers Acquired from Google Search Advertising
[ { "docid": "bfe762fc6e174778458b005be75d8285", "text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.", "title": "" } ]
[ { "docid": "5b9488755fb3146adf5b6d8d767b7c8f", "text": "This paper presents an overview of our activities for spoken and written language resources for Vietnamese implemented at CLIPSIMAG Laboratory and International Research Center MICA. A new methodology for fast text corpora acquisition for minority languages which has been applied to Vietnamese is proposed. The first results of a process of building a large Vietnamese speech database (VNSpeechCorpus) and a phonetic dictionary, which is used for automatic alignment process, are also presented.", "title": "" }, { "docid": "bda892eb6cdcc818284f56b74c932072", "text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using 32 nm CMOS predictive transistor model (PTM) achieves controllable frequency range of 570 MHz~850 MHz with a wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with 0.9 V power supply.", "title": "" }, { "docid": "24d0d2a384b2f9cefc6e5162cdc52c45", "text": "Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2%. Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.", "title": "" }, { "docid": "723f7d157cacfcad4523f7544a9d1c77", "text": "The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. 
We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.", "title": "" }, { "docid": "faf83822de9f583bebc120aecbcd107a", "text": "Relapsed B-cell lymphomas are incurable with conventional chemotherapy and radiation therapy, although a fraction of patients can be cured with high-dose chemoradiotherapy and autologous stem-cell transplantation (ASCT). We conducted a phase I/II trial to estimate the maximum tolerated dose (MTD) of iodine 131 (131I)–tositumomab (anti-CD20 antibody) that could be combined with etoposide and cyclophosphamide followed by ASCT in patients with relapsed B-cell lymphomas. Fifty-two patients received a trace-labeled infusion of 1.7 mg/kg 131I-tositumomab (185-370 MBq) followed by serial quantitative gamma-camera imaging and estimation of absorbed doses of radiation to tumor sites and normal organs. Ten days later, patients received a therapeutic infusion of 1.7 mg/kg tositumomab labeled with an amount of 131I calculated to deliver the target dose of radiation (20-27 Gy) to critical normal organs (liver, kidneys, and lungs). Patients were maintained in radiation isolation until their total-body radioactivity was less than 0.07 mSv/h at 1 m. They were then given etoposide and cyclophosphamide followed by ASCT. The MTD of 131I-tositumomab that could be safely combined with 60 mg/kg etoposide and 100 mg/kg cyclophosphamide delivered 25 Gy to critical normal organs. The estimated overall survival (OS) and progression-free survival (PFS) of all treated patients at 2 years was 83% and 68%, respectively. These findings compare favorably with those in a nonrandomized control group of patients who underwent transplantation, external-beam total-body irradiation, and etoposide and cyclophosphamide therapy during the same period (OS of 53% and PFS of 36% at 2 years), even after adjustment for confounding variables in a multivariable analysis. (Blood. 2000;96:2934-2942)", "title": "" }, { "docid": "6838d497f81c594cb1760c075b0f5d48", "text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $\\chi^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. 
The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.", "title": "" }, { "docid": "ea1d408c4e4bfe69c099412da30949b0", "text": "The amount of scientific papers in the Molecular Biology field has experienced an enormous growth in the last years, prompting the need of developing automatic Information Extraction (IE) systems. This work is a first step towards the ontology-based domain-independent generalization of a system that identifies Escherichia coli regulatory networks. First, a domain ontology based on the RegulonDB database was designed and populated. After that, the steps of the existing IE system were generalized to use the knowledge contained in the ontology, so that it could be potentially applied to other domains. The resulting system has been tested both with abstract and full articles that describe regulatory interactions for E. coli, obtaining satisfactory results. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4b94082787aed8e947ae798b74bdd552", "text": "AIM\nThe aim of the study was to determine the prevalence of high anxiety and substance use among university students in the Republic of Macedonia.\n\n\nMATERIAL AND METHODS\nThe sample comprised 742 students, aged 18-22 years, who attended the first (188 students) and second year studies at the Medical Faculty (257), Faculty of Dentistry (242), and Faculty of Law (55) within Ss. Cyril and Methodius University in Skopje. As a psychometric test the Beck Anxiety Inventory (BAI) was used. It is a self-rating questionnaire used for measuring the severity of anxiety. A psychiatric interview was performed with students with BAI scores > 25. A self-administered questionnaire consisted of questions on the habits of substance (alcohol, nicotine, sedative-hypnotics, and illicit drugs) use and abuse was also used. For statistical evaluation Statistica 7 software was used.\n\n\nRESULTS\nThe highest mean BAI scores were obtained by first year medical students (16.8 ± 9.8). Fifteen percent of all students and 20% of first year medical students showed high levels of anxiety. Law students showed the highest prevalence of substance use and abuse.\n\n\nCONCLUSION\nHigh anxiety and substance use as maladaptive behaviours among university students are not systematically investigated in our country. The study showed that students show these types of unhealthy reactions, regardless of the curriculum of education. More attention should be paid to students in the early stages of their education. A student counselling service which offers mental health assistance needs to be established within University facilities in R. Macedonia alongside the existing services in our health system.", "title": "" }, { "docid": "d1525fdab295a16d5610210e80fb8104", "text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. 
In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.", "title": "" }, { "docid": "7884c51de6f53d379edccac50fd55caa", "text": "Objective. We analyze the process of changing ethical attitudes over time by focusing on a specific set of ‘‘natural experiments’’ that occurred over an 18-month period, namely, the accounting scandals that occurred involving Enron/Arthur Andersen and insider-trader allegations related to ImClone. Methods. Given the amount of media attention devoted to these ethical scandals, we test whether respondents in a cross-sectional sample taken over 18 months become less accepting of ethically charged vignettes dealing with ‘‘accounting tricks’’ and ‘‘insider trading’’ over time. Results. We find a significant and gradual decline in the acceptance of the vignettes over the 18-month period. Conclusions. Findings presented here may provide valuable insight into potential triggers of changing ethical attitudes. An intriguing implication of these results is that recent highly publicized ethical breaches may not be only a symptom, but also a cause of changing attitudes.", "title": "" }, { "docid": "8d208bb5318dcbc5d941df24906e121f", "text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. 
Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.", "title": "" }, { "docid": "584de328ade02c34e36e2006f3e66332", "text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.", "title": "" }, { "docid": "7aa6b9cb3a7a78ec26aff130a1c9015a", "text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support dataand/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-andforwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform highspeed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. Simulation shows that our design achieves good throughput and delay performance.", "title": "" }, { "docid": "ef9cea211dfdc79f5044a0da606bafb5", "text": "Gender identity disorder (GID) refers to transsexual individuals who feel that their assigned biological gender is incongruent with their gender identity and this cannot be explained by any physical intersex condition. There is growing scientific interest in the last decades in studying the neuroanatomy and brain functions of transsexual individuals to better understand both the neuroanatomical features of transsexualism and the background of gender identity. So far, results are inconclusive but in general, transsexualism has been associated with a distinct neuroanatomical pattern. Studies mainly focused on male to female (MTF) transsexuals and there is scarcity of data acquired on female to male (FTM) transsexuals. Thus, our aim was to analyze structural MRI data with voxel based morphometry (VBM) obtained from both FTM and MTF transsexuals (n = 17) and compare them to the data of 18 age matched healthy control subjects (both males and females). 
We found differences in the regional grey matter (GM) structure of transsexual compared with control subjects, independent from their biological gender, in the cerebellum, the left angular gyrus and in the left inferior parietal lobule. Additionally, our findings showed that in several brain areas, regarding their GM volume, transsexual subjects did not differ significantly from controls sharing their gender identity but were different from those sharing their biological gender (areas in the left and right precentral gyri, the left postcentral gyrus, the left posterior cingulate, precuneus and calcarinus, the right cuneus, the right fusiform, lingual, middle and inferior occipital, and inferior temporal gyri). These results support the notion that structural brain differences exist between transsexual and healthy control subjects and that majority of these structural differences are dependent on the biological gender.", "title": "" }, { "docid": "459a3bc8f54b8f7ece09d5800af7c37b", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected]. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.", "title": "" }, { "docid": "f740191f7c6d27811bb09bf40e8da021", "text": "Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that", "title": "" }, { "docid": "af1ddb07f08ad6065c004edae74a3f94", "text": "Human decisions are prone to biases, and this is no less true for decisions made within data visualizations. Bias mitigation strategies often focus on the person, by educating people about their biases, typically with little success. We focus instead on the system, presenting the first evidence that altering the design of an interactive visualization tool can mitigate a strong bias – the attraction effect. Participants viewed 2D scatterplots where choices between superior alternatives were affected by the placement of other suboptimal points. We found that highlighting the superior alternatives weakened the bias, but did not eliminate it. 
We then tested an interactive approach where participants completely removed locally dominated points from the view, inspired by the elimination by aspects strategy in the decision-making literature. This approach strongly decreased the bias, leading to a counterintuitive suggestion: tools that allow removing inappropriately salient or distracting data from a view may help lead users to make more rational decisions.", "title": "" }, { "docid": "b141c5a1b7a92856b9dc3e3958a91579", "text": "Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. Currently available commercial and academic FPAAs are typically based on operational amplifiers (or other similar analog primitives) with only a few computational elements per chip. While their specific architectures vary, their small sizes and often restrictive interconnect designs leave current FPAAs limited in functionality and flexibility. For FPAAs to enter the realm of large-scale reconfigurable devices such as modern field-programmable gate arrays (FPGAs), new technologies must be explored to provide area-efficient accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. Recent advances in the area of floating-gate transistors have led to a core technology that exhibits many of these qualities, and current research promises a digitally controllable analog technology that can be directly mated to commercial FPGAs. By leveraging these advances, a new generation of FPAAs is introduced in this paper that will dramatically advance the current state of the art in terms of size, functionality, and flexibility. FPAAs have been fabricated using floating-gate transistors as the sole programmable element, and the results of characterization and system-level experiments on the most recent FPAA are shown.", "title": "" }, { "docid": "3dcce7058de4b41ad3614561832448a4", "text": "Declarative models play an important role in most software design activities, by allowing designs to be constructed that selectively abstract over complex implementation details. In the user interface setting, Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which declarative models can be constructed and related, as part of the interface design process. However, such declarative models are not usually directly executable, and may be difficult to relate to existing software components. It is therefore important that MB-UIDEs both fit in well with existing software architectures and standards, and provide an effective route from declarative interface specification to running user interfaces. This paper describes how user interface software is generated from declarative descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include its open architecture, which connects directly to existing applications and widget sets, and the generation of executable interface applications in Java. This paper focuses on how Java programs, organized using the model-view-controller pattern (MVC), are generated from the task, domain and presentation models of Teallach.", "title": "" } ]
scidocsrr
2b22bedc6f58481917af3d5656987d6b
Natural Hand Gestures Recognition System for Intelligent HCI: A Survey
[ { "docid": "7cc20934720912ad1c056dc9afd97e18", "text": "Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that. demonstrate a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon.", "title": "" }, { "docid": "fb7f079d104e81db41b01afe67cdf3b0", "text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.", "title": "" }, { "docid": "e0919ddaddfbf307f33b7442ee99cbad", "text": "With the ever-increasing diffusion of computers into the society, it is widely believed that present popular mode of interactions with computers (mouse and keyboard) will become a bottleneck in the effective utilization of information flow between the computers and the human. Vision based Gesture recognition has the potential to be a natural and powerful tool supporting efficient and intuitive interaction between the human and the computer. Visual interpretation of hand gestures can help in achieving the ease and naturalness desired for Human Computer Interaction (HCI). This has motivated many researchers in computer vision-based analysis and interpretation of hand gestures as a very active research area. We surveyed the literature on visual interpretation of hand gestures in the context of its role in HCI and various seminal works of researchers are emphasized. The purpose of this review is to introduce the field of gesture recognition as a mechanism for interaction with computers.", "title": "" } ]
[ { "docid": "bc6418ef8b51eb409a79838a88bf6ae1", "text": "The growth of outsourced storage in the form of storage service providers underlines the importance of developing efficient security mechanisms to protect the data stored in a networked storage system. For securing the data stored remotely, we consider an architecture in which clients have access to a small amount of trusted storage, which could either be local to each client or, alternatively, could be provided by a client’s organization through a dedicated server. In this thesis, we propose new approaches for various mechanisms that are currently employed in implementations of secure networked storage systems. In designing the new algorithms for securing storage systems, we set three main goals. First, security should be added by clients transparently for the storage servers so that the storage interface does not change; second, the amount of trusted storage used by clients should be minimized; and, third, the performance overhead of the security algorithms should not be prohibitive. The first contribution of this dissertation is the construction of novel space-efficient integrity algorithms for both block-level storage systems and cryptographic file systems. These constructions are based on the observation that block contents typically written to disks feature low entropy, and as such are efficiently distinguishable from uniformly random blocks. We provide a rigorous analysis of security of the new integrity algorithms and demonstrate that they maintain the same security properties as existing algorithms (e.g., Merkle tree). We implement the new algorithms for integrity checking of files in the EncFS cryptographic file system and measure their performance cost, as well as the amount of storage needed for integrity and the integrity bandwidth (i.e., the amount of information needed to update or check the integrity of a file block) used. We evaluate the block-level integrity algorithms using a disk trace we collected, and the integrity algorithms for file systems using NFS traces collected at Harvard university. We also construct efficient key management schemes for cryptographic file systems in which the re-encryption of a file following a user revocation is delayed until the next write to that file, a model called lazy revocation. The encryption key evolves at each revocation and we devise an efficient algorithm to recover previous encryption keys with only logarithmic cost in the number of revocations supported. The novel key management scheme is based on a binary tree to derive the keys and improves existing techniques by several orders of magnitude, as shown by our experiments. Our final contribution is to analyze theoretically the consistency of encrypted shared file objects used to implement cryptographic file systems. We provide sufficient conditions for the realization of a given level of consistency, when concurrent writes to both the file and encryption key objects are possible. We show that the consistency of both the key", "title": "" }, { "docid": "19b283a1438058088f9f9e337dd5aac7", "text": "Analysis on Web search query logs has revealed that there is a large portion of entity-bearing queries, reflecting the increasing demand of users on retrieving relevant information about entities such as persons, organizations, products, etc. In the meantime, significant progress has been made in Web-scale information extraction, which enables efficient entity extraction from free text. 
Since an entity is expected to capture the semantic content of documents and queries more accurately than a term, it would be interesting to study whether leveraging the information about entities can improve the retrieval accuracy for entity-bearing queries. In this paper, we propose a novel retrieval approach, i.e., latent entity space (LES), which models the relevance by leveraging entity profiles to represent semantic content of documents and queries. In the LES, each entity corresponds to one dimension, representing one semantic relevance aspect. We propose a formal probabilistic framework to model the relevance in the high-dimensional entity space. Experimental results over TREC collections show that the proposed LES approach is effective in capturing latent semantic content and can significantly improve the search accuracy of several state-of-the-art retrieval models for entity-bearing queries.", "title": "" }, { "docid": "223252b8bf99671eedd622c99bc99aaf", "text": "We present a novel dataset for natural language generation (NLG) in spoken dialogue systems which includes preceding context (user utterance) along with each system response to be generated, i.e., each pair of source meaning representation and target natural language paraphrase. We expect this to allow an NLG system to adapt (entrain) to the user's way of speaking, thus creating more natural and potentially more successful responses. The dataset has been collected using crowdsourcing, with several stages to obtain natural user utterances and corresponding relevant, natural, and contextually bound system responses. The dataset is available for download under the Creative Commons 4.0 BY-SA license.", "title": "" }, { "docid": "1c5e17c7acff27e3b10aecf15c5809e7", "text": "Recent years witness a growing interest in nonstandard epistemic logics of “knowing whether”, “knowing what”, “knowing how” and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional “knowing value” logic proposed by Wang and Fan [10] can be viewed as a disguised normal modal logic by treating the negation of Kv operator as a special diamond. Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation $R_i$ in standard Kripke models which associates one world with two i-accessible worlds that do not agree on the value of constant c. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan [10,11]. Moreover, there is a very natural binary generalization of the “knowing value” diamond, which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system which sharpens our understanding of the “knowing value” logic and simplifies some previous hard problems.", "title": "" }, { "docid": "f334f49a1e21e3278c25ca0d63b2ef8a", "text": "We show that if $\{f_n\}$ is a sequence of uniformly $L^p$-bounded functions on a measure space, and if $f_n \to f$ pointwise a.e., then $\lim_{n \to \infty}\left(\|f_n\|_p^p - \|f_n - f\|_p^p\right) = \|f\|_p^p$ for all $0 < p < \infty$. This result is also generalized in Theorem 2 to some functionals other than the $L^p$ norm, namely $\int \left| j(f_n) - j(f_n - f) - j(f) \right| \to 0$ for suitable $j: \mathbb{C} \to \mathbb{C}$ and a suitable sequence $\{f_n\}$. 
A brief discussion is given of the usefulness of this result in variational problems.", "title": "" }, { "docid": "c27fb42cf33399c9c84245eeda72dd46", "text": "The proliferation of technology has empowered the web applications. At the same time, the presences of Cross-Site Scripting (XSS) vulnerabilities in web applications have become a major concern for all. Despite the many current detection and prevention approaches, attackers are exploiting XSS vulnerabilities continuously and causing significant harm to the web users. In this paper, we formulate the detection of XSS vulnerabilities as a prediction model based classification problem. A novel approach based on text-mining and pattern-matching techniques is proposed to extract a set of features from source code files. The extracted features are used to build prediction models, which can discriminate the vulnerable code files from the benign ones. The efficiency of the developed models is evaluated on a publicly available labeled dataset that contains 9408 PHP labeled (i.e. safe, unsafe) source code files. The experimental results depict the superiority of the proposed approach over existing ones.", "title": "" }, { "docid": "feed386f42b9e4940adb4ce6db0e947b", "text": "We proposed an algorithm to significantly reduce of the number of neurons in a convolutional neural network by adding sparse constraints during the training step. The forward-backward splitting method is applied to solve the sparse constrained problem. We also analyze the benefits of using rectified linear units as non-linear activation function to remove a larger number of neurons. Experiments using four popular CNNs including AlexNet and VGG-B demonstrate the capacity of the proposed method to reduce the number of neurons, therefore, the number of parameters and memory footprint, with a negligible loss in performance.", "title": "" }, { "docid": "8d0066400985b2577f4fbe8013d5ba1d", "text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies to address the problem, a major limitation of existing work is the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks. We conduct an extensive evaluation of the method against several baselines and state of the art on the largest collection of publicly available Twitter datasets to date, and show that compared to previously reported results on these datasets, our proposed method is able to capture both word sequence and order information in short texts, and it sets new benchmark by outperforming on 6 out of 7 datasets by between 1 and 13 percents in F1. We also extend the existing dataset collection on this task by creating a new dataset covering different topics.", "title": "" }, { "docid": "d369d3bd03f54e9cb912f53cdaf51631", "text": "This paper presents a method to detect table regions in document images by identifying the column and row line-separators and their properties. The method employs a run-length approach to identify the horizontal and vertical lines present in the input image. 
From each group of intersecting horizontal and vertical lines, a set of 26 low-level features are extracted and an SVM classifier is used to test if it belongs to a table or not. The performance of the method is evaluated on a heterogeneous corpus of French, English and Arabic documents that contain various types of table structures and compared with that of the Tesseract OCR system.", "title": "" }, { "docid": "54ec681832cd276b6641f7e7e08205a7", "text": "In this paper, we proposed PRPRS (Personalized Research Paper Recommendation System) that designed expansively and implemented a UserProfile-based algorithm for extracting keyword by keyword extraction and keyword inference. If the papers don't have keyword section, we consider the title and text as an argument of keyword and execute the algorithm. Then, we create the possible combination from each word of title. We extract the combinations presented in the main text among the longest word combinations which include the same words. If the number of extracted combinations is more than the standard number, we used that combination as keyword. Otherwise, we refer the main text and extract combination as much as standard in order of high Term-Frequency. Whenever collected research papers by topic are selected, a renewal of UserProfile increases the frequency of each Domain, Topic and keyword. Each ratio of occurrence is recalculated and reflected on UserProfile. PRPRS calculates the similarity between given topic and collected papers by using Cosine Similarity which is used to recommend initial paper for each topic in Information retrieval. We measured satisfaction and accuracy for each system-recommended paper to test and evaluated performances of the suggested system. Finally PRPRS represents high level of satisfaction and accuracy.", "title": "" }, { "docid": "ff4c034ecbd01e0308b68df353ce1751", "text": "Social media is a rich data source for analyzing the social impact of hazard processes and human behavior in disaster situations; it is used by rescue agencies for coordination and by local governments for the distribution of official information. In this paper, we propose a method for data mining in Twitter to retrieve messages related to an event. We describe an automated process for the collection of hashtags highly related to the event and specific only to it. We compare our method with existing keyword-based methods and prove that hashtags are good markers for the separation of similar, simultaneous incidents; therefore, the retrieved messages have higher relevancy. The method uses disaster databases to find the location of an event and to estimate the impact area. The proposed method can also be adapted to retrieve messages about other types of events with a known location, such as riots, festivals and exhibitions.", "title": "" }, { "docid": "4261306ca632ada117bdb69af81dcb3f", "text": "Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. In some cases it may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks are more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. 
It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between an IP enabled sensor nodes and a device on traditional Internet. This is the first compressed lightweight design, implementation, and evaluation of 6LoWPAN extension for IPsec on Contiki. Our extension supports both IPsec’s Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms.", "title": "" }, { "docid": "561e9f599e5dc470ca6f57faa62ebfce", "text": "Rapid learning requires flexible representations to quickly adopt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for oneshot learning. In the task of one-shot classification on the Omniglot dataset, we achieve the state of the art performance with an error rate of 1.5%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.", "title": "" }, { "docid": "7831c93b0c09c1690b4a2f1fefa766c4", "text": "Amazon Web Services offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to deployment tools, directories to content delivery, over 140 AWS services are available. New services can be provisioned quickly, without the upfront capital expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements. This whitepaper provides you with an overview of the benefits of the AWS Cloud and introduces you to the services that make up the platform.", "title": "" }, { "docid": "51f5ba274068c0c03e5126bda056ba98", "text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. 
Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7a356a485b46c6fc712a0174947e142e", "text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.", "title": "" }, { "docid": "f5b027fedefe929e9530f038c3fb219a", "text": "Outfits in online fashion data are composed of items of many different types (e.g . top, bottom, shoes) that share some stylistic relationship with one another. A representation for building outfits requires a method that can learn both notions of similarity (for example, when two tops are interchangeable) and compatibility (items of possibly different type that can go together in an outfit). This paper presents an approach to learning an image embedding that respects item type, and jointly learns notions of item similarity and compatibility in an end-toend model. To evaluate the learned representation, we crawled 68,306 outfits created by users on the Polyvore website. Our approach obtains 3-5% improvement over the state-of-the-art on outfit compatibility prediction and fill-in-the-blank tasks using our dataset, as well as an established smaller dataset, while supporting a variety of useful queries.", "title": "" }, { "docid": "abdd8eb3c08b63762cb0a0dffdbade12", "text": "Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. 
However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.", "title": "" }, { "docid": "7359729fe4bb369798c05c8c7c258111", "text": "By considering various situations of climatologically phenomena affecting local weather conditions in various parts of the world. These weather conditions have a direct effect on crop yield. Various researches have been done exploring the connections between large-scale climatologically phenomena and crop yield. Artificial neural networks have been demonstrated to be powerful tools for modeling and prediction, to increase their effectiveness. Crop prediction methodology is used to predict the suitable crop by sensing various parameter of soil and also parameter related to atmosphere. Parameters like type of soil, PH, nitrogen, phosphate, potassium, organic carbon, calcium, magnesium, sulphur, manganese, copper, iron, depth, temperature, rainfall, humidity. For that purpose we are used artificial neural network (ANN).", "title": "" }, { "docid": "b8bd0e7a31e4ae02f845fa5f57a5297f", "text": "In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using Markov Random Field (MRF), inspired from the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), which is a widely-used method in computational linguistics for modeling topics in texts. We extend the standard LDA method in order to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after a few interactions only. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning.", "title": "" } ]
scidocsrr
d2f41b7b54666c0c6d95140ca3095cc6
PALM-COEIN FIGO Classification for diagnosis of Abnormal Uterine Bleeding : Practical Utility of same at Tertiary Care Centre in North India
[ { "docid": "cfa8e5af1a37c96617164ea319dba4a5", "text": "In 2011, the FIGO classification system (PALM-COEIN) was published to standardize terminology, diagnostic and investigations of causes of abnormal uterine bleeding (AUB). According to FIGO new classification, in the absence of structural etiology, the formerly called \"dysfunctional uterine bleeding\" should be avoided and clinicians should state if AUB are caused by coagulation disorders (AUB-C), ovulation disorder (AUB-O), or endometrial primary dysfunction (AUB-E). Since this publication, some societies have released or revised their guidelines for the diagnosis and the management of the formerly called \"dysfunctional uterine bleeding\" according new FIGO classification. In this review, we summarize the most relevant new guidelines for the diagnosis and the management of AUB-C, AUB-O, and AUB-E.", "title": "" } ]
[ { "docid": "c5081f86c4a173a40175e65b05d9effb", "text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.", "title": "" }, { "docid": "c69e249b0061057617eb8c70d26df0b4", "text": "This paper explores the use of GaN MOSFETs and series-connected inverter segments to realize an IMMD. The proposed IMMD topology reduces the segment voltage and offers an opportunity to utilize wide bandgap 200V GaN MOSFETs. Consequently, a reduction in IMMD size is achieved by eliminating inverter heat sink and optimizing the choice of DC-link capacitors. Gate signals of the IMMD segments are shifted (interleaved) to cancel the capacitor voltage ripple and further reduce the capacitor size. Motor winding configuration and coupling effect are also investigated to match with the IMMD design. An actively controlled balancing resistor is programmed to balance the voltages of series connected IMMD segments. Furthermore, this paper presents simulation results as well as experiment results to validate the proposed design.", "title": "" }, { "docid": "981d140731d8a3cdbaebacc1fd26484a", "text": "A new wideband bandpass filter (BPF) with composite short- and open-circuited stubs has been proposed in this letter. With the two kinds of stubs, two pairs of transmission zeros (TZs) can be produced on the two sides of the desired passband. The even-/odd-mode analysis method is used to derive the input admittances of its bisection circuits. After the Richard's transformation, these bisection circuits are in the same format of two LC circuits. By combining these two LC circuits, the equivalent circuit of the proposed filter is obtained. Through the analysis of the equivalent circuit, the open-circuited stubs introduce transmission poles in the complex frequencies and one pair of TZs in the real frequencies, and the short-circuited stubs generate one pair of TZs to block the dc component. A wideband BPF is designed and fabricated to verify the proposed design principle.", "title": "" }, { "docid": "b68001bf953e63db5ef12be3b20a90aa", "text": "Contrast sensitivity (CS) is the ability of the observer to discriminate between adjacent stimuli on the basis of their differences in relative luminosity (contrast) rather than their absolute luminances. In previous studies, using a narrow range of species, birds have been reported to have low contrast detection thresholds relative to mammals and fishes. This was an unexpected finding because birds had been traditionally reported to have excellent visual acuity and color vision. This study reports CS in six species of birds that represent a range of visual adaptations to varying environments. 
The species studied were American kestrels (Falco sparverius), barn owls (Tyto alba), Japanese quail (Coturnix coturnix japonica), white Carneaux pigeons (Columba livia), starlings (Sturnus vulgaris), and red-bellied woodpeckers (Melanerpes carolinus). Contrast sensitivity functions (CSFs) were obtained from these birds using the pattern electroretinogram and compared with CSFs from the literature when possible. All of these species exhibited low CS relative to humans and most mammals, which suggests that low CS is a general characteristic of birds. Their low maximum CS may represent a trade-off of contrast detection for some other ecologically vital capacity such as UV detection or other aspects of their unique color vision.", "title": "" }, { "docid": "8e7d3462f93178f6c2901a429df22948", "text": "This article analyzes China's pension arrangement and notes that China has recently established a universal non-contributory pension plan covering urban non-employed workers and all rural residents, combined with the pension plan covering urban employees already in place. Further, in the latest reform, China has discontinued the special pension plan for civil servants and integrated this privileged welfare class into the urban old-age pension insurance program. With these steps, China has achieved a degree of universalism and integration of its pension arrangement unprecedented in the non-Western world. Despite this radical pension transformation strategy, we argue that the current Chinese pension arrangement represents a case of \"incomplete\" universalism. First, its benefit level is low. Moreover, the benefit level varies from region to region. Finally, universalism in rural China has been undermined due to the existence of the \"policy bundle.\" Additionally, we argue that the 2015 pension reform has created a situation in which the stratification of Chinese pension arrangements has been \"flattened,\" even though it remains stratified to some extent.", "title": "" }, { "docid": "d9791131cefcf0aa18befb25c12b65b2", "text": "Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or long-est common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.", "title": "" }, { "docid": "645395d46f653358d942742711d50c0b", "text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. 
In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets", "title": "" }, { "docid": "f05cb5a3aeea8c4151324ad28ad4dc93", "text": "With the discovery of induced pluripotent stem (iPS) cells, it is now possible to convert differentiated somatic cells into multipotent stem cells that have the capacity to generate all cell types of adult tissues. Thus, there is a wide variety of applications for this technology, including regenerative medicine, in vitro disease modeling, and drug screening/discovery. Although biological and biochemical techniques have been well established for cell reprogramming, bioengineering technologies offer novel tools for the reprogramming, expansion, isolation, and differentiation of iPS cells. In this article, we review these bioengineering approaches for the derivation and manipulation of iPS cells and focus on their relevance to regenerative medicine.", "title": "" }, { "docid": "4345ed089e019402a5a4e30497bccc8a", "text": "BACKGROUND\nFluridil, a novel topical antiandrogen, suppresses the human androgen receptor. While highly hydrophobic and hydrolytically degradable, it is systemically nonresorbable. In animals, fluridil demonstrated high local and general tolerance.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of a topical anti- androgen, fluridil, in male androgenetic alopecia.\n\n\nMETHODS\nIn 20 men, for 21 days, occlusive forearm patches with 2, 4, and 6% fluridil, isopropanol, and/or vaseline were applied. In 43 men with androgenetic alopecia (AGA), Norwood grade II-Va, 2% fluridil was evaluated in a double-blind, placebo-controlled study after 3 months clinically by phototrichograms, hematology, and blood chemistry including analysis for fluridil, and at 9 months by phototrichograms.\n\n\nRESULTS\nNeither fluridil nor isopropanol showed sensitization/irritation potential, unlike vaseline. In all AGA subjects, baseline anagen/telogen counts were equal. After 3 months, the average anagen percentage did not change in placebo subjects, but increased in fluridil subjects from 76% to 85%, and at 9 months to 87%. In former placebo subjects, fluridil increased the anagen percentage after 6 months from 76% to 85%. Sexual functions, libido, hematology, and blood chemistry values were normal throughout, except that at 3 months, in the spring, serum testosterone increased within the normal range equally in placebo and fluridil groups. 
No fluridil or its decomposition product, BP-34, was detectable in the serum at 0, 3, or 90 days.\n\n\nCONCLUSION\nTopical fluridil is nonirritating, nonsensitizing, nonresorbable, devoid of systemic activity, and anagen promoting after daily use in most AGA males.", "title": "" }, { "docid": "dd211105651b376b40205eb16efe1c25", "text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.", "title": "" }, { "docid": "f5e56872c66a126ada7d54c218c06836", "text": "INTRODUCTION\nGender dysphoria, a marked incongruence between one's experienced gender and biological sex, is commonly believed to arise from discrepant cerebral and genital sexual differentiation. With the discovery that estrogen receptor β is associated with female-to-male (FtM) but not with male-to-female (MtF) gender dysphoria, and given estrogen receptor α involvement in central nervous system masculinization, it was hypothesized that estrogen receptor α, encoded by the ESR1 gene, also might be implicated.\n\n\nAIM\nTo investigate whether ESR1 polymorphisms (TA)n-rs3138774, PvuII-rs2234693, and XbaI-rs9340799 and their haplotypes are associated with gender dysphoria in adults.\n\n\nMETHODS\nMolecular analysis was performed in peripheral blood samples from 183 FtM subjects, 184 MtF subjects, and 394 sex- and ethnically-matched controls.\n\n\nMAIN OUTCOME MEASURES\nGenotype and haplotype analyses of the (TA)n-rs3138774, PvuII-rs2234693, and XbaI-rs9340799 polymorphisms.\n\n\nRESULTS\nAllele and genotype frequencies for the polymorphism XbaI were statistically significant only in FtM vs control XX subjects (P = .021 and P = .020). In XX individuals, the A/G genotype was associated with a low risk of gender dysphoria (odds ratio [OR] = 0.34; 95% CI = 0.16-0.74; P = .011); in XY individuals, the A/A genotype implied a low risk of gender dysphoria (OR = 0.39; 95% CI = 0.17-0.89; P = .008). Binary logistic regression showed partial effects for all three polymorphisms in FtM but not in MtF subjects. The three polymorphisms were in linkage disequilibrium: a small number of TA repeats was linked to the presence of PvuII and XbaI restriction sites (haplotype S-T-A), and a large number of TA repeats was linked to the absence of these restriction sites (haplotype L-C-G). 
In XX individuals, the presence of haplotype L-C-G carried a low risk of gender dysphoria (OR = 0.66; 95% CI = 0.44-0.99; P = .046), whereas the presence of haplotype L-C-A carried a high susceptibility to gender dysphoria (OR = 3.96; 95% CI = 1.04-15.02; P = .044). Global haplotype was associated with FtM gender dysphoria (P = .017) but not with MtF gender dysphoria.\n\n\nCONCLUSIONS\nXbaI-rs9340799 is involved in FtM gender dysphoria in adults. Our findings suggest different genetic programs for gender dysphoria in men and women. Cortés-Cortés J, Fernández R, Teijeiro N, et al. Genotypes and Haplotypes of the Estrogen Receptor α Gene (ESR1) Are Associated With Female-to-Male Gender Dysphoria. J Sex Med 2017;14:464-472.", "title": "" }, { "docid": "c4d0a1cd8a835dc343b456430791035b", "text": "Social networks offer an invaluable amount of data from which useful information can be obtained on the major issues in society, among which crime stands out. Research about information extraction of criminal events in Social Networks has been done primarily in English language, while in Spanish, the problem has not been addressed. This paper propose a system for extracting spatio-temporally tagged tweets about crime events in Spanish language. In order to do so, it uses a thesaurus of criminality terms and a NER (named entity recognition) system to process the tweets and extract the relevant information. The NER system is based on the implementation OSU Twitter NLP Tools, which has been enhanced for Spanish language. Our results indicate an improved performance in relation to the most relevant tools such as Standford NER and OSU Twitter NLP Tools, achieving 80.95% precision, 59.65% recall and 68.69% F-measure. The end result shows the crime information broken down by place, date and crime committed through a webservice.", "title": "" }, { "docid": "489015cc236bd20f9b2b40142e4b5859", "text": "We present an experimental study which demonstrates that model checking techniques can be effective in finding synchronization errors in safety critical software when they are combined with a design for verification approach. We apply the concurrency controller design pattern to the implementation of the synchronization operations in Java programs. This pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them using interfaces specified as finite state machines. The behavior of a concurrency controller can be verified with respect to arbitrary numbers of threads using infinite state model checking techniques, and the threads which use the controller classes can be checked for interface violations using finite state model checking techniques. We present techniques for thread isolation which enables us to analyze each thread in the program separately during interface verification. We conducted an experimental study investigating the effectiveness of the presented design for verification approach on safety critical air traffic control software. In this study, we first reengineered the Tactical Separation Assisted Flight Environment (TSAFE) software using the concurrency controller design pattern. Then, using fault seeding, we created 40 faulty versions of TSAFE and used both infinite and finite state verification techniques for finding the seeded faults. 
The experimental study demonstrated the effectiveness of the presented modular verification approach and resulted in a classification of faults that can be found using the presented approach.", "title": "" }, { "docid": "ae8292c58a58928594d5f3730a6feacf", "text": "Photoplethysmography (PPG) signals, captured using smart phones are generally noisy in nature. Although they have been successfully used to determine heart rate from frequency domain analysis, further indirect markers like blood pressure (BP) require time domain analysis for which the signal needs to be substantially cleaned. In this paper we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of PPG signal to near zero. Furthermore it models each cycle of PPG signal as a sum of 2 Gaussian functions which is a novel contribution of the method. We show that, the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state of the art method that uses the 2-element Windkessel model on features derived from raw PPG signal, captured from an Android phone.", "title": "" }, { "docid": "fc2f99fff361e68f154d88da0739bac4", "text": "Mondor's disease is characterized by thrombophlebitis of the superficial veins of the breast and the chest wall. The list of causes is long. Various types of clothing, mainly tight bras and girdles, have been postulated as causes. We report a case of a 34-year-old woman who referred typical symptoms and signs of Mondor's disease, without other possible risk factors, and showed the cutaneous findings of the tight bra. Therefore, after distinguishing benign causes of Mondor's disease from hidden malignant causes, the clinicians should consider this clinical entity.", "title": "" }, { "docid": "269e2f8bca42d5369f9337aea6191795", "text": "Today, exposure to new and unfamiliar environments is a necessary part of daily life. Effective communication of location-based information through location-based services has become a key concern for cartographers, geographers, human-computer interaction and professional designers alike. Recently, much attention was directed towards Augmented Reality (AR) interfaces. Current research, however, focuses primarily on computer vision and tracking, or investigates the needs of urban residents, already familiar with their environment. Adopting a user-centred design approach, this paper reports findings from an empirical mobile study investigating how tourists acquire knowledge about an unfamiliar urban environment through AR browsers. Qualitative and quantitative data was used in the development of a framework that shifts the perspective towards a more thorough understanding of the overall design space for such interfaces. The authors analysis provides a frame of reference for the design and evaluation of mobile AR interfaces. The authors demonstrate the application of the framework with respect to optimization of current design of AR.", "title": "" }, { "docid": "95fe3badecc7fa92af6b6aa49b6ff3b2", "text": "As low-resolution position sensors, a high placement accuracy of Hall-effect sensors is hard to achieve. Accordingly, a commutation angle error is generated. The commutation angle error will inevitably increase the loss of the low inductance motor and even cause serious consequence, which is the abnormal conduction of a freewheeling diode in the unexcited phase especially at high speed. 
In this paper, the influence of the commutation angle error on the power loss for the high-speed brushless dc motor with low inductance and nonideal back electromotive force in a magnetically suspended control moment gyro (MSCMG) is analyzed in detail. In order to achieve low steady-state loss of an MSCMG for space application, a straightforward method of self-compensation of commutation angle based on dc-link current is proposed. Both simulation and experimental results confirm the feasibility and effectiveness of the proposed method.", "title": "" }, { "docid": "0b6ce2e4f3ef7f747f38068adef3da54", "text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.", "title": "" }, { "docid": "488c7437a32daec6fbad12e07bb31f4c", "text": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.", "title": "" }, { "docid": "cd3d9bb066729fc7107c0fef89f664fe", "text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. 
In Studies 1 and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for dispositional variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the Robbers Cave model), found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.", "title": "" } ]
scidocsrr
a67a6049fe809bf7f232ba7aed418aa2
Use of SIMD Vector Operations to Accelerate Application Code Performance on Low-Powered ARM and Intel Platforms
[ { "docid": "9200498e7ef691b83bf804d4c5581ba2", "text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.", "title": "" } ]
[ { "docid": "39070a1f503e60b8709050fc2a250378", "text": "Plants in their natural habitats adapt to drought stress in the environment through a variety of mechanisms, ranging from transient responses to low soil moisture to major survival mechanisms of escape by early flowering in absence of seasonal rainfall. However, crop plants selected by humans to yield products such as grain, vegetable, or fruit in favorable environments with high inputs of water and fertilizer are expected to yield an economic product in response to inputs. Crop plants selected for their economic yield need to survive drought stress through mechanisms that maintain crop yield. Studies on model plants for their survival under stress do not, therefore, always translate to yield of crop plants under stress, and different aspects of drought stress response need to be emphasized. The crop plant model rice ( Oryza sativa) is used here as an example to highlight mechanisms and genes for adaptation of crop plants to drought stress.", "title": "" }, { "docid": "d7a143bdb62e4aaeaf18b0aabe35588e", "text": "BACKGROUND\nShort-acting insulin analogue use for people with diabetes is still controversial, as reflected in many scientific debates.\n\n\nOBJECTIVES\nTo assess the effects of short-acting insulin analogues versus regular human insulin in adults with type 1 diabetes.\n\n\nSEARCH METHODS\nWe carried out the electronic searches through Ovid simultaneously searching the following databases: Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R) (1946 to 14 April 2015), EMBASE (1988 to 2015, week 15), the Cochrane Central Register of Controlled Trials (CENTRAL; March 2015), ClinicalTrials.gov and the European (EU) Clinical Trials register (both March 2015).\n\n\nSELECTION CRITERIA\nWe included all randomised controlled trials with an intervention duration of at least 24 weeks that compared short-acting insulin analogues with regular human insulins in the treatment of adults with type 1 diabetes who were not pregnant.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted data and assessed trials for risk of bias, and resolved differences by consensus. We graded overall study quality using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) instrument. We used random-effects models for the main analyses and presented the results as odds ratios (OR) with 95% confidence intervals (CI) for dichotomous outcomes.\n\n\nMAIN RESULTS\nWe identified nine trials that fulfilled the inclusion criteria including 2693 participants. The duration of interventions ranged from 24 to 52 weeks with a mean of about 37 weeks. The participants showed some diversity, mainly with regard to diabetes duration and inclusion/exclusion criteria. The majority of the trials were carried out in the 1990s and participants were recruited from Europe, North America, Africa and Asia. None of the trials was carried out in a blinded manner so that the risk of performance bias, especially for subjective outcomes such as hypoglycaemia, was present in all of the trials. Furthermore, several trials showed inconsistencies in the reporting of methods and results.The mean difference (MD) in glycosylated haemoglobin A1c (HbA1c) was -0.15% (95% CI -0.2% to -0.1%; P value < 0.00001; 2608 participants; 9 trials; low quality evidence) in favour of insulin analogues. 
The comparison of the risk of severe hypoglycaemia between the two treatment groups showed an OR of 0.89 (95% CI 0.71 to 1.12; P value = 0.31; 2459 participants; 7 trials; very low quality evidence). For overall hypoglycaemia, also taking into account mild forms of hypoglycaemia, the data were generally of low quality, but also did not indicate substantial group differences. Regarding nocturnal severe hypoglycaemic episodes, two trials reported statistically significant effects in favour of the insulin analogue, insulin aspart. However, due to inconsistent reporting in publications and trial reports, the validity of the result remains questionable.We also found no clear evidence for a substantial effect of insulin analogues on health-related quality of life. However, there were few results only based on subgroups of the trial populations. None of the trials reported substantial effects regarding weight gain or any other adverse events. No trial was designed to investigate possible long-term effects (such as all-cause mortality, diabetic complications), in particular in people with diabetes related complications.\n\n\nAUTHORS' CONCLUSIONS\nOur analysis suggests only a minor benefit of short-acting insulin analogues on blood glucose control in people with type 1 diabetes. To make conclusions about the effect of short acting insulin analogues on long-term patient-relevant outcomes, long-term efficacy and safety data are needed.", "title": "" }, { "docid": "91123d18f56d5aef473394e871c099ec", "text": "Image-to-Image translation was proposed as a general form of many image learning problems. While generative adversarial networks were successfully applied on many image-to-image translations, many models were limited to specific translation tasks and were difficult to satisfy practical needs. In this work, we introduce a One-to-Many conditional generative adversarial network, which could learn from heterogeneous sources of images. This is achieved by training multiple generators against a discriminator in synthesized learning way. This framework supports generative models to generate images in each source, so output images follow corresponding target patterns. Two implementations, hybrid fake and cascading learning, of the synthesized adversarial training scheme are also proposed, and experimented on two benchmark datasets, UTZap50K and MVOD5K, as well as a new high-quality dataset BehTex7K. We consider five challenging image-to-image translation tasks: edges-to-photo, edges-to-similar-photo translation on UTZap50K, cross-view translation on MVOD5K, and grey-to-color, grey-to-Oil-Paint on BehTex7K. We show that both implementations are able to faithfully translate from an image to another image in edges-to-photo, edges-to-similar-photo, grey-to-color, and grey-to-Oil-Paint translation tasks. The quality of output images in cross-view translation need to be further boosted.", "title": "" }, { "docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0", "text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. 
To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.", "title": "" }, { "docid": "20f05b48fa88283d649a3bcadf2ed818", "text": "A great variety of native and introduced plant species were used as foods, medicines and raw materials by the Rumsen and Mutsun Costanoan peoples of central California. The information presented here has been abstracted from original unpublished field notes recorded during the 1920s and 1930s by John Peabody Harrington, who also directed the collection of some 500 plant specimens. The nature of Harrington’s data and their significance for California ethnobotany are described, followed by a summary of information on the ethnographic uses of each plant.", "title": "" }, { "docid": "6b8be9199593200a58b4d265687fb1ae", "text": "China is a large agricultural country with the largest population in the world. This creates a high demand for food, which is prompting the study of high quality and high-yielding crops. China's current agricultural production is sufficient to feed the nation; however, compared with developed countries agricultural farming is still lagging behind, mainly due to the fact that the system of growing agricultural crops is not based on maximizing output, the latter would include scientific sowing, irrigation and fertilization. In the past few years many seasonal fruits have been offered for sale in markets, but these crops are grown in traditional backward agricultural greenhouses and large scale changes are needed to modernize production. The reform of small-scale greenhouse agricultural production is relatively easy and could be implemented. The concept of the Agricultural Internet of Things utilizes networking technology in agricultural production, the hardware part of this agricultural IoT include temperature, humidity and light sensors and processors with a large data processing capability; these hardware devices are connected by short-distance wireless communication technology, such as Bluetooth, WIFI or Zigbee. In fact, Zigbee technology, because of its convenient networking and low power consumption, is widely used in the agricultural internet. The sensor network is combined with well-established web technology, in the form of a wireless sensor network, to remotely control and monitor data from the sensors.In this paper a smart system of greenhouse management based on the Internet of Things is proposed using sensor networks and web-based technologies. The system consists of sensor networks and asoftware control system. 
The sensor network consists of the master control center and various sensors using Zigbee protocols. The hardware control center communicates with a middleware system via serial network interface converters. The middleware communicates with a hardware network using an underlying interface and it also communicates with a web system using an upper interface. The top web system provides users with an interface to view and manage the hardware facilities ; administrators can thus view the status of agricultural greenhouses and issue commands to the sensors through this system in order to remotely manage the temperature, humidity and irrigation in the greenhouses. The main topics covered in this paper are:1. To research the current development of new technologies applicable to agriculture and summarizes the strong points concerning the application of the Agricultural Internet of Things both at home and abroad. Also proposed are some new methods of agricultural greenhouse management.2. An analysis of system requirements, the users’ expectations of the system and the response to needs analysis, and the overall design of the system to determine it’s architecture.3. Using software engineering to ensure that functional modules of the system, as far as possible, meet the requirements of high cohesion and low coupling between modules, also detailed design and implementation of each module is considered.", "title": "" }, { "docid": "0366ab38a45f45a8655f4beb6d11d358", "text": "BACKGROUND\nDeep learning methods for radiomics/computer-aided diagnosis (CADx) are often prohibited by small datasets, long computation time, and the need for extensive image preprocessing.\n\n\nAIMS\nWe aim to develop a breast CADx methodology that addresses the aforementioned issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features.\n\n\nMATERIALS & METHODS\nWe present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast enhanced-MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]).\n\n\nRESULTS\nFrom ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in terms of AUC as compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions. (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]).\n\n\nDISCUSSION/CONCLUSION\nWe proposed a novel breast CADx methodology that can be used to more effectively characterize breast lesions in comparison to existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing.", "title": "" }, { "docid": "ca75798a9090810682f99400f6a8ff4e", "text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. 
We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.", "title": "" }, { "docid": "a129f0b1c95e17d7e6a587121b267fa9", "text": "Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors is divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and the analysis method, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications.", "title": "" }, { "docid": "f6feb6789c0c9d2d5c354e73d2aaf9ad", "text": "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.", "title": "" }, { "docid": "a0f24500f3729b0a2b6e562114eb2a45", "text": "In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate the ability to produce small and low-cost UWB antennas with inkjet-printing technology which can enable compact, low-cost, and environmentally friendly wireless sensor network.", "title": "" }, { "docid": "35b286999957396e1f5cab6e2370ed88", "text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. 
However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.", "title": "" }, { "docid": "013b0ae55c64f322d61e1bf7e8d4c55a", "text": "Binary neural networks for object recognition are desirable especially for small and embedded systems because of their arithmetic and memory efficiency coming from the restriction of the bit-depth of network weights and activations. Neural networks in general have a tradeoff between the accuracy and efficiency in choosing a model architecture, and this tradeoff matters more for binary networks because of the limited bit-depth. This paper then examines the performance of binary networks by modifying architecture parameters (depth and width parameters) and reports the best-performing settings for specific datasets. These findings will be useful for designing binary networks for practical uses.", "title": "" }, { "docid": "64bcd606e039f731aec7cc4722a4d3cb", "text": "Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partialinformation setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.", "title": "" }, { "docid": "78e21364224b9aa95f86ac31e38916ef", "text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. 
Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "568bc5272373a4e3fd38304f2c381e0f", "text": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.", "title": "" }, { "docid": "8335faee33da234e733d8f6c95332ec3", "text": "Myanmar script uses no space between words and syllable segmentation represents a significant process in many NLP tasks such as word segmentation, sorting, line breaking and so on. In this study, a rulebased approach of syllable segmentation algorithm for Myanmar text is proposed. Segmentation rules were created based on the syllable structure of Myanmar script and a syllable segmentation algorithm was designed based on the created rules. A segmentation program was developed to evaluate the algorithm. A training corpus containing 32,283 Myanmar syllables was tested in the program and the experimental results show an accuracy rate of 99.96% for segmentation.", "title": "" }, { "docid": "0a967b130a6c4dbc93d6b135eeb3c0db", "text": "This paper presents a universal ontology for smart environments aiming to overcome the limitations of the existing ontologies. We enrich our ontology by adding new environmental aspects such as the referentiality and environmental change, that can be used to describe domains as well as applications. We show through a case study how our ontology is used and integrated in a self-organising middleware for smart environments.", "title": "" }, { "docid": "999c0785975052bda742f0620e95fe84", "text": "List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the Java Concurrency Package of JDK 1.6.0. However, Michael’s lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. 
It also has a novel wait-free membership test operation (as opposed to Michael’s lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael’s lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael’s. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael’s algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java’s RTTI mechanism to create pointers that can be atomically marked).", "title": "" }, { "docid": "7d7db3f70ba6bcb5f9bf615bd8110eba", "text": "Freshwater and energy are essential commodities for well being of mankind. Due to increasing population growth on the one hand, and rapid industrialization on the other, today’s world is facing unprecedented challenge of meeting the current needs for these two commodities as well as ensuring the needs of future generations. One approach to this global crisis of water and energy supply is to utilize renewable energy sources to produce freshwater from impaired water sources by desalination. Sustainable practices and innovative desalination technologies for water reuse and energy recovery (staging, waste heat utilization, hybridization) have the potential to reduce the stress on the existing water and energy sources with a minimal impact to the environment. This paper discusses existing and emerging desalination technologies and possible combinations of renewable energy sources to drive them and associated desalination costs. It is suggested that a holistic approach of coupling renewable energy sources with technologies for recovery, reuse, and recycle of both energy and water can be a sustainable and environment friendly approach to meet the world’s energy and water needs. High capital costs for renewable energy sources for small-scale applications suggest that a hybrid energy source comprising both grid-powered energy and renewable energy will reduce the desalination costs considering present economics of energy. 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
8b36ff5c2e3231681101f569f07189d4
Physical Human Activity Recognition Using Wearable Sensors
[ { "docid": "e700afa9064ef35f7d7de40779326cb0", "text": "Human activity recognition is important for many applications. This paper describes a human activity recognition framework based on feature selection techniques. The objective is to identify the most important features to recognize human activities. We first design a set of new features (called physical features) based on the physical parameters of human motion to augment the commonly used statistical features. To systematically analyze the impact of the physical features on the performance of the recognition system, a single-layer feature selection framework is developed. Experimental results indicate that physical features are always among the top features selected by different feature selection methods and the recognition accuracy is generally improved to 90%, or 8% better than when only statistical features are used. Moreover, we show that the performance is further improved by 3.8% by extending the single-layer framework to a multi-layer framework which takes advantage of the inherent structure of human activities and performs feature selection and classification in a hierarchical manner.", "title": "" }, { "docid": "931c75847fdfec787ad6a31a6568d9e3", "text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.", "title": "" } ]
[ { "docid": "bdffdfe92df254d0b13c1a1c985c0400", "text": "We propose a model to automatically describe changes introduced in the source code of a program using natural language. Our method receives as input a set of code commits, which contains both the modifications and message introduced by an user. These two modalities are used to train an encoder-decoder architecture. We evaluated our approach on twelve real world open source projects from four different programming languages. Quantitative and qualitative results showed that the proposed approach can generate feasible and semantically sound descriptions not only in standard in-project settings, but also in a cross-project setting.", "title": "" }, { "docid": "d7d0fa6279b356d37c2f64197b3d721d", "text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.", "title": "" }, { "docid": "24a117cf0e59591514dd8630bcd45065", "text": "This work presents a coarse-grained distributed genetic algorithm (GA) for RNA secondary structure prediction. This research builds on previous work and contains two new thermodynamic models, INN and INN-HB, which add stacking-energies using base pair adjacencies. Comparison tests were performed against the original serial GA on known structures that are 122, 543, and 784 nucleotides in length on a wide variety of parameter settings. The effects of the new models are investigated, the predicted structures are compared to known structures and the GA is compared against a serial GA with identical models. Both algorithms perform well and are able to predict structures with high accuracy for short sequences.", "title": "" }, { "docid": "bd60ecd918eba443e0772d4edbec6ba4", "text": "Le ModeÁ le de Culture Fit explique la manieÁ re dont l'environnement socioculturel influence la culture interne au travail et les pratiques de la direction des ressources humaines. Ce modeÁ le a e te teste sur 2003 salarie s d'entreprises prive es dans 10 pays. 
Les participants ont rempli un questionnaire de 57 items, destine aÁ mesurer les perceptions de la direction sur 4 dimensions socioculturelles, 6 dimensions de culture interne au travail, et les pratiques HRM (Management des Ressources Humaines) dans 3 zones territoiriales. Une analyse ponde re e par re gressions multiples, au niveau individuel, a montre que les directeurs qui caracte risaient leurs environnement socio-culturel de facË on fataliste, supposaient aussi que les employe s n'e taient pas malle ables par nature. Ces directeurs ne pratiquaient pas l'enrichissement des postes et donnaient tout pouvoir au controà le et aÁ la re mune ration en fonction des performances. Les directeurs qui appre ciaient une grande loyaute des APPLIED PSYCHOLOGY: AN INTERNATIONAL REVIEW, 2000, 49 (1), 192±221", "title": "" }, { "docid": "b91833ae4e659fc1a0943eadd5da955d", "text": "In this paper, we present a factor graph framework to solve both estimation and deterministic optimal control problems, and apply it to an obstacle avoidance task on Unmanned Aerial Vehicles (UAVs). We show that factor graphs allow us to consistently use the same optimization method, system dynamics, uncertainty models and other internal and external parameters, which potentially improves the UAV performance as a whole. To this end, we extended the modeling capabilities of factor graphs to represent nonlinear dynamics using constraint factors. For inference, we reformulate Sequential Quadratic Programming as an optimization algorithm on a factor graph with nonlinear constraints. We demonstrate our framework on a simulated quadrotor in an obstacle avoidance application.", "title": "" }, { "docid": "dbde47a4142bffc2bcbda988781e5229", "text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.", "title": "" }, { "docid": "525f188960eeb7a66ef9734118609f79", "text": "Creativity is important for young children learning mathematics. However, much literature has claimed creativity in the learning of mathematics for young children is not adequately supported by teachers in the classroom due to such reasons as teachers’ poor college preparation in mathematics content knowledge, teachers’ negativity toward creative students, teachers’ occupational pressure, and low quality curriculum. The purpose of this grounded theory study was to generate a model that describes and explains how a particular group of early childhood teachers make sense of creativity in the learning of mathematics and how they think they can promote or fail to promote creativity in the classroom. In-depth interviews with 30 Kto Grade-3 teachers, participating in a graduate mathematics specialist certificate program in a medium-sized Midwestern city, were conducted. In contrast to previous findings, these teachers did view mathematics in young children (age 5 to 9) as requiring creativity, in ways that aligned with Sternberg and Lubart’s (1995) investment theory of creativity. 
Teachers felt they could support creativity in student learning and knew strategies for how to promote creativity in their practices.", "title": "" }, { "docid": "49f0371f84d7874a6ccc6f9dd0779d3b", "text": "Managing customer satisfaction has become a crucial issue in fast-food industry. This study aims at identifying determinant factor related to customer satisfaction in fast-food restaurant. Customer data are analyzed by using data mining method with two classification techniques such as decision tree and neural network. Classification models are developed using decision tree and neural network to determine underlying attributes of customer satisfaction. Generated rules are beneficial for managerial and practical implementation in fast-food industry. Decision tree and neural network yield more than 80% of predictive accuracy.", "title": "" }, { "docid": "f8f36ef5822446478b154c9d98847070", "text": "The objective of this research is to improve traffic safety through collecting and distributing up-to-date road surface condition information using mobile phones. Road surface condition information is seen useful for both travellers and for the road network maintenance. The problem we consider is to detect road surface anomalies that, when left unreported, can cause wear of vehicles, lesser driving comfort and vehicle controllability, or an accident. In this work we developed a pattern recognition system for detecting road condition from accelerometer and GPS readings. We present experimental results from real urban driving data that demonstrate the usefulness of the system. Our contributions are: 1) Performing a throughout spectral analysis of tri-axis acceleration signals in order to get reliable road surface anomaly labels. 2) Comprehensive preprocessing of GPS and acceleration signals. 3) Proposing a speed dependence removal approach for feature extraction and demonstrating its positive effect in multiple feature sets for the road surface anomaly detection task. 4) A framework for visually analyzing the classifier predictions over the validation data and labels.", "title": "" }, { "docid": "d9493bec4d01a39ce230b82a98800bb3", "text": "Biometrics, an integral component of Identity Science, is widely used in several large-scale-county-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in Unique Identification Authority of India’s Aadhaar Program and the United Arab Emirate’s border security programs, whereas the periocular recognition is used to augment the performance of face or iris when only ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms and the limitations of each of the biometric traits and information fusion approaches which combine ocular modalities with other modalities. We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development. ! 2015 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "d956c805ee88d1b0ca33ce3f0f838441", "text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1", "title": "" }, { "docid": "9f74e665d5ca8c84d7b17806163a16ee", "text": "‘‘This is really still a nightmare — a German nightmare,’’ asserted Mechtilde Maier, Deutsche Telekom’s head of diversity. A multinational company with offices in about 50 countries, Deutsche Telekom is struggling at German headquarters to bring women into its leadership ranks. It is a startling result; at headquarters, one might expect the greatest degree of compliance to commands on high. With only 13% of its leadership positions represented by women, the headquarters is lagging far behind its offices outside Germany, which average 24%. Even progress has been glacial, with an improvement of a mere 0.5% since 2010 versus a 4% increase among its foreign subsidiaries. The phenomenon at Deutsche Telekom reflects a broader pattern, one that manifests in other organizations, in other nations, and in the highest reaches of leadership, including the boardroom. According to the Deloitte Global Centre for Corporate Governance, only about 12% of boardroom seats in the United States are held by women and less than 10% in the United Kingdom (9%), China (8.5%), and India (5%). In stark contrast, these rates are 2—3 times higher in Bulgaria (30%) and Norway (approximately 40%). Organizations are clearly successful in some nations more than others in promoting women to leadership ranks, but why? Instead of a culture’s wealth, values, or practices, our own research concludes that the emergence of women as leaders can be explained in part by a culture’s tightness. Cultural tightness refers to the degree to which a culture has strong norms and low tolerance for deviance. In a tight culture, people might be arrested for spitting, chewing gum, or jaywalking. In loose cultures, although the same behaviors may be met with disapproving glances or fines, they are not sanctioned to the same degree nor are they necessarily seen as taboo. We discovered that women are more likely to emerge as leaders in loose than tight cultures, but with an important exception. Women can emerge as leaders in tight cultures too. Our discoveries highlight that, to promote women to leadership positions, global leaders need to employ strategies that are compatible with the culture’s tightness. 
Before presenting our findings and their implications, we first discuss the process by which leaders tend to emerge.", "title": "" }, { "docid": "9983792c37341cca7666e2f0d7b42d2b", "text": "Domain modeling is an important step in the transition from natural-language requirements to precise specifications. For large systems, building a domain model manually is a laborious task. Several approaches exist to assist engineers with this task, whereby candidate domain model elements are automatically extracted using Natural Language Processing (NLP). Despite the existing work on domain model extraction, important facets remain under-explored: (1) there is limited empirical evidence about the usefulness of existing extraction rules (heuristics) when applied in industrial settings; (2) existing extraction rules do not adequately exploit the natural-language dependencies detected by modern NLP technologies; and (3) an important class of rules developed by the information retrieval community for information extraction remains unutilized for building domain models.\n Motivated by addressing the above limitations, we develop a domain model extractor by bringing together existing extraction rules in the software engineering literature, extending these rules with complementary rules from the information retrieval literature, and proposing new rules to better exploit results obtained from modern NLP dependency parsers. We apply our model extractor to four industrial requirements documents, reporting on the frequency of different extraction rules being applied. We conduct an expert study over one of these documents, investigating the accuracy and overall effectiveness of our domain model extractor.", "title": "" }, { "docid": "8fa721c98dac13157bcc891c06561ec7", "text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.", "title": "" }, { "docid": "74beaea9eccab976dc1ee7b2ddf3e4ca", "text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. 
Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.", "title": "" }, { "docid": "36fbc5f485d44fd7c8726ac0df5648c0", "text": "We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting : Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.", "title": "" }, { "docid": "d5eb643385b573706c48cbb2cb3262df", "text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.", "title": "" }, { "docid": "ec9f793761ebd5199c6a2cc8c8215ac4", "text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. 
Some prototypes have been fabricated and measured, displaying a very good performance.", "title": "" }, { "docid": "b2aad34d91b5c38f794fc2577593798c", "text": "We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely but is assumed instead to lie between two extreme values min and max These bounds could be inferred from extreme values of the implied volatilities of liquid options or from high low peaks in historical stock or option implied volatilities They can be viewed as de ning a con dence interval for future volatility values We show that the extremal non arbitrageable prices for the derivative asset which arise as the volatility paths vary in such a band can be described by a non linear PDE which we call the Black Scholes Barenblatt equation In this equation the pricing volatility is selected dynamically from the two extreme values min max according to the convexity of the value function A simple algorithm for solving the equation by nite di erencing or a trinomial tree is presented We show that this model captures the importance of diversi cation in managing derivatives positions It can be used systematically to construct e cient hedges using other derivatives in conjunction with the underlying asset y Courant Institute of Mathematical Sciences Mercer st New York NY Institute for Advanced Study Princeton NJ J P Morgan Securities New York NY The uncertain volatility model According to Arbitrage Pricing Theory if the market presents no arbitrage opportunities there exists a probability measure on future scenarios such that the price of any security is the expectation of its discounted cash ows Du e Such a probability is known as a mar tingale measure Harrison and Kreps or a pricing measure Determining the appropriate martingale measure associated with a sector of the security space e g the stock of a company and a riskless short term bond permits the valuation of any contingent claim based on these securities However pricing measures are often di cult to calculate precisely and there may exist more than one measure consistent with a given market It is useful to view the non uniqueness of pricing measures as re ecting the many choices for derivative asset prices that can exist in an uncertain economy For example option prices re ect the market s expectation about the future value of the underlying asset as well as its projection of future volatility Since this projection changes as the market reacts to new information implied volatility uctuates unpredictably In these circumstances fair option values and perfectly replicating hedges cannot be determined with certainty The existence of so called volatility risk in option trading is a concrete manifestation of market incompleteness This paper addresses the issue of derivative asset pricing and hedging in an uncertain future volatility environment For this purpose instead of choosing a pricing model that incorporates a complete view of the forward volatility as a single number or a predetermined function of time and price term structure of volatilities or even a stochastic process with given statistics we propose to operate under the less stringent assumption that that the volatility of future prices is restricted to lie in a bounded set but is otherwise undetermined For simplicity we restrict our discussion to derivative securities based on a single liquidly traded stock which pays no dividends over the contract s lifetime and assume a constant 
interest rate. The basic assumption then reduces to postulating that, under all admissible pricing measures, future volatility paths will be restricted to lie within a band. Accordingly, we assume that the paths followed by future stock prices are Itô processes, viz. dS_t = S_t (σ_t dZ_t + μ_t dt), where σ_t and μ_t are non-anticipative functions such that", "title": "" }, { "docid": "9aefccc6fc6f628d374c1ffccfcc656a", "text": "Keeping up with rapidly growing research fields, especially when there are multiple interdisciplinary sources, requires substantial effort for researchers, program managers, or venture capital investors. Current theories and tools are directed at finding a paper or website, not gaining an understanding of the key papers, authors, controversies, and hypotheses. This report presents an effort to integrate statistics, text analytics, and visualization in a multiple coordinated window environment that supports exploration. Our prototype system, Action Science Explorer (ASE), provides an environment for demonstrating principles of coordination and conducting iterative usability tests of them with interested and knowledgeable users. We developed an understanding of the value of reference management, statistics, citation context extraction, natural language summarization for single and multiple documents, filters to interactively select key papers, and network visualization to see citation patterns and identify clusters. The three-phase usability study guided our revisions to ASE and led us to improve the testing methods.", "title": "" } ]
scidocsrr
f0f07e5aec207f7edfc75e2136b028a7
The role of RFID in agriculture: Applications, limitations and challenges
[ { "docid": "dc67945b32b2810a474acded3c144f68", "text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.", "title": "" } ]
[ { "docid": "48168ed93d710d3b85b7015f2c238094", "text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.", "title": "" }, { "docid": "0685c33de763bdedf2a1271198569965", "text": "The use of virtual-reality technology in the areas of rehabilitation and therapy continues to grow, with encouraging results being reported for applications that address human physical, cognitive, and psychological functioning. This article presents a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the field of VR rehabilitation and therapy. The SWOT analysis is a commonly employed framework in the business world for analyzing the factors that influence a company's competitive position in the marketplace with an eye to the future. However, the SWOT framework can also be usefully applied outside of the pure business domain. A quick check on the Internet will turn up SWOT analyses for urban-renewal projects, career planning, website design, youth sports programs, and evaluation of academic research centers, and it becomes obvious that it can be usefully applied to assess and guide any organized human endeavor designed to accomplish a mission. It is hoped that this structured examination of the factors relevant to the current and future status of VR rehabilitation will provide a good overview of the key issues and concerns that are relevant for understanding and advancing this vital application area.", "title": "" }, { "docid": "10d8bbea398444a3fb6e09c4def01172", "text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. 
Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.", "title": "" }, { "docid": "f47019a78ee833dcb8c5d15a4762ccf9", "text": "It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.", "title": "" }, { "docid": "2f1ba4ba5cff9a6e614aa1a781bf1b13", "text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.", "title": "" }, { "docid": "70c6aaf0b0fc328c677d7cb2249b68bf", "text": "In this paper, we discuss and review how combined multiview imagery from satellite to street level can benefit scene analysis. Numerous works exist that merge information from remote sensing and images acquired from the ground for tasks such as object detection, robots guidance, or scene understanding. 
What makes the combination of overhead and street-level images challenging are the strongly varying viewpoints, the different scales of the images, their illuminations and sensor modality, and time of acquisition. Direct (dense) matching of images on a per-pixel basis is thus often impossible, and one has to resort to alternative strategies that will be discussed in this paper. For such purpose, we review recent works that attempt to combine images taken from the ground and overhead views for purposes like scene registration, reconstruction, or classification. After the theoretical review, we present three recent methods to showcase the interest and potential impact of such fusion on real applications (change detection, image orientation, and tree cataloging), whose logic can then be reused to extend the use of ground-based images in remote sensing and vice versa. Through this review, we advocate that cross fertilization between remote sensing, computer vision, and machine learning is very valuable to make the best of geographic data available from Earth observation sensors and ground imagery. Despite its challenges, we believe that integrating these complementary data sources will lead to major breakthroughs in Big GeoData. It will open new perspectives for this exciting and emerging field.", "title": "" }, { "docid": "b51fcfa32dbcdcbcc49f1635b44601ed", "text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.", "title": "" }, { "docid": "2956f80e896a660dbd268f9212e6d00f", "text": "Writing as a productive skill in EFL classes is outstandingly significant. In writing classes there needs to be an efficient relationship between the teacher and students. The teacher as the only audience in many writing classes responds to students’ writing. In the early part of the 21 century the range of technologies available for use in classes has become very diverse and the ways they are being used in classrooms all over the world might affect the outcome we expect from our classes. As the present generations of students are using new technologies, the application of these recent technologies in classes might be useful. Using technology in writing classes provides opportunities for students to hand their written work to the teacher without the need for any face-to-face interaction. This present study investigates the effect of Edmodo on EFL learners’ writing performance. A quasi-experimental design was used in this study. 
The participants were 40 female advanced-level students attending advanced writing classes at Irana English Institute, Razan Hamedan. The focus was on the composition writing ability. The students were randomly assigned to two groups, experimental and control. Edmodo was used in the experimental group. Mann-Whitney U test was used for data analysis; the results indicated that the use of Edmodo in writing was more effective on the writing performance of the EFL learners participating in this study.", "title": "" }, { "docid": "1d8cd32e2a2748b9abd53cf32169d798", "text": "Optimizing the weights of Artificial Neural Networks (ANNs) is an important and complex task in the research of machine learning, due to the dependence of its performance on the success of the learning process and the training method. This paper reviews the implementation of meta-heuristic algorithms in ANNs’ weight optimization by studying their advantages and disadvantages, giving consideration to some meta-heuristic members such as the Genetic Algorithm, Particle Swarm Optimization and the recently introduced meta-heuristic algorithm called Harmony Search Algorithm (HSA). Also, the application of local search based algorithms to optimize the ANNs’ weights and their benefits as well as their limitations are briefly elaborated. Finally, a comparison between local search methods and global optimization methods is carried out to speculate on the trends in the progress of ANNs’ weight optimization in current research.", "title": "" }, { "docid": "3ece1c9f619899d5bab03c24fd3cd34a", "text": "A new technique for obtaining high performance, low power, radio direction finding (RDF) using a single receiver is presented. For man-portable applications, multichannel systems consume too much power, are too expensive, and are too heavy to easily be carried by a single individual. Most single channel systems are not accurate enough or do not provide the capability to listen while direction finding (DF) is being performed. By employing feedback in a pseudo-Doppler system via a vector modulator in the IF of a single receiver and an adaptive algorithm to control it, the accuracy of a pseudo-Doppler system can be enhanced to the accuracy of an interferometer based system without the expense of a multichannel receiver. And, it will maintain audio listen-through while direction finding is being performed, all with a single inexpensive low power receiver. The use of these techniques provides performance not attainable by other single channel methods.", "title": "" }, { "docid": "6ac3d776d686f873ab931071c75aeed2", "text": "GridRPC, which is an RPC mechanism tailored for the Grid, is an attractive programming model for Grid computing. This paper reports on the design and implementation of a GridRPC programming system called Ninf-G. Ninf-G is a reference implementation of the GridRPC API which has been proposed for standardization at the Global Grid Forum. In this paper, we describe the design, implementations and typical usage of Ninf-G. A preliminary performance evaluation in both WAN and LAN environments is also reported. Implemented on top of the Globus Toolkit, Ninf-G provides a simple and easy programming interface based on standard Grid protocols and the API for Grid Computing. The overhead of remote procedure calls in Ninf-G is acceptable in both WAN and LAN environments.", "title": "" }, { "docid": "f152838edb23a40e895dea2e1ee709d1", "text": "We present two uncommon cases of adolescent girls with hair-thread strangulation of the labia minora.
The first 14-year-old girl presented with a painful pedunculated labial lump (Fig. 1). The lesion was covered with exudate. She was examined under sedation and found a coil of long hair forming a tourniquet around a labial segment. Thread removal resulted to immediate relief from pain, and gradual return to normal appearance. Another 10-year-old girl presented with a similar labial swelling. The recent experience of the first case led us straight to the problem. A long hair-thread was found at the neck of the lesion. Hair removal resulted in settling of the pain. The labial swelling subsided in few days.", "title": "" }, { "docid": "ef09bc08cc8e94275e652e818a0af97f", "text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.", "title": "" }, { "docid": "a1f05b8954434a782f9be3d9cd10bb8b", "text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.", "title": "" }, { "docid": "38301e7db178d7072baf0226a1747c03", "text": "We present an algorithm for ray tracing displacement maps that requires no additional storage over the base model. Displacement maps are rarely used in ray tracing due to the cost associated with storing and intersecting the displaced geometry. This is unfortunate because displacement maps allow the addition of large amounts of geometric complexity into models. Our method works for models composed of triangles with normals at the vertices. 
In addition, we discuss a special purpose displacement that creates a smooth surface that interpolates the triangle vertices and normals of a mesh. The combination allows relatively coarse models to be displacement mapped and ray traced effectively.", "title": "" }, { "docid": "3e8535bc48ce88ba6103a68dd3ad1d5d", "text": "This letter reports the concept and design of the active-braid, a novel bioinspired continuum manipulator with the ability to contract, extend, and bend in three-dimensional space with varying stiffness. The manipulator utilizes a flexible crossed-link helical array structure as its main supporting body, which is deformed by using two radial actuators and a total of six longitudinal tendons, analogously to the three major types of muscle layers found in muscular hydrostats. The helical array structure ensures that the manipulator behaves similarly to a constant volume structure (expanding while shortening and contracting while elongating). Numerical simulations and experimental prototypes are used in order to evaluate the feasibility of the concept.", "title": "" }, { "docid": "e0f84798289c06abcacd14df1df4a018", "text": "PARP inhibitors (PARPi), a cancer therapy targeting poly(ADP-ribose) polymerase, are the first clinically approved drugs designed to exploit synthetic lethality, a genetic concept proposed nearly a century ago. Tumors arising in patients who carry germline mutations in either BRCA1 or BRCA2 are sensitive to PARPi because they have a specific type of DNA repair defect. PARPi also show promising activity in more common cancers that share this repair defect. However, as with other targeted therapies, resistance to PARPi arises in advanced disease. In addition, determining the optimal use of PARPi within drug combination approaches has been challenging. Nevertheless, the preclinical discovery of PARPi synthetic lethality and the route to clinical approval provide interesting lessons for the development of other therapies. Here, we discuss current knowledge of PARP inhibitors and potential ways to maximize their clinical effectiveness.", "title": "" }, { "docid": "62d63357923c5a7b1ea21b8448e3cba3", "text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.", "title": "" }, { "docid": "931f8ada4fdf90466b0b9ff591fb67d1", "text": "Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. 
Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.", "title": "" } ]
scidocsrr
3cf3840371b5e9515a49b1c4f17bd44e
ICT Governance: A Reference Framework
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "33a9c1b32f211ea13a70b1ce577b71dc", "text": "In this work, we propose a face recognition library, with the objective of lowering the implementation complexity of face recognition features on applications in general. The library is based on Convolutional Neural Networks; a special kind of Neural Network specialized for image data. We present the main motivations for the use of face recognition, as well as the main interface for using the library features. We describe the overall architecture structure of the library and evaluated it on a large scale scenario. The proposed library achieved an accuracy of 98.14% when using a required confidence of 90%, and an accuracy of 99.86% otherwise. Keywords—Artificial Intelligence, CNNs, Face Recognition, Image Recognition, Machine Learning, Neural Networks.", "title": "" }, { "docid": "1876319faa49a402ded2af46a9fcd966", "text": "One, and two, and three police persons spring out of the shadows Down the corner comes one more And we scream into that city night: \" three plus one makes four! \" Well, they seem to think we're disturbing the peace But we won't let them make us sad 'Cause kids like you and me baby, we were born to add Born To Add, Sesame Street (sung to the tune of Bruce Springsteen's Born to Run) to Ursula Preface In October 1996, I got a position as a research assistant working on the Twenty-One project. The project aimed at providing a software architecture that supports a multilingual community of people working on local Agenda 21 initiatives in exchanging ideas and publishing their work. Local Agenda 21 initiatives are projects of local governments, aiming at sustainable processes in environmental , human, and economic terms. The projects cover themes like combating poverty, protecting the atmosphere, human health, freshwater resources, waste management, education, etc. Documentation on local Agenda 21 initiatives are usually written in the language of the local government, very much unlike documentation on research in e.g. information retrieval for which English is the language of international communication. Automatic cross-language retrieval systems are therefore a helpful tool in the international cooperation between local governments. Looking back, I regret not being more involved in the non-technical aspects of the Twenty-One project. To make up for this loss, many of the examples in this thesis are taken from the project's domain. Working on the Twenty-One project convinced me that solutions to cross-language information retrieval should explicitly combine translation models and retrieval models into one unifying framework. Working in a language technology group, the use of language models seemed a natural choice. A choice that simplifies things considerably for that matter. The use of language models for information retrieval practically reduces ranking to simply adding the occurrences of terms: complex weighting algorithms are no longer needed. \" Born to add \" is therefore the motto of this thesis. By adding out loud, it hopefully annoys-no offence, and with all due respect-some of the well-established information retrieval approaches, like Bruce Stringbean and The Sesame Street Band annoys the Sesame Street police. Acknowledgements The research presented in this thesis is funded in part by the European Union projects Twenty-One, Pop-Eye and Olive, and the Telematics Institute project Druid. 
I am most grateful to Wessel Kraaij of TNO-TPD …", "title": "" }, { "docid": "8e6efa696b960cf08cf1616efc123cbd", "text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.", "title": "" }, { "docid": "e6d4d23df1e6d21bd988ca462526fe15", "text": "Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.", "title": "" }, { "docid": "d58425a613f9daea2677d37d007f640e", "text": "Recently the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consist of HTM cells are constructed to spatial pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: learning phase which make the HTM cell only receive most frequent LLC codes, and inhibition phase which ensure that the output of HTM regions is sparse. The experimental results on Caltech 101 and UIUC-Sport dataset show the improvement on the original LLC & SPM based model.", "title": "" }, { "docid": "ab2c0a23ed71295ee4aa51baf9209639", "text": "An expert system to diagnose the main childhood diseases among the tweens is proposed. The diagnosis is made taking into account the symptoms that can be seen or felt. The childhood diseases have many common symptoms and some of them are very much alike. This creates many difficulties for the doctor to reach at a right decision or diagnosis. The proposed system can remove these difficulties and it is having knowledge of many childhood diseases. The proposed expert system is implemented using SWI-Prolog.", "title": "" }, { "docid": "263ac34590609435b2a104a385f296ca", "text": "Efficient computation of curvature-based energies is important for practical implementations of geometric modeling and physical simulation applications. 
Building on a simple geometric observation, we provide a version of a curvature-based energy expressed in terms of the Laplace operator acting on the embedding of the surface. The corresponding energy--being quadratic in positions--gives rise to a constant Hessian in the context of isometric deformations. The resulting isometric bending model is shown to significantly speed up common cloth solvers, and when applied to geometric modeling situations built onWillmore flow to provide runtimes which are close to interactive rates.", "title": "" }, { "docid": "d82c11c5a6981f1d3496e0838519704d", "text": "This paper presents a detailed study of the nonuniform bipolar conduction phenomenon under electrostatic discharge (ESD) events in single-finger NMOS transistors and analyzes its implications for the design of ESD protection for deep-submicron CMOS technologies. It is shown that the uniformity of the bipolar current distribution under ESD conditions is severely degraded depending on device finger width ( ) and significantly influenced by the substrate and gate-bias conditions as well. This nonuniform current distribution is identified as a root cause of the severe reduction in ESD failure threshold current for the devices with advanced silicided processes. Additionally, the concept of an intrinsic second breakdown triggering current ( 2 ) is introduced, which is substrate-bias independent and represents the maximum achievable ESD failure strength for a given technology. With this improved understanding of ESD behavior involved in advanced devices, an efficient design window can be constructed for robust deep submicron ESD protection.", "title": "" }, { "docid": "89513d2cf137e60bf7f341362de2ba84", "text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.", "title": "" }, { "docid": "26abfdd9af796a2903b0f7cef235b3b4", "text": "Argumentation mining is an advanced form of human language understanding by the machine. This is a challenging task for a machine. When sufficient explicit discourse markers are present in the language utterances, the argumentation can be interpreted by the machine with an acceptable degree of accuracy. 
However, in many real settings, the mining task is difficult due to the lack or ambiguity of the discourse markers, and the fact that a substantial amount of knowledge needed for the correct recognition of the argumentation, its composing elements and their relationships is not explicitly present in the text, but makes up the background knowledge that humans possess when interpreting language. In this article we focus on how the machine can automatically acquire the needed common sense and world knowledge. As very little research has been done in this respect, many of the ideas proposed in this article are tentative, but they are starting to be researched. We give an overview of the latest methods for human language understanding that map language to a formal knowledge representation that facilitates other tasks (for instance, a representation that is used to visualize the argumentation or that is easily shared in a decision or argumentation support system). Most current systems are trained on texts that are manually annotated. Then we go deeper into the new field of representation learning that nowadays is very much studied in computational linguistics. This field investigates methods for representing language as statistical concepts or as vectors, allowing straightforward methods of compositionality. The methods often use deep learning and its underlying neural network technologies to learn concepts from large text collections in an unsupervised way (i.e., without the need for manual annotations). We show how these methods can help the argumentation mining process, but also demonstrate that these methods need further research to automatically acquire the necessary background knowledge and more specifically common sense and world knowledge. We propose a number of ways to improve the learning of common sense and world knowledge by exploiting textual and visual data, and touch upon how we can integrate the learned knowledge in the argumentation mining process.", "title": "" }, { "docid": "c049f188b31bbc482e16d22a8061abfa", "text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.", "title": "" }, { "docid": "2bf2e36bbbbdd9e091395636fcc2a729", "text": "An open-source framework for real-time structured light is presented. It is called “SLStudio”, and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and acquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentiful, however due to the computational constraints, all public implementations so far are limited to offline processing.
With “SLStudio”, we are making a platform available which enables researchers from many different fields to build application specific real time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.", "title": "" }, { "docid": "6830ca98632f86ef2a0cb4c19183d9b4", "text": "In success or failure of any firm/industry or organization employees plays the most vital and important role. Airline industry is one of service industry the job of which is to sell seats to their travelers/costumers and passengers; hence employees inspiration towards their work plays a vital part in serving client’s requirements. This research focused on the influence of employee’s enthusiasm and its apparatuses e.g. pay and benefits, working atmosphere, vision of organization towards customer satisfaction and management systems in Pakistani airline industry. For analysis correlation and regression methods were used. Results of the research highlighted that workers motivation and its four major components e.g. pay and benefits, working atmosphere, vision of organization and management systems have a significant positive impact on customer’s gratification. Those employees of the industry who directly interact with client highly impact the client satisfaction level. It is obvious from results of this research that pay and benefits performs a key role in employee’s motivation towards achieving their organizational objectives of greater customer satisfaction.", "title": "" }, { "docid": "b27038accdabab12d8e0869aba20a083", "text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.", "title": "" }, { "docid": "6daa1bc00a4701a2782c1d5f82c518e2", "text": "An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red ‘doughnut’-shaped lesion surrounding the urethral meatus (figure 1). 
Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent. 2 It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury. 3 Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist. 5 Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.", "title": "" }, { "docid": "5deae44a9c14600b1a2460836ed9572d", "text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.", "title": "" }, { "docid": "68a5192778ae203ea1e31ba4e29b4330", "text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. 
However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.", "title": "" }, { "docid": "14a90781132fa3932d41b21b382ba362", "text": "In this paper, a prevalent type of zero-voltage- transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.", "title": "" }, { "docid": "67fb91119ba2464e883616ffd324f864", "text": "Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.", "title": "" }, { "docid": "e5c625ceaf78c66c2bfb9562970c09ec", "text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. 
The result is a small, efficient network that performs as well or better than the original. This does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork, but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.", "title": "" } ]
scidocsrr
becbceca094c91340955e53721ce3f2e
Business-to-business interactions: issues and enabling technologies
[ { "docid": "34a5d59c8b72690c7d776871447af6d0", "text": "E lectronic commerce lets people purchase goods and exchange information on business transactions online. The most popular e-commerce channel is the Internet. Although the Internet's role as a business channel is a fairly recent phenomenon, its impact, financial and otherwise, has been substantially greater than that of other business channels in existence for several decades. E-commerce gives companies improved efficiency and reliability of business processes through transaction automation. There are two major types of e-commerce: business to consumer (B2C), in which consumers purchase products and services from businesses , and business to business (B2B), in which businesses buy and sell among themselves. A typical business depends on other businesses for several of the direct and indirect inputs to its end products. For example, Dell Computer depends on one company for microprocessor chips and another for hard drives. B2B e-commerce automates and streamlines the process of buying and selling these intermediate products. It provides more reliable updating of business data. For procurement transactions, buyers and sellers can meet in an electronic marketplace and exchange information. In addition, B2B makes product information available globally and updates it in real time. Hence, procuring organizations can take advantage of vast amounts of product information. B2C e-commerce is now sufficiently stable. Judging from its success, we can expect B2B to similarly improve business processes for a better return on investment. Market researchers predict that B2B transactions will amount to a few trillion dollars in the next few years, as compared to about 100 billion dollars' worth of B2C transactions. B2C was easier to achieve, given the relative simplicity of reaching its target: the individual consumer. That's not the case with B2B, which involves engineering the interactions of diverse, complex enterprises. Interoperability is therefore a key issue in B2B. To achieve interoperability, many companies have formed consortia to develop B2B frameworks—generic templates that provide functions enabling businesses to communicate efficiently over the Internet. The consor-tia aim to provide an industrywide standard that companies can easily adopt. Their work has resulted in several technical standards. Among the most popular are Open Buying on the Internet (OBI), eCo, RosettaNet, commerce XML (cXML), and BizTalk. The problem with these standards, and many others, is that they are incompatible. Businesses trying to implement a B2B framework are bewildered by a variety of standards that point in different directions. Each standard has its merits and demerits. To aid decision-makers in choosing …", "title": "" } ]
[ { "docid": "f6deeee48e0c8f1ed1d922093080d702", "text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.", "title": "" }, { "docid": "015326feea60387bc2a8cdc9ea6a7f81", "text": "Phosphorylation of the transcription factor CREB is thought to be important in processes underlying long-term memory. It is unclear whether CREB phosphorylation can carry information about the sign of changes in synaptic strength, whether CREB pathways are equally activated in neurons receiving or providing synaptic input, or how synapse-to-nucleus communication is mediated. We found that Ca(2+)-dependent nuclear CREB phosphorylation was rapidly evoked by synaptic stimuli including, but not limited to, those that induced potentiation and depression of synaptic strength. In striking contrast, high frequency action potential firing alone failed to trigger CREB phosphorylation. Activation of a submembranous Ca2+ sensor, just beneath sites of Ca2+ entry, appears critical for triggering nuclear CREB phosphorylation via calmodulin and a Ca2+/calmodulin-dependent protein kinase.", "title": "" }, { "docid": "e7473169711de31dc063ace07ec799f9", "text": "Two major tasks in spoken language understanding (SLU) are intent determination (ID) and slot filling (SF). Recurrent neural networks (RNNs) have been proved effective in SF, while there is no prior work using RNNs in ID. Based on the idea that the intent and semantic slots of a sentence are correlative, we propose a joint model for both tasks. Gated recurrent unit (GRU) is used to learn the representation of each time step, by which the label of each slot is predicted. Meanwhile, a max-pooling layer is employed to capture global features of a sentence for intent classification. The representations are shared by two tasks and the model is trained by a united loss function. We conduct experiments on two datasets, and the experimental results demonstrate that our model outperforms the state-of-theart approaches on both tasks.", "title": "" }, { "docid": "e5f30c0d2c25b6b90c136d1c84ba8a75", "text": "Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose a the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. 
We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.", "title": "" }, { "docid": "0488511dc0641993572945e98a561cc7", "text": "Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.", "title": "" }, { "docid": "a9052b10f9750d58eb33b9e5d564ee6e", "text": "Cyber Physical Systems (CPS) play significant role in shaping smart manufacturing systems. CPS integrate computation with physical processes where behaviors are represented in both cyber and physical parts of the system. In order to understand CPS in the context of smart manufacturing, an overview of CPS technologies, components, and relevant standards is presented. A detailed technical review of the existing engineering tools and practices from major control vendors has been conducted. Furthermore, potential research areas have been identified in order to enhance the tools functionalities and capabilities in supporting CPS development process.", "title": "" }, { "docid": "902aab15808014d55a9620bcc48621f5", "text": "Software developers are always looking for ways to boost their effectiveness and productivity and perform complex jobs more quickly and easily, particularly as projects have become increasingly large and complex. Programmers want to shed unneeded complexity and outdated methodologies and move to approaches that focus on making programming simpler and faster. With this in mind, many developers are increasingly using dynamic languages such as JavaScript, Perl, Python, and Ruby. Although software experts disagree on the exact definition, a dynamic language basically enables programs that can change their code and logical structures at runtime, adding variable types, module names, classes, and functions as they are running. 
These languages frequently are interpreted and generally check typing at runtime", "title": "" }, { "docid": "a8da8a2d902c38c6656ea5db841a4eb1", "text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in the e-commence contexts. Based on theoretical or empirical studies in the literature of marketing or information system, nine factors have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factors analysis. The results demonstrated reliable reliability and validity in the instrument.This study identified seven factors has a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers. Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.", "title": "" }, { "docid": "1f1a6df3b85a35af375a47a93584f498", "text": "Natural language generation (NLG) is an important component of question answering(QA) systems which has a significant impact on system quality. Most tranditional QA systems based on templates or rules tend to generate rigid and stylised responses without the natural variation of human language. Furthermore, such methods need an amount of work to generate the templates or rules. To address this problem, we propose a Context-Aware LSTM model for NLG. The model is completely driven by data without manual designed templates or rules. In addition, the context information, including the question to be answered, semantic values to be addressed in the response, and the dialogue act type during interaction, are well approached in the neural network model, which enables the model to produce variant and informative responses. The quantitative evaluation and human evaluation show that CA-LSTM obtains state-of-the-art performance.", "title": "" }, { "docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21", "text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. 
Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.", "title": "" }, { "docid": "056a1d216afd6ea3841b9d4f49c896b6", "text": "The first car was invented in 1870 by Siegfried Marcus (Guarnieri, 2011). Actually it was just a wagon with an engine but without a steering wheel and without brakes. Instead, it was controlled by the legs of the driver. Converting traditional vehicles into autonomous vehicles was not just one step. The first step was just 28 years after the invention of cars that is to say 1898. This step's concept was moving a vehicle by a remote controller (Nikola, 1898). Since this first step and as computers have been becoming advanced and sophisticated, many functions of modern vehicles have been converted to be entirely automatic with no need of even remote controlling. Changing gears was one of the first actions that could be done automatically without an involvement of the driver (Anthony, 1908), so such cars got the title of \"automatic cars\"; however, nowadays there are vehicles that can completely travel by themselves although they are not yet allowed to travel on public roads in most of the world. Such vehicles are called \"autonomous vehicles\" or \"driverless cars\".", "title": "" }, { "docid": "627aee14031293785224efdb7bac69f0", "text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.", "title": "" }, { "docid": "c83ec9a4ec6f58ea2fe57bf2e4fa0c37", "text": "Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, letting alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. 
The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA’s high-mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.", "title": "" }, { "docid": "541eb97c2b008fefa6b50a5b372b2f31", "text": "Due to advancements in the mobile technology and the presence of strong mobile platforms, it is now possible to use the revolutionising augmented reality technology in mobiles. This research work is based on the understanding of different types of learning theories, concept of mobile learning and mobile augmented reality and discusses how applications using these advanced technologies can shape today's education systems.", "title": "" }, { "docid": "9db9902c0e9d5fc24714554625a04c7a", "text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.", "title": "" }, { "docid": "2f8f1f2db01eeb9a47591e77bb1c835a", "text": "We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMM) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person dependent and person independent setups on 3d-space handwriting data. For the person independent setup, a word error rate of 11% is achieved, for the person dependent setup 3% are achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.", "title": "" }, { "docid": "a200c0d2d6a437eb3f9a019e4ed530eb", "text": "With the rising of online social networks, influence has been a complex and subtle force to govern users’ behaviors and relationship formation. 
Therefore, how to precisely identify and measure influence has been a hot research direction. Differentiating from existing researches, we are devoted to combining the status of users in the network and the contents generated from these users to synthetically measure the influence diffusion. In this paper, we firstly proposed a directed user-content bipartite graph model. Next, an iterative algorithm is designed to compute two scores: the users’ Influence and boards’ Reach. Finally, we conduct extensive experiments on the dataset extracted from the online community Pinterest. The experimental results verify our proposed model can discover most influential users and popular broads effectively and can also be expected to benefit various applications, e.g., viral marketing, personal recommendation, information retrieval, etc. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c2dfa94555085b6ca3b752d719688613", "text": "In this paper, we propose RNN-Capsule, a capsule model based on Recurrent Neural Network (RNN) for sentiment analysis. For a given problem, one capsule is built for each sentiment category e.g., ‘positive’ and ‘negative’. Each capsule has an attribute, a state, and three modules: representation module, probability module, and reconstruction module. The attribute of a capsule is the assigned sentiment category. Given an instance encoded in hidden vectors by a typical RNN, the representation module builds capsule representation by the attention mechanism. Based on capsule representation, the probability module computes the capsule’s state probability. A capsule’s state is active if its state probability is the largest among all capsules for the given instance, and inactive otherwise. On two benchmark datasets (i.e., Movie Review and Stanford Sentiment Treebank) and one proprietary dataset (i.e., Hospital Feedback), we show that RNN-Capsule achieves state-of-the-art performance on sentiment classification. More importantly, without using any linguistic knowledge, RNN-Capsule is capable of outputting words with sentiment tendencies reflecting capsules’ attributes. The words well reflect the domain specificity of the dataset. ACM Reference Format: Yequan Wang1 Aixin Sun2 Jialong Han3 Ying Liu4 Xiaoyan Zhu1. 2018. Sentiment Analysis by Capsules. InWWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3178876.3186015", "title": "" }, { "docid": "753d840a62fc4f4b57f447afae07ba84", "text": "Feature selection has been proven to be effective and efficient in preparing high-dimensional data for data mining and machine learning problems. Since real-world data is usually unlabeled, unsupervised feature selection has received increasing attention in recent years. Without label information, unsupervised feature selection needs alternative criteria to define feature relevance. Recently, data reconstruction error emerged as a new criterion for unsupervised feature selection, which defines feature relevance as the capability of features to approximate original data via a reconstruction function. Most existing algorithms in this family assume predefined, linear reconstruction functions. However, the reconstruction function should be data dependent and may not always be linear especially when the original data is high-dimensional. 
In this paper, we investigate how to learn the reconstruction function from the data automatically for unsupervised feature selection, and propose a novel reconstruction-based unsupervised feature selection framework REFS, which embeds the reconstruction function learning process into feature selection. Experiments on various types of realworld datasets demonstrate the effectiveness of the proposed framework REFS.", "title": "" }, { "docid": "f582f73b7a7a252d6c17766a9c5f8dee", "text": "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.", "title": "" } ]
scidocsrr
89e2fb2941f1b1656894e7cae810ffe8
Improving the grid power quality using virtual synchronous machines
[ { "docid": "e100a602848dcba4a2e9575148486f9c", "text": "The increasing integration of decentralized electrical sources is attended by problems with power quality, safe grid operation and grid stability. The concept of the Virtual Synchronous Machine (VISMA) [1] discribes an inverter to particularly connect renewable electrical sources to the grid that provides a wide variety of static an dynamic properties they are also suitable to achieve typical transient and oscillation phenomena in decentralized as well as weak grids. Furthermore in static operation, power plant controlled VISMA systems are capable to cope with critical surplus production of renewable electrical energy without additional communication systems only conducted by the grid frequency. This paper presents the dynamic properties \"damping\" and \"virtual mass\" of the VISMA and their contribution to the stabilization of the grid frequency and the attenuation of grid oscillations examined in an experimental grid set.", "title": "" } ]
[ { "docid": "84a32cdf9531b70d356ee06d4e2769df", "text": "In this article we present mechanical measurements of three representative elastomers used in soft robotic systems: Sylgard 184, Smooth-Sil 950, and EcoFlex 00-30. Our aim is to demonstrate the effects of the nonlinear, time-dependent properties of these materials to facilitate improved dynamic modeling of soft robotic components. We employ uniaxial pull-to-failure tests, cyclic loading tests, and stress relaxation tests to provide a qualitative assessment of nonlinear behavior, batch-to-batch repeatability, and effects of prestraining, cyclic loading, and viscoelastic stress relaxation. Strain gauges composed of the elastomers embedded with a microchannel of conductive liquid (eutectic gallium–indium) are also tested to quantify the interaction between material behaviors and measured strain output. It is found that all of the materials tested exhibit the Mullins effect, where the material properties in the first loading cycle differ from the properties in all subsequent cycles, as well as response sensitivity to loading rate and production variations. Although the materials tested show stress relaxation effects, the measured output from embedded resistive strain gauges is found to be uncoupled from the changes to the material properties and is only a function of strain.", "title": "" }, { "docid": "e06005f63efd6f8ca77f8b91d1b3b4a9", "text": "Natural language generators for taskoriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large.", "title": "" }, { "docid": "14fe4e2fb865539ad6f767b9fc9c1ff5", "text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). 
Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.", "title": "" }, { "docid": "9c35b7e3bf0ef3f3117c6ba8a9ad1566", "text": "Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data, asynchronous parallelization of SGD has also been studied. Then, a natural question is whether these techniques can be seamlessly integrated with each other, and whether the integration has desirable theoretical guarantee on its convergence. In this paper, we provide our formal answer to this question. In particular, we consider the asynchronous parallelization of SGD, accelerated by leveraging variance reduction, coordinate sampling, and Nesterov’s method. We call the new algorithm asynchronous accelerated SGD (AASGD). Theoretically, we proved a convergence rate of AASGD, which indicates that (i) the three acceleration methods are complementary to each other and can make their own contributions to the improvement of convergence rate; (ii) asynchronous parallelization does not hurt the convergence rate, and can achieve considerable speedup under appropriate parameter setting. Empirically, we tested AASGD on a few benchmark datasets. The experimental results verified our theoretical findings and indicated that AASGD could be a highly effective and efficient algorithm for practical use.", "title": "" }, { "docid": "2ed9db3d174d95e5b97c4fe26ca6c8ac", "text": "One of the more startling effects of road related accidents is the economic and social burden they cause. Between 750,000 and 880,000 people died globally in road related accidents in 1999 alone, with an estimated cost of US$518 billion [11]. One way of combating this problem is to develop Intelligent Vehicles that are selfaware and act to increase the safety of the transportation system. This paper presents the development and application of a novel multiple-cue visual lane tracking system for research into Intelligent Vehicles (IV). Particle filtering and cue fusion technologies form the basis of the lane tracking system which robustly handles several of the problems faced by previous lane tracking systems such as shadows on the road, unreliable lane markings, dramatic lighting changes and discontinuous changes in road characteristics and types. 
Experimental results of the lane tracking system running at 15Hz will be discussed, focusing on the particle filter and cue fusion technology used.", "title": "" }, { "docid": "3f5f8e75af4cc24e260f654f8834a76c", "text": "The Balanced Scorecard (BSC) methodology focuses on major critical issues of modern business organisations: the effective measurement of corporate performance and the evaluation of the successful implementation of corporate strategy. Despite the increased adoption of the BSC methodology by numerous business organisations during the last decade, limited case studies concern non-profit organisations (e.g. public sector, educational institutions, healthcare organisations, etc.). The main aim of this study is to present the development of a performance measurement system for public health care organisations, in the context of BSC methodology. The proposed approach considers the distinguished characteristics of the aforementioned sector (e.g. lack of competition, social character of organisations, etc.). The proposed measurement system contains the most important financial performance indicators, as well as non-financial performance indicators that are able to examine the quality of the provided services, the satisfaction of internal and external customers, the selfimprovement system of the organisation and the ability of the organisation to adapt and change. These indicators play the role of Key Performance Indicators (KPIs), in the context of BSC methodology. The presented analysis is based on a MCDA approach, where the UTASTAR method is used in order to aggregate the marginal performance of KPIs. This approach is able to take into account the preferences of the management of the organisation regarding the achievement of the defined strategic objectives. The main results of the proposed approach refer to the evaluation of the overall scores for each one of the main dimensions of the BSC methodology (i.e. financial, customer, internal business process, and innovation-learning). These results are able to help the organisation to evaluate and revise its strategy, and generally to adopt modern management approaches in every day practise. & 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b59e332c086a8ce6d6ddc0526b8848c7", "text": "We propose Generative Adversarial Tree Search (GATS), a sample-efficient Deep Reinforcement Learning (DRL) algorithm. While Monte Carlo Tree Search (MCTS) is known to be effective for search and planning in RL, it is often sampleinefficient and therefore expensive to apply in practice. In this work, we develop a Generative Adversarial Network (GAN) architecture to model an environment’s dynamics and a predictor model for the reward function. We exploit collected data from interaction with the environment to learn these models, which we then use for model-based planning. During planning, we deploy a finite depth MCTS, using the learned model for tree search and a learned Q-value for the leaves, to find the best action. We theoretically show that GATS improves the bias-variance tradeoff in value-based DRL. Moreover, we show that the generative model learns the model dynamics using orders of magnitude fewer samples than the Q-learner. 
In non-stationary settings where the environment model changes, we find the generative model adapts significantly faster than the Q-learner to the new environment.", "title": "" }, { "docid": "4f2fa764996d666762e0b6ba01a799a2", "text": "A critical assumption of the Technology Acceptance Model (TAM) is that its belief constructs - perceived ease of use (PEOU) and perceived usefulness (PU) - fully mediate the influence of external variables on IT usage behavior. If this assumption is true, researchers can effectively \"assume away\" the effects of broad categories of external variables, those relating to the specific task, the technology, and user differences. One recent study did indeed find that belief constructs fully mediated individual differences, and its authors suggest that further studies with similar results could pave the way for simpler acceptance models that ignore such differences. To test the validity of these authors' results, we conducted a similar study to determine the effect of staff seniority, age, and education level on usage behavior. Our study involved 106 professional and administrative staff in the IT division of a large manufacturing company who voluntarily use email and word processing. We found that these individual user differences have significant direct effects on both the frequency and volume of usage. These effects are beyond the indirect effects as mediated through the TAM belief constructs. Thus, rather than corroborating the recent study, our findings underscore the importance of users' individual differences and suggest that TAM's belief constructs are accurate but incomplete predictors of usage behavior.", "title": "" }, { "docid": "74a91327b85ac9681f618d4ba6a86151", "text": "In this paper, a miniaturized planar antenna with enhanced bandwidth is designed for the ISM 433 MHz applications. The antenna is realized by cascading two resonant structures with meander lines, thus introducing two different radiating branches to realize two neighboring resonant frequencies. The techniques of shorting pin and novel ground plane are adopted for bandwidth enhancement. Combined with these structures, a novel antenna with a total size of 23 mm × 49.5 mm for the ISM band application is developed and fabricated. Measured results show that the proposed antenna has good performance with the -10 dB impedance bandwidth is about 12.5 MHz and the maximum gain is about -2.8 dBi.", "title": "" }, { "docid": "931af201822969eb10871ccf10d47421", "text": "Latent tree learning models represent sentences by composing their words according to an induced parse tree, all based on a downstream task. These models often outperform baselines which use (externally provided) syntax trees to drive the composition order. This work contributes (a) a new latent tree learning model based on shift-reduce parsing, with competitive downstream performance and non-trivial induced trees, and (b) an analysis of the trees learned by our shift-reduce model and by a chart-based model.", "title": "" }, { "docid": "b6ad0aeb5efbde0a9b340e88e68c884a", "text": "Conservative non-pharmacological evidence-based management options for Chronic Obstructive Pulmonary Disease (COPD) primarily focus on developing physiological capacity. With co-morbidities, including those of the musculoskeletal system, contributing to the overall severity of the disease, further research was needed. This thesis presents a critical review of the active and passive musculoskeletal management approaches currently used in COPD. 
The evidence for using musculoskeletal interventions in COPD management was inconclusive. Whilst an evaluation of musculoskeletal changes and their influence on pulmonary function was required, it was apparent that this would necessitate a significant programme of research. In view of this a narrative review of musculoskeletal changes in the cervico-thoracic region was undertaken. With a paucity of literature exploring chest wall flexibility and recent clinical guidelines advocating research into thoracic mobility exercises in COPD, a focus on thoracic spine motion analysis literature was taken. On critically reviewing the range of current in vivo measurement techniques it was evident that soft tissue artefact was a potential source of measurement error. As part of this thesis, soft tissue artefact during thoracic spine axial rotation was quantified. Given the level was deemed unacceptable, an alternative approach was developed and tested for intra-rater reliability. This technique, in conjunction with a range of other measures, was subsequently used to evaluate cervico-thoracic musculoskeletal changes and their relationship with pulmonary function in COPD. In summary, subjects with COPD were found to have reduced spinal motion, altered posture and increased muscle sensitivity compared to controls. Reduced spinal motion and altered neck posture were associated with reduced pulmonary function and having diagnosed COPD. Results from this thesis provide evidence to support inception of a clinical trial of flexibility or mobility exercises", "title": "" }, { "docid": "91c6903902eb4edc3d9cf2c3dec66d9e", "text": "WordNets – lexical databases in which groups of synonyms are arranged according to the semantic relationships between them – are crucial resources in semantically-focused natural language processing tasks, but are extremely costly and labour intensive to produce. In languages besides English, this has led to growing interest in constructing and extending WordNets automatically, as an alternative to producing them from scratch. This paper describes various approaches to constructing WordNets automatically – by leveraging traditional lexical resources and newer trends such as word embeddings – and also offers a discussion of the issues affecting the evaluation of automatically constructed WordNets.", "title": "" }, { "docid": "20746cd01ff3b67b204cd2453f1d8ecb", "text": "Quantification of human group-behavior has so far defied an empirical, falsifiable approach. This is due to tremendous difficulties in data acquisition of social systems. Massive multiplayer online games (MMOG) provide a fascinating new way of observing hundreds of thousands of simultaneously socially interacting individuals engaged in virtual economic activities. We have compiled a data set consisting of practically all actions of all players over a period of 3 years from a MMOG played by 300,000 people. This largescale data set of a socio-economic unit contains all social and economic data from a single and coherent source. Players have to generate a virtual income through economic activities to ‘survive’ and are typically engaged in a multitude of social activities offered within the game. Our analysis of high-frequency log files focuses on three types of social networks, and tests a series of social-dynamics hypotheses. In particular we study the structure and dynamics of friend-, enemyand communication networks. 
We find striking differences in topological structure between positive (friend) and negative (enemy) tie networks. All networks confirm the recently observed phenomenon of network densification. We propose two approximate social laws in communication networks, the first expressing betweenness centrality as the inverse square of the overlap, the second relating communication strength to the cube of the overlap. These empirical laws provide strong quantitative evidence for the Weak ties hypothesis of Granovetter. Further, the analysis of triad significance profiles validates well-established assertions from social balance theory. We find overrepresentation (underrepresentation) of complete (incomplete) triads in networks of positive ties, and vice versa for networks of negative ties. Empirical transition probabilities between triad classes provide evidence for triadic closure with extraordinarily high precision. For the first time we provide empirical results for large-scale networks of negative social ties. Whenever possible we compare our findings with data from non-virtual human groups and provide further evidence that online game communities serve as a valid model for a wide class of human societies. With this setup we demonstrate the feasibility for establishing a ‘socio-economic laboratory’ which allows to operate at levels of precision approaching those of the natural sciences. All data used in this study is fully anonymized; the authors have the written consent to publish from the legal department of the Medical University of Vienna. © 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f6193fa2ac2ea17c7710241a42d34a33", "text": "BACKGROUND\nThe most common microcytic and hypochromic anemias are iron deficiency anemia and thalassemia trait. Several indices to discriminate iron deficiency anemia from thalassemia trait have been proposed as simple diagnostic tools. However, some of the best discriminative indices use parameters in the formulas that are only measured in modern counters and are not always available in small laboratories. The development of an index with good diagnostic accuracy based only on parameters derived from the blood cell count obtained using simple counters would be useful in the clinical routine. Thus, the aim of this study was to develop and validate a discriminative index to differentiate iron deficiency anemia from thalassemia trait.\n\n\nMETHODS\nTo develop and to validate the new formula, blood count data from 106 (thalassemia trait: 23 and iron deficiency: 83) and 185 patients (thalassemia trait: 30 and iron deficiency: 155) were used, respectively. 
Iron deficiency, β-thalassemia trait and α-thalassemia trait were confirmed by gold standard tests (low serum ferritin for iron deficiency anemia, HbA2>3.5% for β-thalassemia trait and using molecular biology for the α-thalassemia trait).\n\n\nRESULTS\nThe sensitivity, specificity, efficiency, Youden's Index, area under receiver operating characteristic curve and Kappa coefficient of the new formula, called the Matos & Carvalho Index were 99.3%, 76.7%, 95.7%, 76.0, 0.95 and 0.83, respectively.\n\n\nCONCLUSION\nThe performance of this index was excellent with the advantage of being solely dependent on the mean corpuscular hemoglobin concentration and red blood cell count obtained from simple automatic counters and thus may be of great value in underdeveloped and developing countries.", "title": "" }, { "docid": "950a6a611f1ceceeec49534c939b4e0f", "text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].", "title": "" }, { "docid": "bca81a5b34376e5a6090e528a583b4f4", "text": "There has been considerable debate in the literature about the relative merits of information processing versus dynamical approaches to understanding cognitive processes. In this article, we explore the relationship between these two styles of explanation using a model agent evolved to solve a relational categorization task. Specifically, we separately analyze the operation of this agent using the mathematical tools of information theory and dynamical systems theory. Information-theoretic analysis reveals how task-relevant information flows through the system to be combined into a categorization decision. Dynamical analysis reveals the key geometrical and temporal interrelationships underlying the categorization decision. Finally, we propose a framework for directly relating these two different styles of explanation and discuss the possible implications of our analysis for some of the ongoing debates in cognitive science.", "title": "" }, { "docid": "6f4479d224c1546040bee39d50eaba55", "text": "Bag-of-words (BOW) is now the most popular way to model text in statistical machine learning approaches in sentiment analysis. However, the performance of BOW sometimes remains limited due to some fundamental deficiencies in handling the polarity shift problem. We propose a model called dual sentiment analysis (DSA), to address this problem for sentiment classification. We first propose a novel data expansion technique by creating a sentiment-reversed review for each training and test review. On this basis, we propose a dual training algorithm to make use of original and reversed training reviews in pairs for learning a sentiment classifier, and a dual prediction algorithm to classify the test reviews by considering two sides of one review. 
We also extend the DSA framework from polarity (positive-negative) classification to 3-class (positive-negative-neutral) classification, by taking the neutral reviews into consideration. Finally, we develop a corpus-based method to construct a pseudo-antonym dictionary, which removes DSA's dependency on an external antonym dictionary for review reversion. We conduct a wide range of experiments including two tasks, nine datasets, two antonym dictionaries, three classification algorithms, and two types of features. The results demonstrate the effectiveness of DSA in supervised sentiment classification.", "title": "" }, { "docid": "73f6ba4ad9559cd3c6f7a88223e4b556", "text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation, and we show how to use it to increase accuracy and reduce overfitting on a target network. Smart augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.", "title": "" } ]
scidocsrr
315025d0cb659bcb820d9b1393503b08
Efficient placement of multi-component applications in edge computing systems
[ { "docid": "bbf5561f88f31794ca95dd991c074b98", "text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.", "title": "" } ]
[ { "docid": "1e82d6acef7e5b5f0c2446d62cf03415", "text": "The purpose of this research is to characterize and model the self-heating effect of multi-finger n-channel MOSFETs. Self-heating effect (SHE) does not need to be analyzed for single-finger bulk CMOS devices. However, it should be considered for multi-finger n-channel MOSFETs that are mainly used for RF-CMOS applications. The SHE mechanism was analyzed based on a two-dimensional device simulator. A compact model, which is a BSIM6 model with additional equations, was developed and implemented in a SPICE simulator with Verilog-A language. Using the proposed model and extracted parameters excellent agreements have been obtained between measurements and simulations in DC and S-parameter domain whereas the original BSIM6 shows inconsistency between static DC and small signal AC simulations due to the lack of SHE. Unlike the generally-used sub-circuits based SHE models including in BSIMSOI models, the proposed SHE model can converge in large scale circuits.", "title": "" }, { "docid": "bc49930fa967b93ed1e39b3a45237652", "text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).", "title": "" }, { "docid": "b56d61ac3e807219b3caa9ed4362abd9", "text": "Secure communication is critical in military environments where the network infrastructure is vulnerable to various attacks and compromises. A conventional centralized solution breaks down when the security servers are destroyed by the enemies. In this paper we design and evaluate a security framework for multi-layer ad-hoc wireless networks with unmanned aerial vehicles (UAVs). In battlefields, the framework adapts to the contingent damages on the network infrastructure. Depending on the availability of the network infrastructure, our design is composed of two modes. In infrastructure mode, security services, specifically the authentication services, are implemented on UAVs that feature low overhead and flexible managements. When the UAVs fail or are destroyed, our system seamlessly switches to infrastructureless mode, a backup mechanism that maintains comparable security services among the surviving units. In the infrastructureless mode, the security services are localized to each node’s vicinity to comply with the ad-hoc communication mechanism in the scenario. We study the instantiation of these two modes and the transitions between them. 
Our implementation and simulation measurements confirm the effectiveness of our design.", "title": "" }, { "docid": "59a16f229e5c205176639843521310d0", "text": "In the ancient Egypt seven goddesses, represented by seven cows, composed the celestial herd that provides the nourishment to her worshippers. This herd is observed in the sky as a group of stars, the Pleiades, close to Aldebaran, the main star in the Taurus constellation. For many ancient populations, Pleiades were relevant stars and their rising was marked as a special time of the year. In this paper, we will discuss the presence of these stars in ancient cultures. Moreover, we will report some results of archeoastronomy on the role for timekeeping of these stars, results which show that for hunter-gatherers at Palaeolithic times, they were linked to the seasonal cycles of aurochs.", "title": "" }, { "docid": "98a647d378a06c0314a60e220d10976a", "text": "Driven by the confluence between the need to collect data about people's physical, physiological, psychological, cognitive, and behavioral processes in spaces ranging from personal to urban and the recent availability of the technologies that enable this data collection, wireless sensor networks for healthcare have emerged in the recent years. In this review, we present some representative applications in the healthcare domain and describe the challenges they introduce to wireless sensor networks due to the required level of trustworthiness and the need to ensure the privacy and security of medical data. These challenges are exacerbated by the resource scarcity that is inherent with wireless sensor network platforms. We outline prototype systems spanning application domains from physiological and activity monitoring to large-scale physiological and behavioral studies and emphasize ongoing research challenges.", "title": "" }, { "docid": "760f9f91a845726bc79b874978d5b9ab", "text": "Data sharing is increasingly recognized as critical to cross-disciplinary research and to assuring scientific validity. Despite National Institutes of Health and National Science Foundation policies encouraging data sharing by grantees, little data sharing of clinical data has in fact occurred. A principal reason often given is the potential of inadvertent violation of the Health Insurance Portability and Accountability Act privacy regulations. While regulations specify the components of private health information that should be protected, there are no commonly accepted methods to de-identify clinical data objects such as images. This leads institutions to take conservative risk-averse positions on data sharing. In imaging trials, where images are coded according to the Digital Imaging and Communications in Medicine (DICOM) standard, the complexity of the data objects and the flexibility of the DICOM standard have made it especially difficult to meet privacy protection objectives. The recent release of DICOM Supplement 142 on image de-identification has removed much of this impediment. This article describes the development of an open-source software suite that implements DICOM Supplement 142 as part of the National Biomedical Imaging Archive (NBIA). It also describes the lessons learned by the authors as NBIA has acquired more than 20 image collections encompassing over 30 million images.", "title": "" }, { "docid": "d59e21319b9915c2f6d7a8931af5503c", "text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. 
While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.", "title": "" }, { "docid": "4122fb29bb82d4432391f4362ddcf512", "text": "In this paper we propose three techniques to improve the performance of one of the major algorithms for large scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary framework and employs a technique called random grouping in order to group interacting variables in one subcomponent. It also uses another technique called adaptive weighting for co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of found solution, and hence wastes considerable amount of CPU time by extra evaluations of objective function. Finally we propose a new technique for self-adaptation of the subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques.", "title": "" }, { "docid": "d580f60d48331b37c55f1e9634b48826", "text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. 
We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.", "title": "" }, { "docid": "fae3b6d1415e5f1d95aa2126c14e7a09", "text": "This paper presents an active RF phase shifter with 10 bit control word targeted toward the upcoming 5G wireless systems. The circuit is designed and fabricated using 45 nm CMOS SOI technology. An IQ vector modulator (IQVM) topology is used which provides both amplitude and phase control. The design is programmable with exhaustive digital controls available for parameters like bias voltage, resonance frequency, and gain. The frequency of operation is tunable from 12.5 GHz to 15.7 GHz. The mean angular separation between phase points is 1.5 degree at optimum amplitude levels. The rms phase error over the operating band is as low as 0.8 degree. Active area occupied is 0.18 square millimeter. The total DC power consumed from 1 V supply is 75 mW.", "title": "" }, { "docid": "37bdc258e652fb4a21d9516400428f8b", "text": "In many Internet of Things (IoT) applications, large numbers of small sensor data are delivered in the network, which may cause heavy traffics. To reduce the number of messages delivered from the sensor devices to the IoT server, a promising approach is to aggregate several small IoT messages into a large packet before they are delivered through the network. When the packets arrive at the destination, they are disaggregated into the original IoT messages. In the existing solutions, packet aggregation/disaggregation is performed by software at the server, which results in long delays and low throughputs. To resolve the above issue, this paper utilizes the programmable Software Defined Networking (SDN) switch to program quick packet aggregation and disaggregation. Specifically, we consider the Programming Protocol-Independent Packet Processor (P4) technology. We design and develop novel P4 programs for aggregation and disaggregation in commercial P4 switches. Our study indicates that packet aggregation can be achieved in a P4 switch with its line rate (without extra packet processing cost). On the other hand, to disaggregate a packet that combines N IoT messages, the processing time is about the same as processing N individual IoT messages. Our implementation conducts IoT message aggregation at the highest bit rate (100 Gbps) that has not been found in the literature. We further propose to provide a small buffer in the P4 switch to significantly reduce the processing power for disaggregating a packet.", "title": "" }, { "docid": "c091e5b24dc252949b3df837969e263a", "text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. 
This paper is an effort to survey these techniques and to classify this research in a few broad areas.", "title": "" }, { "docid": "b91f54fd70da385625d9df127834d8c7", "text": "This commentary was stimulated by Yeping Li’s first editorial (2014) citing one of the journal’s goals as adding multidisciplinary perspectives to current studies of single disciplines comprising the focus of other journals. In this commentary, I argue for a greater focus on STEM integration, with a more equitable representation of the four disciplines in studies purporting to advance STEM learning. The STEM acronym is often used in reference to just one of the disciplines, commonly science. Although the integration of STEM disciplines is increasingly advocated in the literature, studies that address multiple disciplines appear scant with mixed findings and inadequate directions for STEM advancement. Perspectives on how discipline integration can be achieved are varied, with reference to multidisciplinary, interdisciplinary, and transdisciplinary approaches adding to the debates. Such approaches include core concepts and skills being taught separately in each discipline but housed within a common theme; the introduction of closely linked concepts and skills from two or more disciplines with the aim of deepening understanding and skills; and the adoption of a transdisciplinary approach, where knowledge and skills from two or more disciplines are applied to real-world problems and projects with the aim of shaping the total learning experience. Research that targets STEM integration is an embryonic field with respect to advancing curriculum development and various student outcomes. For example, we still need more studies on how student learning outcomes arise not only from different forms of STEM integration but also from the particular disciplines that are being integrated. As noted in this commentary, it seems that mathematics learning benefits less than the other disciplines in programs claiming to focus on STEM integration. Factors contributing to this finding warrant more scrutiny. Likewise, learning outcomes for engineering within K-12 integrated STEM programs appear under-researched. This commentary advocates a greater focus on these two disciplines within integrated STEM education research. Drawing on recommendations from the literature, suggestions are offered for addressing the challenges of integrating multiple disciplines faced by the STEM community.", "title": "" }, { "docid": "46209913057e33c17d38a565e50097a3", "text": "Power-on reset circuits are available as discrete devices as well as on-chip solutions and are indispensable to initialize some critical nodes of analog and digital designs during power-on. In this paper, we present a power-on reset circuit specifically designed for on-chip applications. The mentioned POR circuit should meet certain design requirements necessary to be integrated on-chip, some of them being area-efficiency, power-efficiency, supply rise-time insensitivity and ambient temperature insensitivity. The circuit is implemented within a small area (60mum times 35mum) using the 2.5V tolerant MOSFETs of a 0.28mum CMOS technology. It has a maximum quiescent current consumption of 40muA and works over infinite range of supply rise-times and ambient temperature range of -40degC to 150degC", "title": "" }, { "docid": "ac4d208a022717f6389d8b754abba80b", "text": "This paper presents a new approach to detect tabular structures present in document images and in low resolution video images. 
The algorithm for table detection is based on identifying the unique table start pattern and table trailer pattern. We have formulated perceptual attributes to characterize the patterns. The performance of our table detection system is tested on a set of document images picked from UW-III (University of Washington) dataset, UNLV dataset, video images of NPTEL videos, and our own dataset. Our approach demonstrates improved detection for different types of table layouts, with or without ruling lines. We have obtained correct table localization on pages with multiple tables aligned side-by-side.", "title": "" }, { "docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a", "text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.", "title": "" }, { "docid": "3ff55193d10980cbb8da5ec757b9161c", "text": "The growth of social web contributes vast amount of user generated content such as customer reviews, comments and opinions. This user generated content can be about products, people, events, etc. This information is very useful for businesses, governments and individuals. While this content meant to be helpful analyzing this bulk of user generated content is difficult and time consuming. So there is a need to develop an intelligent system which automatically mine such huge content and classify them into positive, negative and neutral category. Sentiment analysis is the automated mining of attitudes, opinions, and emotions from text, speech, and database sources through Natural Language Processing (NLP). The objective of this paper is to discover the concept of Sentiment Analysis in the field of Natural Language Processing, and presents a comparative study of its techniques in this field. Keywords— Natural Language Processing, Sentiment Analysis, Sentiment Lexicon, Sentiment Score.", "title": "" }, { "docid": "da4b2452893ca0734890dd83f5b63db4", "text": "Diabetic retinopathy is when damage occurs to the retina due to diabetes, which affects up to 80 percent of all patients who have had diabetes for 10 years or more. The expertise and equipment required are often lacking in areas where diabetic retinopathy detection is most needed. Most of the work in the field of diabetic retinopathy has been based on disease detection or manual extraction of features, but this paper aims at automatic diagnosis of the disease into its different stages using deep learning. This paper presents the design and implementation of GPU accelerated deep convolutional neural networks to automatically diagnose and thereby classify high-resolution retinal images into 5 stages of the disease based on severity. 
The single model accuracy of the convolutional neural networks presented in this paper is 0.386 on a quadratic weighted kappa metric and ensembling of three such similar models resulted in a score of 0.3996.", "title": "" }, { "docid": "948295ca3a97f7449548e58e02dbdd62", "text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.", "title": "" }, { "docid": "4b95b6d7991ea1b774ac8730df6ec21c", "text": "We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks1 that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.", "title": "" } ]
scidocsrr
1858e8fa3f0ff4249bd007abf7679481
The effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospital settings: a systematic review of quantitative evidence protocol.
[ { "docid": "e4628211d0d2657db387c093228e9b9b", "text": "BACKGROUND\nMindfulness-based stress reduction (MBSR) is a clinically standardized meditation that has shown consistent efficacy for many mental and physical disorders. Less attention has been given to the possible benefits that it may have in healthy subjects. The aim of the present review and meta-analysis is to better investigate current evidence about the efficacy of MBSR in healthy subjects, with a particular focus on its benefits for stress reduction.\n\n\nMATERIALS AND METHODS\nA literature search was conducted using MEDLINE (PubMed), the ISI Web of Knowledge, the Cochrane database, and the references of retrieved articles. The search included articles written in English published prior to September 2008, and identified ten, mainly low-quality, studies. Cohen's d effect size between meditators and controls on stress reduction and spirituality enhancement values were calculated.\n\n\nRESULTS\nMBSR showed a nonspecific effect on stress reduction in comparison to an inactive control, both in reducing stress and in enhancing spirituality values, and a possible specific effect compared to an intervention designed to be structurally equivalent to the meditation program. A direct comparison study between MBSR and standard relaxation training found that both treatments were equally able to reduce stress. Furthermore, MBSR was able to reduce ruminative thinking and trait anxiety, as well as to increase empathy and self-compassion.\n\n\nCONCLUSIONS\nMBSR is able to reduce stress levels in healthy people. However, important limitations of the included studies as well as the paucity of evidence about possible specific effects of MBSR in comparison to other nonspecific treatments underline the necessity of further research.", "title": "" } ]
[ { "docid": "460a296de1bd13378d71ce19ca5d807a", "text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].", "title": "" }, { "docid": "cf1e0d6a07674aa0b4c078550b252104", "text": "Industry-practiced agile methods must become an integral part of a software engineering curriculum. It is essential that graduates of such programs seeking careers in industry understand and have positive attitudes toward agile principles. With this knowledge they can participate in agile teams and apply these methods with minimal additional training. However, learning these methods takes experience and practice, both of which are difficult to achieve in a direct manner within the constraints of an academic program. This paper presents a novel, immersive boot camp approach to learning agile software engineering concepts with LEGO® bricks as the medium. Students construct a physical product while inductively learning the basic principles of agile methods. The LEGO®-based approach allows for multiple iterations in an active learning environment. In each iteration, students inductively learn agile concepts through their experiences and mistakes. Subsequent iterations then ground these concepts, visibly leading to an effective process. We assessed this approach using a combination of quantitative and qualitative methods. Our assessment shows that the students demonstrated positive attitudes toward the boot-camp approach compared to lecture-based instruction. However, the agile boot camp did not have an effect on the students' recall on class tests when compared to their recall of concepts taught in lecture-based instruction.", "title": "" }, { "docid": "66844a6bce975f8e3e32358f0e0d1fb7", "text": "The recent advent of DNA sequencing technologies facilitates the use of genome sequencing data that provide means for more informative and precise classification and identification of members of the Bacteria and Archaea. 
Because the current species definition is based on the comparison of genome sequences between type and other strains in a given species, building a genome database with correct taxonomic information is of paramount need to enhance our efforts in exploring prokaryotic diversity and discovering novel species as well as for routine identifications. Here we introduce an integrated database, called EzBioCloud, that holds the taxonomic hierarchy of the Bacteria and Archaea, which is represented by quality-controlled 16S rRNA gene and genome sequences. Whole-genome assemblies in the NCBI Assembly Database were screened for low quality and subjected to a composite identification bioinformatics pipeline that employs gene-based searches followed by the calculation of average nucleotide identity. As a result, the database is made of 61 700 species/phylotypes, including 13 132 with validly published names, and 62 362 whole-genome assemblies that were identified taxonomically at the genus, species and subspecies levels. Genomic properties, such as genome size and DNA G+C content, and the occurrence in human microbiome data were calculated for each genus or higher taxa. This united database of taxonomy, 16S rRNA gene and genome sequences, with accompanying bioinformatics tools, should accelerate genome-based classification and identification of members of the Bacteria and Archaea. The database and related search tools are available at www.ezbiocloud.net/.", "title": "" }, { "docid": "d470122d50dbb118ae9f3068998f8e14", "text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complex between patients.", "title": "" }, { "docid": "16560cdfe50fc908ae46abf8b82e620f", "text": "While there seems to be a general agreement that next years' systems will include many processing cores, it is often overlooked that these systems will also include an increasing number of different cores (we already see dedicated units for graphics or network processing). Orchestrating the diversity of processing functionality is going to be a major challenge in the upcoming years, be it to optimize for performance or for minimal energy consumption.\n We expect field-programmable gate arrays (FPGAs or \"programmable hardware\") to soon play the role of yet another processing unit, found in commodity computers. It is clear that the new resource is going to be too precious to be ignored by database systems, but it is unclear how FPGAs could be integrated into a DBMS. 
With a focus on database use, this tutorial introduces into the emerging technology, demonstrates its potential, but also pinpoints some challenges that need to be addressed before FPGA-accelerated database systems can go mainstream. Attendees will gain an intuition of an FPGA development cycle, receive guidelines for a \"good\" FPGA design, but also learn the limitations that hardware-implemented database processing faces. Our more high-level ambition is to spur a broader interest in database processing on novel hardware technology.", "title": "" }, { "docid": "08f45368b85de5e6036fd4309f7c7a05", "text": "Inflammatory bowel disease (IBD) is a group of diseases characterized by inflammation of the small and large intestine and primarily includes ulcerative colitis and Crohn’s disease. Although the etiology of IBD is not fully understood, it is believed to result from the interaction of genetic, immunological, and environmental factors, including gut microbiota. Recent studies have shown a correlation between changes in the composition of the intestinal microbiota and IBD. Moreover, it has been suggested that probiotics and prebiotics influence the balance of beneficial and detrimental bacterial species, and thereby determine homeostasis versus inflammatory conditions. In this review, we focus on recent advances in the understanding of the role of prebiotics, probiotics, and synbiotics in functions of the gastrointestinal tract and the induction and maintenance of IBD remission. We also discuss the role of psychobiotics, which constitute a novel class of psychotropic agents that affect the central nervous system by influencing gut microbiota. (Inflamm Bowel Dis 2015;21:1674–1682)", "title": "" }, { "docid": "8016e80e506dcbae5c85fdabf1304719", "text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.", "title": "" }, { "docid": "2545af6c324fa7fb0e766bf6d68dfd90", "text": "Evidence of aberrant hypothalamic-pituitary-adrenocortical (HPA) activity in many psychiatric disorders, although not universal, has sparked long-standing interest in HPA hormones as biomarkers of disease or treatment response. HPA activity may be chronically elevated in melancholic depression, panic disorder, obsessive-compulsive disorder, and schizophrenia. The HPA axis may be more reactive to stress in social anxiety disorder and autism spectrum disorders. In contrast, HPA activity is more likely to be low in PTSD and atypical depression. Antidepressants are widely considered to inhibit HPA activity, although inhibition is not unanimously reported in the literature. There is evidence, also uneven, that the mood stabilizers lithium and carbamazepine have the potential to augment HPA measures, while benzodiazepines, atypical antipsychotics, and to some extent, typical antipsychotics have the potential to inhibit HPA activity. Currently, the most reliable use of HPA measures in most disorders is to predict the likelihood of relapse, although changes in HPA activity have also been proposed to play a role in the clinical benefits of psychiatric treatments. 
Greater attention to patient heterogeneity and more consistent approaches to assessing treatment effects on HPA function may solidify the value of HPA measures in predicting treatment response or developing novel strategies to manage psychiatric disease.", "title": "" }, { "docid": "37a8ec11d92dd8a83d757fa27b8f4118", "text": "Weed control is necessary in rice cultivation, but the excessive use of herbicide treatments has led to serious agronomic and environmental problems. Suitable site-specific weed management (SSWM) is a solution to address this problem while maintaining the rice production quality and quantity. In the context of SSWM, an accurate weed distribution map is needed to provide decision support information for herbicide treatment. UAV remote sensing offers an efficient and effective platform to monitor weeds thanks to its high spatial resolution. In this work, UAV imagery was captured in a rice field located in South China. A semantic labeling approach was adopted to generate the weed distribution maps of the UAV imagery. An ImageNet pre-trained CNN with residual framework was adapted in a fully convolutional form, and transferred to our dataset by fine-tuning. Atrous convolution was applied to extend the field of view of convolutional filters; the performance of multi-scale processing was evaluated; and a fully connected conditional random field (CRF) was applied after the CNN to further refine the spatial details. Finally, our approach was compared with the pixel-based-SVM and the classical FCN-8s. Experimental results demonstrated that our approach achieved the best performance in terms of accuracy. Especially for the detection of small weed patches in the imagery, our approach significantly outperformed other methods. The mean intersection over union (mean IU), overall accuracy, and Kappa coefficient of our method were 0.7751, 0.9445, and 0.9128, respectively. The experiments showed that our approach has high potential in accurate weed mapping of UAV imagery.", "title": "" }, { "docid": "85736b2fd608e3d109ce0f3c46dda9ac", "text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. 
Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.", "title": "" }, { "docid": "80fe141d88740955f189e8e2bf4c2d89", "text": "Predictions concerning development, interrelations, and possible independence of working memory, inhibition, and cognitive flexibility were tested in 325 participants (roughly 30 per age from 4 to 13 years and young adults; 50% female). All were tested on the same computerized battery, designed to manipulate memory and inhibition independently and together, in steady state (single-task blocks) and during task-switching, and to be appropriate over the lifespan and for neuroimaging (fMRI). This is one of the first studies, in children or adults, to explore: (a) how memory requirements interact with spatial compatibility and (b) spatial incompatibility effects both with stimulus-specific rules (Simon task) and with higher-level, conceptual rules. Even the youngest children could hold information in mind, inhibit a dominant response, and combine those as long as the inhibition required was steady-state and the rules remained constant. Cognitive flexibility (switching between rules), even with memory demands minimized, showed a longer developmental progression, with 13-year-olds still not at adult levels. Effects elicited only in Mixed blocks with adults were found in young children even in single-task blocks; while young children could exercise inhibition in steady state it exacted a cost not seen in adults, who (unlike young children) seemed to re-set their default response when inhibition of the same tendency was required throughout a block. The costs associated with manipulations of inhibition were greater in young children while the costs associated with increasing memory demands were greater in adults. Effects seen only in RT in adults were seen primarily in accuracy in young children. Adults slowed down on difficult trials to preserve accuracy; but the youngest children were impulsive; their RT remained more constant but at an accuracy cost on difficult trials. Contrary to our predictions of independence between memory and inhibition, when matched for difficulty RT correlations between these were as high as 0.8, although accuracy correlations were less than half that. Spatial incompatibility effects and global and local switch costs were evident in children and adults, differing only in size. Other effects (e.g., asymmetric switch costs and the interaction of switching rules and switching response-sites) differed fundamentally over age.", "title": "" }, { "docid": "0a0cc3c3d3cd7e7c3e8b409554daa5a3", "text": "Purpose: We investigate the extent of voluntary disclosures in UK higher education institutions’ (HEIs) annual reports and examine whether internal governance structures influence disclosure in the period following major reform and funding constraints. Design/methodology/approach: We adopt a modified version of Coy and Dixon’s (2004) public accountability index, referred to in this paper as a public accountability and transparency index (PATI), to measure the extent of voluntary disclosures in 130 UK HEIs’ annual reports. 
Informed by a multitheoretical framework drawn from public accountability, legitimacy, resource dependence and stakeholder perspectives, we propose that the characteristics of governing and executive structures in UK universities influence the extent of their voluntary disclosures. Findings: We find a large degree of variability in the level of voluntary disclosures by universities and an overall relatively low level of PATI (44%), particularly with regards to the disclosure of teaching/research outcomes. We also find that audit committee quality, governing board diversity, governor independence, and the presence of a governance committee are associated with the level of disclosure. Finally, we find that the interaction between executive team characteristics and governance variables enhances the level of voluntary disclosures, thereby providing support for the continued relevance of a ‘shared’ leadership in the HEIs’ sector towards enhancing accountability and transparency in HEIs. Research limitations/implications: In spite of significant funding cuts, regulatory reforms and competitive challenges, the level of voluntary disclosure by UK HEIs remains low. Whilst the role of selected governance mechanisms and ‘shared leadership’ in improving disclosure, is asserted, the varying level and selective basis of the disclosures across the surveyed HEIs suggest that the public accountability motive is weaker relative to the other motives underpinned by stakeholder, legitimacy and resource dependence perspectives. Originality/value: This is the first study which explores the association between HEI governance structures, managerial characteristics and the level of disclosure in UK HEIs.", "title": "" }, { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "fe23c80ef28f59066b6574e9c0f8578b", "text": "Received: 1 September 2008 Revised: 30 May 2009 2nd Revision: 10 October 2009 3rd Revision: 17 December 2009 4th Revision: 28 September 2010 Accepted: 1 November 2010 Abstract This paper applies the technology acceptance model to explore the digital divide and transformational government (t-government) in the United States. Successful t-government is predicated on citizen adoption and usage of e-government services. The contribution of this research is to enhance our understanding of the factors associated with the usage of e-government services among members of a community on the unfortunate side of the divide. A questionnaire was administered to members, of a techno-disadvantaged public housing community and neighboring households, who partook in training or used the community computer lab. The results indicate that perceived access barriers and perceived ease of use (PEOU) are significantly associated with usage, while perceived usefulness (PU) is not. Among the demographic characteristics, educational level, employment status, and household income all have a significant impact on access barriers and employment is significantly associated with PEOU. Finally, PEOU is significantly related to PU. 
Overall, the results emphasize that t-government cannot cross the digital divide without accompanying employment programs and programs that enhance citizens’ ease in using such services. European Journal of Information Systems (2011) 20, 308–328. doi:10.1057/ejis.2010.64; published online 28 December 2010", "title": "" }, { "docid": "e9676faf7e8d03c64fdcf6aa5e09b008", "text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.", "title": "" }, { "docid": "d1c4e0da79ceb8893f63aa8ea7c8041c", "text": "This paper describes the GOLD (Generic Obstacle and Lane Detection) system, a stereo vision-based hardware and software architecture developed to increment road safety of moving vehicles: it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings). It has been implemented on the PAPRICA system and works at a rate of 10 Hz.", "title": "" }, { "docid": "7a1f409eea5e0ff89b51fe0a26d6db8d", "text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.", "title": "" }, { "docid": "3e0a731c76324ad0cea438a1d9907b68", "text": "Due in large measure to the prodigious research efforts of Rhoades and his colleagues at the George E. Brown, Jr., Salinity Laboratory over the past two decades, soil electrical conductivity (EC), measured using electrical resistivity and electromagnetic induction (EM), is among the most useful and easily obtained spatial properties of soil that influences crop productivity. As a result, soil EC has become one of the most frequently used measurements to characterize field variability for application to precision agriculture. The value of spatial measurements of soil EC to precision agriculture is widely acknowledged, but soil EC is still often misunderstood and misinterpreted.
To help clarify misconceptions, a general overview of the application of soil EC to precision agriculture is presented. The following areas are discussed with particular emphasis on spatial EC measurements: a brief history of the measurement of soil salinity with EC, the basic theories and principles of the soil EC measurement and what it actually measures, an overview of the measurement of soil salinity with various EC measurement techniques and equipment (specifically, electrical resistivity with the Wenner array and EM), examples of spatial EC surveys and their interpretation, applications and value of spatial measurements of soil EC to precision agriculture, and current and future developments. Precision agriculture is an outgrowth of technological developments, such as the soil EC measurement, which facilitate a spatial understanding of soil–water–plant relationships. The future of precision agriculture rests on the reliability, reproducibility, and understanding of these technologies. The predominant mechanism causing the salt accumulation in irrigated agricultural soils is evapotranspiration. The salt contained in the irrigation water is left behind in the soil as the pure water passes back to the atmosphere through the processes of evaporation and plant transpiration. The effects of salinity are manifested in loss of stand, reduced rates of plant growth, reduced yields, and in severe cases, total crop failure (Rhoades and Loveday, 1990). Salinity limits water uptake by plants by reducing the osmotic potential and thus the total soil water potential. Salinity may also cause specific ion toxicity or upset the nutritional balance. In addition, the salt composition of the soil water influences the composition of cations on the exchange complex of soil particles, which influences soil permeability and tilth, depending on salinity level and exchangeable cation composition. Aside from decreasing crop yield and impacting soil hydraulics, salinity can detrimentally impact ground water, and in areas where tile drainage occurs, drainage water can become a disposal problem as demonstrated in the southern San Joaquin Valley of central California. From a global perspective, irrigated agriculture makes an essential contribution to the food needs of the world. While only 15% of the world's farmland is irrigated, roughly 35 to 40% of the total supply of food and fiber comes from irrigated agriculture (Rhoades and Loveday, 1990). However, vast areas of irrigated land are threatened by salinization. Although accurate worldwide data are not available, it is estimated that roughly half of all existing irrigation systems (totaling about 250 million ha) are affected by salinity and waterlogging (Rhoades and Loveday, 1990). Salinity within irrigated soils clearly limits productivity in vast areas of the USA and other parts of the world. It is generally accepted that the extent of salt-affected soil is increasing. In spite of the fact that salinity buildup on irrigated lands is responsible for the declining resource base for agriculture, we do not know the exact extent to which soils in our country are salinized, the degree to which productivity is being reduced by salinity, the increasing or decreasing trend in soil salinity development, and the location of contributory sources of salt loading to ground and drainage waters. Suitable soil inventories do not exist and until recently, neither did practical techniques to monitor salinity or assess the
Abbreviations: EC, electrical conductivity; ECa, apparent soil electrical conductivity; ECe, electrical conductivity of the saturated soil paste extract; ECw, electrical conductivity of soil water; EM, electromagnetic induction; EMavg, the geometric mean of the vertical and horizontal electromagnetic induction readings; EMh, electromagnetic induction measurement in the horizontal coil-mode configuration; EMv, electromagnetic induction measurement in the vertical coil-mode configuration; GIS, geographical information system; GPS, global positioning systems; NPS, nonpoint source; SP, saturation percentage; TDR, time domain reflectometry; w, total volumetric water content. USDA-ARS, George E. Brown, Jr., Salinity Lab., 450 West Big Springs Rd., Riverside, CA 92507-4617. Received 23 Apr. 2001. *Corresponding author ([email protected]). Published in Agron. J. 95:455–471 (2003).", "title": "" }, { "docid": "8c301956112a9bfb087ae9921d80134a", "text": "This paper presents an operation analysis of a high frequency three-level (TL) PWM inverter applied for an induction heating applications. The feature of TL inverter is to achieve zero-voltage switching (ZVS) at above the resonant frequency. The circuit has been modified from the full-bridge inverter to reach high-voltage with low-harmonic output. The device voltage stresses are controlled in a half of the DC input voltage. The prototype operated between 70 and 78 kHz at the DC voltage rating of 580 V can supply the output power rating up to 3000 W. The iron has been heated and hardened at the temperature up to 800degC. In addition, the experiments have been successfully tested and compared with the simulations", "title": "" } ]
scidocsrr
a746849703daae985e9d1c5a62d6b9d3
t-FFD: free-form deformation by using triangular mesh
[ { "docid": "7d741e9073218fa073249e512161748d", "text": "Free-form deformation (FFD) is a powerful modeling tool, but controlling the shape of an object under complex deformations is often difficult. The interface to FFD in most conventional systems simply represents the underlying mathematics directly; users describe deformations by manipulating control points. The difficulty in controlling shape precisely is largely due to the control points being extraneous to the object; the deformed object does not follow the control points exactly. In addition, the number of degrees of freedom presented to the user can be overwhelming. We present a method that allows a user to control a free-form deformation of an object by manipulating the object directly, leading to better control of the deformation and a more intuitive interface. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling Curve, Surface, Solid, and Object Representations; I.3.6 [Computer Graphics]: Methodology and Techniques Interaction Techniques. Additional", "title": "" } ]
[ { "docid": "b5c7b9f1f57d3d79d3fc8a97eef16331", "text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.", "title": "" }, { "docid": "2ce21d12502577882ced4813603e9a72", "text": "Positive psychology is the scientific study of positive experiences and positive individual traits, and the institutions that facilitate their development. A field concerned with well-being and optimal functioning, positive psychology aims to broaden the focus of clinical psychology beyond suffering and its direct alleviation. Our proposed conceptual framework parses happiness into three domains: pleasure, engagement, and meaning. For each of these constructs, there are now valid and practical assessment tools appropriate for the clinical setting. Additionally, mounting evidence demonstrates the efficacy and effectiveness of positive interventions aimed at cultivating pleasure, engagement, and meaning. We contend that positive interventions are justifiable in their own right. Positive interventions may also usefully supplement direct attempts to prevent and treat psychopathology and, indeed, may covertly be a central component of good psychotherapy as it is done now.", "title": "" }, { "docid": "b7aea71af6c926344286fbfa214c4718", "text": "Semantic segmentation is a task that covers most of the perception needs of intelligent vehicles in an unified way. ConvNets excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at the pixel level. However, current approaches normally involve complex architectures that are expensive in terms of computational resources and are not feasible for ITS applications. In this paper, we propose a deep architecture that is able to run in real-time while providing accurate semantic segmentation. The core of our ConvNet is a novel layer that uses residual connections and factorized convolutions in order to remain highly efficient while still retaining remarkable performance. Our network is able to run at 83 FPS in a single Titan X, and at more than 7 FPS in a Jetson TX1 (embedded GPU). A comprehensive set of experiments demonstrates that our system, trained from scratch on the challenging Cityscapes dataset, achieves a classification performance that is among the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. This makes our model an ideal approach for scene understanding in intelligent vehicles applications.", "title": "" }, { "docid": "ac5c015aa485084431b8dba640f294b5", "text": "In human sentence processing, cognitive load can be defined many ways. 
This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word wi given its prefix w0...i−1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke’s probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.", "title": "" }, { "docid": "6bafdd357ad44debeda78d911a69da90", "text": "We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.", "title": "" }, { "docid": "69ad6c10f8a7ae4629ff2aee38da0ddb", "text": "A new hybrid security algorithm is presented for RSA cryptosystem named as Hybrid RSA. The system works on the concept of using two different keys- a private and a public for decryption and encryption processes. The value of public key (P) and private key (Q) depends on value of M, where M is the product of four prime numbers which increases the factorizing of variable M. moreover, the computation of P and Q involves computation of some more factors which makes it complex. This states that the variable x or M is transferred during encryption and decryption process, where x represents the multiplication of two prime numbers A and B. thus, it provides more secure path for encryption and decryption process. The proposed system is compared with the RSA and enhanced RSA (ERSA) algorithms to measure the key generation time, encryption and decryption time which is proved to be more efficient than RSA and ERSA.", "title": "" }, { "docid": "4789f548800a38c11f0fa2f91efc95c9", "text": "Most of the Low Dropout Regulators (LDRs) have limited operation range of load current due to their stability problem. This paper proposes a new frequency compensation scheme for LDR to optimize the regulator performance over a wide load current range. By introducing a tracking zero to cancel out the regulator output pole, the frequency response of the feedback loop becomes load current independent. The open-loop DC gain is boosted up by a low frequency dominant pole, which increases the regulator accuracy. To demonstrate the feasibility of the proposed scheme, a LDR utilizing the new frequency compensation scheme is designed and fabricated using TSMC 0.3511~1 digital CMOS process. Simulation results show that with output current from 0 pA to 100 mA the bandwidth variation is only 2.3 times and the minimum DC gain is 72 dB. 
Measurement of the dynamic response matches well with simulation.", "title": "" }, { "docid": "d0811a8c8b760b8dadfa9a51df568bd9", "text": "A strain of the microalga Chlorella pyrenoidosa F-9 in our laboratory showed special characteristics when transferred from autotrophic to heterotrophic culture. In order to elucidate the possible metabolic mechanism, the gene expression profiles of the autonomous organelles in the green alga C. pyrenoidosa under autotrophic and heterotrophic cultivation were compared by suppression subtractive hybridization technology. Two subtracted libraries of autotrophic and heterotrophic C. pyrenoidosa F-9 were constructed, and 160 clones from the heterotrophic library were randomly selected for DNA sequencing. Dot blot hybridization showed that the ratio of positivity was 70.31% from the 768 clones. Five chloroplast genes (ftsH, psbB, rbcL, atpB, and infA) and two mitochondrial genes (cox2 and nad6) were selected to verify their expression levels by real-time quantitative polymerase chain reaction. Results showed that the seven genes were abundantly expressed in the heterotrophic culture. Among the seven genes, the least increment of gene expression was ftsH, which was expressed 1.31-1.85-fold higher under heterotrophy culture than under autotrophy culture, and the highest increment was psbB, which increased 28.07-39.36 times compared with that under autotrophy conditions. The expression levels of the other five genes were about 10 times higher in heterotrophic algae than in autotrophic algae. In inclusion, the chloroplast and mitochondrial genes in C. pyrenoidosa F-9 might be actively involved in heterotrophic metabolism.", "title": "" }, { "docid": "f7c4b71b970b7527cd2650ce1e05ab1b", "text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. 
Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.", "title": "" }, { "docid": "274485dd39c0727c99fcc0a07d434b25", "text": "Fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. If we look at the current scenario, we find that we have focused more on child mortality rate than on fetus mortality. Even it is a same situation in developed country. Our aim is to provide technological solution to help decrease the fetal mortality rate. Also if we consider pregnant women, they have to come to hospital 2-3 times a week for their regular checkups. It becomes a problem for working women and women having diabetes or other disease. For these reasons it would be very helpful if they can do this by themselves at home. This will reduce the frequency of their visit to the hospital at same time cause no compromise in the wellbeing of both the mother and the child. The end to end system consists of wearable sensors, built into a fabric belt, that collects and sends vital signs of patients via bluetooth to smart mobile phones for further processing and made available to required personnel allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.", "title": "" }, { "docid": "b27ab468a885a3d52ec2081be06db2ef", "text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.", "title": "" }, { "docid": "1709f180c56cab295bf9fd9c3e35d4ef", "text": "Harmonic radar systems provide an effective modality for tracking insect behavior. This letter presents a harmonic radar system proposed to track the migration of the Emerald Ash Borer (EAB). The system offers a unique combination of portability, low power and small tag design. It is comprised of a compact radar unit and a passive RF tag for mounting on the insect. 
The radar unit transmits a 5.96 GHz signal and detects at the 11.812 GHz band. A prototype of the radar unit was built and tested, and a new small tag was designed for the application. The new tag offers improved harmonic conversion efficiency and much smaller size as compared to previous harmonic radar systems for tracking insects. Unlike RFID detectors whose sensitivity allows detection up to a few meters, the developed radar can detect a tagged insect up to 58 m (190 ft).", "title": "" }, { "docid": "a600a19440b8e6799e0e603cf56ff141", "text": "In this work, we address the problem of distributed expert finding using chains of social referrals and profile matching with only local information in online social networks. By assuming that users are selfish, rational, and have privately known cost of participating in the referrals, we design a novel truthful efficient mechanism in which an expert-finding query will be relayed by intermediate users. When receiving a referral request, a participant will locally choose among her neighbors some user to relay the request. In our mechanism, several closely coupled methods are carefully designed to improve the performance of distributed search, including, profile matching, social acquaintance prediction, score function for locally choosing relay neighbors, and budget estimation. We conduct extensive experiments on several data sets of online social networks. The extensive study of our mechanism shows that the success rate of our mechanism is about 90 percent in finding closely matched experts using only local search and limited budget, which significantly improves the previously best rate 20 percent. The overall cost of finding an expert by our truthful mechanism is about 20 percent of the untruthful methods, e.g., the method that always selects high-degree neighbors. The median length of social referral chains is 6 using our localized search decision, which surprisingly matches the well-known small-world phenomenon of global social structures.", "title": "" }, { "docid": "fd91f09861da433d27d4db3f7d2a38a6", "text": "Herbert Simon’s research endeavor aimed to understand the processes that participate in human decision making. However, despite his effort to investigate this question, his work did not have the impact in the “decision making” community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon’s approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman’s biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We make a proposal of how to integrate Simon’s approach with the main current approaches to decision making. 
We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment.", "title": "" }, { "docid": "39eac1617b9b68f68022577951460fb5", "text": "Web services support software architectures that can evolve dynamically. In particular, here we focus on architectures where services are composed (orchestrated) through a workflow described in the BPEL language. We assume that the resulting composite service refers to external services through assertions that specify their expected functional and non-functional properties. Based on these assertions, the composite service may be verified at design time by checking that it ensures certain relevant properties. Because of the dynamic nature of Web services and the multiple stakeholders involved in their provision, however, the external services may evolve dynamically, and even unexpectedly. They may become inconsistent with respect to the assertions against which the workflow was verified during development. As a consequence, validation of the composition must extend to run time. We introduce an assertion language, called ALBERT, which can be used to specify both functional and non-functional properties. We also describe an environment which supports design-time verification of ALBERT assertions for BPEL workflows via model checking. At run time, the assertions can be turned into checks that a software monitor performs on the composite system to verify that it continues to guarantee its required properties. A TeleAssistance application is provided as a running example to illustrate our validation framework.", "title": "" }, { "docid": "2ecfc909301dcc6241bec2472b4d4135", "text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. 
Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.", "title": "" }, { "docid": "301ce75026839f85bc15100a9a7cc5ca", "text": "This paper presents a novel visual-inertial integration system for human navigation in free-living environments, where the measurements from wearable inertial and monocular visual sensors are integrated. The preestimated orientation, obtained from magnet, angular rate, and gravity sensors, is used to estimate the translation based on the data from the visual and inertial sensors. This has a significant effect on the performance of the fusion sensing strategy and makes the fusion procedure much easier, because the gravitational acceleration can be correctly removed from the accelerometer measurements before the fusion procedure, where a linear Kalman filter is selected as the fusion estimator. Furthermore, the use of preestimated orientation can help to eliminate erroneous point matches based on the properties of the pure camera translation and thus the computational requirements can be significantly reduced compared with the RANdom SAmple Consensus algorithm. In addition, an adaptive-frame rate single camera is selected to not only avoid motion blur based on the angular velocity and acceleration after compensation, but also to make an effect called visual zero-velocity update for the static motion. Thus, it can recover a more accurate baseline and meanwhile reduce the computational requirements. In particular, an absolute scale factor, which is usually lost in monocular camera tracking, can be obtained by introducing it into the estimator. Simulation and experimental results are presented for different environments with different types of movement and the results from a Pioneer robot are used to demonstrate the accuracy of the proposed method.", "title": "" }, { "docid": "1968573cf98307276bf0f10037aa3623", "text": "In many imaging applications, the continuous phase information of the measured signal is wrapped to a single period of 2π, resulting in phase ambiguity. In this paper we consider the two-dimensional phase unwrapping problem and propose a Maximum a Posteriori (MAP) framework for estimating the true phase values based on the wrapped phase data. In particular, assuming a joint Gaussian prior on the original phase image, we show that the MAP formulation leads to a binary quadratic minimization problem. The latter can be efficiently solved by semidefinite relaxation (SDR). We compare the performances of our proposed method with the existing L1/L2-norm minimization approaches. The numerical results demonstrate that the SDR approach significantly outperforms the existing phase unwrapping methods.", "title": "" }, { "docid": "b85e9ef3652a99e55414d95bfed9cc0d", "text": "Regulatory T cells (Tregs) prevail as a specialized cell lineage that has a central role in the dominant control of immunological tolerance and maintenance of immune homeostasis. Thymus-derived Tregs (tTregs) and their peripherally induced counterparts (pTregs) are imprinted with unique Forkhead box protein 3 (Foxp3)-dependent and independent transcriptional and epigenetic characteristics that bestows on them the ability to suppress disparate immunological and non-immunological challenges. 
Thus, unidirectional commitment and the predominant stability of this regulatory lineage is essential for their unwavering and robust suppressor function and has clinical implications for the use of Tregs as cellular therapy for various immune pathologies. However, recent studies have revealed considerable heterogeneity or plasticity in the Treg lineage, acquisition of alternative effector or hybrid fates, and promotion rather than suppression of inflammation in extreme contexts. In addition, the absolute stability of Tregs under all circumstances has been questioned. Since these observations challenge the safety and efficacy of human Treg therapy, the issue of Treg stability versus plasticity continues to be enthusiastically debated. In this review, we assess our current understanding of the defining features of Foxp3(+) Tregs, the intrinsic and extrinsic cues that guide development and commitment to the Treg lineage, and the phenotypic and functional heterogeneity that shapes the plasticity and stability of this critical regulatory population in inflammatory contexts.", "title": "" }, { "docid": "d7ab8b7604d90e1a3bb6b4c1e54833a0", "text": "Invisibility devices have captured the human imagination for many years. Recent theories have proposed schemes for cloaking devices using transformation optics and conformal mapping. Metamaterials, with spatially tailored properties, have provided the necessary medium by enabling precise control over the flow of electromagnetic waves. Using metamaterials, the first microwave cloaking has been achieved but the realization of cloaking at optical frequencies, a key step towards achieving actual invisibility, has remained elusive. Here, we report the first experimental demonstration of optical cloaking. The optical 'carpet' cloak is designed using quasi-conformal mapping to conceal an object that is placed under a curved reflecting surface by imitating the reflection of a flat surface. The cloak consists only of isotropic dielectric materials, which enables broadband and low-loss invisibility at a wavelength range of 1,400-1,800 nm.", "title": "" } ]
scidocsrr
d229c9839339d596488653be4137fbf6
Sampling and Recovery of Pulse Streams
[ { "docid": "59786d8ea951639b8b9a4e60c9d43a06", "text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.", "title": "" } ]
[ { "docid": "bba4256906b1aee1c76d817b9926226c", "text": "In this paper, we present an analytical framework to evaluate the latency performance of connection-based spectrum handoffs in cognitive radio (CR) networks. During the transmission period of a secondary connection, multiple interruptions from the primary users result in multiple spectrum handoffs and the need of predetermining a set of target channels for spectrum handoffs. To quantify the effects of channel obsolete issue on the target channel predetermination, we should consider the three key design features: 1) general service time distribution of the primary and secondary connections; 2) different operating channels in multiple handoffs; and 3) queuing delay due to channel contention from multiple secondary connections. To this end, we propose the preemptive resume priority (PRP) M/G/1 queuing network model to characterize the spectrum usage behaviors with all the three design features. This model aims to analyze the extended data delivery time of the secondary connections with proactively designed target channel sequences under various traffic arrival rates and service time distributions. These analytical results are applied to evaluate the latency performance of the connection-based spectrum handoff based on the target channel sequences mentioned in the IEEE 802.22 wireless regional area networks standard. Then, to reduce the extended data delivery time, a traffic-adaptive spectrum handoff is proposed, which changes the target channel sequence of spectrum handoffs based on traffic conditions. Compared to the existing target channel selection methods, this traffic-adaptive target channel selection approach can reduce the extended data transmission time by 35 percent, especially for the heavy traffic loads of the primary users.", "title": "" }, { "docid": "bb7ac8c753d09383ecbf1c8cd7572d05", "text": "Skills learned through (deep) reinforcement learning often generalizes poorly across domains and re-training is necessary when presented with a new task. We present a framework that combines techniques in formal methods with reinforcement learning (RL). The methods we provide allows for convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards, and construct new skills from existing ones with little to no additional exploration. We evaluate the proposed methods in a simple grid world simulation as well as a more complicated kitchen environment in AI2Thor (Kolve et al. [2017]).", "title": "" }, { "docid": "406fab96a8fd49f4d898a9735ee1512f", "text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. 
Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.", "title": "" }, { "docid": "517abd2ff0ed007c5011059d055e19e1", "text": "Long Short-Term Memory (LSTM) is a particular type of recurrent neural network (RNN) that can model long term temporal dynamics. Recently it has been shown that LSTM-RNNs can achieve higher recognition accuracy than deep feed-forword neural networks (DNNs) in acoustic modelling. However, speaker adaption for LSTM-RNN based acoustic models has not been well investigated. In this paper, we study the LSTM-RNN speaker-aware training that incorporates the speaker information during model training to normalise the speaker variability. We first present several speaker-aware training architectures, and then empirically evaluate three types of speaker representation: I-vectors, bottleneck speaker vectors and speaking rate. Furthermore, to factorize the variability in the acoustic signals caused by speakers and phonemes respectively, we investigate the speaker-aware and phone-aware joint training under the framework of multi-task learning. In AMI meeting speech transcription task, speaker-aware training of LSTM-RNNs reduces word error rates by 6.5% relative to a very strong LSTM-RNN baseline, which uses FMLLR features.", "title": "" }, { "docid": "7681a78f2d240afc6b2e48affa0612c1", "text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.", "title": "" }, { "docid": "a9d5220445f3cac82fd38b16c26c2bbc", "text": "Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a \"four-headed beast\"--it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise up and meet the computational challenges that genomics poses for the near future. 
Now is the time for concerted, community-wide planning for the \"genomical\" challenges of the next decade.", "title": "" }, { "docid": "c935ba16ca618659c8fcaa432425db22", "text": "Dynamic Voltage/Frequency Scaling (DVFS) is a useful tool for improving system energy efficiency, especially in multi-core chips where energy is more of a limiting factor. Per-core DVFS, where cores can independently scale their voltages and frequencies, is particularly effective. We present a DVFS policy using machine learning, which learns the best frequency choices for a machine as a decision tree.\n Machine learning is used to predict the frequency which will minimize the expected energy per user-instruction (epui) or energy per (user-instruction)2 (epui2). While each core independently sets its frequency and voltage, a core is sensitive to other cores' frequency settings. Also, we examine the viability of using only partial training to train our policy, rather than full profiling for each program.\n We evaluate our policy on a 16-core machine running multiprogrammed, multithreaded benchmarks from the PARSEC benchmark suite against a baseline fixed frequency as well as a recently-proposed greedy policy. For 1ms DVFS intervals, our technique improves system epui2 by 14.4% over the baseline no-DVFS policy and 11.3% on average over the greedy policy.", "title": "" }, { "docid": "724388aac829af9671a90793b1b31197", "text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.", "title": "" }, { "docid": "e733b08455a5ca2a5afa596268789993", "text": "In this paper a new PWM inverter topology suitable for medium voltage (2300/4160 V) adjustable speed drive (ASD) systems is proposed. The modular inverter topology is derived by combining three standard 3-phase inverter modules and a 0.33 pu output transformer. The output voltage is high quality, multistep PWM with low dv/dt. Further, the approach also guarantees balanced operation and 100% utilization of each 3-phase inverter module over the entire speed range. These features enable the proposed topology to be suitable for powering constant torque as well as variable torque type loads. Clean power utility interface of the proposed inverter system can be achieved via an 18-pulse input transformer. Analysis, simulation, and experimental results are shown to validate the concepts.", "title": "" }, { "docid": "866c1e87076da5a94b9adeacb9091ea3", "text": "Training a support vector machine (SVM) is usually done by ma pping the underlying optimization problem into a quadratic progr amming (QP) problem. Unfortunately, high quality QP solvers are not rea dily available, which makes research into the area of SVMs difficult for he those without a QP solver. Recently, the Sequential Minimal Optim ization algorithm (SMO) was introduced [1, 2]. SMO reduces SVM trainin g down to a series of smaller QP subproblems that have an analytical solution and, therefore, does not require a general QP solver. SMO has been shown to be very efficient for classification problems using l ear SVMs and/or sparse data sets. 
This work shows how SMO can be genera lized to handle regression problems.", "title": "" }, { "docid": "eb64f11d3795bd2e97eb6d440169a3f0", "text": "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others' positive experiences constitutes a positive experience for people.", "title": "" }, { "docid": "1e2767ace7b4d9f8ca2a5eee21684240", "text": "Modern data analytics applications typically process massive amounts of data on clusters of tens, hundreds, or thousands of machines to support near-real-time decisions.The quantity of data and limitations of disk and memory bandwidth often make it infeasible to deliver answers at interactive speeds. However, it has been widely observed that many applications can tolerate some degree of inaccuracy. This is especially true for exploratory queries on data, where users are satisfied with \"close-enough\" answers if they can come quickly. A popular technique for speeding up queries at the cost of accuracy is to execute each query on a sample of data, rather than the whole dataset. To ensure that the returned result is not too inaccurate, past work on approximate query processing has used statistical techniques to estimate \"error bars\" on returned results. However, existing work in the sampling-based approximate query processing (S-AQP) community has not validated whether these techniques actually generate accurate error bars for real query workloads. In fact, we find that error bar estimation often fails on real world production workloads. Fortunately, it is possible to quickly and accurately diagnose the failure of error estimation for a query. In this paper, we show that it is possible to implement a query approximation pipeline that produces approximate answers and reliable error bars at interactive speeds.", "title": "" }, { "docid": "1b4eb25d20cd2ca431c2b73588021086", "text": "Machine rule induction was examined on a difficult categorization problem by applying a Holland-style classifier system to a complex letter recognition task. A set of 20,000 unique letter images was generated by randomly distorting pixel images of the 26 uppercase letters from 20 different commercial fonts. The parent fonts represented a full range of character types including script, italic, serif, and Gothic. 
The features of each of the 20,000 characters were summarized in terms of 16 primitive numerical attributes. Our research focused on machine induction techniques for generating IF-THEN classifiers in which the IF part was a list of values for each of the 16 attributes and the THEN part was the correct category, i.e., one of the 26 letters of the alphabet. We examined the effects of different procedures for encoding attributes, deriving new rules, and apportioning credit among the rules. Binary and Gray-code attribute encodings that required exact matches for rule activation were compared with integer representations that employed fuzzy matching for rule activation. Random and genetic methods for rule creation were compared with instance-based generalization. The strength/specificity method for credit apportionment was compared with a procedure we call “accuracy/utility.”", "title": "" }, { "docid": "3e280f302493b9ed1caaea6937629d09", "text": "The increasing popularity of the framing concept in media analysis goes hand in hand with significant inconsistency in its application. This paper outlines an integrated process model of framing that includes production, content, and media use perspectives. A typology of generic and issue-specific frames is proposed based on previous studies of media frames. An example is given of how generic news frames may be identified and used to understand cross-national differences in news coverage. The paper concludes with an identification of contentious issues in current framing research.", "title": "" }, { "docid": "69179341377477af8ebe9013c664828c", "text": "1. Intensive agricultural practices drive biodiversity loss with potentially drastic consequences for ecosystem services. To advance conservation and production goals, agricultural practices should be compatible with biodiversity. Traditional or less intensive systems (i.e. with fewer agrochemicals, less mechanisation, more crop species) such as shaded coffee and cacao agroforests are highlighted for their ability to provide a refuge for biodiversity and may also enhance certain ecosystem functions (i.e. predation). 2. Ants are an important predator group in tropical agroforestry systems. Generally, ant biodiversity declines with coffee and cacao intensification yet the literature lacks a summary of the known mechanisms for ant declines and how this diversity loss may affect the role of ants as predators. 3. Here, how shaded coffee and cacao agroforestry systems protect biodiversity and may preserve related ecosystem functions is discussed in the context of ants as predators. Specifically, the relationships between biodiversity and predation, links between agriculture and conservation, patterns and mechanisms for ant diversity loss with agricultural intensification, importance of ants as control agents of pests and fungal diseases, and whether ant diversity may influence the functional role of ants as predators are addressed. Furthermore, because of the importance of homopteran-tending by ants in the ecological and agricultural literature, as well as to the success of ants as predators, the costs and benefits of promoting ants in agroforests are discussed. 4. Especially where the diversity of ants and other predators is high, as in traditional agroforestry systems, both agroecosystem function and conservation goals will be advanced by biodiversity protection.", "title": "" }, { "docid": "9d615d361cb1a357ae1663d1fe581d24", "text": "We report three patients with dissecting cellulitis of the scalp. 
Prolonged treatment with oral isotretinoin was highly effective in all three patients. Furthermore, long-term post-treatment follow-up in two of the patients has shown a sustained therapeutic benefit.", "title": "" }, { "docid": "bb6b34c125b79b515d0cac7299ed6376", "text": "Deep learning has been successful in various domains including image recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present Model R, a neural network model created to provide a deep learning approach to link weight prediction problem. This model extracts knowledge of nodes from known links' weights and uses this knowledge to predict unknown links' weights. We demonstrate the power of Model R through experiments and compare it with stochastic block model and its derivatives. Model R shows that deep learning can be successfully applied to link weight prediction and it outperforms stochastic block model and its derivatives by up to 73% in terms of prediction accuracy. We anticipate this new approach to provide effective solutions to more graph mining tasks.", "title": "" }, { "docid": "52be5bbccc0c4a840585dccc629e2412", "text": "A voltage scaling technique for energy-efficient operation requires an adaptive power-supply regulator to significantly reduce dynamic power consumption in synchronous digital circuits. A digitally controlled power converter that dynamically tracks circuit performance with a ring oscillator and regulates the supply voltage to the minimum required to operate at a desired frequency is presented. This paper investigates the issues involved in designing a fully digital power converter and describes a design fabricated in a MOSIS 0.8m process. A variable-frequency digital controller design takes advantage of the power savings available through adaptive supply-voltage scaling and demonstrates converter efficiency greater than 90% over a dynamic range of regulated voltage levels.", "title": "" }, { "docid": "1718c817d15b9bc1ab99d359ff8d1157", "text": "Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, deep learning approach has been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e. the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, after degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks showed the effectiveness of Match-SRNN, and its ability of visualizing the learned matching structure.", "title": "" } ]
scidocsrr
c1957d49ea08b47f516dcc7f032a3a71
Mining evolutionary multi-branch trees from text streams
[ { "docid": "2ecfc909301dcc6241bec2472b4d4135", "text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.", "title": "" } ]
[ { "docid": "5d318e2df97f539e227f0aef60d0732b", "text": "The concept of intuition has, until recently, received scant scholarly attention within and beyond the psychological sciences, despite its potential to unify a number of lines of inquiry. Presently, the literature on intuition is conceptually underdeveloped and dispersed across a range of domains of application, from education, to management, to health. In this article, we clarify and distinguish intuition from related constructs, such as insight, and review a number of theoretical models that attempt to unify cognition and affect. Intuition's place within a broader conceptual framework that distinguishes between two fundamental types of human information processing is explored. We examine recent evidence from the field of social cognitive neuroscience that identifies the potential neural correlates of these separate systems and conclude by identifying a number of theoretical and methodological challenges associated with the valid and reliable assessment of intuition as a basis for future research in this burgeoning field of inquiry.", "title": "" }, { "docid": "942be0aa4dab5904139919351d6d63d4", "text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.", "title": "" }, { "docid": "7d0ebf939deed43253d5360e325c3e8e", "text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.", "title": "" }, { "docid": "78e3d9bbfc9fdd9c3454c34f09e5abd4", "text": "This paper presents the first ever reported implementation of the Gapped Basic Local Alignment Search Tool (Gapped BLAST) for biological sequence alignment, with the Two-Hit method, on CUDA (compute unified device architecture)-compatible Graphic Processing Units (GPUs). 
The latter have recently emerged as relatively low cost and easy to program high performance platforms for general purpose computing. Our Gapped BLAST implementation on an NVIDIA Geforce 8800 GTX GPU is up to 2.7x quicker than the most optimized CPU-based implementation, namely NCBI BLAST, running on a Pentium4 3.4 GHz desktop computer with 2GB RAM.", "title": "" }, { "docid": "846f8f33181c3143bb8f54ce8eb3e5cc", "text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.", "title": "" }, { "docid": "44f831d346d42fd39bab3f577e6feec4", "text": "We propose a training framework for sequence-to-sequence voice conversion (SVC). A well-known problem regarding a conventional VC framework is that acoustic-feature sequences generated from a converter tend to be over-smoothed, resulting in buzzy-sounding speech. This is because a particular form of similarity metric or distribution for parameter training of the acoustic model is assumed so that the generated feature sequence that averagely fits the training target example is considered optimal. This over-smoothing occurs as long as a manually constructed similarity metric is used. To overcome this limitation, our proposed SVC framework uses a similarity metric implicitly derived from a generative adversarial network, enabling the measurement of the distance in the high-level abstract space. This would enable the model to mitigate the oversmoothing problem caused in the low-level data space. Furthermore, we use convolutional neural networks to model the long-range context-dependencies. This also enables the similarity metric to have a shift-invariant property; thus, making the model robust against misalignment errors involved in the parallel data. We tested our framework on a non-native-to-native VC task. The experimental results revealed that the use of the proposed framework had a certain effect in improving naturalness, clarity, and speaker individuality.", "title": "" }, { "docid": "3c0e132f0738105eb7fff7f73c520ef7", "text": "Fan-out wafer-level-packaging (FO-WLP) technology gets more and more significant attention with its advantages of small form factor, higher I/O density, cost effective and high performance for wide range application. 
However, wafer warpage is still one critical issue which is needed to be addressed for successful subsequent processes for FO-WLP packaging. In this study, methodology to reduce wafer warpage of 12\" wafer at different processes was proposed in terms of geometry design, material selection, and process optimization through finite element analysis (FEA) and experiment. Wafer process dependent modeling results were validated by experimental measurement data. Solutions for reducing wafer warpage were recommended. Key parameters were identified based on FEA modeling results: thickness ratio of die to total mold thickness, molding compound and support wafer materials, dielectric material and RDL design.", "title": "" }, { "docid": "7fc49f042770caf691e8bf074605a7ed", "text": "Human prostate cancer is characterized by multiple gross chromosome alterations involving several chromosome regions. However, the specific genes involved in the development of prostate tumors are still largely unknown. Here we have studied the chromosome composition of the three established prostate cancer cell lines, LNCaP, PC-3, and DU145, by spectral karyotyping (SKY). SKY analysis showed complex karyotypes for all three cell lines, with 87, 58/113, and 62 chromosomes, respectively. All cell lines were shown to carry structural alterations of chromosomes 1, 2, 4, 6, 10, 15, and 16; however, no recurrent breakpoints were detected. Compared to previously published findings on these cell lines using comparative genomic hybridization, SKY revealed several balanced translocations and pinpointed rearrangement breakpoints. The SKY analysis was validated by fluorescence in situ hybridization using chromosome-specific, as well as locus-specific, probes. Identification of chromosome alterations in these cell lines by SKY may prove to be helpful in attempts to clone the genes involved in prostate cancer tumorigenesis.", "title": "" }, { "docid": "1569bcea0c166d9bf2526789514609c5", "text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.", "title": "" }, { "docid": "92b2b7fb95624a187f5304c882d31dca", "text": "Automatically predicting human eye fixations is a useful technique that can facilitate many multimedia applications, e.g., image retrieval, action recognition, and photo retargeting. Conventional approaches are frustrated by two drawbacks. First, psychophysical experiments show that an object-level interpretation of scenes influences eye movements significantly. Most of the existing saliency models rely on object detectors, and therefore, only a few prespecified categories can be discovered. 
Second, the relative displacement of objects influences their saliency remarkably, but current models cannot describe them explicitly. To solve these problems, this paper proposes weakly supervised fixations prediction, which leverages image labels to improve accuracy of human fixations prediction. The proposed model hierarchically discovers objects as well as their spatial configurations. Starting from the raw image pixels, we sample superpixels in an image, thereby seamless object descriptors termed object-level graphlets (oGLs) are generated by random walking on the superpixel mosaic. Then, a manifold embedding algorithm is proposed to encode image labels into oGLs, and the response map of each prespecified object is computed accordingly. On the basis of the object-level response map, we propose spatial-level graphlets (sGLs) to model the relative positions among objects. Afterward, eye tracking data is employed to integrate these sGLs for predicting human eye fixations. Thorough experiment results demonstrate the advantage of the proposed method over the state-of-the-art.", "title": "" }, { "docid": "352c61af854ffc6dab438e7a1be56fcb", "text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.", "title": "" }, { "docid": "63ed24b818f83ab04160b5c690075aac", "text": "In this paper, we discuss the impact of digital control in high-frequency switched-mode power supplies (SMPS), including point-of-load and isolated DC-DC converters, microprocessor power supplies, power-factor-correction rectifiers, electronic ballasts, etc., where switching frequencies are typically in the hundreds of kHz to MHz range, and where high efficiency, static and dynamic regulation, low size and weight, as well as low controller complexity and cost are very important. To meet these application requirements, a digital SMPS controller may include fast, small analog-to-digital converters, hardware-accelerated programmable compensators, programmable digital modulators with very fine time resolution, and a standard microcontroller core to perform programming, monitoring and other system interface tasks. 
Based on recent advances in circuit and control techniques, together with rapid advances in digital VLSI technology, we conclude that high-performance digital controller solutions are both feasible and practical, leading to much enhanced system integration and performance gains. Examples of experimentally demonstrated results are presented, together with pointers to areas of current and future research and development.", "title": "" }, { "docid": "84c37ea2545042a2654b162491846628", "text": "Ever since the agile manifesto was created in 2001, the research community has devoted a great deal of attention to agile software development. This article examines publications and citations to illustrate how the research on agile has progressed in the 10 years following the articulation of the manifesto. nformation systems Xtreme programming, XP", "title": "" }, { "docid": "1203f22bfdfc9ecd211dbd79a2043a6a", "text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. 
For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the orresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.", "title": "" }, { "docid": "3177e9dd683fdc66cbca3bd985f694b1", "text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].", "title": "" }, { "docid": "c450ac5c84d962bb7f2262cf48e1280a", "text": "Animal-assisted therapies have become widespread with programs targeting a variety of pathologies and populations. Despite its popularity, it is unclear if this therapy is useful. The aim of this systematic review is to establish the efficacy of Animal assisted therapies in the management of dementia, depression and other conditions in adult population. A search was conducted in MEDLINE, EMBASE, CINAHL, LILACS, ScienceDirect, and Taylor and Francis, OpenGrey, GreyLiteratureReport, ProQuest, and DIALNET. No language or study type filters were applied. Conditions studied included depression, dementia, multiple sclerosis, PTSD, stroke, spinal cord injury, and schizophrenia. Only articles published after the year 2000 using therapies with significant animal involvement were included. 23 articles and dissertations met inclusion criteria. Overall quality was low. 
The degree of animal interaction significantly influenced outcomes. Results are generally favorable, but more thorough and standardized research should be done to strengthen the existing evidence.", "title": "" }, { "docid": "6a2c7d43cde643f295ace71f5681285f", "text": "Quantum mechanics and information theory are among the most important scientific discoveries of the last century. Although these two areas initially developed separately, it has emerged that they are in fact intimately related. In this review the author shows how quantum information theory extends traditional information theory by exploring the limits imposed by quantum, rather than classical, mechanics on information storage and transmission. The derivation of many key results differentiates this review from the usual presentation in that they are shown to follow logically from one crucial property of relative entropy. Within the review, optimal bounds on the enhanced speed that quantum computers can achieve over their classical counterparts are outlined using information-theoretic arguments. In addition, important implications of quantum information theory for thermodynamics and quantum measurement are intermittently discussed. A number of simple examples and derivations, including quantum superdense coding, quantum teleportation, and Deutsch’s and Grover’s algorithms, are also included.", "title": "" }, { "docid": "95c4a2cfd063abdac35572927c4dcfc1", "text": "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this paper, we propose an efficient overlapping community detection algorithm using a seed expansion approach. The key idea of our algorithm is to find good seeds, and then greedily expand these seeds based on a community metric. Within this seed expansion method, we investigate the problem of how to determine good seed nodes in a graph. In particular, we develop new seeding strategies for a personalized PageRank clustering scheme that optimizes the conductance community score. An important step in our method is the neighborhood inflation step where seeds are modified to represent their entire vertex neighborhood. Experimental results show that our seed expansion algorithm outperforms other state-of-the-art overlapping community detection methods in terms of producing cohesive clusters and identifying ground-truth communities. We also show that our new seeding strategies are better than existing strategies, and are thus effective in finding good overlapping communities in real-world networks.", "title": "" }, { "docid": "646572f76cffd3ba225105d6647a588f", "text": "Context: Cyber-physical systems (CPSs) have emerged to be the next generation of engineered systems driving the so-called fourth industrial revolution. CPSs are becoming more complex, open and more prone to security threats, which urges security to be engineered systematically into CPSs. Model-Based Security Engineering (MBSE) could be a key means to tackle this challenge via security by design, abstraction, and", "title": "" } ]
scidocsrr
2216152ffc364a34083d958d3b7f3fae
A Survey of Metrics for UML Class Diagrams
[ { "docid": "b2e493de6e09766c4ddbac7de071e547", "text": "In this paper we describe and evaluate some recently innovated coupling metrics for object oriented OO design The Coupling Between Objects CBO metric of Chidamber and Kemerer C K are evaluated empirically using ve OO systems and compared with an alternative OO design metric called NAS which measures the Number of Associations between a class and its peers The NAS metric is directly collectible from design documents such as the Object Model of OMT Results from all systems studied indicate a strong relationship between CBO and NAS suggesting that they are not orthogonal We hypothesised that coupling would be related to understandability the number of errors and error density No relationships were found for any of the systems between class understandability and coupling However we did nd partial support for our hypothesis linking increased coupling to increased error density The work described in this paper is part of the Metrics for OO Programming Systems MOOPS project which aims are to evaluate existing OO metrics and to innovate and evaluate new OO analysis and design metrics aimed speci cally at the early stages of development", "title": "" }, { "docid": "6cf18bea11ea8e95f24b7db69d3924e2", "text": "Experimentation in software engineering is necessar y but difficult. One reason is that there are a lar ge number of context variables, and so creating a cohesive under standing of experimental results requires a mechani sm for motivating studies and integrating results. It requ ires a community of researchers that can replicate studies, vary context variables, and build models that represent the common observations about the discipline. This paper discusses the experience of the authors, based upon a c llection of experiments, in terms of a framewo rk f r organizing sets of related studies. With such a fra mework, experiments can be viewed as part of common families of studies, rather than being isolated events. Common families of studies can contribute to important and relevant hypotheses that may not be suggested by individual experiments. A framework also facilitates building knowledge in an incremental manner through the replication of experiments within families of studies. To support the framework, this paper discusses the exp riences of the authors in carrying out empirica l studies, with specific emphasis on persistent problems encountere d in xperimental design, threats to validity, crit eria for evaluation, and execution of experiments in the dom ain of software engineering.", "title": "" }, { "docid": "7cd87a6e9890b55cdac1c6231833d63f", "text": "Although the benefits of Object-Orientation are manifold and it is, for certain, one of the mainstays for software production in the future, it will only achieve widespread practical acceptance when the management aspects of the software development process using this technology are carefully addressed. Here, software metrics play an important role allowing, among other things, better planning, the assessment of improvements, the reduction of unpredictability, early identification of potential problems and productivity evaluation. This paper proposes a set of metrics suitable for evaluating the use of the main abstractions of the Object-Oriented paradigm such as inheritance, encapsulation, information hiding or polymorphism and the consequent emphasis on reuse that, together, are believed to be responsible for the increase in software quality and development productivity. 
Those metrics are aimed at helping to establish comparisons throughout the practitioners’ community and setting design recommendations that may eventually become organization standards. Some desirable properties for such a metrics set are also presented. Future lines of research are envisaged.", "title": "" } ]
[ { "docid": "5528b738695f6ff0ac17f07178a7e602", "text": "Multiple genetic pathways act in response to developmental cues and environmental signals to promote the floral transition, by regulating several floral pathway integrators. These include FLOWERING LOCUS T (FT) and SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1). We show that the flowering repressor SHORT VEGETATIVE PHASE (SVP) is controlled by the autonomous, thermosensory, and gibberellin pathways, and directly represses SOC1 transcription in the shoot apex and leaf. Moreover, FT expression in the leaf is also modulated by SVP. SVP protein associates with the promoter regions of SOC1 and FT, where another potent repressor FLOWERING LOCUS C (FLC) binds. SVP consistently interacts with FLC in vivo during vegetative growth and their function is mutually dependent. Our findings suggest that SVP is another central regulator of the flowering regulatory network, and that the interaction between SVP and FLC mediated by various flowering genetic pathways governs the integration of flowering signals.", "title": "" }, { "docid": "e2c6437d257559211d182b5707aca1a4", "text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.", "title": "" }, { "docid": "0d9a378b4f8b6650fe5e31e7a9327812", "text": "The nocturnal orb-web spider Larinioides sclopetarius lives near water and frequently builds webs on bridges. In Vienna, Austria, this species is particularly abundant along the artificially lit handrails of a footbridge. Fewer individuals placed their webs on structurally identical but unlit handrails of the same footbridge. A census of the potential prey available to the spiders and the actual prey captured in the webs revealed that insect activity was significantly greater and consequently webs captured significantly more prey in the lit habitat compared to the unlit habitat. A laboratory experiment showed that adult female spiders actively choose artificially lit sites for web construction. Furthermore, this behaviour appears to be genetically predetermined rather than learned, as laboratory-reared individuals which had previously never foraged in artificial light exhibited the same preference. 
This orb-web spider seems to have evolved a foraging behaviour that exploits the attraction of insects to artificial lights.", "title": "" }, { "docid": "d78cd7f5736a0ee5f4feaf390971da61", "text": "Cloud computing is changing the way that organizations manage their data, due to its robustness, low cost and ubiquitous nature. Privacy concerns arise whenever sensitive data is outsourced to the cloud. This paper introduces a cloud database storage architecture that prevents the local administrator as well as the cloud administrator to learn about the outsourced database content. Moreover, machine readable rights expressions are used in order to limit users of the database to a need-to-know basis. These limitations are not changeable by administrators after the database related application is launched, since a new role of rights editors is defined once an application is launced. Furthermore, trusted computing is applied to bind cryptographic key information to trusted states. By limiting the necessary trust in both corporate as well as external administrators and service providers, we counteract the often criticized privacy and confidentiality risks of corporate cloud computing.", "title": "" }, { "docid": "107436d5f38f3046ef28495a14cc5caf", "text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.", "title": "" }, { "docid": "8dec8f3fd456174bb460e24161eb6903", "text": "Developments in pervasive computing introduced a new world of computing where networked processors embedded and distributed in everyday objects communicating with each other over wireless links. Computers in such environments work in the background while establishing connections among them dynamically and hence will be less visible and intrusive. Such a vision raises questions about how to manage issues like privacy, trust and identity in those environments. In this paper, we review the technical challenges that face pervasive computing environments in relation to each of these issues. We then present a number of security related considerations and use them as a basis for comparison between pervasive and traditional computing. We will argue that these considerations pose particular concerns and challenges to the design and implementation of pervasive environments which are different to those usually found in traditional computing environments. To address these concerns and challenges, further research is needed. We will present a number of directions and topics for possible future research with respect to each of the three issues.", "title": "" }, { "docid": "2f30301143dc626a3013eb24629bfb45", "text": "A vast array of devices, ranging from industrial robots to self-driven cars or smartphones, require increasingly sophisticated processing of real-world input data (image, voice, radio, ...). 
Interestingly, hardware neural network accelerators are emerging again as attractive candidate architectures for such tasks. The neural network algorithms considered come from two, largely separate, domains: machine-learning and neuroscience. These neural networks have very different characteristics, so it is unclear which approach should be favored for hardware implementation. Yet, few studies compare them from a hardware perspective. We implement both types of networks down to the layout, and we compare the relative merit of each approach in terms of energy, speed, area cost, accuracy and functionality.\n Within the limit of our study (current SNN and machine-learning NN algorithms, current best effort at hardware implementation efforts, and workloads used in this study), our analysis helps dispel the notion that hardware neural network accelerators inspired from neuroscience, such as SNN+STDP, are currently a competitive alternative to hardware neural networks accelerators inspired from machine-learning, such as MLP+BP: not only in terms of accuracy, but also in terms of hardware cost for realistic implementations, which is less expected. However, we also outline that SNN+STDP carry potential for reduced hardware cost compared to machine-learning networks at very large scales, if accuracy issues can be controlled (or for applications where they are less important). We also identify the key sources of inaccuracy of SNN+STDP which are less related to the loss of information due to spike coding than to the nature of the STDP learning algorithm. Finally, we outline that for the category of applications which require permanent online learning and moderate accuracy, SNN+STDP hardware accelerators could be a very cost-efficient solution.", "title": "" }, { "docid": "97595aebb100bb4b0597ebaf8b81aa70", "text": "Redundancy and diversity are commonly applied principles for fault tolerance against accidental faults. Their use in security, which is attracting increasing interest, is less general and less of an accepted principle. In particular, redundancy without diversity is often argued to be useless against systematic attack, and diversity to be of dubious value. This paper discusses their roles and limits, and to what extent lessons from research on their use for reliability can be applied to security, in areas such as intrusion detection. We take a probabilistic approach to the problem, and argue its validity for security. We then discuss the various roles of redundancy and diversity for security, and show that some basic insights from probabilistic modelling in reliability and safety indeed apply to examples of design for security. We discuss the factors affecting the efficacy of redundancy and diversity, the role of “independence” between layers of defense, and some of the trade-offs facing designers.", "title": "" }, { "docid": "b7965cf7a1e4746cfd0e93993ea72bf2", "text": "The accuracy of the positions of a pedestrian is very important and useful information for the statistics, advertisement, and safety of different applications. Although the GPS chip in a smartphone is currently the most convenient device to obtain the positions, it still suffers from the effect of multipath and nonline-of-sight propagation in urban canyons. These reflections could greatly degrade the performance of a GPS receiver. This paper describes an approach to estimate a pedestrian position by the aid of a 3-D map and a ray-tracing method. 
The proposed approach first distributes the numbers of position candidates around a reference position. The weighting of the position candidates is evaluated based on the similarity between the simulated pseudorange and the observed pseudorange. Simulated pseudoranges are calculated using a ray-tracing simulation and a 3-D map. Finally, the proposed method was verified through field experiments in an urban canyon in Tokyo. According to the results, the proposed approach successfully estimates the reflection and direct paths so that the estimate appears very close to the ground truth, whereas the result of a commercial GPS receiver is far from the ground truth. The results show that the proposed method has a smaller error distance than the conventional method.", "title": "" }, { "docid": "a1757ee58eb48598d3cd6e257b53cd10", "text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.", "title": "" }, { "docid": "ac4b6ec32fe607e5e9981212152901f5", "text": "As an important matrix factorization model, Nonnegative Matrix Factorization (NMF) has been widely used in information retrieval and data mining research. Standard Nonnegative Matrix Factorization is known to use the Frobenius norm to calculate the residual, making it sensitive to noises and outliers. It is desirable to use robust NMF models for practical applications, in which usually there are many data outliers. It has been studied that the 2,1, or 1-norm can be used for robust NMF formulations to deal with data outliers. However, these alternatives still suffer from the extreme data outliers. In this paper, we present a novel robust capped norm orthogonal Nonnegative Matrix Factorization model, which utilizes the capped norm for the objective to handle these extreme outliers. Meanwhile, we derive a new efficient optimization algorithm to solve the proposed non-convex non-smooth objective. Extensive experiments on both synthetic and real datasets show our proposed new robust NMF method consistently outperforms related approaches.", "title": "" }, { "docid": "55e587291229b8c9889a95f99d68d88b", "text": "Power system loads are one of the crucial elements of modern power systems and, as such, must be properly modelled in stability studies. However, the static and dynamic characteristics of a load are commonly unknown, extremely nonlinear, and are usually time varying. Consequently, a measurement-based approach for determining the load characteristics would offer a significant advantage since it could update the parameters of load models directly from the available system measurements. 
For this purpose and in order to accurately determine load model parameters, a suitable parameter estimation method must be applied. The conventional approach to this problem favors the use of standard nonlinear estimators or artificial intelligence (AI)-based methods. In this paper, a new solution for determining the unknown load model parameters is proposed-an improved particle swarm optimization (IPSO) method. The proposed method is an AI-type technique similar to the commonly used genetic algorithms (GAs) and is shown to provide a promising alternative. This paper presents a performance comparison of IPSO and GA using computer simulations and measured data obtained from realistic laboratory experiments.", "title": "" }, { "docid": "fa52d586e7e6c92444845881ab1990cf", "text": "This paper proposes a novel rotor contour design for variable reluctance (VR) resolvers by injecting auxiliary air-gap permeance harmonics. Based on the resolver model with nonoverlapping tooth-coil windings, the influence of air-gap length function is first investigated by finite element (FE) method, and the detection accuracy of designs with higher values of fundamental wave factor may deteriorate due to the increasing third order of output voltage harmonics. Further, the origins of the third harmonics are investigated by analytical derivation and FE analyses of output voltages. Furthermore, it is proved that the voltage harmonics and the detection accuracy are significantly improved by injecting auxiliary air-gap permeance harmonics in the design of rotor contour. In addition, the proposed design can also be employed to eliminate voltage tooth harmonics in a conventional VR resolver topology. Finally, VR resolver prototypes with the conventional and the proposed rotors are fabricated and tested respectively to verify the analyses.", "title": "" }, { "docid": "1368a00839a5dd1edc7dbaced35e56f1", "text": "Nowadays, transfer of the health care from ambulance to patient's home needs higher demand on patient's mobility, comfort and acceptance of the system. Therefore, the goal of this study is to proof the concept of a system which is ultra-wearable, less constraining and more suitable for long term measurements than conventional ECG monitoring systems which use conductive electrolytic gels for low impedance electrical contact with skin. The developed system is based on isolated capacitive coupled electrodes without any galvanic contact to patient's body and does not require the common right leg electrode. Measurements performed under real conditions show that it is possible to acquire well known ECG waveforms without the common electrode when the patient is sitting and even during walking. Results of the validation process demonstrate that the system performance is comparable to the conventional ECG system while the wearability is increased.", "title": "" }, { "docid": "d18a636768e6aea2e84c7fc59593ec89", "text": "Enterprise social networking (ESN) techniques have been widely adopted by firms to provide a platform for public communication among employees. This study investigates how the relationships between stressors (i.e., challenge and hindrance stressors) and employee innovation are moderated by task-oriented and relationship-oriented ESN use. Since challenge-hindrance stressors and employee innovation are individual-level variables and task-oriented ESN use and relationship-oriented ESN use are team-level variables, we thus use hierarchical linear model to test this cross-level model. 
The results of a survey of 191 employees in 50 groups indicate that two ESN use types differentially moderate the relationship between stressors and employee innovation. Specifically, task-oriented ESN use positively moderates the effects of the two stressors on employee innovation, while relationship-oriented ESN use negatively moderates the relationship between the two stressors and employee innovation. In addition, we find that challenge stressors significantly improve employee innovation. Theoretical and practical implications are discussed.", "title": "" }, { "docid": "da6771ebd128ce1dc58f2ab1d56b065f", "text": "We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to the traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining of the classifier. In our experiments, we used the English language Wikipedia converted to an RDF ontology to categorize a corpus of current Web news documents into selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.", "title": "" }, { "docid": "e4e161231b53096a2b4124a92a0a270f", "text": "Soft typing is a generalization of static type checking that accommodates both dynamic typing and static typing in one framework. A soft type checker infers types for identifiers and inserts explicit run-time checks to transform untypable programs into typable form. Soft Scheme is a practical soft type system for R4RS Scheme. The type checker uses a representation for types that is expressive, easy to interpret, and supports efficient type inference. Soft Scheme supports all of R4RS Scheme, including uncurried procedures of fixed and variable arity, assignment, and continuations.", "title": "" }, { "docid": "5547f8ad138a724c2cc05ce65f50ebd2", "text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. 
A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we defines QA activity levels for ML product.", "title": "" }, { "docid": "4ffa0e5a75eff20ae22f41067d22ee73", "text": "In digital advertising, advertisers want to reach the right audience over media channels such as display, mobile, video, or social at the appropriate cost. The right audience for an advertiser consists of existing customers as well as valuable prospects, those that can potentially be turned into future customers. Identifying valuable prospects is called the audience extension problem because advertisers find new customers by extending the desirable criteria for their starting point, which is their existing audience or customers. The complexity of the audience extension problem stems from the difficulty of defining desirable criteria objectively, the number of desirable criteria (such as similarity, diversity, performance) to simultaneously satisfy, and the expected runtime (a few minutes) to find a solution over billions of cookie-based users. In this paper, we formally define the audience extension problem, propose an algorithm that extends a given audience set efficiently under multiple desirable criteria, and experimentally validate its performance. Instead of iterating over individual users, the algorithm takes in Boolean rules that define the seed audience and returns a new set of Boolean rules that corresponds to the extended audience that satisfy the multiple criteria.", "title": "" }, { "docid": "7a0b0d314042bb753c8aa9da22e25a62", "text": "We present a new morphological analysis model that considers semantic plausibility of word sequences by using a recurrent neural network language model (RNNLM). In unsegmented languages, since language models are learned from automatically segmented texts and inevitably contain errors, it is not apparent that conventional language models contribute to morphological analysis. To solve this problem, we do not use language models based on raw word sequences but use a semantically generalized language model, RNNLM, in morphological analysis. In our experiments on two Japanese corpora, our proposed model significantly outperformed baseline models. This result indicates the effectiveness of RNNLM in morphological analysis.", "title": "" } ]
scidocsrr
ef40484cb8399d22d793fb4cb714570b
Competition in the Cryptocurrency Market
[ { "docid": "f6fc0992624fd3b3e0ce7cc7fc411154", "text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.", "title": "" }, { "docid": "165aa4bad30a95866be4aff878fbd2cf", "text": "This paper reviews some recent developments in digital currency, focusing on platform-sponsored currencies such as Facebook Credits. In a model of platform management, we find that it will not likely be profitable for such currencies to expand to become fully convertible competitors to state-sponsored currencies. JEL Classification: D42, E4, L51 Bank Classification: bank notes, economic models, payment clearing and settlement systems * Rotman School of Management, University of Toronto and NBER (Gans) and Bank of Canada (Halaburda). The views here are those of the authors and no responsibility for them should be attributed to the Bank of Canada. We thank participants at the NBER Economics of Digitization Conference, Warren Weber and Glen Weyl for helpful comments on an earlier draft of this paper. Please send any comments to [email protected].", "title": "" } ]
[ { "docid": "bbb91ddd9df0d5f38b8c1317a8e84f60", "text": "Poisson regression model is widely used in software quality modeling. W h e n the response variable of a data set includes a large number of zeros, Poisson regression model will underestimate the probability of zeros. A zero-inflated model changes the mean structure of the pure Poisson model. The predictive quality is therefore improved. I n this paper, we examine a full-scale industrial software system and develop two models, Poisson regression and zero-inflated Poisson regression. To our knowledge, this is the first study that introduces the zero-inflated Poisson regression model in software reliability. Comparing the predictive qualities of the two competing models, we conclude that for this system, the zero-inflated Poisson regression model is more appropriate in theory and practice.", "title": "" }, { "docid": "7d197033396c7a55593da79a5a70fa96", "text": "1. Introduction Fundamental questions about weighting (Fig 1) seem to be ~ most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.", "title": "" }, { "docid": "4690d2b1dbde438329644b3e76b6427f", "text": "In this work, we investigate how illuminant estimation can be performed exploiting the color statistics extracted from the faces automatically detected in the image. The proposed method is based on two observations: first, skin colors tend to form a cluster in the color space, making it a cue to estimate the illuminant in the scene; second, many photographic images are portraits or contain people. The proposed method has been tested on a public dataset of images in RAW format, using both a manual and a real face detector. Experimental results demonstrate the effectiveness of our approach. The proposed method can be directly used in many digital still camera processing pipelines with an embedded face detector working on gray level images.", "title": "" }, { "docid": "0c9a76222f885b95f965211e555e16cd", "text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.", "title": "" }, { "docid": "6eda7075de9d47851b2b5be026af7d84", "text": "Maintaining consistent styles across glyphs is an arduous task in typeface design. 
In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by a key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part assembling approach by firstly decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, resulting in favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.", "title": "" }, { "docid": "2f471c24ccb38e70627eba6383c003e0", "text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.", "title": "" }, { "docid": "21a2347f9bb5b5638d63239b37c9d0e6", "text": "This paper presents new circuits for realizing both current-mode and voltage-mode proportional-integralderivative (PID), proportional-derivative (PD) and proportional-integral (PI) controllers employing secondgeneration current conveyors (CCIIs) as active elements. All of the proposed PID, PI and PD controllers have grounded passive elements and adjustable parameters. The controllers employ reduced number of active and passive components with respect to the traditional op-amp-based PID, PI and PD controllers. 
A closed loop control system using the proposed PID controller is designed and simulated with SPICE.", "title": "" }, { "docid": "5297929e65e662360d8ff262e877b08a", "text": "Frontal electroencephalographic (EEG) alpha asymmetry is widely researched in studies of emotion, motivation, and psychopathology, yet it is a metric that has been quantified and analyzed using diverse procedures, and diversity in procedures muddles cross-study interpretation. The aim of this article is to provide an updated tutorial for EEG alpha asymmetry recording, processing, analysis, and interpretation, with an eye towards improving consistency of results across studies. First, a brief background in alpha asymmetry findings is provided. Then, some guidelines for recording, processing, and analyzing alpha asymmetry are presented with an emphasis on the creation of asymmetry scores, referencing choices, and artifact removal. Processing steps are explained in detail, and references to MATLAB-based toolboxes that are helpful for creating and investigating alpha asymmetry are noted. Then, conceptual challenges and interpretative issues are reviewed, including a discussion of alpha asymmetry as a mediator/moderator of emotion and psychopathology. Finally, the effects of two automated component-based artifact correction algorithms-MARA and ADJUST-on frontal alpha asymmetry are evaluated.", "title": "" }, { "docid": "dea3bce3f636c87fad95f255aceec858", "text": "In recent work, conditional Markov chain models (CMM) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We will show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new more powerful features can be introduced. The grammar based approach also results in semantic information (encoded in the form of a parse tree) which could be used for IR applications like question answering. The specific problem we consider is of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).", "title": "" }, { "docid": "046ae00fa67181dff54e170e48a9bacf", "text": "For the evaluation of grasp quality, different measures have been proposed that are based on wrench spaces. Almost all of them have drawbacks that derive from the non-uniformity of the wrench space, composed of force and torque dimensions. Moreover, many of these approaches are computationally expensive. 
We address the problem of choosing a proper task wrench space to overcome the problems of the non-uniform wrench space and show how to integrate it in a well-known, high precision and extremely fast computable grasp quality measure.", "title": "" }, { "docid": "00bf4f81944c1e98e58b891ace95797e", "text": "Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the l1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.", "title": "" }, { "docid": "5e9d63bfc3b4a66e0ead79a2d883adfe", "text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.", "title": "" }, { "docid": "a95f77c59a06b2d101584babc74896fb", "text": "Magnetic wall and ceiling climbing robots have been proposed in many industrial applications where robots must move over ferromagnetic material surfaces. The magnetic circuit design with magnetic attractive force calculation of permanent magnetic wheel plays an important role which significantly affects the system reliability, payload ability and power consumption of the robot. 
In this paper, a flexible wall and ceiling climbing robot with six permanent magnetic wheels is proposed to climb along the vertical wall and overhead ceiling of steel cargo containers as part of an illegal contraband inspection system. The permanent magnetic wheels are designed to apply to the wall and ceiling climbing robot, whilst finite element method is employed to estimate the permanent magnetic wheels with various wheel rims. The distributions of magnetic flux lines and magnetic attractive forces are compared on both plane and corner scenarios so that the robot can adaptively travel through the convex and concave surfaces of the cargo container. Optimisation of wheel rims is presented to achieve the equivalent magnetic adhesive forces along with the estimation of magnetic ring dimensions in the axial and radial directions. Finally, the practical issues correlated with the applications of the techniques are discussed and the conclusions are drawn with further improvement and prototyping.", "title": "" }, { "docid": "45cee79008d25916e8f605cd85dd7f3a", "text": "In exploring the emotional climate of long-term marriages, this study used an observational coding system to identify specific emotional behaviors expressed by middle-aged and older spouses during discussions of a marital problem. One hundred and fifty-six couples differing in age and marital satisfaction were studied. Emotional behaviors expressed by couples differed as a function of age, gender, and marital satisfaction. In older couples, the resolution of conflict was less emotionally negative and more affectionate than in middle-aged marriages. Differences between husbands and wives and between happy and unhappy marriages were also found. Wives were more affectively negative than husbands, whereas husbands were more defensive than wives, and unhappy marriages involved greater exchange of negative affect than happy marriages.", "title": "" }, { "docid": "fbe1e6b899b1a2e9d53d25e3fa70bd86", "text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities’ impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examines the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings on practice and research are discussed.", "title": "" }, { "docid": "3ced47ece49eeec3edc5d720df9bb864", "text": "Complex space systems typically provide the operator a means to understand the current state of system components. 
The operator often has to manually determine whether the system is able to perform a given set of high level objectives based on this information. The operations team needs a way for the system to quantify its capability to successfully complete a mission objective and convey that information in a clear, concise way. A mission-level space cyber situational awareness tool suite integrates the data into a complete picture to display the current state of the mission. The Johns Hopkins University Applied Physics Laboratory developed the Spyder tool suite for such a purpose. The Spyder space cyber situation awareness tool suite allows operators to understand the current state of their systems, allows them to determine whether their mission objectives can be completed given the current state, and provides insight into any anomalies in the system. Spacecraft telemetry, spacecraft position, ground system data, ground computer hardware, ground computer software processes, network connections, and network data flows are all combined into a system model service that serves the data to various display tools. Spyder monitors network connections, port scanning, and data exfiltration to determine if there is a cyber attack. The Spyder Tool Suite provides multiple ways of understanding what is going on in a system. Operators can see the logical and physical relationships between system components to better understand interdependencies and drill down to see exactly where problems are occurring. They can quickly determine the state of mission-level capabilities. The space system network can be analyzed to find unexpected traffic. Spyder bridges the gap between infrastructure and mission and provides situational awareness at the mission level.", "title": "" }, { "docid": "b952967acb2eaa9c780bffe211d11fa0", "text": "Cryptographic message authentication is a growing need for FPGA-based embedded systems. In this paper a customized FPGA implementation of a GHASH function that is used in AES-GCM, a widely-used message authentication protocol, is described. The implementation limits GHASH logic utilization by specializing the hardware implementation on a per-key basis. The implemented module can generate a 128bit message authentication code in both pipelined and unpipelined versions. The pipelined GHASH version achieves an authentication throughput of more than 14 Gbit/s on a Spartan-3 FPGA and 292 Gbit/s on a Virtex-6 device. To promote adoption in the field, the complete source code for this work has been made publically-available.", "title": "" }, { "docid": "5cc666e8390b0d3cefaee2d55ad7ee38", "text": "The thermal environment surrounding preterm neonates in closed incubators is regulated via air temperature control mode. At present, these control modes do not take account of all the thermal parameters involved in a pattern of incubator such as the thermal parameters of preterm neonates (birth weight < 1000 grams). The objective of this work is to design and validate a generalized predictive control (GPC) that takes into account the closed incubator model as well as the newborn premature model. Then, we implemented this control law on a DRAGER neonatal incubator with and without newborn using microcontroller card. Methods: The design of the predictive control law is based on a prediction model. 
The developed model allows us to take into account all the thermal exchanges (radioactive, conductive, convective and evaporative) and the various interactions between the environment of the incubator and the premature newborn. Results: The predictive control law and the simulation model developed in Matlab/Simulink environment make it possible to evaluate the quality of the mode of control of the air temperature to which newborn must be raised. The results of the simulation and implementation of the air temperature inside the incubator (with newborn and without newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional–integral–derivative controller (PID controller). Keywords—Incubator; neonatal; model; temperature; Arduino; GPC", "title": "" }, { "docid": "7b36abede1967f89b79975883074a34d", "text": "In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding-based kernel achieves the best performance. Furthermore, we present episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for VIN and GVIN. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and realworld street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image).", "title": "" }, { "docid": "8014e07969adad7e6db3bb222afaf7d2", "text": "Scratch is a visual programming environment that is widely used by young people. We investigated if Scratch can be used to teach concepts of computer science. We developed new learning materials for middle-school students that were designed according to the constructionist philosophy of Scratch and evaluated them in two schools. The classes were normal classes, not extracurricular activities whose participants are self-selected. Questionnaires and a test were constructed based upon a novel combination of the Revised Bloom Taxonomy and the SOLO taxonomy. These quantitative instruments were augmented with a qualitative analysis of observations within the classes. The results showed that in general students could successfully learn important concepts of computer science, although there were some problems with initialization, variables and concurrency; these problems can be overcome by modifications to the teaching process.", "title": "" } ]
scidocsrr
fab0ee5355e022087d7eebd37bafc471
A Fully Integrated 60-GHz CMOS Direct-Conversion Doppler Radar RF Sensor With Clutter Canceller for Single-Antenna Noncontact Human Vital-Signs Detection
[ { "docid": "c87e46e7221fb9b8486317cd2c3d4774", "text": "A microprocessor-controlled automatic cluttercancellation subsystem, consisting of a programmable microwave attenuator and a programmable microwave phase-shifter controlled by a microprocessor-based control unit, has been developed for a microwave life-detection system (L-band 2 GHz or X-band 10 GHz) which can remotely sense breathing and heartbeat movements of living subjects. This automatic cluttercancellation subsystem has drastically improved a very slow p~ocess .of manual clutter-cancellation adjustment in our preVIOU.S mlcro~av.e sys~em. ~his is very important for some potential applications mcludmg location of earthquake or avalanche-trapped victims through rubble. A series of experiments have been conducted to demonstrate the applicability of this microwave life-detection system for rescue purposes. The automatic clutter-canceler may also have a potential application in some CW radar systems.", "title": "" }, { "docid": "4c5ac799c97f99d3a64bcbea6b6cb88d", "text": "This paper presents a new type of monolithic microwave integrated circuit (MMIC)-based active quasi-circulator using phase cancellation and combination techniques for simultaneous transmit and receive (STAR) phased-array applications. The device consists of a passive core of three quadrature hybrids and active components to provide active quasi-circulation operation. The core of three quadrature hybrids can be implemented using Lange couplers. The device is capable of high isolation performance, high-frequency operation, broadband performance, and improvement of the noise figure (NF) at the receive port by suppressing transmit noise. For passive quasi-circulation operation, the device can achieve 35-dB isolation between the transmit and receive port with 2.6-GHz bandwidth (BW) and insertion loss of 4.5 dB at X-band. For active quasi-operation, the device is shown to have 2.3-GHz BW of 30-dB isolation with 1.5-dB transmit-to-antenna gain and 4.7-dB antenna-to-receive insertion loss, while the NF at the receive port is approximately 5.5 dB. The device is capable of a power stress test up to 34 dBm at the output ports at 10.5 GHz. For operation with typical 25-dB isolation, the device is capable of operation up to 5.6-GHz BW at X-band. The device is also shown to be operable up to W -band by simulation with ~15-GHz BW of 20-dB isolation. The proposed architecture is suitable for MMIC integration and system-on-chip applications.", "title": "" } ]
[ { "docid": "0d8bf8c63bd70a3fe84a49e1e67590b7", "text": "]Using computer-aided design system to design an elegant 3D garment for a virtual human is often tedious and labor-intensive. Moreover, the garment is usually designed for a reference human model and generally not fitted to other individuals, which largely reduces the reusability of existing 3D garments. In this paper, we introduce proxy mesh to fit 3D garment to another human model whose topology or shape is different from the garment’s reference human model. Firstly, a proxy mesh is generated for the reference human model and the specified human model respectively. Secondly, the garment is parameterized based on the proxy mesh of the reference model and an independent dataset is obtained. Thirdly, the dataset is decoded to the proxy mesh of the other human model and a roughly fitted garment is gained. Lastly, local shape constrains are enforced to the fitted garment and garment-body penetrations are resolved to get a well fitted garment. Our approach is efficient, simple to implement and is potential to be applied to existing applications such as virtual try-on and virtual clothing design.", "title": "" }, { "docid": "692e1f69880490b0c607ad830da860e6", "text": "Fast switching of high voltage using stacked MOSFETs has been studied. It is shown that the effective drain-source capacitance has a negative influence in the turn-off process, especially when the load impedance is relatively high. In order to solve this problem, an alternative circuit configuration is tested where additional switching modules are used to deal with the drain-source capacitance. The experimental results have demonstrated fast switching performance, even for a high impedance load.", "title": "" }, { "docid": "b836d265721810e481822457fde8788f", "text": "In this paper we describe a method for optimizing frequency reconfigurable pixel antennas. The method utilizes a multi-objective function that is efficiently computed by using only one full electromagnetic simulation in the entire genetic algorithm optimization process. Minimization of the number of switches in the design is also attempted. The method is demonstrated using an antenna structure consisting of a rectangular grid of pixels adjacent to a ground plane and using RF MEMS switches for achieving the reconfigurability. The effects of the RF MEMS switches on the antenna performance as well as the control feed lines for them are also addressed. We provide both simulation and experimental results for a reconfigurable dual-band antenna that reconfigures the bands 820-1140 and 1720-1900 MHz to the bands 860-1160 and 1890-2300 MHz with dimensions of 39 mm × 24 mm on a ground plane of 40 mm × 65 mm with one switch only. The results demonstrate that reconfigurable antennas can be designed effectively with a minimum number of switches using an efficient optimization method.", "title": "" }, { "docid": "933398ff8f74a99bec6ea6e794910a8e", "text": "Cognitive computing is an interdisciplinary research field that simulates human thought processes in a computerized model. One application for cognitive computing is sentiment analysis on online reviews, which reflects opinions and attitudes toward products and services experienced by consumers. A high level of classification performance facilitates decision making for both consumers and firms. 
However, while much effort has been made to propose advanced classification algorithms to improve the performance, the importance of the textual quality of the data has been ignored. This research explores the impact of two influential textual features, namely the word count and review readability, on the performance of sentiment classification. We apply three representative deep learning techniques, namely SRN, LSTM, and CNN, to sentiment analysis tasks on a benchmark movie reviews dataset. Multiple regression models are further employed for statistical analysis. Our findings show that the dataset with reviews having a short length and high readability could achieve the best performance compared with any other combinations of the levels of word count and readability and that controlling the review length is more effective for garnering a higher level of accuracy than increasing the readability. Based on these findings, a practical application, i.e., a text evaluator or a website plug-in for text evaluation, can be developed to provide a service of review editorials and quality control for crowd-sourced review websites. These findings greatly contribute to generating more valuable reviews with high textual quality to better serve sentiment analysis and decision making.", "title": "" }, { "docid": "b2c60198f29f734e000dd67cb6bdd08a", "text": "OBJECTIVE\nTo assess adolescents' perceptions about factors influencing their food choices and eating behaviors.\n\n\nDESIGN\nData were collected in focus-group discussions.\n\n\nSUBJECTS/SETTING\nThe study population included 141 adolescents in 7th and 10th grade from 2 urban schools in St Paul, Minn, who participated in 21 focus groups.\n\n\nANALYSIS\nData were analyzed using qualitative research methodology, specifically, the constant comparative method.\n\n\nRESULTS\nFactors perceived as influencing food choices included hunger and food cravings, appeal of food, time considerations of adolescents and parents, convenience of food, food availability, parental influence on eating behaviors (including the culture or religion of the family), benefits of foods (including health), situation-specific factors, mood, body image, habit, cost, media, and vegetarian beliefs. Major barriers to eating more fruits, vegetables, and dairy products and eating fewer high-fat foods included a lack of sense of urgency about personal health in relation to other concerns, and taste preferences for other foods. Suggestions for helping adolescents eat a more healthful diet include making healthful food taste and look better, limiting the availability of unhealthful options, making healthful food more available and convenient, teaching children good eating habits at an early age, and changing social norms to make it \"cool\" to eat healthfully.\n\n\nAPPLICATIONS/CONCLUSIONS\nThe findings suggest that if programs to improve adolescent nutrition are to be effective, they need to address a broad range of factors, in particular environmental factors (e.g., the increased availability and promotion of appealing, convenient foods within homes schools, and restaurants).", "title": "" }, { "docid": "998f2515ea7ceb02f867b709d4a987f9", "text": "Crop pest and disease diagnosis are amongst important issues arising in the agriculture sector since it has significant impacts on the production of agriculture for a nation. The applying of expert system technology for crop pest and disease diagnosis has the potential to quicken and improve advisory matters. 
However, the development of an expert system in relation to diagnosing pest and disease problems of a certain crop as well as other identical research works remains limited. Therefore, this study investigated the use of expert systems in managing crop pest and disease of selected published works. This article aims to identify and explain the trends of methodologies used by those works. As a result, a conceptual framework for managing crop pest and disease was proposed on basis of the selected previous works. This article is hoped to relatively benefit the growth of research works pertaining to the development of an expert system especially for managing crop pest and disease in the agriculture domain.", "title": "" }, { "docid": "9f746a67a960b01c9e33f6cd0fcda450", "text": "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "title": "" }, { "docid": "7c1b3ba1b8e33ed866ae90b3ddf80ce6", "text": "This paper presents a universal tuning system for harmonic operation of series-resonant inverters (SRI), based on a self-oscillating switching method. In the new tuning system, SRI can instantly operate in one of the switching frequency harmonics, e.g., the first, third, or fifth harmonic. Moreover, the new system can utilize pulse density modulation (PDM), phase shift (PS), and power–frequency control methods for each harmonic. Simultaneous combination of PDM and PS control method is also proposed for smoother power regulation. In addition, this paper investigates performance of selected harmonic operation based on phase-locked loop (PLL) circuits. In comparison with the fundamental harmonic operation, PLL circuits suffer from stability problem for the other harmonic operations. The proposed method has been verified using laboratory prototypes with resonant frequencies of 20 up to 75 kHz and output power of about 200 W.", "title": "" }, { "docid": "ae5497a11458851438d6cc86daec189a", "text": "Automated activity recognition enables a wide variety of applications related to child and elderly care, disease diagnosis and treatment, personal health or sports training, for which it is key to seamlessly determine and log the user’s motion. This work focuses on exploring the use of smartphones to perform activity recognition without interfering in the user’s lifestyle. Thus, we study how to build an activity recognition system to be continuously executed in a mobile device in background mode. The system relies on device’s sensing, processing and storing capabilities to estimate significant movements/postures (walking at different paces—slow, normal, rush, running, sitting, standing). In order to evaluate the combinations of sensors, features and algorithms, an activity dataset of 16 individuals has been gathered. 
The performance of a set of lightweight classifiers (Naïve Bayes, Decision Table and Decision Tree) working on different sensor data has been fully evaluated and optimized in terms of accuracy, computational cost and memory fingerprint. Results have pointed out that a priori information on the relative position of the mobile device with respect to the user’s body enhances the estimation accuracy. Results show that computational low-cost Decision Tables using the best set of features among mean and variance and considering all the sensors (acceleration, gravity, linear acceleration, magnetometer, gyroscope) may be enough to get an activity estimation accuracy of around 88 % (78 % is the accuracy of the Naïve Bayes algorithm with the same characteristics used as a baseline). To demonstrate its applicability, the activity recognition system has been used to enable a mobile application to promote active lifestyles.", "title": "" }, { "docid": "87aef15dc90a8981bda3fcc5b8045d7c", "text": "Human groups show structured levels of genetic similarity as a consequence of factors such as geographical subdivision and genetic drift. Surveying this structure gives us a scientific perspective on human origins, sheds light on evolutionary processes that shape both human adaptation and disease, and is integral to effectively carrying out the mission of global medical genetics and personalized medicine. Surveys of population structure have been ongoing for decades, but in the past three years, single-nucleotide-polymorphism (SNP) array technology has provided unprecedented detail on human population structure at global and regional scales. These studies have confirmed well-known relationships between distantly related populations and uncovered previously unresolvable relationships among closely related human groups. SNPs represent the first dense genome-wide markers, and as such, their analysis has raised many challenges and insights relevant to the study of population genetics with whole-genome sequences. Here we draw on the lessons from these studies to anticipate the directions that will be most fruitful to pursue during the emerging whole-genome sequencing era.", "title": "" }, { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" }, { "docid": "b5a8577b02f7f44e9fc5abd706e096d4", "text": "Automotive Safety Integrity Level (ASIL) decomposition is a technique presented in the ISO 26262: Road Vehicles Functional Safety standard. Its purpose is to satisfy safety-critical requirements by decomposing them into less critical ones. 
This procedure requires a system-level validation, and the elements of the architecture to which the decomposed requirements are allocated must be analyzed in terms of Common-Cause Faults (CCF). In this work, we present a generic method for a bottomup ASIL decomposition, which can be used during the development of a new product. The system architecture is described in a three-layer model, from which fault trees are generated, formed by the application, resource, and physical layers and their mappings. A CCF analysis is performed on the fault trees to verify the absence of possible common faults between the redundant elements and to validate the ASIL decomposition.", "title": "" }, { "docid": "2b0e62a76c56a4a658cb45b397f8752f", "text": "In this paper we present and analyze a queueingtheoretical model for autonomous mobility-on-demand (MOD) systems where robotic, self-driving vehicles transport customers within an urban environment and rebalance themselves to ensure acceptable quality of service throughout the entire network. We cast an autonomous MOD system within a closed Jackson network model with passenger loss. It is shown that an optimal rebalancing algorithm minimizing the number of (autonomously) rebalancing vehicles and keeping vehicles availabilities balanced throughout the network can be found by solving a linear program. The theoretical insights are used to design a robust, real-time rebalancing algorithm, which is applied to a case study of New York City. The case study shows that the current taxi demand in Manhattan can be met with about 8,000 robotic vehicles (roughly 70% of the size of the current taxi fleet operating in Manhattan). Finally, we extend our queueingtheoretical setup to include congestion effects, and we study the impact of autonomously rebalancing vehicles on overall congestion. Collectively, this paper provides a rigorous approach to the problem of system-wide coordination of autonomously driving vehicles, and provides one of the first characterizations of the sustainability benefits of robotic transportation networks.", "title": "" }, { "docid": "a2db518321489965c996516f010594fc", "text": "In order to improve the wide-angle scanning performance of the phased array antennas, a wide-beam microstrip antenna with metal walls is proposed in this letter. The beamwidth of the antenna is broadened by the horizontal current on the radiating patch and the vertical current on the metal walls. The half-power beamwidth of the E- and H-planes is 221° and 168° at 4.0 GHz. Furthermore, the wide-beam antenna element is employed in a nine-element E-plane linear array antenna. The main beam of the E-plane scanning linear array antenna can scan from −70° to +70° in the frequency band from 3.7 to 4.3 GHz with a gain fluctuation less than 2.7 dB and variation in maximum sidelobe level less than −5.8 dB. The E-plane scanning linear array antenna with nine elements is fabricated and tested. The measured results achieve a good agreement with the simulated results.", "title": "" }, { "docid": "3b78223f5d11a56dc89a472daf23ca49", "text": "Shadow maps provide a fast and convenient method of identifying shadows in scenes but can introduce aliasing. This paper introduces the Adaptive Shadow Map (ASM) as a solution to this problem. An ASM removes aliasing by resolving pixel size mismatches between the eye view and the light source view. 
It achieves this goal by storing the light source view (i.e., the shadow map for the light source) as a hierarchical grid structure as opposed to the conventional flat structure. As pixels are transformed from the eye view to the light source view, the ASM is refined to create higher-resolution pieces of the shadow map when needed. This is done by evaluating the contributions of shadow map pixels to the overall image quality. The improvement process is view-driven, progressive, and confined to a user-specifiable memory footprint. We show that ASMs enable dramatic improvements in shadow quality while maintaining interactive rates.", "title": "" }, { "docid": "31dbedbcdb930ead1f8274ff2c181fcb", "text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.", "title": "" }, { "docid": "55aea20148423bdb7296addac847d636", "text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.", "title": "" }, { "docid": "0a3f5ff37c49840ec8e59cbc56d31be2", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" }, { "docid": "8f7375f788d7d152477c7816852dee0d", "text": "Many decentralized, inter-organizational environments such as supply chains are characterized by high transactional uncertainty and risk. 
At the same time, blockchain technology promises to mitigate these issues by introducing certainty into economic transactions. This paper discusses the findings of a Design Science Research project involving the construction and evaluation of an information technology artifact in collaboration with Maersk, a leading international shipping company, where central documents in shipping, such as the Bill of Lading, are turned into a smart contract on blockchain. Based on our insights from the project, we provide first evidence for preliminary design principles for applications that aim to mitigate the transactional risk and uncertainty in decentralized environments using blockchain. Both the artifact and the first evidence for emerging design principles are novel, contributing to the discourse on the implications that the advent of blockchain technology poses for governing economic activity.", "title": "" }, { "docid": "8622057f337dd25edcbe448f0f5c9803", "text": "Climbing robots that integrate an articulated arm as their main climbing mechanism can eventually take advantage of their arm for plane transition and thus to operate on 3D structures rather than only climbing planar surfaces. However, they are usually slower than wheel based climbing robots. Within this research we address this problem by integration of a light-weight arm and adhesion mechanism into an omnidirectional wheel based climbing robot, thus forming a hybrid mechanism that is agile in climbing and still able to perform plane transition. A 2DOF (Degree of Freedom) planar mechanism with 2 linear actuators was designed as a light-weight manipulator for the transition mechanism. Furthermore, we customized and developed actuated switchable magnets both for the robot chassis and also as the adhesion unit of the arm. These units allow us to control the amount of magnetic adhesion force, resulting in better adaptation to different surface characteristics. The adhesion units are safe for climbing applications with a very small power consumption. The conceptual and the detailed design of the mechanisms are presented. The robots were developed and successfully tested on a ferromagnetic structure. © 2016 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
b1df5e52590bfd47b05d355916ad42f2
Explainable Sentiment Analysis with Applications in Medicine
[ { "docid": "2729b248b279cacbc6008f85373f1906", "text": "Major depressive disorder, a debilitating and burdensome disease experienced by individuals worldwide, can be defined by several depressive symptoms (e.g., anhedonia (inability to feel pleasure), depressed mood, difficulty concentrating, etc.). Individuals often discuss their experiences with depression symptoms on public social media platforms like Twitter, providing a potentially useful data source for monitoring population-level mental health risk factors. In a step towards developing an automated method to estimate the prevalence of symptoms associated with major depressive disorder over time in the United States using Twitter, we developed classifiers for discerning whether a Twitter tweet represents no evidence of depression or evidence of depression. If there was evidence of depression, we then classified whether the tweet contained a depressive symptom and if so, which of three subtypes: depressed mood, disturbed sleep, or fatigue or loss of energy. We observed that the most accurate classifiers could predict classes with high-to-moderate F1-score performances for no evidence of depression (85), evidence of depression (52), and depressive symptoms (49). We report moderate F1-scores for depressive symptoms ranging from 75 (fatigue or loss of energy) to 43 (disturbed sleep) to 35 (depressed mood). Our work demonstrates baseline approaches for automatically encoding Twitter data with granular depressive symptoms associated with major depressive disorder.", "title": "" }, { "docid": "ea0ee8011eacdd00cdc8ba3df4eeee6f", "text": "Despite the highest classification accuracy in wide varieties of application areas, artificial neural network has one disadvantage. The way this Network comes to a decision is not easily comprehensible. The lack of explanation ability reduces the acceptability of neural network in data mining and decision system. This drawback is the reason why researchers have proposed many rule extraction algorithms to solve the problem. Recently, Deep Neural Network (DNN) is achieving a profound result over the standard neural network for classification and recognition problems. It is a hot machine learning area proven both useful and innovative. This paper has thoroughly reviewed various rule extraction algorithms, considering the classification scheme: decompositional, pedagogical, and eclectics. It also presents the evaluation of these algorithms based on the neural network structure with which the algorithm is intended to work. The main contribution of this review is to show that there is a limited study of rule extraction algorithm from DNN. KeywordsArtificial neural network; Deep neural network; Rule extraction; Decompositional; Pedagogical; Eclectic.", "title": "" } ]
[ { "docid": "b5e7cabce6982aa3b1a198d76524e0c5", "text": "BACKGROUND\nAdvancements in technology have always had major impacts in medicine. The smartphone is one of the most ubiquitous and dynamic trends in communication, in which one's mobile phone can also be used for communicating via email, performing Internet searches, and using specific applications. The smartphone is one of the fastest growing sectors in the technology industry, and its impact in medicine has already been significant.\n\n\nOBJECTIVE\nTo provide a comprehensive and up-to-date summary of the role of the smartphone in medicine by highlighting the ways in which it can enhance continuing medical education, patient care, and communication. We also examine the evidence base for this technology.\n\n\nMETHODS\nWe conducted a review of all published uses of the smartphone that could be applicable to the field of medicine and medical education with the exclusion of only surgical-related uses.\n\n\nRESULTS\nIn the 60 studies that were identified, we found many uses for the smartphone in medicine; however, we also found that very few high-quality studies exist to help us understand how best to use this technology.\n\n\nCONCLUSIONS\nWhile the smartphone's role in medicine and education appears promising and exciting, more high-quality studies are needed to better understand the role it will have in this field. We recommend popular smartphone applications for physicians that are lacking in evidence and discuss future studies to support their use.", "title": "" }, { "docid": "7f8075f6c7ab8511c399720eab4d6a6b", "text": "This paper presents a novel control system for the operation of a switched reluctance generator (SRG) driven by a variable speed wind turbine. The SRG is controlled to drive a wind energy conversion system (WECS) to the point of maximum aerodynamic efficiency using closed loop control of the power output. In the medium and low speed range, the SRG phase current is regulated using pulsewidth-modulation (PWM) control of the magnetizing voltage. For high speeds the generator is controlled using a single pulse mode. In order to interface the SRG to the grid (or ac load) a voltage-source PWM inverter is used. A 2.5-kW experimental prototype has been constructed. Wind turbine characteristics are emulated using a cage induction machine drive. The performance of the system has been tested over the whole speed range using wind profiles and power impacts. Experimental results are presented confirming the system performance.", "title": "" }, { "docid": "4490283c45021a4135ab0e862c5ff5ab", "text": "A brain-computer interface (BCI) is a communication system that translates brain-activity into commands for a computer or other devices. In other words, a BCI allows users to act on their environment by using only brain-activity, without using peripheral nerves and muscles. In this paper, we present a BCI that achieves high classification accuracy and high bitrates for both disabled and able-bodied subjects. The system is based on the P300 evoked potential and is tested with five severely disabled and four able-bodied subjects. For four of the disabled subjects classification accuracies of 100% are obtained. The bitrates obtained for the disabled subjects range between 10 and 25bits/min. The effect of different electrode configurations and machine learning algorithms on classification accuracy is tested. 
Further factors that are possibly important for obtaining good classification accuracy in P300-based BCI systems for disabled subjects are discussed.", "title": "" }, { "docid": "3e2f4a96462ed5a12fbe0462272d013c", "text": "Exfoliative cheilitis is an uncommon condition affecting the vermilion zone of the upper, lower or both lips. It is characterized by the continuous production and desquamation of unsightly, thick scales of keratin; when removed, these leave a normal appearing lip beneath. The etiology is unknown, although some cases may be factitious. Attempts at treatment by a wide variety of agents and techniques have been unsuccessful. Three patients with this disease are reported and its relationship to factitious cheilitis and candidal cheilitis is discussed.", "title": "" }, { "docid": "fb00601b60bcd1f7a112e34d93d55d01", "text": "Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a wide range of tasks. Its outstanding performance is guaranteed by the long-term memory ability which matches the sequential data perfectly and the gating structure controlling the information flow. However, LSTMs are prone to be memory-bandwidth limited in realistic applications and need an unbearable period of training and inference time as the model size is ever-increasing. To tackle this problem, various efficient model compression methods have been proposed. Most of them need a big and expensive pre-trained model which is a nightmare for resource-limited devices where the memory budget is strictly limited. To remedy this situation, in this paper, we incorporate the Sparse Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM has a sparse topology and dramatically fewer parameters in both phases, training and inference. Considering the specific architecture of LSTMs, we replace the LSTM cells and embedding layers with sparse structures and further on, use an evolutionary strategy to adapt the sparse connectivity to the data. Additionally, we find that SET-LSTM can provide many different good combinations of sparse connectivity to substitute the overparameterized optimization problem of dense neural networks. Evaluated on four sentiment analysis classification datasets, the results demonstrate that our proposed model is able to achieve usually better performance than its fully connected counterpart while having less than 4% of its parameters. Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands. Correspondence to: Shiwei Liu <[email protected]>.", "title": "" }, { "docid": "1164a33e84a333628ec8fe74aab45f4c", "text": "Low rank approximation is an important tool in many applications. Given an observed matrix with elements corrupted by Gaussian noise it is possible to find the best approximating matrix of a given rank through singular value decomposition. However, due to the non-convexity of the formulation it is not possible to incorporate any additional knowledge of the sought matrix without resorting to heuristic optimization techniques. In this paper we propose a convex formulation that is more flexible in that it can be combined with any other convex constraints and penalty functions. The formulation uses the so called convex envelope, which is the provably best possible convex relaxation. We show that for a general class of problems the envelope can be efficiently computed and may in some cases even have a closed form expression. 
We test the algorithm on a number of real and synthetic data sets and show state-of-the-art results.", "title": "" }, { "docid": "94da7ecfe2267092953780b03c6ecd55", "text": "Low-power design has become a key technology for battery-power biomedical devices in Wireless Body Area Network. In order to meet the requirement of low-power dissipation for electrocardiogram related applications, a down-sampling QRS complex detection algorithm is proposed. Based on Wavelet Transform (WT), this letter characterizes the energy distribution of QRS complex corresponding to the frequency band of WT. Then this letter details for the first time the process of down-sampled filter design, and presents the time and frequency response of the filter. The algorithm is evaluated in fixed point on MIT-BIH and QT database. Compared with other existing results, our work reduces the power dissipation by 23%, 61%, and 72% for 1 ×, 2 ×, and 3 × down-sampling rate, respectively, while maintaining almost constant detection performance.", "title": "" }, { "docid": "cc6111093376f0bae267fe686ecd22cd", "text": "This paper overviews the diverse information technologies that are used to provide athletes with relevant feedback. Examples taken from various sports are used to illustrate selected applications of technology-based feedback. Several feedback systems are discussed, including vision, audition and proprioception. Each technology described here is based on the assumption that feedback would eventually enhance skill acquisition and sport performance and, as such, its usefulness to athletes and coaches in training is critically evaluated.", "title": "" }, { "docid": "85cca0e20998926d582f5eefbb1958e1", "text": "Traditionally, active storage techniques have been proposed to move computation tasks to storage nodes in order to exploit data locality. However, we argue in this paper that active storage is ill-suited for cloud storage for two reasons: 1. Lack of elasticity: Computing can only scale out with the number of storage nodes; and 2. Resource Contention: Sharing compute resources can produce interferences in the storage system. Serverless computing is now emerging as a promising alternative for ensuring painless scalability, and also, for simplifying the development of disaggregated computing tasks.\n Here we present an innovative data-driven serverless computing middleware for object storage. It is a lightweight compute solution that allows users to create small, stateless functions that intercept and operate on data flows in a scalable manner without the need to manage a server or a runtime environment. We demonstrate through different use cases how our solution scales with minimal overhead, while getting rid of the resource contention problems incurred by active storage tasks.", "title": "" }, { "docid": "bf563ecfc0dbb9a8a1b20356bde3dcad", "text": "This paper presents a parallel architecture of an QR decomposition systolic array based on the Givens rotations algorithm on FPGA. The proposed architecture adopts a direct mapping by 21 fixed-point CORDIC-based process units that can compute the QR decomposition for an 4×4 real matrix. In order to achieve a comprehensive resource and performance evaluation, the computational error analysis, the resource utilized, and speed achieved on Virtex5 XC5VTX150T FPGA, are evaluated with the different precision of the intermediate word lengthes. 
The evaluation results show that 1) the proposed systolic array satisfies 99.9% correct 4×4 QR decomposition for the 2^-13 accuracy requirement when the word length of the data path is larger than 25-bit, 2) occupies about 2,810 (13%) slices, and achieves about 2.06 M/sec updates by running at the maximum frequency 111 MHz.", "title": "" }, { "docid": "ef96ba2a3fde7f645c7920443176af88", "text": "Caulerpa racemosa, a common and opportunistic species widely distributed in tropical and warm-temperate regions, is known to form monospecific stands outside its native range (Verlaque et al. 2003). In October 2011, we observed an alteration in benthic community due to a widespread overgrowth of C. racemosa around the inhabited island of Magoodhoo (3°04′N; 72°57′E, Republic of Maldives). The algal mats formed a continuous dense meadow (Fig. 1a) that occupied an area of 95 · 120 m (~11,000 m²) previously dominated by the branching coral Acropora muricata. Partial mortality and total mortality (Fig. 1b, c) were recorded on 45 and 30% of A. muricata colonies, respectively. The total area of influence of C. racemosa was, however, much larger (~25,000 m²) including smaller coral patches near to the meadow, where mortality in contact with the algae was also observed on colonies of Isopora palifera, Lobophyllia corymbosa, Pavona varians, Pocillopora damicornis, and Porites solida. Although species of the genus Caulerpa are not usually abundant on oligotrophic coral reefs, nutrient enrichment from natural and/or anthropogenic sources is known to promote green algal blooms (Lapointe and Bedford 2009). Considering the current state of regression of many reefs in the Maldives (Lasagna et al. 2010), we report an unusual phenomenon that could possibly become more common.", "title": "" }, { "docid": "91fdd315f12d8192e0cdada412abfda4", "text": "The design of neural architectures for structured objects is typically guided by experimental insights rather than a formal process. In this work, we appeal to kernels over combinatorial structures, such as sequences and graphs, to derive appropriate neural operations. We introduce a class of deep recurrent neural operations and formally characterize their associated kernel spaces. Our recurrent modules compare the input to virtual reference objects (cf. filters in CNN) via the kernels. Similar to traditional neural operations, these reference objects are parameterized and directly optimized in end-to-end training. We empirically evaluate the proposed class of neural architectures on standard applications such as language modeling and molecular graph regression, achieving state-of-the-art or competitive results across these applications. We also draw connections to existing architectures such as LSTMs.", "title": "" }, { "docid": "00e06f34117dc96ec6f7a5fba47b3f5f", "text": "This paper presents a new algorithm for downloading big files from multiple sources in peer-to-peer networks. The algorithm is compelling with the simplicity of its implementation and the novel properties it offers. It ensures low hand-shaking cost between peers who intend to download a file (or parts of a file) from each other. Furthermore, it achieves maximal file availability, meaning that any two peers with partial knowledge of a given file will almost always be able to fully benefit from each other’s knowledge– i.e., overlapping knowledge will rarely occur. 
Our algorithm is made possible by the recent introduction of linear-time rateless erasure codes.", "title": "" }, { "docid": "c721a66169e3ded24c814b16604855f2", "text": "When it comes to smart cities, one of the most important components is data. To enable smart city applications, data needs to be collected, stored, and processed to accomplish intelligent tasks. In this paper we discuss smart cities and the use of new and existing technologies to improve multiple aspects of these cities. There are also social and environmental aspects that have become important in smart cities that create concerns regarding ethics and ethical conduct. Thus we discuss various issues relating to the appropriate and ethical use of smart city applications and their data. Many smart city projects are being implemented and here we showcase several examples to provide context for our ethical analysis. Law enforcement, structure efficiency, utility efficiency, and traffic flow control applications are some areas that could have the most gains in smart cities; yet, they are the most pervasive as the applications performing these activities must collect and process the most private data about the citizens. The secure and ethical use of this data must be a top priority within every project. The paper also provides a list of challenges for smart city applications pertaining in some ways to ethics. These challenges are drawn from the studied examples of smart city projects to bring attention to ethical issues and raise awareness of the need to address and regulate such use of data.", "title": "" }, { "docid": "dbec1cf4a0904af336e0c75c211f49b7", "text": "BACKGROUND\nBoron neutron capture therapy (BNCT) is based on the nuclear reaction that occurs when boron-10 is irradiated with low-energy thermal neutrons to yield high linear energy transfer alpha particles and recoiling lithium-7 nuclei. Clinical interest in BNCT has focused primarily on the treatment of high-grade gliomas and either cutaneous primaries or cerebral metastases of melanoma, most recently, head and neck and liver cancer. Neutron sources for BNCT currently are limited to nuclear reactors and these are available in the United States, Japan, several European countries, and Argentina. Accelerators also can be used to produce epithermal neutrons and these are being developed in several countries, but none are currently being used for BNCT.\n\n\nBORON DELIVERY AGENTS\nTwo boron drugs have been used clinically, sodium borocaptate (Na(2)B(12)H(11)SH) and a dihydroxyboryl derivative of phenylalanine called boronophenylalanine. The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting to achieve boron concentrations ( approximately 20 microg/g tumor) sufficient to deliver therapeutic doses of radiation to the tumor with minimal normal tissue toxicity. Over the past 20 years, other classes of boron-containing compounds have been designed and synthesized that include boron-containing amino acids, biochemical precursors of nucleic acids, DNA-binding molecules, and porphyrin derivatives. High molecular weight delivery agents include monoclonal antibodies and their fragments, which can recognize a tumor-associated epitope, such as epidermal growth factor, and liposomes. 
However, it is unlikely that any single agent will target all or even most of the tumor cells, and most likely, combinations of agents will be required and their delivery will have to be optimized.\n\n\nCLINICAL TRIALS\nCurrent or recently completed clinical trials have been carried out in Japan, Europe, and the United States. The vast majority of patients have had high-grade gliomas. Treatment has consisted first of \"debulking\" surgery to remove as much of the tumor as possible, followed by BNCT at varying times after surgery. Sodium borocaptate and boronophenylalanine administered i.v. have been used as the boron delivery agents. The best survival data from these studies are at least comparable with those obtained by current standard therapy for glioblastoma multiforme, and the safety of the procedure has been established.\n\n\nCONCLUSIONS\nCritical issues that must be addressed include the need for more selective and effective boron delivery agents, the development of methods to provide semiquantitative estimates of tumor boron content before treatment, improvements in clinical implementation of BNCT, and a need for randomized clinical trials with an unequivocal demonstration of therapeutic efficacy. If these issues are adequately addressed, then BNCT could move forward as a treatment modality.", "title": "" }, { "docid": "d0cbdd5230d97d16b9955013699df5aa", "text": "There has been a great deal of recent interest in statistical models of 2D landmark data for generating compact deformable models of a given object. This paper extends this work to a class of parametrised shapes where there are no landmarks available. A rigorous statistical framework for the eigenshape model is introduced, which is an extension to the conventional Linear Point Distribution Model. One of the problems associated with landmark free methods is that a large degree of variability in any shape descriptor may be due to the choice of parametrisation. An automated training method is described which utilises an iterative feedback method to overcome this problem. The result is an automatically generated compact linear shape model. The model has been successfully applied to a problem of tracking the outline of a walking pedestrian in real time.", "title": "" }, { "docid": "9fcf513f9f8c7f3e00ae78b55618af8b", "text": "Graph analysis is becoming increasingly important in many research fields - biology, social sciences, data mining - and daily applications - path finding, product recommendation. Many different large-scale graph-processing systems have been proposed for different platforms. However, little effort has been placed on designing systems for hybrid CPU-GPU platforms.In this work, we present HyGraph, a novel graph-processing systems for hybrid platforms which delivers performance by using CPUs and GPUs concurrently. Its core feature is a specialized data structure which enables dynamic scheduling of jobs onto both the CPU and the GPUs, thus (1) supersedes the need for static workload distribution, (2) provides load balancing, and (3) minimizes inter-process communication overhead by overlapping computation and communication.Our preliminary results demonstrate that HyGraph outperforms CPU-only and GPU-only solutions, delivering close-to-optimal performance on the hybrid system. 
Moreover, it supports large-scale graphs which do not fit into GPU memory, and it is competitive against state-of-the-art systems.", "title": "" }, { "docid": "59f083611e4dc81c5280fc118e05401c", "text": "We propose a low area overhead and power-efficient asynchronous-logic quasi-delay-insensitive (QDI) sense-amplifier half-buffer (SAHB) approach with quad-rail (i.e., 1-of-4) data encoding. The proposed quad-rail SAHB approach is targeted for area- and energy-efficient asynchronous network-on-chip (ANoC) router designs. There are three main features in the proposed quad-rail SAHB approach. First, the quad-rail SAHB is designed to use four wires for selecting four ANoC router directions, hence reducing the number of transistors and area overhead. Second, the quad-rail SAHB switches only one out of four wires for 2-bit data propagation, hence reducing the number of transistor switchings and dynamic power dissipation. Third, the quad-rail SAHB abides by QDI rules, hence the designed ANoC router features high operational robustness toward process-voltage-temperature (PVT) variations. Based on the 65-nm CMOS process, we use the proposed quad-rail SAHB to implement and prototype an 18-bit ANoC router design. When benchmarked against the dual-rail counterpart, the proposed quad-rail SAHB ANoC router features 32% smaller area and dissipates 50% lower energy under the same excellent operational robustness toward PVT variations. When compared to the other reported ANoC routers, our proposed quad-rail SAHB ANoC router is one of the high operational robustness, smallest area, and most energy-efficient designs.", "title": "" }, { "docid": "1a3c01a10c296ca067452d98847240d6", "text": "The second edition of Creswell's book has been significantly revised and updated. The author clearly sets out three approaches to research: quantitative, qualitative and mixed methods. As someone who has used mixed methods in my research, it is refreshing to read a textbook that addresses this. The differences between the approaches are clearly identified and a rationale for using each methodological stance provided.", "title": "" }, { "docid": "30ef95dffecc369aabdd0ea00b0ce299", "text": "The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to backup user's data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication does certainly not come for free. It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the fmobile software/data backupseasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated to a software clone on the cloud. We consider two types of clones: The off-clone, whose purpose is to support computation offloading, and the back-clone, which comes to use when a restore of user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. 
We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones have been used as the primary mobile by the participants for the whole experiment duration.", "title": "" } ]
scidocsrr
ae15929c2b4f225f097efb90c0c3721f
CATalyst: Defeating last-level cache side channel attacks in cloud computing
[ { "docid": "de8415d1674a0e5e84cfc067fd3940cc", "text": "We apply the FLUSH+RELOAD side-channel attack based on cache hits/misses to extract a small amount of data from OpenSSL ECDSA signature requests. We then apply a “standard” lattice technique to extract the private key, but unlike previous attacks we are able to make use of the side-channel information from almost all of the observed executions. This means we obtain private key recovery by observing a relatively small number of executions, and by expending a relatively small amount of post-processing via lattice reduction. We demonstrate our analysis via experiments using the curve secp256k1 used in the Bitcoin protocol. In particular we show that with as little as 200 signatures we are able to achieve a reasonable level of success in recovering the secret key for a 256-bit curve. This is significantly better than prior methods of applying lattice reduction techniques to similar side channel information.", "title": "" } ]
[ { "docid": "8dfa68e87eee41dbef8e137b860e19cc", "text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "title": "" }, { "docid": "e02207c42eda7ec15db5dcd26ee55460", "text": "This paper focuses on a new task, i.e. transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision. We design an functionally interpretable structure for the generic network. Like building LEGO blocks, we teach the generic network a new category by directly transplanting the module corresponding to the category from a pre-trained network with a few or even without sample annotations. Our method incrementally adds new categories to the generic network but does not affect representations of existing categories. In this way, our method breaks the typical bottleneck of learning a net for massive tasks and categories, i.e. the requirement of collecting samples for all tasks and categories at the same time before the learning begins. Thus, we use a new distillation algorithm, namely back-distillation, to overcome specific challenges of network transplanting. Our method without training samples even outperformed the baseline with 100 training samples.", "title": "" }, { "docid": "ee6925a80a6c49fb37181377d7287bb6", "text": "In two articles Timothy Noakes proposes a new physiological model in which skeletal muscle recruitment is regulated by a central \"govenor,\" specifically to prevent the development of a progressive myocardial ischemia that would precede the development of skeletal muscle anaerobiosis during maximal exercise. In this rebuttal to the Noakes' papers, we argue that Noakes has ignored data supporting the existing hypothesis that under normal conditions cardiac output is limiting maximal aerobic power during dynamic exercise engaging large muscle groups.", "title": "" }, { "docid": "44543067012ee060c00aa21af9c1320d", "text": "We present, visualize and analyse the similarities and differences between the controversial topics related to “edit wars” identified in 10 different language versions of Wikipedia. After a brief review of the related work we describe the methods developed to locate, measure, and categorize the controversial topics in the different languages. 
Visualizations of the degree of overlap between the top 100 list of most controversial articles in different languages and the content related geographical locations will be presented. We discuss what the presented analysis and visualizations can tell us about the multicultural aspects of Wikipedia, and, in general, about cultures of peer-production with focus on universal and specifically, local features. We demonstrate that Wikipedia is more than just an encyclopaedia; it is also a window into divergent social-spatial priorities, interests and preferences.", "title": "" }, { "docid": "f2ab1e48647b20265b9ce8a1c4de9988", "text": "Urinary tract infections (UTIs) are one of the most common bacterial infections with global expansion. These infections are predominantly caused by uropathogenic Escherichia coli (UPEC). Totally, 123 strains of Escherichia coli isolated from UTIs patients, using bacterial culture method were subjected to polymerase chain reactions for detection of various O- serogroups, some urovirulence factors, antibiotic resistance genes and resistance to 13 different antibiotics. According to data, the distribution of O1, O2, O6, O7 and O16 serogroups were 2.43%, besides O22, O75 and O83 serogroups were 1.62%. Furthermore, the distribution of O4, O8, O15, O21 and O25 serogroups were 5.69%, 3.25%, 21.13%, 4.06% and 26.01%, respectively. Overall, the fim virulence gene had the highest (86.17%) while the usp virulence gene had the lowest distributions of virulence genes in UPEC strains isolated from UTIs patients. The vat and sen virulence genes were not detected in any UPEC strains. Totally, aadA1 (52.84%), and qnr (46.34%) were the most prevalent antibiotic resistance genes while the distribution of cat1 (15.44%), cmlA (15.44%) and dfrA1 (21.95%) were the least. Resistance to penicillin (100%) and tetracycline (73.98%) had the highest while resistance to nitrofurantoin (5.69%) and trimethoprim (16.26%) had the lowest frequencies. This study indicated that the UPEC strains which harbored the high numbers of virulence and antibiotic resistance genes had the high ability to cause diseases that are resistant to most antibiotics. In the current situation, it seems that the administration of penicillin and tetracycline for the treatment of UTIs is vain.", "title": "" }, { "docid": "3f30c821132e07838de325c4f2183f84", "text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.", "title": "" }, { "docid": "8f5028ec9b8e691a21449eef56dc267e", "text": "It can be shown that by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed which computes nonlinear decision boundaries. This technique yields decision surfaces which approach the Bayes optimal under certain conditions. There is a continuous control of the linearity of the decision boundaries, from linear for small training sets to any degree of nonlinearity justified by larger training sets. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. 
The input variables can be either continuous or binary. Modification of the decision boundaries based on new data can be accomplished in real time simply by defining a set of weights equal to the new training vector. The decision boundaries can be implemented using analog 'neurons', which operate entirely in parallel. The organization proposed takes into account the projected pin limitations of neural-net chips of the near future. By a change in architecture, these same components could be used as associative memories, to compute nonlinear multivariate regression surfaces, or to compute a posteriori probabilities of an event.<<ETX>>", "title": "" }, { "docid": "46658067ffc4fd2ecdc32fbaaa606170", "text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. We discuss three models of resilience-the compensatory, protective, and challenge models-and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.", "title": "" }, { "docid": "c86c10428bfca028611a5e989ca31d3f", "text": "In the study, we discussed the ARCH/GARCH family models and enhanced them with artificial neural networks to evaluate the volatility of daily returns for 23.10.1987–22.02.2008 period in Istanbul Stock Exchange. We proposed ANN-APGARCH model to increase the forecasting performance of APGARCH model. The ANN-extended versions of the obtained GARCH models improved forecast results. It is noteworthy that daily returns in the ISE show strong volatility clustering, asymmetry and nonlinearity characteristics. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fcc94c9c9f388386b7eadc42c432f273", "text": "Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming more and more capable, and error rates close to zero are being reached for the ASVspoof2015 database. However, speech synthesis and voice conversion paradigms that are not considered in the ASVspoof2015 database are appearing. Such examples include direct waveform modelling and generative adversarial networks. We also need to investigate the feasibility of training spoofing systems using only low-quality found data. For that purpose, we developed a generative adversarial networkbased speech enhancement system that improves the quality of speech data found in publicly available sources. Using the enhanced data, we trained state-of-the-art text-to-speech and voice conversion models and evaluated them in terms of perceptual speech quality and speaker similarity. The results show that the enhancement models significantly improved the SNR of low-quality degraded data found in publicly available sources and that they significantly improved the perceptual cleanliness of the source speech without significantly degrading the naturalness of the voice. However, the results also show limitations when generating speech with the low-quality found data.", "title": "" }, { "docid": "5a38a2d349838b32bc5c41d362a220ac", "text": "This article considers the challenges associated with completing risk assessments in countering violent extremism. 
In particular, it is concerned with risk assessment of those who come to the attention of government and nongovernment organizations as being potentially on a trajectory toward terrorism and where there is an obligation to consider the potential future risk that they may pose. Risk assessment in this context is fraught with difficulty, primarily due to the variable nature of terrorism, the low base-rate problem, and the dearth of strong evidence on relevant risk and resilience factors. Statistically, this will lead to poor predictive value. Ethically, it can lead to the labeling of an individual who is not on a trajectory toward violence as being \"at risk\" of engaging in terrorism and the imposing of unnecessary risk management actions. The article argues that actuarial approaches to risk assessment in this context cannot work. However, it further argues that approaches that help assessors to process and synthesize information in a structured way are of value and are in line with good practice in the broader field of violence risk assessment. (PsycINFO Database Record", "title": "" }, { "docid": "d0ebee0648beecbd00faaf67f76f256c", "text": "Text mining is the use of automated methods for exploiting the enormous amount of knowledge available in the biomedical literature. There are at least as many motivations for doing text mining work as there are types of bioscientists. Model organism database curators have been heavy participants in the development of the field due to their need to process large numbers of publications in order to populate the many data fields for every gene in their species of interest. Bench scientists have built biomedical text mining applications to aid in the development of tools for interpreting the output of high-throughput assays and to improve searches of sequence databases (see [1] for a review). Bioscientists of every stripe have built applications to deal with the dual issues of the doubleexponential growth in the scientific literature over the past few years and of the unique issues in searching PubMed/ MEDLINE for genomics-related publications. A surprising phenomenon can be noted in the recent history of biomedical text mining: although several systems have been built and deployed in the past few years—Chilibot, Textpresso, and PreBIND (see Text S1 for these and most other citations), for example—the ones that are seeing high usage rates and are making productive contributions to the working lives of bioscientists have been built not by text mining specialists, but by bioscientists. We speculate on why this might be so below. Three basic types of approaches to text mining have been prevalent in the biomedical domain. Co-occurrence– based methods do no more than look for concepts that occur in the same unit of text—typically a sentence, but sometimes as large as an abstract—and posit a relationship between them. (See [2] for an early co-occurrence–based system.) For example, if such a system saw that BRCA1 and breast cancer occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Some early biomedical text mining systems were co-occurrence–based, but such systems are highly error prone, and are not commonly built today. In fact, many text mining practitioners would not consider them to be text mining systems at all. 
Co-occurrence of concepts in a text is sometimes used as a simple baseline when evaluating more sophisticated systems; as such, they are nontrivial, since even a co-occurrence– based system must deal with variability in the ways that concepts are expressed in human-produced texts. For example, BRCA1 could be referred to by any of its alternate symbols—IRIS, PSCP, BRCAI, BRCC1, or RNF53 (or by any of their many spelling variants, which include BRCA1, BRCA-1, and BRCA 1)— or by any of the variants of its full name, viz. breast cancer 1, early onset (its official name per Entrez Gene and the Human Gene Nomenclature Committee), as breast cancer susceptibility gene 1, or as the latter’s variant breast cancer susceptibility gene-1. Similarly, breast cancer could be referred to as breast cancer, carcinoma of the breast, or mammary neoplasm. These variability issues challenge more sophisticated systems, as well; we discuss ways of coping with them in Text S1. Two more common (and more sophisticated) approaches to text mining exist: rule-based or knowledgebased approaches, and statistical or machine-learning-based approaches. The variety of types of rule-based systems is quite wide. In general, rulebased systems make use of some sort of knowledge. This might take the form of general knowledge about how language is structured, specific knowledge about how biologically relevant facts are stated in the biomedical literature, knowledge about the sets of things that bioscientists talk about and the kinds of relationships that they can have with one another, and the variant forms by which they might be mentioned in the literature, or any subset or combination of these. (See [3] for an early rule-based system, and [4] for a discussion of rule-based approaches to various biomedical text mining tasks.) At one end of the spectrum, a simple rule-based system might use hardcoded patterns—for example, ,gene. plays a role in ,disease. or ,disease. is associated with ,gene.—to find explicit statements about the classes of things in which the researcher is interested. At the other end of the spectrum, a rulebased system might use sophisticated linguistic and semantic analyses to recognize a wide range of possible ways of making assertions about those classes of things. It is worth noting that useful systems have been built using technologies at both ends of the spectrum, and at many points in between. In contrast, statistical or machine-learning–based systems operate by building classifiers that may operate on any level, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. (See [5] for an early learning-based system, and [4] for a discussion of learning-based approaches to various biomedical text mining tasks.) Rule-based and statistical systems each have their advantages and", "title": "" }, { "docid": "9fdb52d61c5f6d278c656f75d22aa10d", "text": "BACKGROUND\nIncreasing demand for memory assessment in clinical settings in Iran, as well as the absence of a comprehensive and standardized task based upon the Persian culture and language, requires an appropriate culture- and language-specific version of the commonly used neuropsychological measure of verbal learning and memory, the Rey Auditory Verbal Learning Test (RAVLT).\n\n\nMETHODS\nThe Persian adapted version of the original RAVLT and two other alternate word lists were generated based upon criteria previously set for developing new word lists. 
A total of 90 subjects (three groups of 30 persons), aged 29.75±7.10 years, volunteered to participate in our study and were tested using the original word list. The practice effect was assessed by retesting the first and second groups using the same word list after 30 and 60 days, respectively. The test-retest reliability was evaluated by retesting the third group of participants twice using two new alternate word lists with an interval of 30 days.\n\n\nRESULTS\nThe re-administration of the same list after one or even two months led to significant practice effects. However, the use of alternate forms after a one-month delay yielded no significant difference across the forms. The first and second trials, as well as the total, immediate, and delayed recall scores showed the best reliability in retesting by the alternate list.\n\n\nCONCLUSION\nThe difference between the generated forms was minor, and it seems that the Persian version of the RAVLT is a reliable instrument for repeated neuropsychological testing as long as alternate forms are used and scores are carefully chosen.  ", "title": "" }, { "docid": "184da4d4589a3a9dc1f339042e6bc674", "text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.", "title": "" }, { "docid": "4ce67aeca9e6b31c5021712f148108e2", "text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. 94 The Journal of Advertising context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). 
Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see himor herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self ’s nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations. In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. 
Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.", "title": "" }, { "docid": "d47c543f396059cc0ab6c5d98f8db35c", "text": "Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they miti-gate the need of task specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets.", "title": "" }, { "docid": "a53a81b0775992ea95db85b045463ddf", "text": "We start by asking an interesting yet challenging question, “If a large proportion (e.g., more than 90% as shown in Fig. 1) of the face/sketch is missing, can a realistic whole face sketch/image still be estimated?” Existing face completion and generation methods either do not conduct domain transfer learning or can not handle large missing area. For example, the inpainting approach tends to blur the generated region when the missing area is large (i.e., more than 50%). 
In this paper, we exploit the potential of deep learning networks in filling large missing region (e.g., as high as 95% missing) and generating realistic faces with high-fidelity in cross domains. We propose the recursive generation by bidirectional transformation networks (rBTN) that recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, a forward and backward bidirectional learning between the face and sketch domains would enable recursive estimation of the missing region in an incremental manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. Extensive experiments have been conducted to demonstrate the superior performance from r-BTN as compared to existing potential solutions.", "title": "" }, { "docid": "b92484f67bf2d3f71d51aee9fb7abc86", "text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.", "title": "" }, { "docid": "e0301bf133296361b4547730169d2672", "text": "Radar warning receivers (RWRs) classify the intercepted pulses into clusters utilizing multiple parameter deinterleaving. In order to make classification more elaborate time-of-arrival (TOA) deinterleaving should be performed for each cluster. In addition, identification of the classified pulse sequences has been exercised at last. It is essential to identify the classified sequences with a minimum number of pulses. This paper presents a method for deinterleaving of intercepted signals having small number of pulses that belong to stable or jitter pulse repetition interval (PRI) types in the presence of missed pulses. It is necessary for both stable and jitter PRI TOA deinterleaving algorithms to utilize predefined PRI range. However, jitter PRI TOA deinterleaving also requires variation about mean PRI value of emitter of interest as a priori.", "title": "" } ]
scidocsrr
4e96acf21ce5f9c02e1664d7ee6b5eb5
THE EFFECT OF BRAND IMAGE ON OVERALL SATISFACTION AND LOYALTY INTENTION IN THE CONTEXT OF COLOR COSMETIC
[ { "docid": "3da6fadaf2363545dfd0cea87fe2b5da", "text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. © 2004 Wiley Periodicals, Inc. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have Psychology & Marketing, Vol. 21(10):799–822 (October 2004) Published online in Wiley InterScience (www.interscience.wiley.com) © 2004 Wiley Periodicals, Inc. DOI: 10.1002/mar.20030", "title": "" }, { "docid": "80ce6c8c9fc4bf0382c5f01d1dace337", "text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.", "title": "" } ]
[ { "docid": "13774d2655f2f0ac575e11991eae0972", "text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.", "title": "" }, { "docid": "513224bb1034217b058179f3805dd37f", "text": "Existing work on subgraph isomorphism search mainly focuses on a-query-at-a-time approaches: optimizing and answering each query separately. When multiple queries arrive at the same time, sequential processing is not always the most efficient. In this paper, we study multi-query optimization for subgraph isomorphism search. We first propose a novel method for efficiently detecting useful common subgraphs and a data structure to organize them. Then we propose a heuristic algorithm based on the data structure to compute a query execution order so that cached intermediate results can be effectively utilized. To balance memory usage and the time for cached results retrieval, we present a novel structure for caching the intermediate results. We provide strategies to revise existing single-query subgraph isomorphism algorithms to seamlessly utilize the cached results, which leads to significant performance improvement. Extensive experiments verified the effectiveness of our solution.", "title": "" }, { "docid": "fbfb6b7cb2dc3e774197c470c55a928b", "text": "The integrated modular avionics (IMA) architectures have ushered in a new wave of thought regarding avionics integration. IMA architectures utilize shared, configurable computing, communication, and I/O resources. These architectures allow avionics system integrators to benefit from increased system scalability, as well as from a form of platform management that reduces the workload for aircraft-level avionics integration activities. In order to realize these architectural benefits, the avionics suppliers must engage in new philosophies for sharing a set of system-level resources that are managed a level higher than each individual avionics system. The mechanisms for configuring and managing these shared intersystem resources are integral to managing the increased level of avionics integration that is inherent to the IMA architectures. This paper provides guidance for developing the methodology and tools to efficiently manage the set of shared intersystem resources. This guidance is based upon the author's experience in developing the Genesis IMA architecture at Smiths Aerospace. 
The Genesis IMA architecture was implemented on the Boeing 787 Dreamliner as the common core system (CCS)", "title": "" }, { "docid": "401cb3ebbc226ae117303f6a6bb6714c", "text": "Brain-related disorders such as epilepsy can be diagnosed by analyzing electroencephalograms (EEG). However, manual analysis of EEG data requires highly trained clinicians, and is a procedure that is known to have relatively low inter-rater agreement (IRA). Moreover, the volume of the data and the rate at which new data becomes available make manual interpretation a time-consuming, resource-hungry, and expensive process. In contrast, automated analysis of EEG data offers the potential to improve the quality of patient care by shortening the time to diagnosis and reducing manual error. In this paper, we focus on one of the first steps in interpreting an EEG session identifying whether the brain activity is abnormal or normal. To address this specific task, we propose a novel recurrent neural network (RNN) architecture termed ChronoNet which is inspired by recent developments from the field of image classification and designed to work efficiently with EEG data. ChronoNet is formed by stacking multiple 1D convolution layers followed by deep gated recurrent unit (GRU) layers where each 1D convolution layer uses multiple filters of exponentially varying lengths and the stacked GRU layers are densely connected in a feed-forward manner. We used the recently released TUH Abnormal EEG Corpus dataset for evaluating the performance of ChronoNet. Unlike previous studies using this dataset, ChronoNet directly takes time-series EEG as input and learns meaningful representations of brain activity patterns. ChronoNet outperforms previously reported results on this dataset thereby setting a new benchmark. Furthermore, we demonstrate the domain-independent nature of ChronoNet by successfully applying it to classify speech commands.", "title": "" }, { "docid": "357ff730c3d0f8faabe1fa14d4b04463", "text": "In this paper, we propose a novel two-stage video captioning framework composed of 1) a multi-channel video encoder and 2) a sentence-generating language decoder. Both of the encoder and decoder are based on recurrent neural networks with long-short-term-memory cells. Our system can take videos of arbitrary lengths as input. Compared with the previous sequence-to-sequence video captioning frameworks, the proposed model is able to handle multiple channels of video representations and jointly learn how to combine them. The proposed model is evaluated on two large-scale movie datasets (MPII Corpus and Montreal Video Description) and one YouTube dataset (Microsoft Video Description Corpus) and achieves the state-of-the-art performances. Furthermore, we extend the proposed model towards automatic American Sign Language recognition. To evaluate the performance of our model on this novel application, a new dataset for ASL video description is collected based on YouTube videos. Results on this dataset indicate that the proposed framework on ASL recognition is promising and will significantly benefit the independent communication between ASL users and", "title": "" }, { "docid": "3bcf0e33007feb67b482247ef6702901", "text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. 
In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.", "title": "" }, { "docid": "3ae9da3a27b00fb60f9e8771de7355fe", "text": "In the past decade, graph-based structures have penetrated nearly every aspect of our lives. The detection of anomalies in these networks has become increasingly important, such as in exposing infected endpoints in computer networks or identifying socialbots. In this study, we present a novel unsupervised two-layered meta-classifier that can detect irregular vertices in complex networks solely by utilizing topology-based features. Following the reasoning that a vertex with many improbable links has a higher likelihood of being anomalous, we applied our method on 10 networks of various scales, from a network of several dozen students to online networks with millions of vertices. In every scenario, we succeeded in identifying anomalous vertices with lower false positive rates and higher AUCs compared to other prevalent methods. Moreover, we demonstrated that the presented algorithm is generic, and efficient both in revealing fake users and in disclosing the influential people in social networks.", "title": "" }, { "docid": "301cf9a13184f2e7587f16b3de16222d", "text": "Recently, highly accurate positioning devices enable us to provide various types of location-based services. On the other hand, because position data obtained by such devices include deeply personal information, protection of location privacy is one of the most significant issues of location-based services. Therefore, we propose a technique to anonymize position data. In our proposed technique, the personal user of a location-based service generates several false position data (dummies) sent to the service provider with the true position data of the user. Because the service provider cannot distinguish the true position data, the user's location privacy is protected. We conducted performance study experiments on our proposed technique using practical trajectory data.
As a result of the experiments, we observed that our proposed technique protects the location privacy of users.", "title": "" }, { "docid": "58b5c0628b2b964aa75d65a241f028d7", "text": "This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.", "title": "" }, { "docid": "4ee6894fade929db82af9cb62fecc0f9", "text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "title": "" }, { "docid": "a7e6a2145b9ae7ca2801a3df01f42f5e", "text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. 
Studies with longer follow-up are needed.", "title": "" }, { "docid": "7dd3c935b6a5a38284b36ddc1dc1d368", "text": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology.", "title": "" }, { "docid": "da9b9a32db674e5f6366f6b9e2c4ee10", "text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.", "title": "" }, { "docid": "28f106c6d6458f619cdc89967d5648cd", "text": "Term graphs constructed from document collections as well as external resources, such as encyclopedias (DBpedia) and knowledge bases (Freebase and ConceptNet), have been individually shown to be effective sources of semantically related terms for query expansion, particularly in case of difficult queries. However, it is not known how they compare with each other in terms of retrieval effectiveness. In this work, we use standard TREC collections to empirically compare the retrieval effectiveness of these types of term graphs for regular and difficult queries. Our results indicate that the term association graphs constructed from document collections using information theoretic measures are nearly as effective as knowledge graphs for Web collections, while the term graphs derived from DBpedia, Freebase and ConceptNet are more effective than term association graphs for newswire collections. We also found out that the term graphs derived from ConceptNet generally outperformed the term graphs derived from DBpedia and Freebase.", "title": "" }, { "docid": "d7b479be278251dab459411628ca1744", "text": "The imbalanced class problem is related to the real-world application of classification in engineering. It is characterised by a very different distribution of examples among the classes.
The condition of multiple imbalanced classes is more restrictive when the aim of the final system is to obtain the most accurate precision for each of the concepts of the problem. The goal of this work is to provide a thorough experimental analysis that will allow us to determine the behaviour of the different approaches proposed in the specialised literature. First, we will make use of binarization schemes, i.e., one versus one and one versus all, in order to apply the standard approaches to solving binary class imbalanced problems. Second, we will apply several ad hoc procedures which have been designed for the scenario of imbalanced data-sets with multiple classes. This experimental study will include several well-known algorithms from the literature such as decision trees, support vector machines and instance-based learning, with the intention of obtaining global conclusions from different classification paradigms. The extracted findings will be supported by a statistical comparative analysis using more than 20 data-sets from the KEEL repository. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0a335ec3a17c202e92341b51a90d9f61", "text": "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new stateof-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "title": "" }, { "docid": "e81cffe3f2f716520ede92d482ddab34", "text": "An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.", "title": "" }, { "docid": "37e904ddffdb9f7eee75b6415efde722", "text": "Different actors like teachers, course designers and content providers need to gain more information about the way the resources provided with Moodle are used by the students so they can adjust and adapt their offer better. In this contribution we show that Excel Pivot Tables can be used to conduct a flexible analytical processing of usage data and gain valuable information. 
An advantage of Excel Pivot Tables is that they can be mastered by persons with good IT-skills but not necessarily computer scientists.", "title": "" }, { "docid": "e5f2101e7937c61a4d6b11d4525a7ed8", "text": "This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.", "title": "" }, { "docid": "1dbaaa804573e9a834616cce38547d8d", "text": "This paper combines traditional fundamentals, such as earnings and cash flows, with measures tailored for growth firms, such as earnings stability, growth stability and intensity of R&D, capital expenditure and advertising, to create an index – GSCORE. A long–short strategy based on GSCORE earns significant excess returns, though most of the returns come from the short side. Results are robust in partitions of size, analyst following and liquidity and persist after controlling for momentum, book-tomarket, accruals and size. High GSCORE firms have greater market reaction and analyst forecast surprises with respect to future earnings announcements. Further, the results are inconsistent with a riskbased explanation as returns are positive in most years, and firms with lower risk earn higher returns. Finally, a contextual approach towards fundamental analysis works best, with traditional analysis appropriate for high BM stocks and growth oriented fundamental analysis appropriate for low BM stocks.", "title": "" } ]
scidocsrr
dde98cb5f741899d9bc63d0cefec8c62
Conditional Random Field (CRF)-Boosting: Constructing a Robust Online Hybrid Boosting Multiple Object Tracker Facilitated by CRF Learning
[ { "docid": "2959be17f8186f6db5c479d39cc928db", "text": "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8% and 40.5% respectively on PASCAL VOC 2009.", "title": "" }, { "docid": "d158d2d0b24fe3766b6ddb9bff8e8010", "text": "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.", "title": "" }, { "docid": "e702b39e13d308fa264cb6a421792f5c", "text": "Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. 
We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.", "title": "" } ]
[ { "docid": "7540f1a40efd116ad712562d1fb5de23", "text": "Optical coherence tomography (OCT) enables noninvasive high-resolution 3D imaging of the human retina and thus, plays a fundamental role in detecting a wide range of ocular diseases. Despite OCT’s diagnostic value, managing and analyzing resulting data is challenging. We apply two visual analytics strategies for supporting retinal assessment in practice. First, we provide an interface for unifying and structuring data from different sources into a common basis. Fusing that basis with medical records and augmenting it with analytically derived information facilitates thorough investigations. Second, we present a tailored visual analysis tool for presenting, selecting, and emphasizing different aspects of the attributed data. This enables free exploration, reducing the data to relevant subsets, and focusing on details. By applying both strategies, we effectively enhance the management and the analysis of OCT data for assisting medical diagnoses.", "title": "" }, { "docid": "566e703c70f4d43bf1890761dc5a3861", "text": "In this paper a novel technique for detecting and correcting errors in the RNS representation is presented. It is based on the selection of a particular subset of the legitimate range of the RNS representation characterized by the property that each element is a multiple of a suitable integer number m. This method allows to detect and correct any single error in the modular processors of the RNS based computational unit. This subset of the legitimate range can be used to perform addition and multiplication in the RNS domain allowing the design of complex arithmetic structures like FIR filters. In the paper, the architecture of a FIR filter with error detection and correction capabilities is presented showing the advantages with respect to filters in which the error detection and correction are obtained by using the traditional RNS technique.", "title": "" }, { "docid": "b152e2a688321659c7c18cd1a7304854", "text": "Mobile Ad Hoc Networking (MANET) is a key technology enabler in the tactical communication domain for the Network Centric Warfare.[1] A self-forming, self-healing, infrastructure-less network consisting of mobile nodes is very attractive for beyond line of sight (BLOS) voice and data range extension as well as tactical networking applications in general. Current research and development mostly focus on implementing MANET over new wideband data waveforms. However, a large number of currently fielded tactical radios and the next generation software defined radios (SDR) support various legacy tactical radio waveforms. A mobile ad hoc network over such legacy tactical radio links not only provides war fighters mission critical networking applications such as Situational Awareness and short payload messaging, the MANET nodes can also support voice and legacy data interoperation with the existing fielded legacy radios. Furthermore, the small spectrum footprint of current narrowband tactical radio waveforms can be complementary to the new wideband data waveforms for providing networking access in a spectrum constrained environment. This paper first describes the networking usage requirements for MANET over legacy narrowband tactical waveforms. Next, the common characteristics of legacy tactical radio waveforms and the implications of such characteristics for the MANET implementation are discussed. 
Then an actual MANET implementation over a legacy tactical radio waveform on a SDR is presented with the results of actual field tests. Finally, several improvements to this implementation are proposed.", "title": "" }, { "docid": "6e8b6f8d0d69d7fcdec560a536c5cd57", "text": "Networks have become multipath: mobile devices have multiple radio interfaces, datacenters have redundant paths and multihoming is the norm for big server farms. Meanwhile, TCP is still only single-path. Is it possible to extend TCP to enable it to support multiple paths for current applications on today’s Internet? The answer is positive. We carefully review the constraints—partly due to various types of middleboxes— that influenced the design of Multipath TCP and show how we handled them to achieve its deployability goals. We report our experience in implementing Multipath TCP in the Linux kernel and we evaluate its performance. Our measurements focus on the algorithms needed to efficiently use paths with different characteristics, notably send and receive buffer tuning and segment reordering. We also compare the performance of our implementation with regular TCP on web servers. Finally, we discuss the lessons learned from designing MPTCP.", "title": "" }, { "docid": "69d296d1302d9e0acd7fb576f551118d", "text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.", "title": "" }, { "docid": "d03cda3a3e4deb5e249af7f3bcec0bee", "text": "In this research, we investigate the process of producing allicin in garlic. With regard to the chemical compositions of garlic (Allium Sativum L.), allicin is among the active sulfuric materials in garlic that has a lot of benefits such as anti-bacterial, anti-oxidant and deradicalizing properties.", "title": "" }, { "docid": "8d848e28f5b1187b0abea06ed53eed7b", "text": "Vector Space Model (VSM) has been at the core of information retrieval for the past decades. VSM considers the documents as vectors in high dimensional space.In such a vector space, techniques like Latent Semantic Indexing (LSI), Support Vector Machines (SVM), Naive Bayes, etc., can be then applied for indexing and classification. However, in some cases, the dimensionality of the document space might be extremely large, which makes these techniques infeasible due to the curse of dimensionality. In this paper, we propose a novel Tensor Space Model for document analysis. We represent documents as the second order tensors, or matrices. Correspondingly, a novel indexing algorithm called Tensor Latent Semantic Indexing (TensorLSI) is developed in the tensor space. Our theoretical analysis shows that TensorLSI is much more computationally efficient than the conventional Latent Semantic Indexing, which makes it applicable for extremely large scale data set. 
Several experimental results on standard document data sets demonstrate the efficiency and effectiveness of our algorithm.", "title": "" }, { "docid": "56ed1b2d57e2a76ce35f8ac93baf185e", "text": "This study investigated the relationship between sprint start performance (5-m time) and strength and power variables. Thirty male athletes [height: 183.8 (6.8) cm, and mass: 90.6 (9.3) kg; mean (SD)] each completed six 10-m sprints from a standing start. Sprint times were recorded using a tethered running system and the force-time characteristics of the first ground contact were recorded using a recessed force plate. Three to six days later subjects completed three concentric jump squats, using a traditional and split technique, at a range of external loads from 30–70% of one repetition maximum (1RM). Mean (SD) braking impulse during acceleration was negligible [0.009 (0.007) N/s/kg) and showed no relationship with 5 m time; however, propulsive impulse was substantial [0.928 (0.102) N/s/kg] and significantly related to 5-m time (r=−0.64, P<0.001). Average and peak power were similar during the split squat [7.32 (1.34) and 17.10 (3.15) W/kg] and the traditional squat [7.07 (1.25) and 17.58 (2.85) W/kg], and both were significantly related to 5-m time (r=−0.64 to −0.68, P<0.001). Average power was maximal at all loads between 30% and 60% of 1RM for both squats. Split squat peak power was also maximal between 30% and 60% of 1RM; however, traditional squat peak power was maximal between 50% and 70% of 1RM. Concentric force development is critical to sprint start performance and accordingly maximal concentric jump power is related to sprint acceleration.", "title": "" }, { "docid": "415076b6961220393217bc18d9ae99ce", "text": "Support Vector Machines (SVM) have been extensively studied and have shown remarkable success in many applications. However the success of SVM is very limited when it is applied to the problem of learning from imbalanced datasets in which negative instances heavily outnumber the positive instances (e.g. in gene profiling and detecting credit card fraud). This paper discusses the factors behind this failure and explains why the common strategy of undersampling the training data may not be the best choice for SVM. We then propose an algorithm for overcoming these problems which is based on a variant of the SMOTE algorithm by Chawla et al, combined with Veropoulos et al’s different error costs algorithm. We compare the performance of our algorithm against these two algorithms, along with undersampling and regular SVM and show that our algorithm outperforms all of them.", "title": "" }, { "docid": "b0c62e2049ea4f8ada0d506e06adb4bb", "text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. 
This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.", "title": "" }, { "docid": "3d267b494eda6271ca9ce5037a2a4c5c", "text": "The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.", "title": "" }, { "docid": "b9e71509ab12a3963f069ad8fa6d3baf", "text": "Data mining can provide support for bank managers to effectively analyze and predict customer churn in the era of big data. After analyzing the reasons for the bank customer churn and the defects of FCM algorithm as a data mining algorithm, a new method of calculating the effectiveness function to improve the FCM algorithm was raised. At the same time, it has been applied to predict bank customer churn. Through data mining experiments of customer information conducted on a commercial bank, it's found out the clients have been lost and will be lost. Contrast of confusion matrixes shows that the improved FCM algorithm has high accuracy, which can provide new ideas and new methods for the analysis and prediction of bank customer churn.", "title": "" }, { "docid": "e644b698d2977a2c767fe86a1445e23c", "text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.", "title": "" }, { "docid": "db31a02d996b0a36d0bf215b7b7e8354", "text": "This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. 
Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed and recognize the most contributing and important frequency signatures at different levels of task familiarity.", "title": "" }, { "docid": "cfe5d769b9d479dccd543f8a4d23fcf9", "text": "This paper aims to describe the role of advanced sensing systems in the electric grid of the future. In detail, the project, development, and experimental validation of a smart power meter are described in the following. The authors provide an outline of the potentialities of the sensing systems and IoT to monitor efficiently the energy flow among nodes of an electric network. The described power meter uses the metrics proposed in the IEEE Standard 1459–2010 to analyze and process voltage and current signals. Information concerning the power consumption and power quality could allow the power grid to route efficiently the energy by means of more suitable decision criteria. The new scenario has changed the way to exchange energy in the grid. Now, energy flow must be able to change its direction according to needs. Energy cannot be now routed by considering just only the criterion based on the simple shortening of transmission path. So, even energy coming from a far node should be preferred, if it has higher quality standards. In this view, the proposed smart power meter intends to support the smart power grid to monitor electricity among different nodes in an efficient and effective way.", "title": "" }, { "docid": "e159ffe1f686e400b28d398127edfc5c", "text": "In this paper, we present an in-vehicle computing system capable of localizing lane markings and communicating them to drivers. To the best of our knowledge, this is the first system that combines the Maximally Stable Extremal Region (MSER) technique with the Hough transform to detect and recognize lane markings (i.e., lines and pictograms). Our system begins by localizing the region of interest using the MSER technique. A three-stage refinement computing algorithm is then introduced to enhance the results of MSER and to filter out undesirable information such as trees and vehicles. To achieve the requirements of real-time systems, the Progressive Probabilistic Hough Transform (PPHT) is used in the detection stage to detect line markings. Next, the recognition of the color and the form of line markings is performed; this it is based on the results of the application of the MSER to left and right line markings. The recognition of High-Occupancy Vehicle pictograms is performed using a new algorithm, based on the results of MSER regions. In the tracking stage, Kalman filter is used to track both ends of each detected line marking. Several experiments are conducted to show the efficiency of our system. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f996b9911692cc835e55e561c3a501db", "text": "This study proposes a clustering-based Wi-Fi fingerprinting localization algorithm. 
The proposed algorithm first presents a novel support vector machine based clustering approach, namely SVM-C, which uses the margin between two canonical hyperplanes for classification instead of using the Euclidean distance between two centroids of reference locations. After creating the clusters of fingerprints by SVM-C, our positioning system embeds the classification mechanism into a positioning task and compensates for the large database searching problem. The proposed algorithm assigns the matched cluster surrounding the test sample and locates the user based on the corresponding cluster's fingerprints to reduce the computational complexity and remove estimation outliers. Experimental results from realistic Wi-Fi test-beds demonstrated that our approach apparently improves the positioning accuracy. As compared to three existing clustering-based methods, K-means, affinity propagation, and support vector clustering, the proposed algorithm reduces the mean localization errors by 25.34%, 25.21%, and 26.91%, respectively.", "title": "" }, { "docid": "597d49edde282e49703ba0d9e02e3f1e", "text": "BACKGROUND\nThe vitamin D receptor (VDR) pathway is important in the prevention and potentially in the treatment of many cancers. One important mechanism of VDR action is related to its interaction with the Wnt/beta-catenin pathway. Agonist-bound VDR inhibits the oncogenic Wnt/beta-catenin/TCF pathway by interacting directly with beta-catenin and in some cells by increasing cadherin expression which, in turn, recruits beta-catenin to the membrane. Here we identify TCF-4, a transcriptional regulator and beta-catenin binding partner as an indirect target of the VDR pathway.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn this work, we show that TCF-4 (gene name TCF7L2) is decreased in the mammary gland of the VDR knockout mouse as compared to the wild-type mouse. Furthermore, we show 1,25(OH)2D3 increases TCF-4 at the RNA and protein levels in several human colorectal cancer cell lines, the effect of which is completely dependent on the VDR. In silico analysis of the human and mouse TCF7L2 promoters identified several putative VDR binding elements. Although TCF7L2 promoter reporters responded to exogenous VDR, and 1,25(OH)2D3, mutation analysis and chromatin immunoprecipitation assays, showed that the increase in TCF7L2 did not require recruitment of the VDR to the identified elements and indicates that the regulation by VDR is indirect. This is further confirmed by the requirement of de novo protein synthesis for this up-regulation.\n\n\nCONCLUSIONS/SIGNIFICANCE\nAlthough it is generally assumed that binding of beta-catenin to members of the TCF/LEF family is cancer-promoting, recent studies have indicated that TCF-4 functions instead as a transcriptional repressor that restricts breast and colorectal cancer cell growth. Consequently, we conclude that the 1,25(OH)2D3/VDR-mediated increase in TCF-4 may have a protective role in colon cancer as well as diabetes and Crohn's disease.", "title": "" }, { "docid": "aa83af152739ac01ba899d186832ee62", "text": "Predicting user \"ratings\" on items is a crucial task in recommender systems. Matrix factorization methods that computes a low-rank approximation of the incomplete user-item rating matrix provide state-of-the-art performance, especially for users and items with several past ratings (warm starts). However, it is a challenge to generalize such methods to users and items with few or no past ratings (cold starts). 
Prior work [4][32] has generalized matrix factorization to include both user and item features for performing better regularization of factors as well as to provide a model for smooth transition from cold starts to warm starts. However, the features were incorporated via linear regression on factor estimates. In this paper, we generalize this process to allow for arbitrary regression models like decision trees, boosting, LASSO, etc. The key advantage of our approach is the ease of computing --- any new regression procedure can be incorporated by \"plugging\" a standard regression routine into a few intermediate steps of our model fitting procedure. With this flexibility, one can leverage a large body of work on regression modeling, variable selection, and model interpretation. We demonstrate the usefulness of this generalization using the MovieLens and Yahoo! Buzz datasets.", "title": "" } ]
scidocsrr
d5e1646fa4a6fa74251f19eebe3cc2c5
Lowering the barriers to large-scale mobile crowdsensing
[ { "docid": "b8808d637dcb8bbb430d68196587b3a4", "text": "Crowd sourcing is based on a simple but powerful concept: Virtually anyone has the potential to plug in valuable information. The concept revolves around large groups of people or community handling tasks that have traditionally been associated with a specialist or small group of experts. With the advent of the smart devices, many mobile applications are already tapping into crowd sourcing to report community issues and traffic problems, but more can be done. While most of these applications work well for the average user, it neglects the information needs of particular user communities. We present CROWDSAFE, a novel convergence of Internet crowd sourcing and portable smart devices to enable real time, location based crime incident searching and reporting. It is targeted to users who are interested in crime information. The system leverages crowd sourced data to provide novel features such as a Safety Router and value added crime analytics. We demonstrate the system by using crime data in the metropolitan Washington DC area to show the effectiveness of our approach. Also highlighted is its ability to facilitate greater collaboration between citizens and civic authorities. Such collaboration shall foster greater innovation to turn crime data analysis into smarter and safe decisions for the public.", "title": "" }, { "docid": "513ae13c6848f3a83c36dc43d34b43a5", "text": "In this paper, we describe the design, analysis, implementation, and operational deployment of a real-time trip information system that provides passengers with the expected fare and trip duration of the taxi ride they are planning to take. This system was built in cooperation with a taxi operator that operates more than 15,000 taxis in Singapore. We first describe the overall system design and then explain the efficient algorithms used to achieve our predictions based on up to 21 months of historical data consisting of approximately 250 million paid taxi trips. We then describe various optimisations (involving region sizes, amount of history, and data mining techniques) and accuracy analysis (involving routes and weather) we performed to increase both the runtime performance and prediction accuracy. Our large scale evaluation demonstrates that our system is (a) accurate --- with the mean fare error under 1 Singapore dollar (~ 0.76 US$) and the mean duration error under three minutes, and (b) capable of real-time performance, processing thousands to millions of queries per second. Finally, we describe the lessons learned during the process of deploying this system into a production environment.", "title": "" } ]
[ { "docid": "f720554ba9cff8bec781f4ad2ec538aa", "text": "English. Hate speech is prevalent in social media platforms. Systems that can automatically detect offensive content are of great value to assist human curators with removal of hateful language. In this paper, we present machine learning models developed at UW Tacoma for detection of misogyny, i.e. hate speech against women, in English tweets, and the results obtained with these models in the shared task for Automatic Misogyny Identification (AMI) at EVALITA2018. Italiano. Commenti offensivi nei confronti di persone con diversa orientazione sessuale o provenienza sociale sono oggigiorno prevalenti nelle piattaforme di social media. A tale fine, sistemi automatici in grado di rilevare contenuti offensivi nei confronti di alcuni gruppi sociali sono importanti per facilitare il lavoro dei moderatori di queste piattaforme a rimuovere ogni commento offensivo usato nei social media. In questo articolo, vi presentiamo sia dei modelli di apprendimento automatico sviluppati all’Università di Washington in Tacoma per il rilevamento della misoginia, ovvero discorsi offensivi usati nei tweet in lingua inglese contro le donne, sia i risultati ottenuti con questi modelli nel processo per l’identificazione automatica della misoginia in EVALITA2018.", "title": "" }, { "docid": "78fe279ca9a3e355726ffacb09302be5", "text": "In present, dynamically developing organizations, that often realize business tasks using the project-based approach, effective project management is of paramount importance. Numerous reports and scientific papers present lists of critical success factors in project management, and communication management is usually at the very top of the list. But even though the communication practices are found to be associated with most of the success dimensions, they are not given enough attention and the communication processes and practices formalized in the company's project management methodology are neither followed nor prioritized by project managers. This paper aims at supporting project managers and teams in more effective implementation of best practices in communication management by proposing a set of communication management patterns, which promote a context-problem-solution approach to communication management in projects.", "title": "" }, { "docid": "f70bd0a47eac274a1bb3b964f34e0a63", "text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.", "title": "" }, { "docid": "02ea5b61b22d5af1b9362ca46ead0dea", "text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. 
The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.", "title": "" }, { "docid": "1aa3d2456e34c8ab59a340fd32825703", "text": "It is well known that guided soft tissue healing with a provisional restoration is essential to obtain optimal anterior esthetics in the implant prosthesis. What is not well known is how to transfer a record of beautiful anatomically healed tissue to the laboratory. With the advent of emergence profile healing abutments and corresponding impression copings, there has been a dramatic improvement over the original 4.0-mm diameter design. This is a great improvement; however, it still does not accurately transfer a record of anatomically healed tissue, which is often triangularly shaped, to the laboratory, because the impression coping is a round cylinder. This article explains how to fabricate a \"custom impression coping\" that is an exact record of anatomically healed tissue for accurate duplication. This technique is significant because it allows an even closer replication of the natural dentition.", "title": "" }, { "docid": "3005c32c7cf0e90c59be75795e1c1fbc", "text": "In this paper, a novel AR interface is proposed that provides generic solutions to the tasks involved in augmenting simultaneously different types of virtual information and processing of tracking data for natural interaction. Participants within the system can experience a real-time mixture of 3D objects, static video, images, textual information and 3D sound with the real environment. The user-friendly AR interface can achieve maximum interaction using simple but effective forms of collaboration based on the combinations of human–computer interaction techniques. To prove the feasibility of the interface, indoor AR techniques are employed to construct innovative applications and demonstrate examples from heritage to learning systems. Finally, an initial evaluation of the AR interface including some initial results is presented.", "title": "" }, { "docid": "695766e9a526a0a25c4de430242e46d2", "text": "In a large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs), choosing the appropriate positions to place cloudlets is very important for reducing the user's access delay. For a service provider, it is always very costly to deploy cloudlets. How many cloudlets should be placed in a WMAN and how much resource each cloudlet should have is therefore very important for the service provider. In this paper, we study the cloudlet placement and resource allocation problem in a large-scale WMAN. We formulate it as a novel cloudlet placement problem: given an average access delay between mobile users and the cloudlets, place K cloudlets at strategic locations in the WMAN with the objective of minimizing the number of used cloudlets K. We then propose an exact solution to the problem by formulating it as an Integer Linear Program (ILP). Due to the poor scalability of the ILP, we devise a clustering algorithm, K-Medoids (KM), for the problem. For a special case of the problem where all cloudlets' computing capabilities are given, we propose an efficient heuristic for it. We finally evaluate the performance of the proposed algorithms through experimental simulations.
Simulation results demonstrate that the proposed algorithms are effective.", "title": "" }, { "docid": "d4f3dc5efe166df222b2a617d5fbd5e4", "text": "IKEA is the largest furniture retailer in the world. Its critical success factor is that IKEA can seamlessly integrate and optimize its end-to-end supply chain to maximize customer value, eventually building a dominant position in the entire value chain. This article summarizes and analyzes IKEA's successful practices of value chain management. Hopefully it can be a good reference or provide strategic insight for Chinese enterprises.", "title": "" }, { "docid": "935282c2cbfa34ed24bc598a14a85273", "text": "Cybersecurity is a national priority in this big data era. Because of negative externalities and the resulting lack of economic incentives, companies often underinvest in security controls, despite government and industry recommendations. Although many existing studies on security have explored technical solutions, only a few have looked at the economic motivations. To fill the gap, we propose an approach to increase the incentives of organizations to address security problems. Specifically, we utilize and process existing security vulnerability data, derive explicit security performance information, and disclose the information as feedback to organizations and the public. We regularly release information on the organizations with the worst security behaviors, imposing reputation loss on them. The information is also used by organizations for self-evaluation in comparison to others. Therefore, additional incentives are solicited out of reputation concern and social comparison. To test the effectiveness of our approach, we conducted a field quasi-experiment for outgoing spam for 1,718 autonomous systems in eight countries and published SpamRankings.net, the website we created to release information. We found that the treatment group subject to information disclosure reduced outgoing spam approximately by 16%. We also found that the more observed outgoing spam from the top spammer, the less likely an organization would be to reduce its own outgoing spam, consistent with the prediction by social comparison theory. Our results suggest that social information and social comparison can be effectively leveraged to encourage desirable behavior. Our study contributes to both information architecture design and public policy by suggesting how information can be used as intervention to impose economic incentives. The usual disclaimers apply for NSF grants 1228990 and 0831338.", "title": "" }, { "docid": "72485a3c94c2dfa5121e91f2a3fc0f4a", "text": "Four experiments support the hypothesis that syntactically relevant information about verbs is encoded in the lexicon in semantic event templates. A verb's event template represents the participants in an event described by the verb and the relations among the participants. The experiments show that lexical decision times are longer for verbs with more complex templates than verbs with less complex templates and that, for both transitive and intransitive sentences, sentences containing verbs with more complex templates take longer to process.
In contrast, sentence processing times did not depend on the probabilities with which the verbs appear in transitive versus intransitive constructions in a large corpus of naturally produced sentences.", "title": "" }, { "docid": "ef4272cd4b0d4df9aa968cc9ff528c1e", "text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.", "title": "" }, { "docid": "f376948c1b8952b0b19efad3c5ca0471", "text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. 
Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance.", "title": "" }, { "docid": "4ad3c199ad1ba51372e9f314fc1158be", "text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using a conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using a flip-chip bonder.", "title": "" }, { "docid": "c89ca701d947ba6594be753470f152ac", "text": "The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different “images of interest.” We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.", "title": "" }, { "docid": "e787a1486a6563c15a74a07ed9516447", "text": "This chapter describes how engineering principles can be used to estimate joint forces. Principles of static and dynamic analysis are reviewed, with examples of static analysis applied to the hip and elbow joints and to the analysis of joint forces in human ancestors. Applications to indeterminate problems of joint mechanics are presented and utilized to analyze equine fetlock joints.", "title": "" }, { "docid": "ade59b46fca7fbf99800370435e1afe6", "text": "etretinate to PUVA was associated with better treatment response. In our patients with psoriasis, topical PUVA achieved improvement rates comparable with oral PUVA, with a mean cumulative UVA dose of 187.5 J/cm².
Our study contradicts previous observations made in other studies on vitiligo and demonstrates that topical PUVA does have a limited therapeutic effect in vitiligo. Oral and topical PUVA showed low but equal efficacy in patients with vitiligo with a similar mean number of treatments to completion. Approximately one-quarter of our patients with vitiligo had discontinued PUVA therapy, which probably affected the outcome. It has been shown that at least 1 year of continuous and regular therapy with oral PUVA is needed to achieve a sufficient degree of repigmentation. Shorter periods were not found to be sufficient to observe clinical signs of repigmentation. Currently it is not known if the same holds true for topical PUVA. In conclusion, our results show that the efficacy of topical PUVA is comparable with that of oral PUVA, and favoured topical PUVA treatment especially in the eczema group with respect to cumulative UVA doses and success rates. Given the necessity for long-term treatment with local PUVA therapies, successful management requires maintenance of full patient compliance. Because of this, the results in this study should not only be attributed to the therapies. Because of its safety and simplicity, topical PUVA should be considered as an alternative therapy to other phototherapy methods.", "title": "" }, { "docid": "5a0fe40414f7881cc262800a43dfe4d0", "text": "In this work, a passive rectifier circuit is presented which operates at 868 MHz. It allows energy harvesting from low-power RF waves with high efficiency. It consists of a novel multiplier circuit design and high-quality components that reduce parasitic effects and losses and achieve a low startup voltage. Using a lower capacitance increases the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative half of the wave cycle and returns it during the positive one. A low-pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter, which has a relatively high sensitivity beginning at -40 dBm.", "title": "" }, { "docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2", "text": "In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to a 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.", "title": "" }, { "docid": "e5d323fe9bf2b5800043fa0e4af6849a", "text": "A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues.
However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.", "title": "" } ]
scidocsrr
db54145f9c2868e71344d248df9765f3
Feature-Rich Unsupervised Word Alignment Models
[ { "docid": "8acd410ff0757423d09928093e7e8f63", "text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .", "title": "" } ]
[ { "docid": "2c49c7c3694358d9e3ee6101f5f2ffe5", "text": "We present a system that approximates the answer to complex ad-hoc queries in big-data clusters by injecting samplers on-the-fly and without requiring pre-existing samples. Improvements can be substantial when big-data queries take multiple passes over data and when samplers execute early in the query plan. We present a new, universe, sampler which is able to sample multiple join inputs. By incorporating samplers natively into a cost-based query optimizer, we automatically generate plans with appropriate samplers at appropriate locations. We devise an accuracy analysis method using which we ensure that query plans with samplers will not miss groups and that aggregate values are within a small ratio of their true value. An implementation on a cluster with tens of thousands of machines shows that queries in the TPC-DS benchmark use a median of 2X fewer resources. In contrast, approaches that construct input samples even when given 10X the size of the input to store samples improve only 22% of the queries, i.e., a median speed up of 0X.", "title": "" }, { "docid": "10d203d3aab332d3e8775993097544be", "text": "Web cookies are used widely by publishers and 3rd parties to track users and their behaviors. Despite the ubiquitous use of cookies, there is little prior work on their characteristics such as standard attributes, placement policies, and the knowledge that can be amassed via 3rd party cookies. In this paper, we present an empirical study of web cookie characteristics, placement practices and information transmission. To conduct this study, we implemented a lightweight web crawler that tracks and stores the cookies as it navigates to websites. We use this crawler to collect over 3.2M cookies from the two crawls, separated by 18 months, of the top 100K Alexa web sites. We report on the general cookie characteristics and add context via a cookie category index and website genre labels. We consider privacy implications by examining specific cookie attributes and placement behavior of 3rd party cookies. We find that 3rd party cookies outnumber 1st party cookies by a factor of two, and we illuminate the connection between domain genres and cookie attributes. We find that less than 1% of the entities that place cookies can aggregate information across 75% of web sites. Finally, we consider the issue of information transmission and aggregation by domains via 3rd party cookies. We develop a mathematical framework to quantify user information leakage for a broad class of users, and present findings using real world domains. In particular, we demonstrate the interplay between a domain’s footprint across the Internet and the browsing behavior of users, which has significant impact on information transmission.", "title": "" }, { "docid": "b7944edc9e6704cbf59489f112f46c11", "text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. 
IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwise reputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately reflect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model (or DAPM), in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies (such …) 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest (1930, pp. 493–494) he argued that nominal interest rates systematically fail to adjust sufficiently for inflation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes (1936) famously commented on animal spirits in stock markets. Markowitz (1952) proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries.", "title": "" }, { "docid": "437b448b27cbc77969664d73895d93f2", "text": "In this manuscript, we study the problem of detecting coordinated free text campaigns in large-scale social media. These campaigns—ranging from coordinated spam messages to promotional and advertising campaigns to political astro-turfing—are growing in significance and reach with the commensurate rise in massive-scale social systems. Specifically, we propose and evaluate a content-driven framework for effectively linking free text posts with common “talking points” and extracting campaigns from large-scale social media. Three of the salient features of the campaign extraction framework are: (i) first, we investigate graph mining techniques for isolating coherent campaigns from large message-based graphs; (ii) second, we conduct a comprehensive comparative study of text-based message correlation in message and user levels; and (iii) finally, we analyze temporal behaviors of various campaign types.
Through an experimental study over millions of Twitter messages we identify five major types of campaigns—namely Spam, Promotion, Template, News, and Celebrity campaigns—and we show how these campaigns may be extracted with high precision and recall.", "title": "" }, { "docid": "754c7cd279c8f3c1a309071b8445d6fa", "text": "We present a framework for describing insiders and their actions based on the organization, the environment, the system, and the individual. Using several real examples of unwelcome insider action (hard drive removal, stolen intellectual property, tax fraud, and proliferation of e-mail responses), we show how the taxonomy helps in understanding how each situation arose and could have been addressed. The differentiation among types of threats suggests how effective responses to insider threats might be shaped, what choices exist for each type of threat, and the implications of each. Future work will consider appropriate strategies to address each type of insider threat in terms of detection, prevention, mitigation, remediation, and punishment.", "title": "" }, { "docid": "146c58e49221a9e8f8dbcdc149737924", "text": "Gesture recognition is to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand Gestures have greater importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper a survey on various recent gesture recognition approaches is provided with particular emphasis on hand gestures. A review of static hand posture methods are explained with different tools and algorithms applied on gesture recognition system, including connectionist models, hidden Markov model, and fuzzy clustering. Challenges and future research directions are also highlighted.", "title": "" }, { "docid": "dffb89c39f11934567f98a31a0ef157c", "text": "We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.", "title": "" }, { "docid": "34208fafbb3009a1bb463e3d8d983e61", "text": "A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with \"relevant\" keywords. Based on this training, it can then extract new keywords from previously unseen pages. 
Accuracy is substantially better than several baseline systems.", "title": "" }, { "docid": "09a6f724e5b2150a39f89ee1132a33e9", "text": "This paper concerns a deep learning approach to relevance ranking in information retrieval (IR). Existing deep IR models such as DSSM and CDSSM directly apply neural networks to generate ranking scores, without explicit understandings of the relevance. According to the human judgement process, a relevance label is generated by the following three steps: 1) relevant locations are detected; 2) local relevances are determined; 3) local relevances are aggregated to output the relevance label. In this paper we propose a new deep learning architecture, namely DeepRank, to simulate the above human judgment process. Firstly, a detection strategy is designed to extract the relevant contexts. Then, a measure network is applied to determine the local relevances by utilizing a convolutional neural network (CNN) or two-dimensional gated recurrent units (2D-GRU). Finally, an aggregation network with sequential integration and term gating mechanism is used to produce a global relevance score. DeepRank well captures important IR characteristics, including exact/semantic matching signals, proximity heuristics, query term importance, and diverse relevance requirement. Experiments on both benchmark LETOR dataset and a large scale clickthrough data show that DeepRank can significantly outperform learning to ranking methods, and existing deep learning methods.", "title": "" }, { "docid": "552baf04d696492b0951be2bb84f5900", "text": "We examined whether reduced perceptual specialization underlies atypical perception in autism spectrum disorder (ASD) testing classifications of stimuli that differ either along integral dimensions (prototypical integral dimensions of value and chroma), or along separable dimensions (prototypical separable dimensions of value and size). Current models of the perception of individuals with an ASD would suggest that on these tasks, individuals with ASD would be as, or more, likely to process dimensions as separable, regardless of whether they represented separable or integrated dimensions. In contrast, reduced specialization would propose that individuals with ASD would respond in a more integral manner to stimuli that differ along separable dimensions, and at the same time, respond in a more separable manner to stimuli that differ along integral dimensions. A group of nineteen adults diagnosed with high functioning ASD and seventeen typically developing participants of similar age and IQ, were tested on speeded and restricted classifications tasks. Consistent with the reduced specialization account, results show that individuals with ASD do not always respond more analytically than typically developed (TD) observers: Dimensions identified as integral for TD individuals evoke less integral responding in individuals with ASD, while those identified as separable evoke less analytic responding. These results suggest that perceptual representations are more broadly tuned and more flexibly represented in ASD. Autism Res 2017, 10: 1510-1522. 
© 2017 International Society for Autism Research, Wiley Periodicals, Inc.", "title": "" }, { "docid": "706acd04d939c795979fddba98ffed30", "text": "a Information Systems and Quantitative Sciences Area, Rawls College of Business Administration, Texas Tech University, Lubbock, TX 79409, United States b Department of Management Information Systems, School of Business and Management, American University of Sharjah, Sharjah, United Arab Emirates c Department of Decision and Information Sciences, C.T. Bauer College of Business, University of Houston, Houston, TX 77204, United States", "title": "" }, { "docid": "93ec9adabca7fac208a68d277040c254", "text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\[email protected].", "title": "" }, { "docid": "8f54213e38130e9b80ff786103cfbf9b", "text": "Falling, and the fear of falling, is a serious health problem among the elderly. It often results in physical and mental injuries that have the potential to severely reduce their mobility, independence and overall quality of life. Nevertheless, the consequences of a fall can be largely diminished by providing fast assistance. These facts have lead to the development of several automatic fall detection systems. Recently, many researches have focused particularly on smartphone-based applications. In this paper, we study the capacity of smartphone built-in sensors to differentiate fall events from activities of daily living. We explore, in particular, the information provided by the accelerometer, magnetometer and gyroscope sensors. A collection of features is analyzed and the efficiency of different sensor output combinations is tested using experimental data. Based on these results, a new, simple, and reliable algorithm for fall detection is proposed. The proposed method is a threshold-based algorithm and is designed to require a low battery power consumption. The evaluation of the performance of the algorithm in collected data indicates 100 % for sensitivity and 93 % for specificity. Furthermore, evaluation conducted on a public dataset, for comparison with other existing smartphone-based fall detection algorithms, shows the high potential of the proposed method.", "title": "" }, { "docid": "01a636d56a324f8bb8367b8fc73c8687", "text": "Formal risk analysis and management in software engineering is still an emerging part of project management. We provide a brief introduction to the concepts of risk management for software development projects, and then an overview of a new risk management framework. Risk management for software projects is intended to minimize the chances of unexpected events, or more specifically to keep all possible outcomes under tight management control. Risk management is also concerned with making judgments about how risk events are to be treated, valued, compared and combined. The ProRisk management framework is intended to account for a number of the key risk management principles required for managing the process of software development. 
It also provides a support environment to operationalize these management tasks.", "title": "" }, { "docid": "d0cdbd1137e9dca85d61b3d90789d030", "text": "In this paper, we present a methodology for recognizing seated postures using data from pressure sensors installed on a chair. Information about seated postures could be used to help avoid adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. Our system design displays accurate near-real-time classification performance on data from subjects on which the posture recognition system was not trained by using a set of carefully designed, subject-invariant signal features. By using a near-optimal sensor placement strategy, we keep the number of required sensors low, thereby reducing cost and computational complexity. We evaluated the performance of our technology using a series of empirical methods including (1) cross-validation (classification accuracy of 87% for ten postures using data from 31 sensors), and (2) a physical deployment of our system (78% classification accuracy using data from 19 sensors).", "title": "" }, { "docid": "eb90d55afac27ff7d1e43c04002a3478", "text": "BACKGROUND\nThe detection and molecular characterization of circulating tumor cells (CTCs) are one of the most active areas of translational cancer research, with >400 clinical studies having included CTCs as a biomarker. The aims of research on CTCs include (a) estimation of the risk for metastatic relapse or metastatic progression (prognostic information), (b) stratification and real-time monitoring of therapies, (c) identification of therapeutic targets and resistance mechanisms, and (d) understanding metastasis development in cancer patients.\n\n\nCONTENT\nThis review focuses on the technologies used for the enrichment and detection of CTCs. We outline and discuss the current technologies that are based on exploiting the physical and biological properties of CTCs. A number of innovative technologies to improve methods for CTC detection have recently been developed, including CTC microchips, filtration devices, quantitative reverse-transcription PCR assays, and automated microscopy systems. Molecular-characterization studies have indicated, however, that CTCs are very heterogeneous, a finding that underscores the need for multiplex approaches to capture all of the relevant CTC subsets. We therefore emphasize the current challenges of increasing the yield and detection of CTCs that have undergone an epithelial-mesenchymal transition. Increasing assay analytical sensitivity may lead, however, to a decrease in analytical specificity (e.g., through the detection of circulating normal epithelial cells).\n\n\nSUMMARY\nA considerable number of promising CTC-detection techniques have been developed in recent years. The analytical specificity and clinical utility of these methods must be demonstrated in large prospective multicenter studies to reach the high level of evidence required for their introduction into clinical practice.", "title": "" }, { "docid": "72b5a8ab7fc92d6adea3d401ae864243", "text": "Based Heart Pulse Detector N. M. Z. Hashim, N. A. Ali, A. Salleh, A. S. Ja’afar, N. A. Z. Abidin — Faculty of Electronics & Computer Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia. Abstract— The development of heart pulse instruments has been rapid in the market since the 21st century.
However, the heart pulse detector is expensive due to its complicated system and it is widely used only in hospitals and clinics. The project aims to develop a useful photosensor for the medical field that is easy to use and lets users monitor their health anywhere. The other target is to develop a comfortable, reliable instrument that gives accurate heart pulse results using low-cost photosensors. This project involved both hardware and software related to signal processing: mathematical and computational formalisms and modeling techniques for transforming and transmitting analog or digital signals. This project also used a Peripheral Interface Controller (PIC) 16F877A microcontroller as the main unit to control the other elements. Results showed that the project functioned smoothly and successfully and that the overall objectives were achieved. Apart from that, this project provides a good service for people to monitor their heart condition from time to time. In the future, wireless connections, e.g. Global System for Mobile Communications (GSM) and Zigbee, would be developed to make the system more reliable. Furthermore, the system should be compatible with various environments such as an Android-based OS so that it can be controlled away from the original location. Keyword— Colour Wavelength, Heart Rate, Photosensor, PIC 16F877A Microcontroller, Sensor", "title": "" }, { "docid": "227fa1a36ba6b664e37e8c93e133dfd0", "text": "The notion of complex number is intimately related to the Fundamental Theorem of Algebra and is therefore at the very foundation of mathematical analysis. The development of complex algebra, however, has been far from straightforward.1 The human idea of ‘number’ has evolved together with human society. The natural numbers (1, 2, . . . ∈ N) are straightforward to accept, and they have been used for counting in many cultures, irrespective of the actual base of the number system used. At a later stage, for sharing, people introduced fractions in order to answer a simple problem such as ‘if we catch U fish, I will have two parts 2/5 U and you will have three parts 3/5 U of the whole catch’. The acceptance of negative numbers and zero has been motivated by the emergence of economy, for dealing with profit and loss. It is rather impressive that ancient civilisations were aware of the need for irrational numbers such as √2 in the case of the Babylonians [77] and π in the case of the ancient Greeks.2 The concept of a new ‘number’ often came from the need to solve a specific practical problem. For instance, in the above example of sharing U number of fish caught, we need to solve for 2U = 5 and hence to introduce fractions, whereas to solve x² = 2 (diagonal of a square) irrational numbers needed to be introduced. Complex numbers came from the necessity to solve equations such as x² = −1.", "title": "" }, { "docid": "fe52b7bff0974115a0e326813604997b", "text": "Deep learning is a model of machine learning loosely based on our brain. Artificial neural networks have been around since the 1950s, but recent advances in hardware like graphical processing units (GPU), software like cuDNN, TensorFlow, Torch, Caffe, Theano, Deeplearning4j, etc. and new training methods have made training artificial neural networks fast and easy.
In this paper, we are comparing some of the deep learning frameworks on the basis of parameters like modeling capability, interfaces available, platforms supported, parallelizing techniques supported, availability of pre-trained models, community support and documentation quality.", "title": "" }, { "docid": "417ec8f2867323551c0767aace4ff4ad", "text": "FOR SPEECH ENHANCEMENT ALGORITHMS John H.L. Hansen and Bryan L. Pellom, Robust Speech Processing Laboratory, Duke University, Box 90291, Durham, NC 27708-0291, http://www.ee.duke.edu/Research/Speech ABSTRACT Much progress has been made in speech enhancement algorithm formulation in recent years. However, while researchers in the speech coding and recognition communities have standard criteria for algorithm performance comparison, similar standards do not exist for researchers in speech enhancement. This paper discusses the necessary ingredients for an effective speech enhancement evaluation. We propose that researchers use the evaluation core test set of TIMIT (192 sentences), with a set of noise files, and a combination of objective measures and subjective testing for broad and fine phone-level quality assessment. Evaluation results include overall objective speech quality measure scores, measure histograms, and phoneme class and individual phone scores. The reported results are meant to illustrate specific ways of detailing quality assessment for an enhancement algorithm.", "title": "" } ]
scidocsrr
dc09c1afdec2f4438587ec9dfc5da30f
GST: GPU-decodable supercompressed textures
[ { "docid": "911ca70346689d6ba5fd01b1bc964dbe", "text": "We present a novel texture compression scheme, called iPACKMAN, targeted for hardware implementation. In terms of image quality, it outperforms the previous de facto standard texture compression algorithms in the majority of all cases that we have tested. Our new algorithm is an extension of the PACKMAN texture compression system, and while it is a bit more complex than PACKMAN, it is still very low in terms of hardware complexity.", "title": "" }, { "docid": "90ca045940f1bc9517c64bd93fd33d37", "text": "We present a new algorithm for encoding low dynamic range images into fixed-rate texture compression formats. Our approach provides orders of magnitude improvements in speed over existing publicly-available compressors, while generating high quality results. The algorithm is applicable to any fixed-rate texture encoding scheme based on Block Truncation Coding and we use it to compress images into the OpenGL BPTC format. The underlying technique uses an axis-aligned bounding box to estimate the proper partitioning of a texel block and performs a generalized cluster fit to compute the endpoint approximation. This approximation can be further refined using simulated annealing. The algorithm is inherently parallel and scales with the number of processor cores. We highlight its performance on low-frequency game textures and the high frequency Kodak Test Image Suite.", "title": "" } ]
[ { "docid": "cc9c9720b223ff1d433758bce11a373a", "text": "or to skim the text of the article quickly, while academics are more likely to download and print the paper. Further research investigating the ratio between HTML views and PDF downloads could uncover interesting findings about how the public interacts with the open access (OA) research literature. Scholars In addition to tracking scholarly impacts on traditionally invisible audiences, altmetrics hold potential for tracking previously hidden scholarly impacts. Faculty of 1000 Faculty of 1000 (F1000) is a service publishing reviews of important articles, as adjudged by a core “faculty” of selected scholars. Wets, Weedon, and Velterop (2003) argue that F1000 is valuable because it assesses impact at the article level, and adds a human level assessment that statistical indicators lack. Others disagree (Nature Neuroscience, 2005), pointing to a very strong correlation (r = 0.93) between F1000 score and Journal Impact Factor. This said, the service has clearly demonstrated some value, as over two thirds of the world’s top research institutions pay the annual subscription fee to use F1000 (Wets et al., 2003). Moreover, F1000 has been to shown to spot valuable articles which “sole reliance on bibliometric indicators would have led [researchers] to miss” (Allen, Jones, Dolby, Lynn, & Walport, 2009, p. 1). In the PLoS dataset, F1000 recommendations were not closely associated with citation or other altmetrics counts, and formed their own factor in factor analysis, suggesting they track a relatively distinct sort of impact. Conversation (scholarly blogging) In this context, “scholarly blogging” is distinguished from its popular counterpart by the expertise and qualifications of the blogger. While a useful distinction, this is inevitably an imprecise one. One approach has been to limit the investigation to science-only aggregators like ResearchBlogging (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Academic blogging has grown steadily in visibility; academics have blogged their dissertations (Efimova, 2009), and the ranks of academic bloggers contain several Fields Medalists, Nobel laureates, and other eminent scholars (Nielsen, 2009). Economist and Nobel laureate Paul Krugman (Krugman, 2012), himself a blogger, argues that blogs are replacing the working-paper culture that has in turn already replaced economics journals as distribution tools. Given its importance, there have been surprisingly few altmetrics studies of scholarly blogging. Extant research, however, has shown that blogging shares many of the characteristics of more formal communication, including a long-tail distribution of cited articles (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Although science bloggers can write anonymously, most blog under their real names (Shema & Bar-Ilan, 2011). Conversation (Twitter) Scholars on Twitter use the service to support different activities, including teaching (Dunlap & Lowenthal, 2009; Junco, Heiberger, & Loken, 2011), participating in conferences (Junco et al., 2011; Letierce et al., 2010; Ross et al., 2011), citing scholarly articles (Priem & Costello, 2010; Weller, Dröge, & Puschmann, 2011), and engaging in informal communication (Ross et al., 2011; Zhao & Rosson, 2009). Citations from Twitter are a particularly interesting data source, since they capture the sort of informal discussion that accompanies early important work. 
There is, encouragingly, evidence that Tweeting scholars take citations from Twitter seriously, both in creating and reading them (Priem & Costello, 2010). The number of scholars on Twitter is growing steadily, as shown in Figure 1. The same study found that, in a sample of around 10,000 Ph.D. students and faculty members at five representative universities, one 1 in 40 scholars had an active Twitter account. Although some have suggested that Twitter is only used by younger scholars, rank was not found to significantly associate with Twitter use, and in fact faculty members’ tweets were twice as likely to discuss their and others’ scholarly work. Conversation (article commenting) Following the lead of blogs and other social media platforms, many journals have added article-level commenting to their online platforms in the middle of the last decade. In theory, the discussion taking place in these threads is another valuable lens into the early impacts of scientific ideas. In practice, however, many commenting systems are virtual ghost towns. In a sample of top medical journals, fully half had commenting systems laying idle, completely unused by anyone (Schriger, Chehrazi, Merchant, & Altman, 2011). However, commenting was far from universally unsuccessful; several journals had comments on 50-76% of their articles. In a sample from the British Medical Journal, articles had, on average, nearly five comments each (Gotzsche, Delamothe, Godlee, & Lundh, 2010). Additionally, many articles may accumulate comments in other environments; the growing number of external comment sites allows users to post comments on journal articles published elsewhere. These have tended to appear and disappear quickly over the last few years. Neylon (2010) argues that online article commenting is thriving, particularly for controversial papers, but that \"...people are much more comfortable commenting in their own spaces” (para. 5), like their blogs and on Twitter. Reference managers Reference managers like Mendeley and CiteULike are very useful sources of altmetrics data and are currently among the most studied. Although scholars have used electronic reference managers for some time, this latest generation offers scientometricians the chance to query their datasets, offering a compelling glimpse into scholars’ libraries. It is worth summarizing three main points, though. First, the most important social reference managers are CiteULike and Mendeley. Another popular reference manager, Zotero, has received less study (but see Lucas, 2008). Papers and ReadCube are newer, smaller reference managers; Connotea and 2Collab both dealt poorly with spam; the latter has closed, and the former may follow. Second, the usage base of social reference managers—particularly Mendeley—is large and growing rapidly. Mendeley’s coverage, in particular, rivals that of commercial databases like Scopus and Web of Science (WoS) (Bar-Ilan et al., 2012; Haustein & Siebenlist, 2011; Li et al., 2011; Priem et al., 2012). Finally, inclusion in reference managers correlates to citation more strongly than most other altmetrics. Working with various datasets, researchers have reported correlations of .46 (Bar-Ilan, 2012), .56 (Li et al., 2011), and .5 (Priem et al., 2012) between inclusion in users’ Mendeley libraries, and WoS citations. This closer relationship is likely because of the importance of reference managers in the citation workflow. 
However, the lack of perfect or even strong correlation suggests that this altmetric, too, captures influence not reflected in the citation record. There has been particular interest in using social bookmarking for recommendations (Bogers & van den Bosch, 2008; Jiang, He, & Ni, 2011). pdf downloads As discussed earlier, most research on downloads today does not distinguish between HTML views in PDF downloads. However there is a substantial and growing body of research investigating article downloads, and their relation to later citation. Several researchers have found that downloads predict or correlate with later citation (Perneger, 2004; Brody et al., 2006). The MESUR project is the largest of these studies to date, and used linked usage events to create a novel map of the connections between disciplines, as well as analyses of potential metrics using download and citation data in novel ways (Bollen, et al., 2009). Shuai, Pepe, and Bollen (2012) show that downloads and Twitter citations interact, with Twitter likely driving traffic to new papers, and also reflecting reader interest. Uses, limitations and future research Uses Several uses of altmetrics have been proposed, which aim to capitalize on their speed, breadth, and diversity, including use in evaluation, analysis, and prediction. Evaluation The breadth of altmetrics could support more holistic evaluation efforts; a range of altmetrics may help solve the reliability problems of individual measures by triangulating scores from easily-accessible “converging partial indicators” (Martin & Irvine, 1983, p. 1). Altmetrics could also support the evaluation of increasingly important, non-traditional scholarly products like datasets and software, which are currently underrepresented in the citation record (Howison & Herbsleb, 2011; Sieber & Trumbo, 1995). Research that impacts wider audiences could also be better rewarded; Neylon (2012) relates a compelling example of how tweets reveal clinical use of a research paper—use that would otherwise go undiscovered and unrewarded. The speed of altmetrics could also be useful in evaluation, particularly for younger scholars whose research has not yet accumulated many citations. Most importantly, altmetrics could help open a window on scholars’ “scientific ‘street cred’” (Cronin, 2001, p. 6), helping reward researchers whose subtle influences—in conversations, teaching, methods expertise, and so on— influence their colleagues without perturbing the citation record. Of course, potential evaluators must be strongly cautioned that while uncritical application of any metric is dangerous, this is doubly so with altmetrics, whose research base is not yet adequate to support high-stakes decisions.", "title": "" }, { "docid": "42050d2d11a30e003b9d35fad12daa5e", "text": "Document is unavailable: This DOI was registered to an article that was not presented by the author(s) at this conference. As per section 8.2.1.B.13 of IEEE's \"Publication Services and Products Board Operations Manual,\" IEEE has chosen to exclude this article from distribution. We regret any inconvenience.", "title": "" }, { "docid": "33eeb883ae070fdc1b5a1eb656bce6b9", "text": "Traffic Congestion is one of many serious global problems in all great cities resulted from rapid urbanization which always exert negative externalities upon society. The solution of traffic congestion is highly geocentric and due to its heterogeneous nature, curbing congestion is one of the hard tasks for transport planners. 
It is not possible to suggest unique traffic congestion management framework which could be absolutely applied for every great cities. Conversely, it is quite feasible to develop a framework which could be used with or without minor adjustment to deal with congestion problem. So, the main aim of this paper is to prepare a traffic congestion mitigation framework which will be useful for urban planners, transport planners, civil engineers, transport policy makers, congestion management researchers who are directly or indirectly involved or willing to involve in the task of traffic congestion management. Literature review is the main source of information of this study. In this paper, firstly, traffic congestion is defined on the theoretical point of view and then the causes of traffic congestion are briefly described. After describing the causes, common management measures, using worldwide, are described and framework for supply side and demand side congestion management measures are prepared.", "title": "" }, { "docid": "2d34486ae54b2ed4795a8e85ce22ce57", "text": "We collected a corpus of parallel text in 11 languages from the proceedings of the European Parliament, which are published on the web1. This corpus has found widespread use in the NLP community. Here, we focus on its acquisition and its application as training data for statistical machine translation (SMT). We trained SMT systems for 110 language pairs, which reveal interesting clues into the challenges ahead.", "title": "" }, { "docid": "32f3396d7e843f75c504cd99b00944a0", "text": "This paper aims to address the very challenging problem of efficient and accurate hand tracking from depth sequences, meanwhile to deform a high-resolution 3D hand model with geometric details. We propose an integrated regression framework to infer articulated hand pose, and regress high-frequency details from sparse high-resolution 3D hand model examples. Specifically, our proposed method mainly consists of four components: skeleton embedding, hand joint regression, skeleton alignment, and high-resolution details integration. Skeleton embedding is optimized via a wrinkle-based skeleton refinement method for faithful hand models with fine geometric details. Hand joint regression is based on a deep convolutional network, from which 3D hand joint locations are predicted from a single depth map, then a skeleton alignment stage is performed to recover fully articulated hand poses. Deformable fine-scale details are estimated from a nonlinear mapping between the hand joints and per-vertex displacements. Experiments on two challenging datasets show that our proposed approach can achieve accurate, robust, and real-time hand tracking, while preserve most high-frequency details when deforming a virtual hand.", "title": "" }, { "docid": "59405c31da09ea58ef43a03d3fc55cf4", "text": "The Quality of Service (QoS) management is one of the urgent problems in networking which doesn't have an acceptable solution yet. In the paper the approach to this problem based on multipath routing protocol in SDN is considered. The proposed approach is compared with other QoS management methods. A structural and operation schemes for its practical implementation is proposed.", "title": "" }, { "docid": "7b02c36cef0c195d755b6cc1c7fbda2e", "text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. 
In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.", "title": "" }, { "docid": "bc388488c5695286fe7d7e56ac15fa94", "text": "In this paper a new parking guiding and information system is described. The system assists the user to find the most suitable parking space based on his/her preferences and learned behavior. The system takes into account parameters such as driver's parking duration, arrival time, destination, type preference, cost preference, driving time, and walking distance as well as time-varying parking rules and pricing. Moreover, a prediction algorithm is proposed to forecast the parking availability for different parking locations for different times of the day based on the real-time parking information, and previous parking availability/occupancy data. A novel server structure is used to implement the system. Intelligent parking assist system reduces the searching time for parking spots in urban environments, and consequently leads to a reduction in air pollutions and traffic congestion. On-street parking meters, off-street parking garages, as well as free parking spaces are considered in our system.", "title": "" }, { "docid": "920748fbdcaf91346a40e3bf5ae53d42", "text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].", "title": "" }, { "docid": "8d5d2f266181d456d4f71df26075a650", "text": "Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better tactic coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures. 
The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends", "title": "" }, { "docid": "ab92c8ded0001d4103be4e7a8ee3a1f7", "text": "Metabolic syndrome defines a cluster of interrelated risk factors for cardiovascular disease and diabetes mellitus. These factors include metabolic abnormalities, such as hyperglycemia, elevated triglyceride levels, low high-density lipoprotein cholesterol levels, high blood pressure, and obesity, mainly central adiposity. In this context, extracellular vesicles (EVs) may represent novel effectors that might help to elucidate disease-specific pathways in metabolic disease. Indeed, EVs (a terminology that encompasses microparticles, exosomes, and apoptotic bodies) are emerging as a novel mean of cell-to-cell communication in physiology and pathology because they represent a new way to convey fundamental information between cells. These microstructures contain proteins, lipids, and genetic information able to modify the phenotype and function of the target cells. EVs carry specific markers of the cell of origin that make possible monitoring their fluctuations in the circulation as potential biomarkers inasmuch their circulating levels are increased in metabolic syndrome patients. Because of the mixed components of EVs, the content or the number of EVs derived from distinct cells of origin, the mode of cell stimulation, and the ensuing mechanisms for their production, it is difficult to attribute specific functions as drivers or biomarkers of diseases. This review reports recent data of EVs from different origins, including endothelial, smooth muscle cells, macrophages, hepatocytes, adipocytes, skeletal muscle, and finally, those from microbiota as bioeffectors of message, leading to metabolic syndrome. Depicting the complexity of the mechanisms involved in their functions reinforce the hypothesis that EVs are valid biomarkers, and they represent targets that can be harnessed for innovative therapeutic approaches.", "title": "" }, { "docid": "73c2874b381e49f9c36ae0b43d7e73fb", "text": "Automatic abnormality detection in video sequences has recently gained an increasing attention within the research community. Although progress has been seen, there are still some limitations in current research. While most systems are designed at detecting specific abnormality, others which are capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieve above real-time processing speed. 
Experimental results on two datasets show that the proposed framework can reliably detect abnormalities in the video sequence, outperforming the current state-of-the-art methods.", "title": "" }, { "docid": "19f4de5f01f212bf146087d4695ce15e", "text": "Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to stateof-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a singleimage depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that the improved SVO mapping results in increased robustness and camera tracking accuracy.", "title": "" }, { "docid": "59754857209f45ab7c3708fa413808a3", "text": "Recent studies on the hippocampus and the prefrontal cortex have considerably advanced our understanding of the distinct roles of these brain areas in the encoding and retrieval of memories, and of how they interact in the prolonged process by which new memories are consolidated into our permanent storehouse of knowledge. These studies have led to a new model of how the hippocampus forms and replays memories and how the prefrontal cortex engages representations of the meaningful contexts in which related memories occur, as well as how these areas interact during memory retrieval. Furthermore, they have provided new insights into how interactions between the hippocampus and prefrontal cortex support the assimilation of new memories into pre-existing networks of knowledge, called schemas, and how schemas are modified in this process as the foundation of memory consolidation.", "title": "" }, { "docid": "248a447eb07f0939fa479b0eb8778756", "text": "The present study was done to determine the long-term success and survival of fixed partial dentures (FPDs) and to evaluate the risks for failures due to specific biological and technical complications. A MEDLINE search (PubMed) from 1966 up to March 2004 was conducted, as well as hand searching of bibliographies from relevant articles. Nineteen studies from an initial yield of 3658 titles were finally selected and data were extracted independently by three reviewers. Prospective and retrospective cohort studies with a mean follow-up time of at least 5 years in which patients had been examined clinically at the follow-up visits were included in the meta-analysis. Publications only based on patients records, questionnaires or interviews were excluded. Survival of the FPDs was analyzed according to in situ and intact failure risks. Specific biological and technical complications such as caries, loss of vitality and periodontal disease recurrence as well as loss of retention, loss of vitality, tooth and material fractures were also analyzed. 
The 10-year probability of survival for fixed partial dentures was 89.1% (95% confidence interval (CI): 81-93.8%) while the probability of success was 71.1% (95% CI: 47.7-85.2%). The 10-year risk for caries and periodontitis leading to FPD loss was 2.6% and 0.7%, respectively. The 10-year risk for loss of retention was 6.4%, for abutment fracture 2.1% and for material fractures 3.2%.", "title": "" }, { "docid": "641049f7bdf194b3c326298c5679c469", "text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \" fluff cutting \" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \" science \" —simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably. He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dis-sertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective. The difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …", "title": "" }, { "docid": "7af1da740fbff209987276bf0d765365", "text": "A finite-difference method for solving the time-dependent NavierStokes equations for an incompressible fluid is introduced. This method uses the primitive variables, i.e. the velocities and the pressure, and is equally applicable to problems in two and three space dimensions. Test problems are solved, and an application to a three-dimensional convection problem is presented. Introduction. 
The equations of motion of an incompressible fluid are $\partial_t u_i + u_j \partial_j u_i = -\frac{1}{\rho_0}\partial_i p + \nu \nabla^2 u_i + E_i$ $\big(\nabla^2 = \sum_j \partial_j^2\big)$, $\partial_j u_j = 0$, where $u_i$ are the velocity components, $p$ is the pressure, $\rho_0$ is the density, $E_i$ are the components of the external forces per unit mass, $\nu$ is the coefficient of kinematic viscosity, $t$ is the time, and the indices $i, j$ refer to the space coordinates $x_i, x_j$, $i, j = 1, 2, 3$. $\partial_i$ denotes differentiation with respect to $x_i$, and $\partial_t$ differentiation with respect to the time $t$. The summation convention is used in writing the equations. We write $u_i' = u_i/U$, $x_i' = x_i/d$, $p' = p/(\rho_0 U$", "title": "" }, { "docid": "a57e470ad16c025f6b0aae99de25f498", "text": "Purpose To establish the efficacy and safety of botulinum toxin in the treatment of Crocodile Tear Syndrome and record any possible complications. Methods Four patients with unilateral aberrant VII cranial nerve regeneration following an episode of facial paralysis consented to be included in this study after a comprehensive explanation of the procedure and possible complications was given. On average, an injection of 20 units of botulinum toxin type A (Dysport®) was given to the affected lacrimal gland. The effect was assessed with a Schirmer’s test during taste stimulation. Careful recording of the duration of the effect and the presence of any local or systemic complications was made. Results All patients reported a partial or complete disappearance of the reflex hyperlacrimation following treatment. Schirmer’s tests during taste stimulation documented a significant decrease in tear secretion. The onset of effect of the botulinum toxin was typically 24–48 h after the initial injection and lasted 4–5 months. One patient had a mild increase in his preexisting upper lid ptosis, but no other local or systemic side effects were experienced. Conclusions The injection of botulinum toxin type A into the affected lacrimal glands of patients with gusto-lacrimal reflex is a simple, effective and safe treatment.", "title": "" }, { "docid": "9cdddf98d24d100c752ea9d2b368bb77", "text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathological conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data. Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs, and reduces the sensitivity of models to the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks.
We also show that our method remains more robust to noisy graphs.", "title": "" }, { "docid": "9c800a53208bf1ded97e963ed4f80b28", "text": "We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.", "title": "" } ]
scidocsrr
b036bd83e2c74c99d99e3ee697ecd8e5
Graph Classification with 2D Convolutional Neural Networks
[ { "docid": "d5adbe2a074711bdfcc5f1840f27bac3", "text": "Graph kernels have emerged as a powerful tool for graph comparison. Most existing graph kernels focus on local properties of graphs and ignore global structure. In this paper, we compare graphs based on their global properties as these are captured by the eigenvectors of their adjacency matrices. We present two algorithms for both labeled and unlabeled graph comparison. These algorithms represent each graph as a set of vectors corresponding to the embeddings of its vertices. The similarity between two graphs is then determined using the Earth Mover’s Distance metric. These similarities do not yield a positive semidefinite matrix. To address for this, we employ an algorithm for SVM classification using indefinite kernels. We also present a graph kernel based on the Pyramid Match kernel that finds an approximate correspondence between the sets of vectors of the two graphs. We further improve the proposed kernel using the Weisfeiler-Lehman framework. We evaluate the proposed methods on several benchmark datasets for graph classification and compare their performance to state-of-the-art graph kernels. In most cases, the proposed algorithms outperform the competing methods, while their time complexity remains very attractive.", "title": "" }, { "docid": "2bf9e347e163d97c023007f4cc88ab02", "text": "State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.", "title": "" } ]
[ { "docid": "a172c51270d6e334b50dcc6233c54877", "text": "m U biquitous computing enhances computer use by making many computers available throughout the physical environment, while making them effectively invisible to the user. This article explains what is new and different about the computer science involved in ubiquitous computing. First, it provides a brief overview of ubiquitous computing, then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g., chips), network protocols, interaction substrates (e.g., software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science. Since we started this work at Xerox Palo Alto Research Center (PARC) in 1988 a few places have begun work on this possible next-generation computing environment in which each person is continually interacting with hundreds of nearby wirelessly interconnected computers. The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user. To bring computers to this point while retaining their power will require radically new kinds of computers of all sizes and shapes to be available to each person. I call this future world \"Ubiquitous Comput ing\" (Ubicomp) [27]. The research method for ubiquitous computing is standard experimental computer science: the construction of working prototypes of the necessai-y infrastructure in sufficient quantity to debug the viability of the systems in everyday use; ourselves and a few colleagues serving as guinea pigs. This is", "title": "" }, { "docid": "7be1f8be2c74c438b1ed1761e157d3a3", "text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.", "title": "" }, { "docid": "85007f98272a3fd355015f9f9931bed1", "text": "Fully convolutional neural networks (FCNs) have shown outstanding performance in many computer vision tasks including salient object detection. 
However, there still remains two issues needed to be addressed in deep learning based saliency detection. One is the lack of tremendous amount of annotated data to train a network. The other is the lack of robustness for extracting salient objects in images containing complex scenes. In this paper, we present a new architecture−PDNet, a robust prior-model guided depth-enhanced network for RGB-D salient object detection. In contrast to existing works, in which RGBD values of image pixels are fed directly to a network, the proposed architecture is composed of a master network for processing RGB values, and a sub-network making full use of depth cues and incorporate depth-based features into the master network. To overcome the limited size of the labeled RGB-D dataset for training, we employ a large conventional RGB dataset to pre-train the master network, which proves to contribute largely to the final accuracy. Extensive evaluations over five benchmark datasets demonstrate that our proposed method performs favorably against the state-of-the-art approaches.", "title": "" }, { "docid": "c1d95246f5d1b8c67f4ff4769bb6b9ce", "text": "BACKGROUND\nA previous open-label study of melatonin, a key substance in the circadian system, has shown effects on migraine that warrant a placebo-controlled study.\n\n\nMETHOD\nA randomized, double-blind, placebo-controlled crossover study was carried out in 2 centers. Men and women, aged 18-65 years, with migraine but otherwise healthy, experiencing 2-7 attacks per month, were recruited from the general population. After a 4-week run-in phase, 48 subjects were randomized to receive either placebo or extended-release melatonin (Circadin®, Neurim Pharmaceuticals Ltd., Tel Aviv, Israel) at a dose of 2 mg 1 hour before bedtime for 8 weeks. After a 6-week washout treatment was switched. The primary outcome was migraine attack frequency (AF). A secondary endpoint was sleep quality assessed by the Pittsburgh Sleep Quality Index (PSQI).\n\n\nRESULTS\nForty-six subjects completed the study (96%). During the run-in phase, the average AF was 4.2 (±1.2) per month and during melatonin treatment the AF was 2.8 (±1.6). However, the reduction in AF during placebo was almost equal (p = 0.497). Absolute risk reduction was 3% (95% confidence interval -15 to 21, number needed to treat = 33). A highly significant time effect was found. The mean global PSQI score did not improve during treatment (p = 0.09).\n\n\nCONCLUSION\nThis study provides Class I evidence that prolonged-release melatonin (2 mg 1 hour before bedtime) does not provide any significant effect over placebo as migraine prophylaxis.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class I evidence that 2 mg of prolonged release melatonin given 1 hour before bedtime for a duration of 8 weeks did not result in a reduction in migraine frequency compared with placebo (p = 0.497).", "title": "" }, { "docid": "0000bd646e28d5012d7d77e43f75d2f5", "text": "Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. 
Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.", "title": "" }, { "docid": "982dae78e301aec02012d9834f000d6d", "text": "This paper investigates a universal approach of synthesizing arbitrary ternary logic circuits in quantum computation based on the truth table technology. It takes into account of the relationship of classical logic and quantum logic circuits. By adding inputs with constant value and garbage outputs, the classical non-reversible logic can be transformed into reversible logic. Combined with group theory, it provides an algorithm using the ternary Swap gate, ternary NOT gate and ternary Toffoli gate library. Simultaneously, the main result shows that the numbers of qutrits we use are minimal compared to other methods. We also illustrate with two examples to test our approach.", "title": "" }, { "docid": "5039733d1fd5361820489549bfd2669f", "text": "Reporting the economic burden of oral diseases is important to evaluate the societal relevance of preventing and addressing oral diseases. In addition to treatment costs, there are indirect costs to consider, mainly in terms of productivity losses due to absenteeism from work. The purpose of the present study was to estimate the direct and indirect costs of dental diseases worldwide to approximate the global economic impact. Estimation of direct treatment costs was based on a systematic approach. For estimation of indirect costs, an approach suggested by the World Health Organization's Commission on Macroeconomics and Health was employed, which factored in 2010 values of gross domestic product per capita as provided by the International Monetary Fund and oral burden of disease estimates from the 2010 Global Burden of Disease Study. Direct treatment costs due to dental diseases worldwide were estimated at US$298 billion yearly, corresponding to an average of 4.6% of global health expenditure. Indirect costs due to dental diseases worldwide amounted to US$144 billion yearly, corresponding to economic losses within the range of the 10 most frequent global causes of death. Within the limitations of currently available data sources and methodologies, these findings suggest that the global economic impact of dental diseases amounted to US$442 billion in 2010. Improvements in population oral health may imply substantial economic benefits not only in terms of reduced treatment costs but also because of fewer productivity losses in the labor market.", "title": "" }, { "docid": "5fd63f9800b5df10d0c370c0db252b0d", "text": "This article describes an algorithm for the automated generation of any Euler diagram starting with an abstract description of the diagram. An automated generation mechanism for Euler diagrams forms the foundations of a generation algorithm for notations such as Harel’s higraphs, constraint diagrams and some of the UML notation. An algorithm to generate diagrams is an essential component of a diagram tool for users to generate, edit and reason with diagrams. The work makes use of properties of the dual graph of an abstract diagram to identify which abstract diagrams are “drawable” within given wellformedness rules on concrete diagrams. A Java program has been written to implement the algorithm and sample output is included.", "title": "" }, { "docid": "b27fc98c7e962b29819aa46429a18a9c", "text": "Large scale graph processing is a major research area for Big Data exploration. 
Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that allows for scalable execution on distributed systems naturally. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to poor compute to communication overhead ratio and slow convergence of iterative superstep. In this paper we introduce GoFFish a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.", "title": "" }, { "docid": "ff3c4893cfb9c3830750e65ec5ddf9ef", "text": "One of the most successful semi-supervised learning approaches is co-training for multiview data. In co-training, one trains two classifiers, one for each view, and uses the most confident predictions of the unlabeled data for the two classifiers to “teach each other”. In this paper, we extend co-training to learning scenarios without an explicit multi-view representation. Inspired by a theoretical analysis of Balcan et al. (2004), we introduce a novel algorithm that splits the feature space during learning, explicitly to encourage co-training to be successful. We demonstrate the efficacy of our proposed method in a weakly-supervised setting on the challenging Caltech-256 object recognition task, where we improve significantly over previous results by (Bergamo & Torresani, 2010) in almost all training-set size settings.", "title": "" }, { "docid": "2361e70109a3595241b2cdbbf431659d", "text": "There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to inefficiency of classical optimization algorithms in solving larger scale combinatorial and/or highly non-linear problems. The situation is not much different if integer and/or discrete decision variables are required in most of the linear optimization models as well. One of the main characteristics of the classical optimization algorithms is their inflexibility to adapt the solution algorithm to a given problem. Generally a given problem is modelled in such a way that a classical algorithm like simplex algorithm can handle it. This generally requires making several assumptions which might not be easy to validate in many situations. In order to overcome these limitations more flexible and adaptable general purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as close as to reality. Based on this motivation many nature inspired algorithms were developed in the literature like genetic algorithms, simulated annealing and tabu search. It has also been shown that these algorithms can provide far better solutions in comparison to classical algorithms. A branch of nature inspired algorithms which are known as swarm intelligence is focused on insect behaviour in order to develop some meta-heuristics which can mimic insect's problem solution abilities. 
Ant colony optimization, particle swarm optimization, wasp nets etc. are some of the well known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model natural behaviour of real honey bees in food foraging. Honey bees use several mechanisms like waggle dance to optimally locate food sources and to search new ones. This makes them a good candidate for developing new intelligent search algorithms. In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, development of an ABC algorithm for solving generalized assignment problem which is known as NP-hard problem is presented in detail along with some comparisons. It is a well known fact that classical optimization techniques impose several limitations on solving mathematical programming and operational research models. This is mainly due to inherent solution mechanisms of these techniques. Solution strategies of classical optimization algorithms are generally depended on the type of objective and constraint", "title": "" }, { "docid": "cf7af6838ae725794653bfce39c609b8", "text": "This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence vectorization strategy, network depth and the deep feature to predict for image to sentence matching. We also generalize Word2VisualVec for matching a video to a sentence, by extending the predictive abilities to 3-D ConvNet features as well as a visual-audio representation. Experiments on four challenging image and video benchmarks detail Word2VisualVec’s properties, capabilities for image and video to sentence matching, and on all datasets its state-of-the-art results.", "title": "" }, { "docid": "c304ab8c4b08d2d0019bec1bdc437672", "text": "Highly efficient ammonia synthesis at a low temperature is desirable for future energy and material sources. We accomplished efficient electrocatalytic low-temperature ammonia synthesis with the highest yield ever reported. The maximum ammonia synthesis rate was 30 099 μmol gcat-1 h-1 over a 9.9 wt% Cs/5.0 wt% Ru/SrZrO3 catalyst, which is a very high rate. Proton hopping on the surface of the heterogeneous catalyst played an important role in the reaction, revealed by in situ IR measurements. Hopping protons activate N2 even at low temperatures, and they moderate the harsh reaction condition requirements. Application of an electric field to the catalyst resulted in a drastic decrease in the apparent activation energy from 121 kJ mol-1 to 37 kJ mol-1. N2 dissociative adsorption is markedly promoted by the application of the electric field, as evidenced by DFT calculations. The process described herein opens the door for small-scale, on-demand ammonia synthesis.", "title": "" }, { "docid": "ee4c8c4d9bbd39562ecd644cbc9cde90", "text": "We consider generic optimization problems that can be formu lated as minimizing the cost of a feasible solution w T x over a combinatorial feasible set F ⊂ {0, 1}. 
For these problems we describe a framework of risk-averse stochastic problems where the cost vector W has independent random components, unknown at the time of so lution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic cost has a small tail or a small convex combi nation of mean and standard deviation. Our models can be equivalently reformulated as nonconvex programs for whi ch no efficient algorithms are known. In this paper, we make progress on these hard problems. Our results are several efficient general-purpose approxim ation schemes. They use as a black-box (exact or approximate) the solution to the underlying deterministic pr oblem and thus immediately apply to arbitrary combinatoria l problems. For example, from an available δ-approximation algorithm to the linear problem, we constru ct aδ(1 + ǫ)approximation algorithm for the stochastic problem, which invokes the linear algorithm only a logarithmic number of times in the problem input (and polynomial in 1 ǫ ), for any desired accuracy level ǫ > 0. The algorithms are based on a geometric analysis of the curvature and approximabilit y of he nonlinear level sets of the objective functions.", "title": "" }, { "docid": "d681c9c5a3f1f2069025d605a98bd764", "text": "The Smart Home concept integrates smart applications in the daily human life. In recent years, Smart Homes have increased security and management challenges due to the low capacity of small sensors, multiple connectivity to the Internet for efficient applications (use of big data and cloud computing), and heterogeneity of home systems, which require inexpert users to configure devices and micro-systems. This article presents current security and management approaches in Smart Homes and shows the good practices imposed on the market for developing secure systems in houses. At last, we propose future solutions for efficiently and securely managing the Smart Homes.", "title": "" }, { "docid": "128ea037369e69aefa90ec37ae1f9625", "text": "The deep two-stream architecture [23] exhibited excellent performance on video based action recognition. The most computationally expensive step in this approach comes from the calculation of optical flow which prevents it to be real-time. This paper accelerates this architecture by replacing optical flow with motion vector which can be obtained directly from compressed videos without extra calculation. However, motion vector lacks fine structures, and contains noisy and inaccurate motion patterns, leading to the evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vector are inherent correlated. Transferring the knowledge learned with optical flow CNN to motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this, initialization transfer, supervision transfer and their combination. Experimental results show that our method achieves comparable recognition performance to the state-of-the-art, while our method can process 390.7 frames per second, which is 27 times faster than the original two-stream method.", "title": "" }, { "docid": "9581483f301b3522b88f6690b2668217", "text": "AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method – specifically hypothesis testing – in AI is typically conducted in service of engineering objectives. 
Growing interest in topics such as fairness and algorithmic bias show that engineering-focused questions only comprise a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems’ behavior. To close this gap, we argue that the study of AI could benefit from the greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market’s potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap. 1 The Many Facets of AI Research Although AI is a sub-discipline of computer science, AI researchers do not exclusively use the scientific method in their work. For example, the methods used by early AI researchers often drew from logic, a subfield of mathematics, and are distinct from the scientific method we think of today. Indeed AI has adopted many techniques and approaches over time. In this section, we distinguish and explore the history of these ∗Equal contribution. methodologies with a particular emphasis on characterizing the evolving science of AI.", "title": "" }, { "docid": "8b9bf16bd915d795f62aae155c1ecf06", "text": "Wearing a wet diaper for prolonged periods, cause diaper rash. This paper presents an automated alarm system for Diaper wet. The design system using an advanced RF transceiver and GSM system to sound an alarm on the detection of moisture in the diaper to alert the intended person to change the diaper. A wet diaper detector comprises an elongated pair of spaced fine conductors which form the wet sensor. The sensor is positioned between the layers of a diaper in a region subject to wetness. The detector and RF transmitter are adapted to be easily coupled to the protruding end of the elongated sensor. When the diaper is wet the resistance between the spaced conductors falls below a pre-established value. Consequently, the detector and RF transmitter sends a signal to the RF receiver and the GSM to produce the require alarm. When the diaper is changed, the detector unit is decoupled from the pressing studs for reuse and the conductor is discarded along with the soiled diaper. Our experimental tests show that the designed system perfectly produces the intended alarm and can be adjusted for different level of wet if needed.", "title": "" }, { "docid": "cd449faa3508b96cd827647de9f9c0cb", "text": "Living with unrelenting pain (chronic pain) is maladaptive and is thought to be associated with physiological and psychological modifications, yet there is a lack of knowledge regarding brain elements involved in such conditions. 
Here, we identify brain regions involved in spontaneous pain of chronic back pain (CBP) in two separate groups of patients (n = 13 and n = 11), and contrast brain activity between spontaneous pain and thermal pain (CBP and healthy subjects, n = 11 each). Continuous ratings of fluctuations of spontaneous pain during functional magnetic resonance imaging were separated into two components: high sustained pain and increasing pain. Sustained high pain of CBP resulted in increased activity in the medial prefrontal cortex (mPFC; including rostral anterior cingulate). This mPFC activity was strongly related to intensity of CBP, and the region is known to be involved in negative emotions, response conflict, and detection of unfavorable outcomes, especially in relation to the self. In contrast, the increasing phase of CBP transiently activated brain regions commonly observed for acute pain, best exemplified by the insula, which tightly reflected duration of CBP. When spontaneous pain of CBP was contrasted to thermal stimulation, we observe a double-dissociation between mPFC and insula with the former correlating only to intensity of spontaneous pain and the latter correlating only to pain intensity for thermal stimulation. These findings suggest that subjective spontaneous pain of CBP involves specific spatiotemporal neuronal mechanisms, distinct from those observed for acute experimental pain, implicating a salient role for emotional brain concerning the self.", "title": "" }, { "docid": "60cc418b3b5a47e8f636b6c54a0a2d5e", "text": "Continued use of petroleum sourced fuels is now widely recognized as unsustainable because of depleting supplies and the contribution of these fuels to the accumulation of carbon dioxide in the environment. Renewable, carbon neutral, transport fuels are necessary for environmental and economic sustainability. Biodiesel derived from oil crops is a potential renewable and carbon neutral alternative to petroleum fuels. Unfortunately, biodiesel from oil crops, waste cooking oil and animal fat cannot realistically satisfy even a small fraction of the existing demand for transport fuels. As demonstrated here, microalgae appear to be the only source of renewable biodiesel that is capable of meeting the global demand for transport fuels. Like plants, microalgae use sunlight to produce oils but they do so more efficiently than crop plants. Oil productivity of many microalgae greatly exceeds the oil productivity of the best producing oil crops. Approaches for making microalgal biodiesel economically competitive with petrodiesel are discussed.", "title": "" } ]
scidocsrr
7daf08238d130b9662bf4b08386d1cfd
A new infrared image enhancement algorithm
[ { "docid": "82592f60e0039089e3c16d9534780ad5", "text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.", "title": "" } ]
[ { "docid": "814aa0089ce9c5839d028d2e5aca450d", "text": "Espresso is a document-oriented distributed data serving platform that has been built to address LinkedIn's requirements for a scalable, performant, source-of-truth primary store. It provides a hierarchical document model, transactional support for modifications to related documents, real-time secondary indexing, on-the-fly schema evolution and provides a timeline consistent change capture stream. This paper describes the motivation and design principles involved in building Espresso, the data model and capabilities exposed to clients, details of the replication and secondary indexing implementation and presents a set of experimental results that characterize the performance of the system along various dimensions.\n When we set out to build Espresso, we chose to apply best practices in industry, already published works in research and our own internal experience with different consistency models. Along the way, we built a novel generic distributed cluster management framework, a partition-aware change- capture pipeline and a high-performance inverted index implementation.", "title": "" }, { "docid": "4fb5658723d791803c1fe0fdbd7ebdeb", "text": "WAP-8294A2 (lotilibcin, 1) is a potent antibiotic with superior in vivo efficacy to vancomycin against methicillin-resistant Staphylococcus aureus (MRSA). Despite the great medical importance, its molecular mode of action remains unknown. Here we report the total synthesis of complex macrocyclic peptide 1 comprised of 12 amino acids with a β-hydroxy fatty-acid chain, and its deoxy analogue 2. A full solid-phase synthesis of 1 and 2 enabled their rapid assembly and the first detailed investigation of their functions. Compounds 1 and 2 were equipotent against various strains of Gram-positive bacteria including MRSA. We present evidence that the antimicrobial activities of 1 and 2 are due to lysis of the bacterial membrane, and their membrane-disrupting effects depend on the presence of menaquinone, an essential factor for the bacterial respiratory chain. The established synthetic routes and the menaquinone-targeting mechanisms provide valuable information for designing and developing new antibiotics based on their structures.", "title": "" }, { "docid": "caaec31a08d530071bd87e936eda79f4", "text": "A string dictionary is a basic tool for storing a set of strings in many kinds of applications. Recently, many applications need space-efficient dictionaries to handle very large datasets. In this paper, we propose new compressed string dictionaries using improved double-array tries. The double-array trie is a data structure that can implement a string dictionary supporting extremely fast lookup of strings, but its space efficiency is low. We introduce approaches for improving the disadvantage. From experimental evaluations, our dictionaries can provide the fastest lookup compared to state-of-the-art compressed string dictionaries. Moreover, the space efficiency is competitive in many cases.", "title": "" }, { "docid": "40ba65504518383b4ca2a6fabff261fe", "text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). 
‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial", "title": "" }, { "docid": "6eb4eb9b80b73bdcd039dfc8e07c3f5a", "text": "Code duplication or copying a code fragment and then reuse by pasting with or without any modifications is a well known code smell in software maintenance. Several studies show that about 5% to 20% of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modifications. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. 
Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research. ∗This document represents our initial findings and a further study is being carried on. Reader’s feedback is welcome at [email protected].", "title": "" }, { "docid": "cf264a124cc9f68cf64cacb436b64fa3", "text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.", "title": "" }, { "docid": "356a2c0b4837cf3d001068d43cb2b633", "text": "A design is described of a broadband circularly-polarized (CP) slot antenna. A conventional annular-ring slot antenna is first analyzed, and it is found that two adjacent CP modes can be simultaneously excited through the proximity coupling of an L-shaped feed line. By tuning the dimensions of this L-shaped feed line, the two CP modes can be coupled together and a broad CP bandwidth is thus formed. The design method is also valid when the inner circular patch of the annular-ring slot antenna is vertically raised from the ground plane. In this case, the original band-limited ring slot antenna is converted into a wide-band structure that is composed of a circular wide slot and a parasitic patch, and consequently the CP bandwidth is further enhanced. For the patch-loaded wide slot antenna, its key parameters are investigated to show how to couple the two CP modes and achieve impedance matching. The effects of the distance between the parasitic patch and wide slot on the CP bandwidth and antenna gain are also presented and discussed in details.", "title": "" }, { "docid": "784dc5ac8e639e3ba4103b4b8653b1ff", "text": "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. 
We propose an alternate approach using L/sub 1/ norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.", "title": "" }, { "docid": "b4d7a17eb034bcf5f6616d9338fe4265", "text": "Accessory breasts, usually with a protuberant appearance, are composed of both the central accessory breast tissue and adjacent fat tissue. They are a palpable convexity and cosmetically unsightly. Consequently, patients often desire cosmetic improvement. The traditional general surgical treatment for accessory breasts is removal of the accessory breast tissue, fat tissue, and covering skin as a whole unit. A rather long ugly scar often is left after this operation. A minimally invasive method frequently used by the plastic surgeon is to “dig out” the accessory breast tissue. A central depression appearance often is left due to the adjacent fat tissue remnant. From the cosmetic point of view, neither a long scar nor a bulge is acceptable. A minimal incision is made, and the tumescent liposuction technique is used to aspirate out both the central accessory breast tissue and adjacent fat tissue. If there is an areola or nipple in the accessory breast, either the areola or nipple is excised after liposuction during the same operation. For patients who have too much extra skin in the accessory breast area, a small fusiform incision is made to remove the extra skin after the accessory breast tissue and fat tissue have been aspirated out. From August 2003 to January 2008, 51 patients underwent surgery using the described technique. All were satisfied with their appearance after their initial surgery except for two patients with minimal associated morbidity. This report describes a new approach for treating accessory breasts that results in minimal scarring and a better appearance than can be achieved with traditional methods.", "title": "" }, { "docid": "4d3468bb14b7ad933baac5c50feec496", "text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitation limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining, at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.", "title": "" }, { "docid": "c6befaca710e45101b9a12dbc8110a0b", "text": "The realized strategy contents of information systems (IS) strategizing are a result of both deliberate and emergent patterns of action. In this paper, we focus on emergent patterns of action by studying the formation of strategies that build on local technology-mediated practices. This is done through case study research of the emergence of a sustainability strategy at a European automaker. 
Studying the practices of four organizational sub-communities, we develop a process perspective of sub-communities’ activity-based production of strategy contents. The process model explains the contextual conditions that make subcommunities initiate SI strategy contents production, the activity-based process of strategy contents production, and the IS strategy outcome. The process model, which draws on Jarzabkowski’s strategy-as-practice lens and Mintzberg’s strategy typology, contributes to the growing IS strategizing literature that examines local practices in IS efforts of strategic importance. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3caa8fc1ea07fcf8442705c3b0f775c5", "text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.", "title": "" }, { "docid": "f9571dc9a91dd8c2c6495814c44c88c0", "text": "Automatic number plate recognition is the task of extracting vehicle registration plates and labeling it for its underlying identity number. It uses optical character recognition on images to read symbols present on the number plates. Generally, numberplate recognition system includes plate localization, segmentation, character extraction and labeling. This research paper describes machine learning based automated Nepali number plate recognition model. Various image processing algorithms are implemented to detect number plate and to extract individual characters from it. Recognition system then uses Support Vector Machine (SVM) based learning and prediction on calculated Histograms of Oriented Gradients (HOG) features from each character. The system is evaluated on self-created Nepali number plate dataset. Evaluation accuracy of number plate character dataset is obtained as; 6.79% of average system error rate, 87.59% of average precision, 98.66% of average recall and 92.79% of average f-score. The accuracy of the complete number plate labeling experiment is obtained as 75.0%. Accuracy of the automatic number plate recognition is greatly influenced by the segmentation accuracy of the individual characters along with the size, resolution, pose, and illumination of the given image. 
Keywords—Nepali License Plate Recognition, Number Plate Detection, Feature Extraction, Histograms of Oriented Gradients, Optical Character Recognition, Support Vector Machines, Computer Vision, Machine Learning", "title": "" }, { "docid": "6a0c269074d80f26453d1fec01cafcec", "text": "Advances in neurobiology permit neuroscientists to manipulate specific brain molecules, neurons and systems. This has lead to major advances in the neuroscience of reward. Here, it is argued that further advances will require equal sophistication in parsing reward into its specific psychological components: (1) learning (including explicit and implicit knowledge produced by associative conditioning and cognitive processes); (2) affect or emotion (implicit 'liking' and conscious pleasure) and (3) motivation (implicit incentive salience 'wanting' and cognitive incentive goals). The challenge is to identify how different brain circuits mediate different psychological components of reward, and how these components interact.", "title": "" }, { "docid": "0d2e9d514586f083007f5e93d8bb9844", "text": "Recovering Matches: Analysis-by-Synthesis Results Starting point: Unsupervised learning of image matching Applications: Feature matching, structure from motion, dense optical flow, recognition, motion segmentation, image alignment Problem: Difficult to do directly (e.g. based on video) Insights: Image matching is a sub-problem of frame interpolation Frame interpolation can be learned from natural video sequences", "title": "" }, { "docid": "c28b48557a4eda0d29200170435f2935", "text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.", "title": "" }, { "docid": "b3f5d9335cccf62797c86b76fa2c9e7e", "text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. 
The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017", "title": "" }, { "docid": "dcec6ef9e08d7bcfa86aca8d045b6bd4", "text": "This article examines the intellectual and institutional factors that contributed to the collaboration of neuropsychiatrist Warren McCulloch and mathematician Walter Pitts on the logic of neural networks, which culminated in their 1943 publication, \"A Logical Calculus of the Ideas Immanent in Nervous Activity.\" Historians and scientists alike often refer to the McCulloch-Pitts paper as a landmark event in the history of cybernetics, and fundamental to the development of cognitive science and artificial intelligence. This article seeks to bring some historical context to the McCulloch-Pitts collaboration itself, namely, their intellectual and scientific orientations and backgrounds, the key concepts that contributed to their paper, and the institutional context in which their collaboration was made. Although they were almost a generation apart and had dissimilar scientific backgrounds, McCulloch and Pitts had similar intellectual concerns, simultaneously motivated by issues in philosophy, neurology, and mathematics. This article demonstrates how these issues converged and found resonance in their model of neural networks. By examining the intellectual backgrounds of McCulloch and Pitts as individuals, it will be shown that besides being an important event in the history of cybernetics proper, the McCulloch-Pitts collaboration was an important result of early twentieth-century efforts to apply mathematics to neurological phenomena.", "title": "" }, { "docid": "5a912359338b6a6c011e0d0a498b3e8d", "text": "Learning Granger causality for general point processes is a very challenging task. In this paper, we propose an effective method, learning Granger causality, for a special but significant type of point processes — Hawkes process. According to the relationship between Hawkes process’s impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions’ coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparsegroup-lasso (SGL) regularizer. Additionally, the flexibility of our model allows to incorporate the clustering structure event types into learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.", "title": "" }, { "docid": "e13d6cd043ea958e9731c99a83b6de18", "text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. 
Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.", "title": "" } ]
scidocsrr
2c90d38baf7071352aa4a45ea975828a
Robust Extreme Multi-label Learning
[ { "docid": "78f8d28f4b20abbac3ad848033bb088b", "text": "Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both tree- and DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. The proposed method consistently outperforms the state-of-the-art method on both tree- and DAG-structured hierarchies.", "title": "" }, { "docid": "c6a44d2313c72e785ae749f667d5453c", "text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data di = f(ti) + zi, i = 0, …, n−1, ti = i/n, zi iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount √(2 log n)/√n. We prove two results about that estimator. [Smooth]: With high probability f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.", "title": "" }, { "docid": "c60ffb344e85887e06ed178d4941eb0e", "text": "Multi-label learning arises in many real-world tasks where an object is naturally associated with multiple concepts. It is well-accepted that, in order to achieve a good performance, the relationship among labels should be exploited. Most existing approaches require the label relationship as prior knowledge, or exploit by counting the label co-occurrence. In this paper, we propose the MAHR approach, which is able to automatically discover and exploit label relationship. Our basic idea is that, if two labels are related, the hypothesis generated for one label can be helpful for the other label. MAHR implements the idea as a boosting approach with a hypothesis reuse mechanism. In each boosting round, the base learner for a label is generated by not only learning on its own task but also reusing the hypotheses from other labels, and the amount of reuse across labels provides an estimate of the label relationship. Extensive experimental results validate that MAHR is able to achieve superior performance and discover reasonable label relationship. Moreover, we disclose that the label relationship is usually asymmetric.", "title": "" }, { "docid": "2e8e601fd25bbee74b843af86eb98c5f", "text": "In multi-label learning, each training example is associated with a set of labels and the task is to predict the proper label set for the unseen example.
Due to the tremendous (exponential) number of possible label sets, the task of learning from multi-label examples is rather challenging. Therefore, the key to successful multi-label learning is how to effectively exploit correlations between different labels to facilitate the learning process. In this paper, we propose to use a Bayesian network structure to efficiently encode the conditional dependencies of the labels as well as the feature set, with the feature set as the common parent of all labels. To make it practical, we give an approximate yet efficient procedure to find such a network structure. With the help of this network, multi-label learning is decomposed into a series of single-label classification problems, where a classifier is constructed for each label by incorporating its parental labels as additional features. Label sets of unseen examples are predicted recursively according to the label ordering given by the network. Extensive experiments on a broad range of data sets validate the effectiveness of our approach against other well-established methods.", "title": "" } ]
[ { "docid": "a48ac362b2206e608303231593cf776b", "text": "Model-based test case generation is gaining acceptance to the software practitioners. Advantages of this are the early detection of faults, reducing software development time etc. In recent times, researchers have considered different UML diagrams for generating test cases. Few work on the test case generation using activity diagrams is reported in literatures. However, the existing work consider activity diagrams in method scope and mainly follow UML 1.x for modeling. In this paper, we present an approach of generating test cases from activity diagrams using UML 2.0 syntax and with use case scope. We consider a test coverage criterion, called activity path coverage criterion. The test cases generated using our approach are capable of detecting more faults like synchronization faults, loop faults unlike the existing approaches.", "title": "" }, { "docid": "1eea81ad47613c7cd436af451aea904d", "text": "The Internet of Things (IoT) brings together a large variety of devices of different platforms, computational capacities and functionalities. The network heterogeneity and the ubiquity of IoT devices introduce increased demands on both security and privacy protection. Therefore, the cryptographic mechanisms must be strong enough to meet these increased requirements but, at the same time, they must be efficient enough for the implementation on constrained devices. In this paper, we present a detailed assessment of the performance of the most used cryptographic algorithms on constrained devices that often appear in IoT networks. We evaluate the performance of symmetric primitives, such as block ciphers, hash functions, random number generators, asymmetric primitives, such as digital signature schemes, and privacy-enhancing schemes on various microcontrollers, smart-cards and mobile devices. Furthermore, we provide the analysis of the usability of upcoming schemes, such as the homomorphic encryption schemes, group signatures and attribute-based schemes. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "349f24f645b823a7b0cc411d5e2a308e", "text": "In this paper, the analysis and design of an asymmetrical half bridge flyback DC-DC converter is presented, which can minimize the switching power loss by realizing the zero-voltage switching (ZVS) during the transition between the two switches and the zero-current-switching (ZCS) on the output diode. As a result, high efficiency can be achieved. The principle of the converter operation is explained and analyzed. In order to ensure the realization of ZVS in operation, the required interlock delay time between the gate signals of the two switches, the transformer leakage inductance, and the ZVS range of the output current variation are properly calculated. Experimental results from a 8 V/8 A, 200 kHz circuit are also presented, which verify the theoretical analysis.", "title": "" }, { "docid": "e1958dc823feee7f88ab5bf256655bee", "text": "We describe an approach for testing a software system for possible security flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the environment, we view the security testing problem as the problem of testing for the fault-tolerance properties of a software system. We consider each environment perturbation as a fault and the resulting security compromise a failure in the toleration of such faults.
Our approach is based on the well known technique of fault-injection. Environment faults are injected into the system under test and system behavior observed. The failure to tolerate faults is an indicator of a potential security flaw in the system. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what faults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classify 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.", "title": "" }, { "docid": "79ca455db7e7348000c6590a442f9a4c", "text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis upon flap systems. It discusses existing electro-hydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance and life cycle costs. The paper then progresses to describe a full scale actuation demonstrator of the flap system, including the high speed electrical drive, step down gearbox and flaps. Detailed descriptions are given of the fault tolerant motor, power electronics, control architecture and position sensor systems, along with a range of test results, demonstrating the system in operation", "title": "" }, { "docid": "d277a7e6a819af474b31c7a35b9c840f", "text": "Blending face geometry in different expressions is a popular approach for facial animation in films and games. The quality of the animation relies on the set of blend shape expressions, and creating sufficient blend shapes takes a large amount of time and effort. This paper presents a complete pipeline to create a set of blend shapes in different expressions for a face mesh having only a neutral expression. A template blend shapes model having sufficient expressions is provided and the neutral expression of the template mesh model is registered into the target face mesh using a non-rigid ICP (iterative closest point) algorithm. Deformation gradients between the template and target neutral mesh are then transferred to each expression to form a new set of blend shapes for the target face. We solve optimization problem to consistently map the deformation of the source blend shapes to the target face model. The result is a new set of blend shapes for a target mesh having triangle-wise correspondences between the source face and target faces. After creating blend shapes, the blend shape animation of the source face is retargeted to the target mesh automatically.", "title": "" }, { "docid": "43e3d3639d30d9e75da7e3c5a82db60a", "text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold.
First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.", "title": "" }, { "docid": "9f6f00bf0872c54fbf2ec761bf73f944", "text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.", "title": "" }, { "docid": "9c85f1543c688d4fda2124f9d282264f", "text": "Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performances depend on the environment and the sensor, hundreds of variations have been published. However, no comparison frameworks are available, leading to an arduous selection of an appropriate variant for particular experimental conditions. The first contribution of this paper consists of a protocol that allows for a comparison between ICP variants, taking into account a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications, while being modular enough to ease comparison of multiple solutions. This paper presents two examples of these field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods for natural, unstructured and information-deprived environments, these baseline variants also provide a solid basis to which novel solutions could be compared. 
The combination of our protocol, software, and baseline results demonstrate convincingly how open-source software can push forward the research in mapping and navigation. F. Pomerleau (B) · F. Colas · R. Siegwart · S. Magnenat Autonomous System Lab, ETH Zurich, Tannenstrasse 3, 8092 Zurich, Switzerland e-mail: [email protected] F. Colas e-mail: [email protected] R. Siegwart e-mail: [email protected] S. Magnenat e-mail: [email protected]", "title": "" }, { "docid": "eb4fa30a38e27a27dc02c60e007d1f01", "text": "In this paper the design and kinematic performances are presented for a low-cost parallel manipulator with 4 driven cables. It has been conceived for an easy programming of its operation by properly formulating the Kinematics of the parallel architecture that uses cables. A prototype has been built and tests have experienced the feasibility of the system design and its operation.", "title": "" }, { "docid": "434ee509ddfe4afde1407aa3ea7ce9ca", "text": "Phonocardiogram (PCG) signal is used as a diagnostic test in ambulatory monitoring in order to evaluate the heart hemodynamic status and to detect a cardiovascular disease. The objective of this study is to develop an automatic classification method for anomaly (normal vs. abnormal) and quality (good vs. bad) detection of PCG recordings without segmentation. For this purpose, a subset of 18 features is selected among 40 features based on a wrapper feature selection scheme. These features are extracted from time, frequency, and time-frequency domains without any segmentation. The selected features are fed into an ensemble of 20 feedforward neural networks for classification task. The proposed algorithm achieved the overall score of 91.50% (94.23% sensitivity and 88.76% specificity) and 85.90% (86.91% sensitivity and 84.90% specificity) on the train and unseen test datasets, respectively. The proposed method got the second best score in the PhysioNet/CinC Challenge 2016.", "title": "" }, { "docid": "85f41be6bac18846634c725505d78239", "text": "We propose SmartEscape, a real-time, dynamic, intelligent and user-specific evacuation system with a mobile interface for emergency cases such as fire. Unlike past work, we explore dynamically changing conditions and calculate a personal route for an evacuee by considering his/her individual features. SmartEscape, which is fast, low-cost, low resource-consuming and mobile supported, collects various environmental sensory data and takes evacuees’ individual features into account, uses an artificial neural network (ANN) to calculate personal usage risk of each link in the building, eliminates the risky ones, and calculates an optimum escape route under existing circumstances. Then, our system guides the evacuee to the exit through the calculated route with vocal and visual instructions on the smartphone. While the position of the evacuee is detected by RFID (Radio-Frequency Identification) technology, the changing environmental conditions are measured by the various sensors in the building. Our ANN (Artificial Neural Network) predicts dynamically changing risk states of all links according to changing environmental conditions. 
Results show that SmartEscape, with its 98.1% accuracy for predicting risk levels of links for each individual evacuee in a building, is capable of evacuating a great number of people simultaneously, through the shortest and the safest route.", "title": "" }, { "docid": "d2e078d0e40b4be456c57f288c7aaa95", "text": "This study examines the factors influencing online shopping behavior of urban consumers in the State of Andhra Pradesh, India and provides a better understanding of the potential of electronic marketing for both researchers and online retailers. Data from a sample of 1500 Internet users (distributed evenly among six selected major cities) was collected by a structured questionnaire covering demographic profile and the factors influencing online shopping. Factor analysis and multiple regression analysis are used to establish relationship between the factors influencing online shopping and online shopping behavior. The study identified that perceived risk and price positively influenced online shopping behavior. Results also indicated that positive attitude, product risk and financial risk affect negatively the online shopping behavior. Factors Influencing Online Shopping Behavior of Urban Consumers in India", "title": "" }, { "docid": "0a5df67766cd1027913f7f595950754c", "text": "While a number of efficient sequential pattern mining algorithms were developed over the years, they can still take a long time and produce a huge number of patterns, many of which are redundant. These properties are especially frustrating when the goal of pattern mining is to find patterns for use as features in classification problems. In this paper, we describe BIDE-Discriminative, a modification of BIDE that uses class information for direct mining of predictive sequential patterns. We then perform an extensive evaluation on nine real-life datasets of the different ways in which the basic BIDE-Discriminative can be used in real multi-class classification problems, including 1-versus-rest and model-based search tree approaches. The results of our experiments show that 1-versus-rest provides an efficient solution with good classification performance.", "title": "" }, { "docid": "8240df0c9498482522ef86b4b1e924ab", "text": "The advent of the IT-led era and the increased competition have forced companies to react to the new changes in order to remain competitive. Enterprise resource planning (ERP) systems offer distinct advantages in this new business environment as they lower operating costs, reduce cycle times and (arguably) increase customer satisfaction. This study examines, via an exploratory survey of 26 companies, the underlying reasons why companies choose to convert from conventional information systems (IS) to ERP systems and the changes brought in, particularly in the accounting process. The aim is not only to understand the changes and the benefits involved in adopting ERP systems compared with conventional IS, but also to establish the best way forward in future ERP applications. The empirical evidence confirms a number of changes in the accounting process introduced with the adoption of ERP systems.", "title": "" }, { "docid": "6633bf4bf80c4c0a9ceb6024297476ce", "text": "Software Testing In The Real World provides the reader with a tool-box for effectively improving the software testing process. The book gives the practicing. Improving software practices, delivering more customer value, and. 
The outsourcing process, Martin shares a real-life case study, including a.This work offers a toolbox for the practical implementation of the software testing process and how to improve it. Based on real-world issues and examples.Software Testing in the Real World provides the reader with a tool-box for effectively improving the software testing process. The book contains many testing.Software testing is a process, or a series of processes, designed to make sure. From working with this example, that thoroughly testing a complex, real-world.", "title": "" }, { "docid": "a78782e389313600620bfb68fc57a81f", "text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.", "title": "" }, { "docid": "197e64b55c60c684cfd9696652df7a2e", "text": "We describe a method to estimate the power spectral density of nonstationary noise when a noisy speech signal is given. The method can be combined with any speech enhancement algorithm which requires a noise power spectral density estimate. In contrast to other methods, our approach does not use a voice activity detector. Instead it tracks spectral minima in each frequency band without any distinction between speech activity and speech pause. By minimizing a conditional mean square estimation error criterion in each time step we derive the optimal smoothing parameter for recursive smoothing of the power spectral density of the noisy speech signal. Based on the optimally smoothed power spectral density estimate and the analysis of the statistics of spectral minima an unbiased noise estimator is developed. The estimator is well suited for real time implementations. Furthermore, to improve the performance in nonstationary noise we introduce a method to speed up the tracking of the spectral minima. Finally, we evaluate the proposed method in the context of speech enhancement and low bit rate speech coding with various noise types.", "title": "" }, { "docid": "04f939d59dcfdca93bbc60577c78e073", "text": "This paper presents a k-nearest neighbors (kNN) method to detect outliers in large-scale traffic data collected daily in every modern city. Outliers include hardware and data errors as well as abnormal traffic behaviors. The proposed kNN method detects outliers by exploiting the relationship among neighborhoods in data points. 
The farther a data point is beyond its neighbors, the more possible the data is an outlier. Traffic data here was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then transformed to a two-dimensional (2D) (x, y) -coordinate plane by Principal Component Analysis (PCA) for dimension reduction. The distance-based kNN method is evaluated by unsupervised and semi-supervised approaches. The semi-supervised approach reaches 96.19% accuracy.", "title": "" }, { "docid": "8ad1d9fe113f2895e29860ebf773a502", "text": "Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, have the capability of collecting and streaming large amounts of data at unprecedented rates. A number of distinct streaming data models have been proposed. Typical applications for this include smart cites & built environments for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated Cloud-based infrastructure (implemented using CometCloud)-where the allocation of new resources can be based on: (i) differences between sites, i.e., types of resources supported (e.g., GPU versus CPU only), (ii) cost of execution; (iii) failure rate and likely resilience, etc. In particular, we demonstrate how Little's Law-a widely used result in queuing theory-can be adapted to support dynamic control in the context of such resource provisioning.", "title": "" } ]
scidocsrr
a9f1fadd61ef01ef76c985e57d9f5cc6
A Survey on Platoon-Based Vehicular Cyber-Physical Systems
[ { "docid": "1927e46cd9a198b59b83dedd13881388", "text": "Vehicle automation has been one of the fundamental applications within the field of intelligent transportation systems (ITS) since the start of ITS research in the mid-1980s. For most of this time, it has been generally viewed as a futuristic concept that is not close to being ready for deployment. However, recent development of “self-driving” cars and the announcement by car manufacturers of their deployment by 2020 show that this is becoming a reality. The ITS industry has already been focusing much of its attention on the concepts of “connected vehicles” (United States) or “cooperative ITS” (Europe). These concepts are based on communication of data among vehicles (V2V) and/or between vehicles and the infrastructure (V2I/I2V) to provide the information needed to implement ITS applications. The separate threads of automated vehicles and cooperative ITS have not yet been thoroughly woven together, but this will be a necessary step in the near future because the cooperative exchange of data will provide vital inputs to improve the performance and safety of the automation systems. Thus, it is important to start thinking about the cybersecurity implications of cooperative automated vehicle systems. In this paper, we investigate the potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities. We analyze the threats on autonomous automated vehicles and cooperative automated vehicles. This analysis shows the need for considerably more redundancy than many have been expecting. We also raise awareness to generate discussion about these threats at this early stage in the development of vehicle automation systems.", "title": "" }, { "docid": "a8b8f36f7093c79759806559fb0f0cf4", "text": "Cooperative adaptive cruise control (CACC) is an extension of ACC. In addition to measuring the distance to a predecessor, a vehicle can also exchange information with a predecessor by wireless communication. This enables a vehicle to follow its predecessor at a closer distance under tighter control. This paper focuses on the impact of CACC on traffic-flow characteristics. It uses the traffic-flow simulation model MIXIC that was specially designed to study the impact of intelligent vehicles on traffic flow. The authors study the impacts of CACC for a highway-merging scenario from four to three lanes. The results show an improvement of traffic-flow stability and a slight increase in traffic-flow efficiency compared with the merging scenario without equipped vehicles", "title": "" } ]
[ { "docid": "804cee969d47d912d8bdc40f3a3eeb32", "text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.", "title": "" }, { "docid": "6eb1fdb83d936b978429c4a014e2da59", "text": "Marigold (Tagetes erecta), besides being an ornamental plant, has various medicinal properties—it is nematocidal, fungicidal, antibacterial and insecticidal and aids in wound healing. Our work is focused on the blood clotting activity of its leaf extracts. Extraction was done by conventional as well as the Soxhlet method, which was found to be much more efficient using a 1:1 ratio of ethanol to water as solvent. Blood clotting activity of the leaf extract was examined using prothrombin time test using the Owren method. For both extraction methods, the yield percentage and coagulation activity in terms of coagulation time were analysed. Marigold leaf extract obtained using the Soxhlet method has shown very good blood coagulation properties in lower quantities—in the range of microlitres. Further research is needed for identification and quantification of its bioactive compounds, which could be purified further and encapsulated. Since marigold leaf has antibacterial properties too, therefore, it might be possible in the future to develop an antiseptic with blood coagulation activity.", "title": "" }, { "docid": "c7ea816f2bb838b8c5aac3cdbbd82360", "text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). 
For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).", "title": "" }, { "docid": "3f1488c678933361bac4541a97f46a97", "text": "computers in conversational speech has long been a favorite subject in science fiction, reflecting the persistent belief that spoken dialogue would be the most natural and powerful user interface to computers. With recent improvements in computer technology and in speech and language processing, such systems are starting to appear feasible. There are significant technical problems that still need to be solved before speech-driven interfaces become truly conversational. This article describes the results of a 10-year effort building robust spoken dialogue systems at the University of Rochester.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "0918688b8d8fccc3d98ae790d42b3e01", "text": "Structure-from-Motion for unordered image collections has significantly advanced in scale over the last decade. This impressive progress can be in part attributed to the introduction of efficient retrieval methods for those systems. While this boosts scalability, it also limits the amount of detail that the large-scale reconstruction systems are able to produce. In this paper, we propose a joint reconstruction and retrieval system that maintains the scalability of large-scale Structure-from-Motion systems while also recovering the often lost ability of reconstructing fine details of the scene. We demonstrate our proposed method on a large-scale dataset of 7.4 million images downloaded from the Internet.", "title": "" }, { "docid": "b8cd2ce49efd26b08581bea5129dd663", "text": "Automotive radar sensors are applied to measure the target range, azimuth angle and radial velocity simultaneously even in multiple target situations. The single target measured data are necessary for target tracking in advanced driver assistance systems (ADAS) e.g. in highway scenarios. In typical city traffic situations the radar measurement is also important but additionally even the lateral velocity component of each detected target such as a vehicle is of large interest in this case. It is shown in this paper that the lateral velocity of an extended target can be measured even in a mono observation situation. 
For an automotive radar sensor a high spectral resolution is required in this case which means the time on target should be sufficiently large", "title": "" }, { "docid": "01bfdc1124bdab2efa56aba50180129d", "text": "Outlier detection algorithms are often computationally intensive because of their need to score each point in the data. Even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms such as subspace methods are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm, which can be implemented in a few lines of code, and whose complexity is linear in the size of the data set and the space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms and also provides more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation. Furthermore, the approach can be easily generalized to data streams. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.", "title": "" }, { "docid": "aa30615991a1eaa8986c58954d4ca00c", "text": "The real-time analyses of oscillatory EEG components during right and left hand movement imagination allows the control of an electric device. Such a system, called brain-computer interface (BCI), can be used e.g. by patients who are totally paralyzed (e.g. Amyotrophic Lateral Sclerosis) to communicate with their environment. The paper demonstrates a system that utilizes the EEG for the control of a hand prosthesis.", "title": "" }, { "docid": "68c7cf8a10382fab04a7c851a9caebb0", "text": "Circular economy (CE) is a term that exists since the 1970s and has acquired greater importance in the past few years, partly due to the scarcity of natural resources available in the environment and changes in consumer behavior. Cutting-edge technologies such as big data and internet of things (IoT) have the potential to leverage the adoption of CE concepts by organizations and society, becoming more present in our daily lives. Therefore, it is fundamentally important for researchers interested in this subject to understand the status quo of studies being undertaken worldwide and to have the overall picture of it. We conducted a bibliometric literature review from the Scopus Database over the period of 2006–2015 focusing on the application of big data/IoT on the context of CE. This produced the combination of 30,557 CE documents with 32,550 unique big data/IoT studies resulting in 70 matching publications that went through content and social network analysis with the use of ‘R’ statistical tool. We then compared it to some current industry initiatives. Bibliometrics findings indicate China and USA are the most interested countries in the area and reveal a context with significant opportunities for research. In addition, large producers of greenhouse gas emissions, such as Brazil and Russia, still lack studies in the area. Also, a disconnection between important industry initiatives and scientific research seems to exist. 
The results can be useful for institutions and researchers worldwide to understand potential research gaps and to focus future investments/studies in the field.", "title": "" }, { "docid": "c39295b4334a22547b2c4370ef329a7c", "text": "In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging the fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated to a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in realtime. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problem in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. Performance of the proposed methods is validated via extensive simulations. key words: Internet of Things, mobile edge computing, cloudlet, semantics, social network, green energy.", "title": "" }, { "docid": "7749fd32da3e853f9e9cfea74ddda5f8", "text": "This study describes the roles of architects in scaling agile frameworks with the help of a structured literature review. We aim to provide a primary analysis of 20 identified scaling agile frameworks. Subsequently, we thoroughly describe three popular scaling agile frameworks: Scaled Agile Framework, Large Scale Scrum, and Disciplined Agile 2.0. After specifying the main concepts of scaling agile frameworks, we characterize roles of enterprise, software, solution, and information architects, as identified in four scaling agile frameworks. Finally, we provide a discussion of generalizable findings on the role of architects in scaling agile frameworks.", "title": "" }, { "docid": "98978373c863f49ed7cccda9867b8a5e", "text": "Increasing vulnerability of plants to a variety of stresses such as drought, salt and extreme temperatures poses a global threat to sustained growth and productivity of major crops. Of these stresses, drought represents a considerable threat to plant growth and development. In view of this, developing staple food cultivars with improved drought tolerance emerges as the most sustainable solution toward improving crop productivity in a scenario of climate change. In parallel, unraveling the genetic architecture and the targeted identification of molecular networks using modern \"OMICS\" analyses, that can underpin drought tolerance mechanisms, is urgently required. Importantly, integrated studies intending to elucidate complex mechanisms can bridge the gap existing in our current knowledge about drought stress tolerance in plants. It is now well established that drought tolerance is regulated by several genes, including transcription factors (TFs) that enable plants to withstand unfavorable conditions, and these remain potential genomic candidates for their wide application in crop breeding. These TFs represent the key molecular switches orchestrating the regulation of plant developmental processes in response to a variety of stresses. 
The current review aims to offer a deeper understanding of TFs engaged in regulating plant's response under drought stress and to devise potential strategies to improve plant tolerance against drought.", "title": "" }, { "docid": "ec490d7599370ab357336af33763a559", "text": "A key challenge of entity set expansion is that multifaceted input seeds can lead to significant incoherence in the result set. In this paper, we present a novel solution to handling multifaceted seeds by combining existing user-generated ontologies with a novel word-similarity metric based on skip-grams. By blending the two resources we are able to produce sparse word ego-networks that are centered on the seed terms and are able to capture semantic equivalence among words. We demonstrate that the resulting networks possess internally-coherent clusters, which can be exploited to provide non-overlapping expansions, in order to reflect different semantic classes of the seeds. Empirical evaluation against state-of-the-art baselines shows that our solution, EgoSet, is able to not only capture multiple facets in the input query, but also generate expansions for each facet with higher precision.", "title": "" }, { "docid": "b6b58b7a1c5d9112ea24c74539c95950", "text": "We describe a view-management component for interactive 3D user interfaces. By view management, we mean maintaining visual constraints on the projections of objects on the view plane, such as locating related objects near each other, or preventing objects from occluding each other. Our view-management component accomplishes this by modifying selected object properties, including position, size, and transparency, which are tagged to indicate their constraints. For example, some objects may have geometric properties that are determined entirely by a physical simulation and which cannot be modified, while other objects may be annotations whose position and size are flexible.We introduce algorithms that use upright rectangular extents to represent on the view plane a dynamic and efficient approximation of the occupied space containing the projections of visible portions of 3D objects, as well as the unoccupied space in which objects can be placed to avoid occlusion. Layout decisions from previous frames are taken into account to reduce visual discontinuities. We present augmented reality and virtual reality examples to which we have applied our approach, including a dynamically labeled and annotated environment.", "title": "" }, { "docid": "6fa191434ae343d4d645587b5a240b1f", "text": "An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan’s classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. 
It can also be further postprocessed so that: (i) a normalized score of “outlierness” can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a “flat” (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraints violations/satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described. Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods.", "title": "" }, { "docid": "cb667b5d3dd2e680f15b7167d20734cd", "text": "In this letter, a low loss high isolation broadband single-port double-throw (SPDT) traveling-wave switch using 90 nm CMOS technology is presented. A body bias technique is utilized to enhance the circuit performance of the switch, especially for the operation frequency above 30 GHz. The parasitic capacitance between the drain and source of the NMOS transistor can be further reduced using the negative body bias technique. Moreover, the insertion loss, the input 1 dB compression point (P1 dB)> and the third-order intermodulation (IMD3) of the switch are all improved. With the technique, the switch demonstrates an insertion loss of 3 dB and an isolation of better than 48 dB from dc to 60 GHz. The chip size of the proposed switch is 0.68 × 0.87 mm2 with a core area of only 0.32 × 0.21 mm2.", "title": "" }, { "docid": "ab7184c576396a1da32c92093d606a53", "text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.", "title": "" }, { "docid": "544591326b250f5d68a64f793d55539b", "text": "Introduction: Exfoliative cheilitis, one of a spectrum of diseases that affect the vermilion border of the lips, is uncommon and has no known cause. 
It is a chronic superficial inflammatory disorder of the vermilion borders of the lips characterized by persistent scaling; it can be a difficult condition to manage. The diagnosis is now restricted to those few patients whose lesions cannot be attributed to other causes, such as contact sensitization or light. Case Report: We present a 17 year-old male presented to the out clinic in Baghdad with the chief complaint of a persistent scaly on his lower lips. The patient reported that the skin over the lip thickened gradually over a 3 days period and subsequently became loose, causing discomfort. Once he peeled away the loosened layer, a new layer began to form again. Conclusion: The lack of specific treatment makes exfoliative cheilitis a chronic disease that radically affects a person’s life. The aim of this paper is to describe a case of recurrent exfoliative cheilitis successfully treated with intralesional corticosteroids and to present possible hypotheses as to the cause.", "title": "" } ]
scidocsrr
9d4af98fd6cb119ee82a55df751cfdc0
Which cultural values matter to business process management?: Results from a global Delphi study
[ { "docid": "8bc221213edc863f8cba6f9f5d9a9be0", "text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.", "title": "" }, { "docid": "ed832b653c96f18ec4337cdde95b03c9", "text": "Purpose – Business process management (BPM) is a management approach that developed with a strong focus on the adoption of information technology (IT). However, there is a growing awareness that BPM requires a holistic organizational perspective especially since culture is often considered a key element in BPM practice. Therefore, the purpose of this paper is to provide an overview of existing research on culture in BPM. Design/methodology/approach – This literature review builds on major sources of the BPM community including the BPM Journal, the BPM Conference and central journal/conference databases. Forward and backward searches additionally deepen the analysis. Based on the results, a model of culture’s role in BPM is developed. Findings – The results of the literature review provide evidence that culture is still a widely under-researched topic in BPM. Furthermore, a framework on culture’s role in BPM is developed and areas for future research are revealed. Research limitations/implications – The analysis focuses on the concepts of BPM and culture. Thus, results do not include findings regarding related concepts such as business process reengineering or change management. Practical implications – The framework provides an orientation for managerial practice. It helps identify dimensions of possible conflicts based on cultural aspects. It thus aims at raising awareness regarding potentially neglected cultural factors. Originality/value – Although culture has been recognized in both theory and practice as an important aspect of BPM, researchers have not systematically engaged with the specifics of the culture phenomenon in BPM. This literature review provides a frame of reference that serves as a basis for future research regarding culture’s role in BPM.", "title": "" } ]
[ { "docid": "e96791f42b6c78e29a9e19610ff6baba", "text": "Although the fourth industrial revolution is already in progress and advances have been made in automating factories, completely automated facilities are still far in the future. Human work is still an important factor in many factories and warehouses, especially in the field of logistics. Manual processes are, therefore, often subject to optimization efforts. In order to aid these optimization efforts, methods like human activity recognition (HAR) became of increasing interest in industrial settings. In this work a novel deep neural network architecture for HAR is introduced. A convolutional neural network (CNN), which employs temporal convolutions, is applied to the sequential data of multiple inertial measurement units (IMUs). The network is designed to separately handle different sensor values and IMUs, joining the information step-by-step within the architecture. An evaluation is performed using data from the order picking process recorded in two different warehouses. The influence of different design choices in the network architecture, as well as pre- and post-processing, will be evaluated. Crucial steps for learning a good classification network for the task of HAR in a complex industrial setting will be shown. Ultimately, it can be shown that traditional approaches based on statistical features as well as recent CNN architectures are outperformed.", "title": "" }, { "docid": "c04f67fd5cc7f2f95452046bb18c6cfa", "text": "Bob is a free signal processing and machine learning toolbox originally developed by the Biometrics group at Idiap Research Institute, Switzerland. The toolbox is designed to meet the needs of researchers by reducing development time and efficiently processing data. Firstly, Bob provides a researcher-friendly Python environment for rapid development. Secondly, efficient processing of large amounts of multimedia data is provided by fast C++ implementations of identified bottlenecks. The Python environment is integrated seamlessly with the C++ library, which ensures the library is easy to use and extensible. Thirdly, Bob supports reproducible research through its integrated experimental protocols for several databases. Finally, a strong emphasis is placed on code clarity, documentation, and thorough unit testing. Bob is thus an attractive resource for researchers due to this unique combination of ease of use, efficiency, extensibility and transparency. Bob is an open-source library and an ongoing community effort.", "title": "" }, { "docid": "7f0a2bcd162ce702ea2813a9cbb0b813", "text": "BACKGROUND\nhCG is a term referring to 4 independent molecules, each produced by separate cells and each having completely separate functions. These are hCG produced by villous syncytiotrophoblast cells, hyperglycosylated hCG produced by cytotrophoblast cells, free beta-subunit made by multiple primary non-trophoblastic malignancies, and pituitary hCG made by the gonadotrope cells of the anterior pituitary.\n\n\nRESULTS AND DISCUSSION\nhCG has numerous functions. 
hCG promotes progesterone production by corpus luteal cells; promotes angiogenesis in uterine vasculature; promoted the fusion of cytotrophoblast cell and differentiation to make syncytiotrophoblast cells; causes the blockage of any immune or macrophage action by mother on foreign invading placental cells; causes uterine growth parallel to fetal growth; suppresses any myometrial contractions during the course of pregnancy; causes growth and differentiation of the umbilical cord; signals the endometrium about forthcoming implantation; acts on receptor in mother's brain causing hyperemesis gravidarum, and seemingly promotes growth of fetal organs during pregnancy. Hyperglycosylated hCG functions to promote growth of cytotrophoblast cells and invasion by these cells, as occurs in implantation of pregnancy, and growth and invasion by choriocarcinoma cells. hCG free beta-subunit is produced by numerous non-trophoblastic malignancies of different primaries. The detection of free beta-subunit in these malignancies is generally considered a sign of poor prognosis. The free beta-subunit blocks apoptosis in cancer cells and promotes the growth and malignancy of the cancer. Pituitary hCG is a sulfated variant of hCG produced at low levels during the menstrual cycle. Pituitary hCG seems to mimic luteinizing hormone actions during the menstrual cycle.", "title": "" }, { "docid": "cb6e2fd0082e16549e02db6e2d7fbef7", "text": "E-Health clouds are gaining increasing popularity by facilitating the storage and sharing of big data in healthcare. However, such an adoption also brings about a series of challenges, especially, how to ensure the security and privacy of highly sensitive health data. Among them, one of the major issues is authentication, which ensures that sensitive medical data in the cloud are not available to illegal users. Three-factor authentication combining password, smart card and biometrics perfectly matches this requirement by providing high security strength. Recently, Wu et al. proposed a three-factor authentication protocol based on elliptic curve cryptosystem which attempts to fulfill three-factor security and resist various existing attacks, providing many advantages over existing schemes. However, we first show that their scheme is susceptible to user impersonation attack in the registration phase. In addition, their scheme is also vulnerable to offline password guessing attack in the login and password change phase, under the condition that the mobile device is lost or stolen. Furthermore, it fails to provide user revocation when the mobile device is lost or stolen. To remedy these flaws, we put forward a robust three-factor authentication protocol, which not only guards various known attacks, but also provides more desired security properties. We demonstrate that our scheme provides mutual authentication using the Burrows–Abadi–Needham logic.", "title": "" }, { "docid": "b7a3a7af3495d0a722040201f5fadd55", "text": "During the last decade, biodegradable metallic stents have been developed and investigated as alternatives for the currently-used permanent cardiovascular stents. Degradable metallic materials could potentially replace corrosion-resistant metals currently used for stent application as it has been shown that the role of stenting is temporary and limited to a period of 6-12 months after implantation during which arterial remodeling and healing occur. 
Although corrosion is generally considered as a failure in metallurgy, the corrodibility of certain metals can be an advantage for their application as degradable implants. The candidate materials for such application should have mechanical properties ideally close to those of 316L stainless steel which is the gold standard material for stent application in order to provide mechanical support to diseased arteries. Non-toxicity of the metal itself and its degradation products is another requirement as the material is absorbed by blood and cells. Based on the mentioned requirements, iron-based and magnesium-based alloys have been the investigated candidates for biodegradable stents. This article reviews the recent developments in the design and evaluation of metallic materials for biodegradable stents. It also introduces the new metallurgical processes which could be applied for the production of metallic biodegradable stents and their effect on the properties of the produced metals.", "title": "" }, { "docid": "c4bd2667b2e105219e6a117838dd870d", "text": "Written contracts are a fundamental framework for commercial and cooperative transactions and relationships. Limited research has been published on the application of machine learning and natural language processing (NLP) to contracts. In this paper we report the classification of components of contract texts using machine learning and hand-coded methods. Authors studying a range of domains have found that combining machine learning and rule based approaches increases accuracy of machine learning. We find similar results which suggest the utility of considering leveraging hand coded classification rules for machine learning. We attained an average accuracy of 83.48% on a multiclass labelling task on 20 contracts combining machine learning and rule based approaches, increasing performance over machine learning alone.", "title": "" }, { "docid": "0d27b687287ea23c1eb2bcff307af818", "text": "To cite: Suchak T, Hussey J, Takhar M, et al. J Fam Plann Reprod Health Care Published Online First: [please include Day Month Year] doi:10.1136/jfprhc-2014101091 BACKGROUND UK figures estimate that in 1998 there were 3170 people over the age of 15 years assigned as male at birth who had presented with gender dysphoria. This figure is comparable to that found in the Netherlands where 2440 have presented; however, far fewer people actually undergo sex reassignment surgery. Recent statistics from the Netherlands indicate that about 1 in 12 000 natal males undergo sex-reassignment and about 1 in 34 000 natal females. Since April 2013, English gender identity services have been among the specialised services commissioned centrally by NHS England and this body is therefore responsible for commissioning transgender surgical services. The growth in the incidence of revealed gender dysphoria amongst both young and adult people has major implications for commissioners and providers of public services. The present annual requirement is 480 genital and gonadal male-to-female reassignment procedures. There are currently three units in the UK offering this surgery for National Health Service (NHS) patients. Prior to surgery trans women will have had extensive evaluation, including blood tests, advice on smoking, alcohol and obesity, and psychological/psychiatric evaluation. They usually begin to take female hormones after 3 months of transition, aiming to encourage development of breast buds and alter muscle and fat distribution. 
Some patients may elect at this stage to have breast surgery. Before genital surgery can be considered the patient must have demonstrated they have lived for 1 year full-time as a woman. Figure 1 shows a typical post-surgical result. A trans person who has lived exclusively in their identified gender for at least 2 years (as required by the Gender Recognition Act 2004) can apply for a gender recognition certificate (GRC). This is independent of whether gender reassignment surgery has taken place. Once a trans person has a GRC they can then obtain a new birth certificate. The trans person will also have new hospital records in a new name. It is good practice for health providers to take practical steps to ensure that gender reassignment is not casually visible in records or communicated without the informed consent of the user. Consent must always be sought (and documented) for all medical correspondence where the surgery or life before surgery when living as a different gender is mentioned (exceptions include an order of court and prevention or investigation of crime). 5 It is advisable to seek medico-legal advice before disclosing. Not all trans women opt to undergo vaginoplasty. Patients have free choice as to how much surgery they wish to undertake. Trans women often live a considerable distance from where their surgery was performed and as a result many elect to see their own general practitioner or local Sexual Health Clinic if they have postoperative problems. Fortunately reported complications following surgery are rare. Lawrence summarised 15 papers investigating 232 cases of vaginoplasty surgery; 13 reported rectal-vaginal fistula, 39 reported vaginal stenosis and 33 urethral stenosis; however, it is likely that there is significant under-reporting of complications. Here we present some examples of post-vaginoplasty problems presenting to a Sexual Health Service in the North East of England, and how they were managed.", "title": "" }, { "docid": "0dffca7979e72f7bb4b0fd94b031a46f", "text": "In collaborative filtering approaches, recommendations are inferred from user data. A large volume and a high data quality is essential for an accurate and precise recommender system. As consequence, companies are collecting large amounts of personal user data. Such data is often highly sensitive and ignoring users’ privacy concerns is no option. Companies address these concerns with several risk reduction strategies, but none of them is able to guarantee cryptographic secureness. To close that gap, the present paper proposes a novel recommender system using the advantages of blockchain-supported secure multiparty computation. A potential customer is able to allow a company to apply a recommendation algorithm without disclosing her personal data. Expected benefits are a reduction of fraud and misuse and a higher willingness to share personal data. An outlined experiment will compare users’ privacy-related behavior in the proposed recommender system with existent solutions.", "title": "" }, { "docid": "3f9eb2e91e0adc0a58f5229141f826ee", "text": "Box-office performance of a movie is mainly determined by the amount the movie collects in the opening weekend and Pre-Release hype is an important factor as far as estimating the openings of the movie are concerned. This can be estimated through user opinions expressed online on sites such as Twitter which is an online micro-blogging site with a user base running into millions. 
Each user is entitled to his own opinion which he expresses through his tweets. This paper suggests a novel way to mine and analyze the opinions expressed in these tweets with respect to a movie prior to its release, estimate the hype surrounding it and also predict the box-office openings of the movie.", "title": "" }, { "docid": "b1dbdddadf2cfa72a5fb8e8f5d08b701", "text": "To improve segmentation performance, a novel neural network architecture (termed DFCN-DCRF) is proposed, which combines an RGB-D fully convolutional neural network (DFCN) with a depth-sensitive fully-connected conditional random field (DCRF). First, a DFCN architecture which fuses depth information into the early layers and applies dilated convolution for later contextual reasoning is designed. Then, a depth-sensitive fully-connected conditional random field (DCRF) is proposed and combined with the previous DFCN to refine the preliminary result. Comparative experiments show that the proposed DFCN-DCRF achieves competitive performance compared with state-of-the-art methods.", "title": "" }, { "docid": "bd6ba64d14c8234e5ec2d07762a1165f", "text": "Since their introduction in the early years of this century, Variable Stiffness Actuators (VSA) witnessed a sustained growth of interest in the research community, as shown by the growing number of publications. While many consider VSA very interesting for applications, one of the factors hindering their further diffusion is the relatively new conceptual structure of this technology. In choosing a VSA for his/her application, the educated practitioner, used to choosing robot actuators based on standardized procedures and uniformly presented data, would be confronted with an inhomogeneous and rather disorganized mass of information coming mostly from scientific publications. In this paper, the authors consider how the design procedures and data presentation of a generic VS actuator could be organized so as to minimize the engineer’s effort in choosing the actuator type and size that would best fit the application needs. The reader is led through the list of the most important parameters that will determine the ultimate performance of his/her VSA robot, and influence both the mechanical design and the controller shape. This set of parameters extends the description of a traditional electric actuator with quantities describing the capability of the VSA to change its output stiffness. As an instrument for the end-user, the VSA datasheet is intended to be a compact, self-contained description of an actuator that summarizes all the salient characteristics that the user must be aware of when choosing a device for his/her application. At the end some examples of compiled VSA datasheets are reported, as well as a few examples of actuator selection procedures.", "title": "" }, { "docid": "837803a140450d594d5693a06ba3be4b", "text": "Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. 
We recommend an alternative system-the complete lives system-which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save the most lives, lottery, and instrumental value principles.", "title": "" }, { "docid": "f84003f63714442d4f4514eaefd5c985", "text": "Continuously tracking students during a whole semester plays a vital role to enable a teacher to grasp their learning situation, attitude and motivation. It also helps to give correct assessment and useful feedback to them. To this end, we ask students to write their comments just after each lesson, because student comments reflect their learning attitude towards the lesson, understanding of course contents, and difficulties of learning. In this paper, we propose a new method to predict final student grades. The method employs Word2Vec and Artificial Neural Network (ANN) to predict student grade in each lesson based on their comments freely written just after the lesson. In addition, we apply a window function to the predicted results obtained in consecutive lessons to keep track of each student's learning situation. The experiment results show that the prediction correct rate reached 80% by considering the predicted student grades from six consecutive lessons, and a final rate became 94% from all 15 lessons. The results illustrate that our proposed method continuously tracked student learning situation and improved prediction performance of final student grades as the lessons go by.", "title": "" }, { "docid": "ef239b2f40847b9670b3c4b08630535f", "text": "When a page of a book is scanned or photocopied, textual noise (extraneous symbols from the neighboring page) and/or non-textual noise (black borders, speckles, ...) appear along the border of the document. Existing document analysis methods can handle non-textual noise reasonably well, whereas textual noise still presents a major issue for document analysis systems. Textual noise may result in undesired text in optical character recognition (OCR) output that needs to be removed afterwards. Existing document cleanup methods try to explicitly detect and remove marginal noise. This paper presents a new perspective for document image cleanup by detecting the page frame of the document. The goal of page frame detection is to find the actual page contents area, ignoring marginal noise along the page border. We use a geometric matching algorithm to find the optimal page frame of structured documents (journal articles, books, magazines) by exploiting their text alignment property. We evaluate the algorithm on the UW-III database. The results show that the error rates are below 4% for each of the performance measures used. Further tests were run on a dataset of magazine pages and on a set of camera captured document images. To demonstrate the benefits of using page frame detection in practical applications, we choose OCR and layout-based document image retrieval as sample applications. Experiments using a commercial OCR system show that by removing characters outside the computed page frame, the OCR error rate is reduced from 4.3 to 1.7% on the UW-III dataset. The use of page frame detection in layout-based document image retrieval application decreases the retrieval error rates by 30%.", "title": "" }, { "docid": "e72a782ccb76ac8f681a3a0c40c21d61", "text": "Integer factorization is a well studied topic. Parts of the cryptography we use each day rely on the fact that this problem is difficult. 
One method one can use for factorizing a large composite number is the Quadratic Sieve algorithm. This method is among the best known today. We present a parallel implementation of the Quadratic Sieve using the Message Passing Interface (MPI). We also discuss the performance of this implementation which shows that this approach is a good one.", "title": "" }, { "docid": "5e946f2a15b5d9c663d85cd12bc3d9fc", "text": "Individual differences in young children's understanding of others' feelings and in their ability to explain human action in terms of beliefs, and the earlier correlates of these differences, were studied with 50 children observed at home with mother and sibling at 33 months, then tested at 40 months on affective-labeling, perspective-taking, and false-belief tasks. Individual differences in social understanding were marked; a third of the children offered explanations of actions in terms of false belief, though few predicted actions on the basis of beliefs. These differences were associated with participation in family discourse about feelings and causality 7 months earlier, verbal fluency of mother and child, and cooperative interaction with the sibling. Differences in understanding feelings were also associated with the discourse measures, the quality of mother-sibling interaction, SES, and gender, with girls more successful than boys. The results support the view that discourse about the social world may in part mediate the key conceptual advances reflected in the social cognition tasks; interaction between child and sibling and the relationships between other family members are also implicated in the growth of social understanding.", "title": "" }, { "docid": "ceaa0ceb14034ecc2840425a627a3c71", "text": "In this article, we present a novel class of robots that are able to move by growing and building their own structure. In particular, taking inspiration by the growing abilities of plant roots, we designed and developed a plant root-like robot that creates its body through an additive manufacturing process. Each robotic root includes a tubular body, a growing head, and a sensorized tip that commands the robot behaviors. The growing head is a customized three-dimensional (3D) printer-like system that builds the tubular body of the root in the format of circular layers by fusing and depositing a thermoplastic material (i.e., polylactic acid [PLA] filament) at the tip level, thus obtaining movement by growing. A differential deposition of the material can create an asymmetry that results in curvature of the built structure, providing the possibility of root bending to follow or escape from a stimulus or to reach a desired point in space. Taking advantage of these characteristics, the robotic roots are able to move inside a medium by growing their body. In this article, we describe the design of the growing robot together with the modeling of the deposition process and the description of the implemented growing movement strategy. Experiments were performed in air and in an artificial medium to verify the functionalities and to evaluate the robot performance. 
The results showed that the robotic root, with a diameter of 50 mm, grows with a speed of up to 4 mm/min, overcoming medium pressure of up to 37 kPa (i.e., it is able to lift up to 6 kg) and bending with a minimum radius of 100 mm.", "title": "" }, { "docid": "e8d0a238b6e39b8b8a57954b0fa0ce2e", "text": "As a preprocessing step, image segmentation, which can do partition of an image into different regions, plays an important role in computer vision, objects recognition, tracking and image analysis. Till today, there are a large number of methods present that can extract the required foreground from the background. However, most of these methods are solely based on boundary or regional information which has limited the segmentation result to a large extent. Since the graph cut based segmentation method was proposed, it has obtained a lot of attention because this method utilizes both boundary and regional information. Furthermore, graph cut based method is efficient and accepted world-wide since it can achieve globally optimal result for the energy function. It is not only promising to specific image with known information but also effective to the natural image without any pre-known information. For the segmentation of N-dimensional image, graph cut based methods are also applicable. Due to the advantages of graph cut, various methods have been proposed. In this paper, the main aim is to help researcher to easily understand the graph cut based segmentation approach. We also classify this method into three categories. They are speed up-based graph cut, interactive-based graph cut and shape prior-based graph cut. This paper will be helpful to those who want to apply graph cut method into their research.", "title": "" }, { "docid": "11dbf03a7aa6186ea1f64a582d55c03f", "text": "This paper presents a new unsupervised learning approach with stacked autoencoder (SAE) for Arabic handwritten digits categorization. Recently, Arabic handwritten digits recognition has been an important area due to its applications in several fields. This work is focusing on the recognition part of handwritten Arabic digits recognition that face several challenges, including the unlimited variation in human handwriting and the large public databases. Arabic digits contains ten numbers that were descended from the Indian digits system. Stacked autoencoder (SAE) tested and trained the MADBase database (Arabic handwritten digits images) that contain 10000 testing images and 60000 training images. We show that the use of SAE leads to significant improvements across different machine-learning classification algorithms. SAE is giving an average accuracy of 98.5%.", "title": "" }, { "docid": "25216b9a56bca7f8503aa6b2e5b9d3a9", "text": "The study at hand is the first of its kind that aimed to provide a comprehensive analysis of the determinants of foreign direct investment (FDI) in Mongolia by analyzing their short-run, long-run, and Granger causal relationships. In doing so, we methodically used a series of econometric methods to ensure reliable and robust estimation results that included the augmented Dickey-Fuller and Phillips-Perron unit root tests, the most recently advanced autoregressive distributed lag (ARDL) bounds testing approach to cointegration, fully modified ordinary least squares, and the Granger causality test within the vector error-correction model (VECM) framework. 
Our findings revealed domestic market size and human capital to have a U-shaped relationship with FDI inflows, with an initial positive impact on FDI in the short-run, which then turns negative in the long-run. Macroeconomic instability was found to deter FDI inflows in the long-run. In terms of the impact of trade on FDI, imports were found to have a complementary relationship with FDI; while exports and FDI were found to be substitutes in the short-run. Financial development was also found to induce a deterring effect on FDI inflows in both the shortand long-run; thereby also revealing a substitutive relationship between the two. Infrastructure level was not found to have a significant impact on FDI on any conventional level, in either the shortor long-run. Furthermore, the results have exhibited significant Granger causal relationships between the variables; thereby, ultimately stressing the significance of policy choice in not only attracting FDI inflows, but also in translating their positive spill-over benefits into long-run economic growth. © 2017 AESS Publications. All Rights Reserved.", "title": "" } ]
scidocsrr
0e65ef7ad5219ce3b456c24aeb125268
MultiLabel Classification on Tree- and DAG-Structured Hierarchies
[ { "docid": "bcaa7d61466f21757226ef0239f14b5b", "text": "Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named Mlknn is presented, which is derived from the traditional k-Nearest Neighbor (kNN) algorithm. In detail, for each unseen instance, its k nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that Ml-knn achieves superior performance to some well-established multi-label learning algorithms.", "title": "" } ]
[ { "docid": "f9a9ed5f618e11ed2d10083954ac5e9f", "text": "This study utilized a mixed methods approach to examine the feasibility and acceptability of group compassion focused therapy for adults with intellectual disabilities (CFT-ID). Six participants with mild ID participated in six sessions of group CFT, specifically adapted for adults with ID. Session-by-session feasibility and acceptability measures suggested that participants understood the group content and process and experienced group sessions and experiential practices as helpful and enjoyable. Thematic analysis of focus groups identified three themes relating to (1) direct experiences of the group, (2) initial difficulties in being self-compassionate and (3) positive emotional changes. Pre- and post-group outcome measures indicated significant reductions in both self-criticism and unfavourable social comparisons. Results suggest that CFT can be adapted for individuals with ID and provide preliminary evidence that people with ID and psychological difficulties may experience a number of benefits from this group intervention.", "title": "" }, { "docid": "d92f6df086e4f0fa2071675f4f466e66", "text": "One solution to the crime and illegal immigration problem facing South Africa is the use of biometrics techniques and technology. Biometrics a re methods for recognizing a user based on unique physiological and/or behavioural characteristics of the user. This paper presents the results of a n ongoing work in using neural networks for voice recognition. KeywordVoice recognition, Neural Networks", "title": "" }, { "docid": "170a1dba20901d88d7dc3988647e8a22", "text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. 
The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.", "title": "" }, { "docid": "ee8f8f19201afaca004385c49a6e5cb0", "text": "Automatic Arabic sign language recognition (ArSL) and fingerspelling are considered to be the preferred communication method among deaf people. In this paper, we propose a system for alphabetic Arabic sign language recognition using depth and intensity images which are acquired from a SOFTKINECT™ sensor. The proposed method does not require any extra gloves or any visual marks. Local features from depth and intensity images are learned using an unsupervised deep learning method called PCANet. The extracted features are then recognized using a linear support vector machine classifier. The performance of the proposed method is evaluated on a dataset of real images captured from multiple users. Experiments using a combination of depth and intensity images and also using depth and intensity images separately are performed. The obtained results show that the performance of the proposed system is improved by combining both depth and intensity information, which gives an average accuracy of 99.5%.", "title": "" }, { "docid": "5a3b8a2ec8df71956c10b2eb10eabb99", "text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.", "title": "" }, { "docid": "1d8e2c9bd9cfa2ce283e01cbbcd6ca83", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.", "title": "" }, { "docid": "408ab4c5138ee61f2602dea7907846d1", "text": "A new mirror mounting technique applicable to the primary mirror in a space telescope is presented.
This mounting technique replaces conventional bipod flexures with flexures having mechanical shims so that adjustments can be made to counter the effects of gravitational distortion of the mirror surface while being tested in the horizontal position. Astigmatic aberration due to the gravitational changes is effectively reduced by adjusting the shim thickness, and the relation between the astigmatism and the shim thickness is investigated. We tested the mirror interferometrically at the center of curvature using a null lens. Then we repeated the test after rotating the mirror about its optical axis by 180° in the horizontal setup, and searched for the minimum system error. With the proposed flexure mount, the gravitational stress at the adhesive coupling between the mirror and the mount is reduced by half that of a conventional bipod flexure for better mechanical safety under launch loads. Analytical results using finite element methods are compared with experimental results from the optical interferometer. Vibration tests verified the mechanical safety and optical stability, and qualified their use in space applications.", "title": "" }, { "docid": "0d1e889a69ea17e43c5f65bac38bba79", "text": "In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.", "title": "" }, { "docid": "fe66571111191b5bf35333ad2b4e2e0e", "text": "Money laundering refers to disguise or conceal the source and nature of variety ill-gotten gains, to make it legalization. In this paper, we design and implement the anti-money laundering regulatory application system (AMLRAS), which can not only automate sorting and counting the money laundering cases in comprehension and details, but also collect, analyses and count the large cash transactions. We also adopt data mining techniques DBSCAN clustering algorithm to identify suspicious financial transactions, while using link analysis (LA) to mark the suspicious level. The presumptive approach is tested on large cash transaction data which is provided by a bank where AMLRAS has already been applied. The result proves that this method is automatable to detect suspicious financial transaction cases from mass financial data, which is helpful to prevent money laundering from occurring.", "title": "" }, { "docid": "9b1d851a41e7c253a61fec9cb65ebbfc", "text": "One of Android's main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information. 
We discuss the desired properties of risk signals and relative risk scores for Android apps in order to generate another metric that users can utilize when choosing apps. We present a wide range of techniques to generate both risk signals and risk scores that are based on heuristics as well as principled machine learning techniques. Experimental results conducted using real-world data sets show that these methods can effectively identify malware as very risky, are simple to understand, and easy to use.", "title": "" }, { "docid": "40e0d6e93c426107cbefbdf3d4ca85b9", "text": "H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.", "title": "" }, { "docid": "559be3dd29ae8f6f9a9c99951c82a8d3", "text": "This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. A special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.", "title": "" }, { "docid": "238620ca0d9dbb9a4b11756630db5510", "text": "this planet and many oceanic and maritime applications seem relatively slow in exploiting the state-of-the-art info-communication technologies. The natural and man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like sensor networks as an economically viable alternative to currently adopted and costly methods used in seismic monitoring, structural health monitoring, installation and mooring, etc. Underwater sensor networks (UWSNs) are the enabling technology for wide range of applications like monitoring the strong influences and impact of climate regulation, nutrient production, oil retrieval and transportation The underwater environment differs from the terrestrial radio environment both in terms of its energy costs and channel propagation phenomena. The underwater channel is characterized by long propagation times and frequency-dependent attenuation that is highly affected by the distance between nodes as well as by the link orientation. 
Some of other issues in which UWSNs differ from terrestrial are limited bandwidth, constrained battery power, more failure of sensors because of fouling and corrosion, etc. This paper presents several fundamental key aspects and architectures of UWSNs, emerging research issues of underwater sensor networks and exposes the researchers into networking of underwater communication devices for exciting ocean monitoring and exploration applications. I. INTRODUCTION The Earth is a water planet. Around 70% of the surface of earth is covered by water. This is largely unexplored area and recently it has fascinated humans to explore it. Natural or man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like wireless sensor", "title": "" }, { "docid": "ca384725ef293d63e700d0a31fd8e7dd", "text": "Attaching next-generation non-volatile memories (NVMs) to the main memory bus provides low-latency, byte-addressable access to persistent data that should significantly improve performance for a wide range of storage-intensive workloads. We present an analysis of storage application performance with non-volatile main memory (NVMM) using a hardware NVMM emulator that allows fine-grain tuning of NVMM performance parameters. Our evaluation results show that NVMM improves storage application performance significantly over flash-based SSDs and HDDs. We also compare the performance of applications running on realistic NVMM with the performance of the same applications running on idealized NVMM with the same performance as DRAM. We find that although NVMM is projected to have higher latency and lower bandwidth than DRAM, these difference have only a modest impact on application performance. A much larger drag on NVMM performance is the cost of ensuring data resides safely in the NVMM (rather than the volatile caches) so that applications can make strong guarantees about persistence and consistency. In response, we propose an optimized approach to flushing data from CPU caches that minimizes this cost. Our evaluation shows that this technique significantly improves performance for applications that require strict durability and consistency guarantees over large regions of memory.", "title": "" }, { "docid": "e1485bddbab0c3fa952d045697ff2112", "text": "The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.", "title": "" }, { "docid": "1b6a967402639dd6b3ca7138692fab54", "text": "Web searchers often exhibit directed search behaviors such as navigating to a particular Website. 
However, in many circumstances they exhibit different behaviors that involve issuing many queries and visiting many results. In such cases, it is not clear whether the user's rationale is to intentionally explore the results or whether they are struggling to find the information they seek. Being able to disambiguate between these types of long search sessions is important for search engines both in performing retrospective analysis to understand search success, and in developing real-time support to assist searchers. The difficulty of this challenge is amplified since many of the characteristics of exploration (e.g., multiple queries, long duration) are also observed in sessions where people are struggling. In this paper, we analyze struggling and exploring behavior in Web search using log data from a commercial search engine. We first compare and contrast search behaviors along a number of dimensions, including query dynamics during the session. We then build classifiers that can accurately distinguish between exploring and struggling sessions using behavioral and topical features. Finally, we show that by considering the struggling/exploring prediction we can more accurately predict search satisfaction.", "title": "" }, { "docid": "8b51b2ee7385649bc48ba4febe0ec4c3", "text": "This paper presents an HMM-based methodology for action recognition using star skeleton as a representative descriptor of human posture. Star skeleton is a fast skeletonization technique obtained by connecting the centroid of the target object to its contour extremes. To use the star skeleton as a feature for action recognition, we clearly define the feature as a five-dimensional vector in star fashion because the head and four limbs are usually local extremes of human shape. In our proposed method, an action is composed of a series of star skeletons over time. Therefore, time-sequential images expressing human action are transformed into a feature vector sequence. Then the feature vector sequence must be transformed into a symbol sequence so that HMM can model the action. We design a posture codebook, which contains representative star skeletons of each action type and define a star distance to measure the similarity between feature vectors. Each feature vector of the sequence is matched against the codebook and is assigned to the symbol that is most similar. Consequently, the time-sequential images are converted to a symbol posture sequence. We use HMMs to model each action type to be recognized. In the training phase, the model parameters of the HMM of each category are optimized so as to best describe the training symbol sequences. For human action recognition, the model which best matches the observed symbol sequence is selected as the recognized category. We implement a system to automatically recognize ten different types of actions, and the system has been tested on real human action videos in two cases. One case is the classification of 100 video clips, each containing a single action type. A 98% recognition rate is obtained. The other case is a more realistic situation in which a human takes a series of actions combined. Action-series recognition is achieved by referring to a period of posture history using a sliding window scheme. The experimental results show promising performance.", "title": "" }, { "docid": "84c6f828c4a86b8a0ab14ca84d294e52", "text": "In the sectorless air traffic management (ATM) concept, air traffic controllers are no longer in charge of a certain sector.
Instead, the sectorless airspace is considered as a single unit and controllers are assigned certain aircraft, which might be located anywhere in the sectorless airspace. The air traffic controllers are responsible for these geographically independent aircraft all the way from their entry into the airspace to the exit. In order to support the controllers with this task, they are provided with one radar display for each assigned aircraft. This means, only one aircraft on each of these radar displays is under their control as the surrounding traffic is under control of other controllers. Each air traffic controller has to keep track of several traffic situations at the same time. In order to optimally support controllers with this task, a color-coding of the information is necessary. For example, the aircraft under control can be distinguished from the surrounding traffic by displaying them in a certain color. Furthermore, conflict detection and resolution information can be color-coded, such that it is straightforward which controller is in charge of solving a conflict. We conducted a human-in-the-loop simulation in order to compare different color schemes for a sectorless ATM controller working position. Three different color schemes were tested: a positive contrast polarity scheme that follows the current look of the P1/VAFORIT (P1/very advanced flight-data processing operational requirement implementation) display used by the German air navigation service provider DFS in the Karlsruhe upper airspace control center, a newly designed negative contrast polarity color scheme and a modified positive contrast polarity scheme. An analysis of the collected data showed no significant evidence for an impact of the color schemes on controller task performance. However, results suggest that a positive contrast polarity should be preferred and that the newly designed positive contrast polarity color scheme has advantages over the P1/VAFORIT color scheme when used for sectorless ATM.", "title": "" } ]
scidocsrr
b035e5ae9b90655f49cb96ffb32940d2
Real-Time Pedestrian Detection with Deep Network Cascades
[ { "docid": "8e117986ccaed290d5e567d1963ab3f7", "text": "Pedestrian detection from images is an important and yet challenging task. The conventional methods usually identify human figures using image features inside the local regions. In this paper we present that, besides the local features, context cues in the neighborhood provide important constraints that are not yet well utilized. We propose a framework to incorporate the context constraints for detection. First, we combine the local window with neighborhood windows to construct a multi-scale image context descriptor, designed to represent the contextual cues in spatial, scaling, and color spaces. Second, we develop an iterative classification algorithm called contextual boost. At each iteration, the classifier responses from the previous iteration across the neighborhood and multiple image scales, called classification context, are incorporated as additional features to learn a new classifier. The number of iterations is determined in the training process when the error rate converges. Since the classification context incorporates contextual cues from the neighborhood, through iterations it implicitly propagates to greater areas and thus provides more global constraints. We evaluate our method on the Caltech benchmark dataset [11]. The results confirm the advantages of the proposed framework. Compared with state of the arts, our method reduces the miss rate from 29% by [30] to 25% at 1 false positive per image (FPPI).", "title": "" }, { "docid": "330329a7ce02b89373b935c99e4f1471", "text": "Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.", "title": "" }, { "docid": "ca20d27b1e6bfd1f827f967473d8bbdd", "text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. 
It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.", "title": "" } ]
[ { "docid": "964deb65d393564f62b9df68fa1b00d9", "text": "Inferring abnormal glucose events such as hyperglycemia and hypoglycemia is crucial for the health of both diabetic patients and non-diabetic people. However, regular blood glucose monitoring can be invasive and inconvenient in everyday life. We present SugarMate, a first smartphone-based blood glucose inference system as a temporary alternative to continuous blood glucose monitors (CGM) when they are uncomfortable or inconvenient to wear. In addition to the records of food, drug and insulin intake, it leverages smartphone sensors to measure physical activities and sleep quality automatically. Provided with the imbalanced and often limited measurements, a challenge of SugarMate is the inference of blood glucose levels at a fine-grained time resolution. We propose Md3RNN, an efficient learning paradigm to make full use of the available blood glucose information. Specifically, the newly designed grouped input layers, together with the adoption of a deep RNN model, offer an opportunity to build blood glucose models for the general public based on limited personal measurements from single-user and grouped-users perspectives. Evaluations on 112 users demonstrate that Md3RNN yields an average accuracy of 82.14%, significantly outperforming previous learning methods those are either shallow, generically structured, or oblivious to grouped behaviors. Also, a user study with the 112 participants shows that SugarMate is acceptable for practical usage.", "title": "" }, { "docid": "503277b20b3fd087df5c91c1a7c7a173", "text": "Among vertebrates, only microchiropteran bats, cetaceans and some rodents are known to produce and detect ultrasounds (frequencies greater than 20 kHz) for the purpose of communication and/or echolocation, suggesting that this capacity might be restricted to mammals. Amphibians, reptiles and most birds generally have limited hearing capacity, with the ability to detect and produce sounds below ∼12 kHz. Here we report evidence of ultrasonic communication in an amphibian, the concave-eared torrent frog (Amolops tormotus) from Huangshan Hot Springs, China. Males of A. tormotus produce diverse bird-like melodic calls with pronounced frequency modulations that often contain spectral energy in the ultrasonic range. To determine whether A. tormotus communicates using ultrasound to avoid masking by the wideband background noise of local fast-flowing streams, or whether the ultrasound is simply a by-product of the sound-production mechanism, we conducted acoustic playback experiments in the frogs' natural habitat. We found that the audible as well as the ultrasonic components of an A. tormotus call can evoke male vocal responses. Electrophysiological recordings from the auditory midbrain confirmed the ultrasonic hearing capacity of these frogs and that of a sympatric species facing similar environmental constraints. This extraordinary upward extension into the ultrasonic range of both the harmonic content of the advertisement calls and the frog's hearing sensitivity is likely to have co-evolved in response to the intense, predominantly low-frequency ambient noise from local streams. 
Because amphibians are a distinct evolutionary lineage from microchiropterans and cetaceans (which have evolved ultrasonic hearing to minimize congestion in the frequency bands used for sound communication and to increase hunting efficacy in darkness), ultrasonic perception in these animals represents a new example of independent evolution.", "title": "" }, { "docid": "563c0f48ce83eddc15cd2f3d88c7efda", "text": "This paper presents investigations into the role of computer-vision technology in developing safer automobiles. We consider vision systems, which cannot only look out of the vehicle to detect and track roads and avoid hitting obstacles or pedestrians but simultaneously look inside the vehicle to monitor the attentiveness of the driver and even predict her intentions. In this paper, a systems-oriented framework for developing computer-vision technology for safer automobiles is presented. We will consider three main components of the system: environment, vehicle, and driver. We will discuss various issues and ideas for developing models for these main components as well as activities associated with the complex task of safe driving. This paper includes a discussion of novel sensory systems and algorithms for capturing not only the dynamic surround information of the vehicle but also the state, intent, and activity patterns of drivers", "title": "" }, { "docid": "0b62a4c31c85b88d25c7d53730068f62", "text": "Optimization algorithms are normally influenced by meta-heuristic approach. In recent years several hybrid methods for optimization are developed to find out a better solution. The proposed work using meta-heuristic Nature Inspired algorithm is applied with back-propagation method to train a feed-forward neural network. Firefly algorithm is a nature inspired meta-heuristic algorithm, and it is incorporated into back-propagation algorithm to achieve fast and improved convergence rate in training feed-forward neural network. The proposed technique is tested over some standard data set. It is found that proposed method produces an improved convergence within very few iteration. This performance is also analyzed and compared to genetic algorithm based back-propagation. It is observed that proposed method consumes less time to converge and providing improved convergence rate with minimum feed-forward neural network design", "title": "" }, { "docid": "699c6dbdd58642ec700246a52bc0ce66", "text": "The findings, interpretations and conclusions expressed in this report are those of the authors and do not necessarily imply the expression of any opinion whatsoever on the part of the Management or the Executive Directors of the African Development Bank, nor the Governments they represent, nor of the other institutions mentioned in this study. In the preparation of this report, every effort has been made to provide the most up to date, correct and clearly expressed information as possible; however, the authors do not guarantee accuracy of the data. Rights and Permissions All rights reserved. Reproduction, citation and dissemination of material contained in this information product for educational and non-commercial purposes are authorized without any prior written permission from the publisher, if the source is fully acknowledged. Reproduction of material in this information product for resale or other commercial purposes is prohibited. Since 2000, Africa has been experiencing a remarkable economic growth accompanied by improving democratic environment. 
Real GDP growth has risen by more than twice its pace in the last decade. Telecommunications, financial services and banking, construction and private-investment inflows have also increased substantially. However, most of the benefits of the high growth rates achieved over the last few years have not reached the rural poor. For this to happen, substantial growth in the agriculture sector will need to be stimulated and sustained, as the sector is key to inclusive growth, given its proven record of contributing to more robust reduction of poverty. This is particularly important when juxtaposed with the fact that the majority of Africa's poor are engaged in agriculture, a sector which supports the livelihoods of 90 percent of Africa's population. The sector also provides employment for about 60 percent of the economically active population, and 70 percent of the continent's poorest communities. In spite of agriculture being an acknowledged leading growth driver for Africa, the potential of the sector's contribution to growth and development has been underexploited mainly due to a variety of challenges, including the widening technology divide, weak infrastructure and declining technical capacity. These challenges have been exacerbated by weak input and output marketing systems and services, slow progress in regional integration, land access and rights issues, limited access to affordable credit, challenging governance issues in some countries, conflicts, effects of climate change, and the scourge of HIV/AIDS and other diseases. Green growth is critical to Africa because of the fragility of the …", "title": "" }, { "docid": "6356b20b8758abf67dd76549a416f963", "text": "Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, correlation are faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) The data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments, and show that many studies, dating back to the 1930s consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for \"getting the wrong answer\".", "title": "" }, { "docid": "70a2e62e74ef41e487d49aa4dcbbc1e9", "text": "The earliest fossil evidence of terrestrial animal activity is from the Ordovician, ~450 million years ago (Ma). However, there are earlier animal fossils, and most molecular clocks suggest a deep origin of animal phyla in the Precambrian, leaving open the possibility that animals colonized land much earlier than the Ordovician. To further investigate the time of colonization of land by animals, we sequenced two nuclear genes, glyceraldehyde-3-phosphate dehydrogenase and enolase, in representative arthropods and conducted phylogenetic and molecular clock analyses of those and other available DNA and protein sequence data. To assess the robustness of animal molecular clocks, we estimated the deuterostome-arthropod divergence using the arthropod fossil record for calibration and tunicate instead of vertebrate sequences to represent Deuterostomia. Nine nuclear and 15 mitochondrial genes were used in phylogenetic analyses and 61 genes were used in molecular clock analyses. 
Significant support was found for the unconventional pairing of myriapods (millipedes and centipedes) with chelicerates (spiders, scorpions, horseshoe crabs, etc.) using nuclear and mitochondrial genes. Our estimated time for the divergence of millipedes (Diplopoda) and centipedes (Chilopoda) was 442 ± 50 Ma, and the divergence of insects and crustaceans was estimated as 666 ± 58 Ma. Our results also agree with previous studies suggesting a deep divergence (~1100 – 900 Ma) for arthropods and deuterostomes, considerably predating the Cambrian Explosion seen in the animal fossil record. The consistent support for a close relationship between myriapods and chelicerates, using mitochondrial and nuclear genes and different methods of analysis, suggests that this unexpected result is not an artefact of analysis. We propose the name Myriochelata for this group of animals, which includes many that immobilize prey with venom. Our molecular clock analyses using arthropod fossil calibrations support earlier studies using vertebrate calibrations in finding that deuterostomes and arthropods diverged hundreds of millions of years before the Cambrian explosion. However, our molecular time estimate for the divergence of millipedes and centipedes is close to the divergence time inferred from fossils. This suggests that arthropods may have adapted to the terrestrial environment relatively late in their evolutionary history.", "title": "" }, { "docid": "dfa890a87b2e5ac80f61c793c8bca791", "text": "Reinforcement learning (RL) algorithms have traditionally been thought of as trial and error learning methods that use actual control experience to incrementally improve a control policy. Sutton's DYNA architecture demonstrated that RL algorithms can work as well using simulated experience from an environment model, and that the resulting computation was similar to doing one-step lookahead planning. Inspired by the literature on hierarchical planning, I propose learning a hierarchy of models of the environment that abstract temporal detail as a means of improving the scalability of RL algorithms. I present H-DYNA (Hierarchical DYNA), an extension to Sutton's DYNA architecture that is able to learn such a hierarchy of abstract models. H-DYNA differs from hierarchical planners in two ways: first, the abstract models are learned using experience gained while learning to solve other tasks in the same environment, and second, the abstract models can be used to solve stochastic control tasks. Simulations on a set of compositionally-structured navigation tasks show that H-DYNA can learn to solve them faster than conventional RL algorithms. The abstract models also serve as mechanisms for achieving transfer of learning across multiple tasks.", "title": "" }, { "docid": "6e36103ba9f21103252141ad4a53b4ac", "text": "In this paper, we describe the binary classification of sentences into idiomatic and non-idiomatic. Our idiom detection algorithm is based on linear discriminant analysis (LDA). To obtain a discriminant subspace, we train our model on a small number of randomly selected idiomatic and non-idiomatic sentences. We then project both the training and the test data on the chosen subspace and use the three nearest neighbor (3NN) classifier to obtain accuracy.
The proposed approach is more general than the previous algorithms for idiom detection — neither does it rely on target idiom types, lexicons, or large manually annotated corpora, nor does it limit the search space by a particular linguistic con-", "title": "" }, { "docid": "717e5a5b6026d42e7379d8e2c0c7ff45", "text": "In this paper, a color image segmentation approach based on homogram thresholding and region merging is presented. The homogram considers both the occurrence of the gray levels and the neighboring homogeneity value among pixels. Therefore, it employs both the local and global information. Fuzzy entropy is utilized as a tool to perform homogram analysis for finding all major homogeneous regions at the first stage. Then a region merging process is carried out based on color similarity among these regions to avoid oversegmentation. The proposed homogram-based approach (HOB) is compared with the histogram-based approach (HIB). The experimental results demonstrate that the HOB can find homogeneous regions more effectively than HIB does, and can solve the problem of discriminating shading in color images to some extent.", "title": "" }, { "docid": "91771b6c50d7193e5612d9552913dec8", "text": "The expected diffusion of E-Vehicles (EVs) to limit the impact of fossil fuel on mobility is going to cause severe issues for the management of the electric grid. A large number of charging stations is going to be installed on the power grid to support EVs. Each of the charging stations could require more than 100 kW from the grid. The grid consumption is unpredictable and it depends on the need of EVs in the neighborhood. The impact of the EV on the power grid can be limited by the proper exploitation of Vehicle to Grid communication (V2G). The advent of Low Power Wide Area Network (LPWAN) promoted by Internet Of Things applications offers new opportunities for wireless communications. In this work, an example of such a technology (the LoRaWAN solution) is tested in a real-world scenario as a candidate for EV to grid communications. The experimental results highlight that LoRaWAN technology can be used to cover an area with a radius under 2 km, in an urban environment. At this distance, the Received Signal Strength Indicator (RSSI) is about −117 dBm. Such a result demonstrates the feasibility of the proposed approach.", "title": "" }, { "docid": "caa7ecc11fc36950d3e17be440d04010", "text": "In this paper, a comparative study of routing protocols is performed in a hybrid network to recommend the best routing protocol to perform load balancing for Internet traffic. Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP) and Intermediate System to Intermediate System (IS-IS) routing protocols are compared in OPNET modeller 14 to investigate their capability of ensuring fair distribution of traffic in a hybrid network. The network simulated is scaled to a campus. The network loads are varied in size and a performance study is made by running simulations with all the protocols. The only considered performance factors for observation are packet drop, network delay, throughput and network load. IGRP presented better performance as compared to other protocols.
The benefit of using IGRP is reduced packet drop, reduced network delay, increased throughput while offering relative better distribution of traffic in a hybrid network.", "title": "" }, { "docid": "c80795f19f899276d0fa03d9e6ca4651", "text": "In this paper, we present a computer forensic method for detecting timestamp forgeries in the Windows NTFS file system. It is difficult to know precisely that the timestamps have been changed by only examining the timestamps of the file itself. If we can find the past timestamps before any changes to the file are made, this can act as evidence of file time forgery. The log records operate on files and leave large amounts of information in the $LogFile that can be used to reconstruct operations on the files and also used as forensic evidence. Log record with 0x07/0x07 opcode in the data part of Redo/Undo attribute has timestamps which contain past-and-present timestamps. The past-and-present timestamps can be decisive evidence to indicate timestamp forgery, as they contain when and how the timestamps were changed. We used file time change tools that can easily be found on Internet sites. The patterns of the timestamp change created by the tools are different compared to those of normal file operations. Seven file operations have ten timestamp change patterns in total by features of timestamp changes in the $STANDARD_INFORMATION attribute and the $FILE_NAME attribute. We made rule sets for detecting timestamp forgery based on using difference comparison between changes in timestamp patterns by the file time change tool and normal file operations. We apply the forensic rule sets for “.txt”, “.docx” and “.pdf” file types, and we show the effectiveness and validity of the proposed method. The importance of this research lies in the fact that we can find the past time in $LogFile, which gives decisive evidence of timestamp forgery. This makes the timestamp active evidence as opposed to simply being passive evidence. a 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fd224b566e19290e98f4d8b81c47dfa7", "text": "HTTP adaptive streaming is an attractive solution to the explosion of multimedia content consumption over the Internet, which has recently been introduced to information-centric networking in the form of DASH over CCN. In this paper, we enhance the performance of such design by taking advantage of congestion feedback available in ICN networks. By means of utility fairness optimization framework, we improve the adaptation logic in terms of fairness and stability of the multimedia bitrate delivered to content consumers. Interestingly, we find that such fairness and stability have a very positive impact on caching, making streaming adaptation highly friendly to the ubiquitous in-network caches of the ICN architectures.", "title": "" }, { "docid": "92137a6f5fa3c5059bdb08db2fb5c39d", "text": "Motivated by our ongoing efforts in the development of Refraction 2, a puzzle game targeting mathematics education, we realized that the quality of a puzzle is critically sensitive to the presence of alternative solutions with undesirable properties. Where, in our game, we seek a way to automatically synthesize puzzles that can only be solved if the player demonstrates specific concepts, concern for the possibility of undesirable play touches other interactive design domains. 
To frame this problem (and our solution to it) in a general context, we formalize the problem of generating solvable puzzles that admit no undesirable solutions as an NPcomplete search problem. By making two design-oriented extensions to answer set programming (a technology that has been recently applied to constrained game content generation problems) we offer a general way to declaratively pose and automatically solve the high-complexity problems coming from this formulation. Applying this technique to Refraction, we demonstrate a qualitative leap in the kind of puzzles we can reliably generate. This work opens up new possibilities for quality-focused content generators that guarantee properties over their entire combinatorial space of play.", "title": "" }, { "docid": "1377d6c1a1a23bc6be8d21e9e43ccaa3", "text": "Hepatocellular carcinoma (HCC) is one of the most common and aggressive human malignancies. Its high mortality rate is mainly a result of intra-hepatic metastases. We analyzed the expression profiles of HCC samples without or with intra-hepatic metastases. Using a supervised machine-learning algorithm, we generated for the first time a molecular signature that can classify metastatic HCC patients and identified genes that were relevant to metastasis and patient survival. We found that the gene expression signature of primary HCCs with accompanying metastasis was very similar to that of their corresponding metastases, implying that genes favoring metastasis progression were initiated in the primary tumors. Osteopontin, which was identified as a lead gene in the signature, was over-expressed in metastatic HCC; an osteopontin-specific antibody effectively blocked HCC cell invasion in vitro and inhibited pulmonary metastasis of HCC cells in nude mice. Thus, osteopontin acts as both a diagnostic marker and a potential therapeutic target for metastatic HCC.", "title": "" }, { "docid": "0036a3511abfa76b366bbe6fd877e894", "text": "We apply machine-learning techniques to construct nonlinear nonparametric forecasting models of consumer credit risk. By combining customer transactions and credit bureau data from January 2005 to April 2009 for a sample of a major commercial bank’s customers, we are able to construct out-of-sample forecasts that significantly improve the classification rates of credit-card-holder delinquencies and defaults, with linear regression R’s of forecasted/realized delinquencies of 85%. Using conservative assumptions for the costs and benefits of cutting credit lines based on machine-learning forecasts, we estimate the cost savings to range from 6% to 25% of total losses. Moreover, the time-series patterns of estimated delinquency rates from this model over the course of the recent financial crisis suggest that aggregated consumer credit-risk analytics may have important applications in forecasting systemic risk. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "71022e2197bfb99bd081928cf162f58a", "text": "Ophthalmology and visual health research have received relatively limited attention from the personalized medicine community, but this trend is rapidly changing. Postgenomics technologies such as proteomics are being utilized to establish a baseline biological variation map of the human eye and related tissues. In this context, the choroid is the vascular layer situated between the outer sclera and the inner retina. The choroidal circulation serves the photoreceptors and retinal pigment epithelium (RPE). 
The RPE is a layer of cuboidal epithelial cells adjacent to the neurosensory retina and maintains the outer limit of the blood-retina barrier. Abnormal changes in choroid-RPE layers have been associated with age-related macular degeneration. We report here the proteome of the healthy human choroid-RPE complex, using reverse phase liquid chromatography and mass spectrometry-based proteomics. A total of 5309 nonredundant proteins were identified. Functional analysis of the identified proteins further pointed to molecular targets related to protein metabolism, regulation of nucleic acid metabolism, transport, cell growth, and/or maintenance and immune response. The top canonical pathways in which the choroid proteins participated were integrin signaling, mitochondrial dysfunction, regulation of eIF4 and p70S6K signaling, and clathrin-mediated endocytosis signaling. This study illustrates the largest number of proteins identified in human choroid-RPE complex to date and might serve as a valuable resource for future investigations and biomarker discovery in support of postgenomics ophthalmology and precision medicine.", "title": "" }, { "docid": "f86dfe07f73e2dba05796e6847765e7a", "text": "OBJECTIVE\nThe aim of this study was to extend previous examinations of aviation accidents to include specific aircrew, environmental, supervisory, and organizational factors associated with two types of commercial aviation (air carrier and commuter/ on-demand) accidents using the Human Factors Analysis and Classification System (HFACS).\n\n\nBACKGROUND\nHFACS is a theoretically based tool for investigating and analyzing human error associated with accidents and incidents. Previous research has shown that HFACS can be reliably used to identify human factors trends associated with military and general aviation accidents.\n\n\nMETHOD\nUsing data obtained from both the National Transportation Safety Board and the Federal Aviation Administration, 6 pilot-raters classified aircrew, supervisory, organizational, and environmental causal factors associated with 1020 commercial aviation accidents that occurred over a 13-year period.\n\n\nRESULTS\nThe majority of accident causal factors were attributed to aircrew and the environment, with decidedly fewer associated with supervisory and organizational causes. Comparisons were made between HFACS causal categories and traditional situational variables such as visual conditions, injury severity, and regional differences.\n\n\nCONCLUSION\nThese data will provide support for the continuation, modification, and/or development of interventions aimed at commercial aviation safety.\n\n\nAPPLICATION\nHFACS provides a tool for assessing human factors associated with accidents and incidents.", "title": "" } ]
scidocsrr
779c3634f393d5491ceae500bad29ff1
Text recognition using deep BLSTM networks
[ { "docid": "744d409ba86a8a60fafb5c5602f6d0f0", "text": "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long ShortTerm Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72 %, 65 %, and 55 % for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively.", "title": "" } ]
[ { "docid": "a75a8a6a149adf80f6ec65dea2b0ec0d", "text": "This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state of the art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics, according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Comparing to the state of the art features (ngrams - baseline), adding other features, including novel features, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90 percent, respectively for the three classification experiments. To study the relation between features and emotions (quadrants) we performed experiments to identify the best features that allow to describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, having achieved 73.6 percent F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relation among features. Regarding regression, results show that, comparing to similar studies for audio, we achieve a similar performance for arousal and a much better performance for valence.", "title": "" }, { "docid": "6ee0c9832d82d6ada59025d1c7bb540e", "text": "Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, CohMetrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.", "title": "" }, { "docid": "0801ef431c6e4dab6158029262a3bf82", "text": "A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. 
In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.", "title": "" }, { "docid": "f36826993d5a9f99fc3554b5f542780e", "text": "In this research, an adaptive timely traffic light is proposed as solution for congestion in typical area in Indonesia. Makassar City, particularly in the most complex junction (fly over, Pettarani, Reformasi highway and Urip S.) is observed for months using static cameras. The condition is mapped into fuzzy logic to have a better time transition of traffic light as opposed to the current conventional traffic light system. In preliminary result, fuzzy logic shows significant number of potential reduced in congestion. Each traffic line has 20-30% less congestion with future implementation of the proposed system.", "title": "" }, { "docid": "a3d1f4a35a8de5278d7295b4ae21451c", "text": "How can one build a distributed framework that allows efficient deployment of a wide spectrum of modern advanced machine learning (ML) programs for industrial-scale problems using Big Models (100s of billions of parameters) on Big Data (terabytes or petabytes)- Contemporary parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized operators relying on graphical representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of different ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by leveraging several fundamental properties underlying ML programs that make them different from conventional operation-centric programs: error tolerance, dynamic structure, and nonuniform convergence; all stem from the optimization-centric nature shared in ML programs' mathematical definitions, and the iterative-convergent behavior of their algorithmic solutions. These properties present unique opportunities for an integrative system design, built on bounded-latency network synchronization and dynamic load-balancing scheduling, which is efficient, programmable, and enjoys provable correctness guarantees. We demonstrate how such a design in light of ML-first principles leads to significant performance improvements versus well-known implementations of several ML programs, allowing them to run in much less time and at considerably larger model sizes, on modestly-sized computer clusters.", "title": "" }, { "docid": "ef5769145c4c1ebe06af0c8b5f67e70e", "text": "Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. 
The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.", "title": "" }, { "docid": "dfb83ad16854797137e34a5c7cb110ae", "text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.", "title": "" }, { "docid": "2c0cc129d7b12c1b61a149e46af23a4b", "text": "This paper presents our experiences of introducing in a senior level microprocessor course the latest touch sensing technologies, especially programming capacitive touch sensing devices and touchscreen. The emphasis is on the teaching practice details, including the enhanced course contents, outcomes and lecture and lab organization. By utilizing the software package provided by Atmel, students are taught to efficiently build MCU-based embedded applications which control various touch sensing devices. This work makes use of the 32-bit ARM Cortex-M4 microprocessor to control complex touch sensing devices (i.e., touch keys, touch slider and touchscreen). The Atmel SAM 4S-EK2 board is chosen as the main development board employed for practicing the touch devices programming. Multiple capstone projects have been developed, for example adaptive touch-based servo motor control, and calculator and games on the touchscreen. Our primary experiences indicate that the project-based learning approach with the utilization of the selected microcontroller board and software package is efficient and practical for teaching advanced touch sensing techniques. Students have shown the great interest and the capability in adopting touch devices into their senior design projects to improve human machine interface.", "title": "" }, { "docid": "5ccf0b3f871f8362fccd4dbd35a05555", "text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. 
Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.", "title": "" }, { "docid": "c0762517ebbae00ab5ee1291460c164c", "text": "This paper compares various topologies for 6.6kW on-board charger (OBC) to find out suitable topology. In general, OBC consists of 2-stage; power factor correction (PFC) stage and DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi bridgeless PFC are considered as PFC circuit, and full-bridge converter, phase shift full-bridge converter, and series resonant converter are taken into account for DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is implemented in order to calculate the efficiency of each topology for PFC circuit and DC-DC converter circuit. In addition, the volume of magnetic components and number of semi-conductor elements are considered. Based on these results, topology selection guideline according to the system specification of 6.6kW OBC is proposed.", "title": "" }, { "docid": "9e933363229c21caccc3842417dd6d60", "text": "A novel double-layered vertically stacked substrate integrated waveguide leaky-wave antenna (SIW LWA) is presented. An array of vias on the narrow wall produces leakage through excitation of TE10 fast-wave mode of the waveguide. Attenuation and phase constants of the leaky mode are controlled independently to obtain desired pattern in the elevation. In the azimuth, top and bottom layers radiate independently, producing symmetrically located beams on both sides of broadside. A new near-field analysis of single LWA is performed to determine wavenumbers and as a way to anticipate radiation characteristics of the dual layer antenna. In addition to frequency beam steering in the elevation plane, this novel topology also offers flexibility for multispot illumination of the azimuth plane with flat-topped beams at every ${\\varphi }$ -cut through excitation of each layer separately or both antennas simultaneously. It is shown that the proposed antenna solution is a qualified candidate for 5G base station antenna (BSA) applications due to its capability of interference mitigation and latency reduction. Moreover, from the point of view of highly reliable connectivity, users can enjoy seamless mobility through the provided spatial diversity. A 15-GHz prototype has been fabricated and tested. Measured results are in good agreement with those of simulations.", "title": "" }, { "docid": "b610e9bef08ef2c133a02e887b89b196", "text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. 
Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.", "title": "" }, { "docid": "fd184f271a487aba70025218fd8c76e4", "text": "BACKGROUND\nIron deficiency anaemia is common in patients with chronic kidney disease, and intravenous iron is the preferred treatment for those on haemodialysis. The aim of this trial was to compare the efficacy and safety of iron isomaltoside 1000 (Monofer®) with iron sucrose (Venofer®) in haemodialysis patients.\n\n\nMETHODS\nThis was an open-label, randomized, multicentre, non-inferiority trial conducted in 351 haemodialysis subjects randomized 2:1 to either iron isomaltoside 1000 (Group A) or iron sucrose (Group B). Subjects in Group A were equally divided into A1 (500 mg single bolus injection) and A2 (500 mg split dose). Group B were also treated with 500 mg split dose. The primary end point was the proportion of subjects with haemoglobin (Hb) in the target range 9.5-12.5 g/dL at 6 weeks. Secondary outcome measures included haematology parameters and safety parameters.\n\n\nRESULTS\nA total of 351 subjects were enrolled. Both treatments showed similar efficacy with >82% of subjects with Hb in the target range (non-inferiority, P = 0.01). Similar results were found when comparing subgroups A1 and A2 with Group B. No statistical significant change in Hb concentration was found between any of the groups. There was a significant increase in ferritin from baseline to Weeks 1, 2 and 4 in Group A compared with Group B (Weeks 1 and 2: P < 0.001; Week 4: P = 0.002). There was a significant higher increase in reticulocyte count in Group A compared with Group B at Week 1 (P < 0.001). The frequency, type and severity of adverse events were similar.\n\n\nCONCLUSIONS\nIron isomaltoside 1000 and iron sucrose have comparative efficacy in maintaining Hb concentrations in haemodialysis subjects and both preparations were well tolerated with a similar short-term safety profile.", "title": "" }, { "docid": "e433da4c3128a48c4c2fad39ddb55ac1", "text": "Vector field design on surfaces is necessary for many graphics applications: example-based texture synthesis, nonphotorealistic rendering, and fluid simulation. For these applications, singularities contained in the input vector field often cause visual artifacts. In this article, we present a vector field design system that allows the user to create a wide variety of vector fields with control over vector field topology, such as the number and location of singularities. Our system combines basis vector fields to make an initial vector field that meets user specifications.The initial vector field often contains unwanted singularities. Such singularities cannot always be eliminated due to the Poincaré-Hopf index theorem. To reduce the visual artifacts caused by these singularities, our system allows the user to move a singularity to a more favorable location or to cancel a pair of singularities. These operations offer topological guarantees for the vector field in that they only affect user-specified singularities. We develop efficient implementations of these operations based on Conley index theory. 
Our system also provides other editing operations so that the user may change the topological and geometric characteristics of the vector field.To create continuous vector fields on curved surfaces represented as meshes, we make use of the ideas of geodesic polar maps and parallel transport to interpolate vector values defined at the vertices of the mesh. We also use geodesic polar maps and parallel transport to create basis vector fields on surfaces that meet the user specifications. These techniques enable our vector field design system to work for both planar domains and curved surfaces.We demonstrate our vector field design system for several applications: example-based texture synthesis, painterly rendering of images, and pencil sketch illustrations of smooth surfaces.", "title": "" }, { "docid": "2f5d428b8da4d5b5009729fc1794e53d", "text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image", "title": "" }, { "docid": "9d0ed62f210d0e09db0cc6735699f5b3", "text": "The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.", "title": "" }, { "docid": "169258ee8696b481aac76fcee488632c", "text": "Three parkinsonian patients are described who independently discovered that their gait was facilitated by inverting a walking stick and using the handle, carried a few inches from the ground, as a visual cue or target to step over and initiate walking. 
It is suggested that the \"inverted\" walking stick have wider application in patients with Parkinson's disease as an aid to walking, particularly if they have difficulty with step initiation and maintenance of stride length.", "title": "" }, { "docid": "4fb6b884b22962c6884bd94f8b76f6f2", "text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.", "title": "" }, { "docid": "55f253cfb67ee0ba79b1439cc7e1764b", "text": "Despite legislative attempts to curtail financial statement fraud, it continues unabated. This study makes a renewed attempt to aid in detecting this misconduct using linguistic analysis with data mining on narrative sections of annual reports/10-K form. Different from the features used in similar research, this paper extracts three distinct sets of features from a newly constructed corpus of narratives (408 annual reports/10-K, 6.5 million words) from fraud and non-fraud firms. Separately each of these three sets of features is put through a suite of classification algorithms, to determine classifier performance in this binary fraud/non-fraud discrimination task. From the results produced, there is a clear indication that the language deployed by management engaged in wilful falsification of firm performance is discernibly different from truth-tellers. For the first time, this new interdisciplinary research extracts features for readability at a much deeper level, attempts to draw out collocations using n-grams and measures tone using appropriate financial dictionaries. This linguistic analysis with machine learning-driven data mining approach to fraud detection could be used by auditors in assessing financial reporting of firms and early detection of possible misdemeanours.", "title": "" }, { "docid": "7f81e1d6a6955cec178c1c811810322b", "text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems", "title": "" } ]
scidocsrr
86b95e504304e34daa7cbd4ee0ea2b30
Selecting the best VM across multiple public clouds: a data-driven performance modeling approach
[ { "docid": "70a07b1aedcb26f7f03ffc636b1d84a8", "text": "This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model.\n We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph datastructure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data-and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy gets better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.", "title": "" }, { "docid": "5ebefc9d5889cb9c7e3f83a8b38c4cb4", "text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.", "title": "" }, { "docid": "cd35602ecb9546eb0f9a0da5f6ae2fdf", "text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. 
Hive supports queries expressed in a SQL-like declarative language HiveQL, which are compiled into map-reduce jobs executed on Hadoop. In addition, HiveQL supports custom map-reduce scripts to be plugged into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog, Hive-Metastore, containing schemas and statistics, which is useful in data exploration and query optimization. In Facebook, the Hive warehouse contains several thousand tables with over 700 terabytes of data and is being used extensively for both reporting and ad-hoc analyses by more than 100 users. The rest of the paper is organized as follows. Section 2 describes the Hive data model and the HiveQL language with an example. Section 3 describes the Hive system architecture and an overview of the query life cycle. Section 4 provides a walk-through of the demonstration. We conclude with future work in Section 5.", "title": "" } ]
[ { "docid": "7f5e6c0061351ab064aa7fd25d076a1b", "text": "Guadua angustifolia Kunth was successfully propagated in vitro from axillary buds. Culture initiation, bud sprouting, shoot and plant multiplication, rooting and acclimatization, were evaluated. Best results were obtained using explants from greenhouse-cultivated plants, following a disinfection procedure that comprised the sequential use of an alkaline detergent, a mixture of the fungicide Benomyl and the bactericide Agri-mycin, followed by immersion in sodium hypochlorite (1.5% w/v) for 10 min, and culturing on Murashige and Skoog medium containing 2 ml l−1 of Plant Preservative Mixture®. Highest bud sprouting in original explants was observed when 3 mg l−1 N6-benzylaminopurine (BAP) was incorporated into the culture medium. Production of lateral shoots in in vitro growing plants increased with BAP concentration in culture medium, up to 5 mg l−1, the highest concentration assessed. After six subcultures, clumps of 8–12 axes were obtained, and their division in groups of 3–5 axes allowed multiplication of the plants. Rooting occurred in vitro spontaneously in 100% of the explants that produced lateral shoots. Successful acclimatization of well-rooted clumps of 5–6 axes was achieved in the greenhouse under mist watering in a mixture of soil, sand and rice hulls (1:1:1).", "title": "" }, { "docid": "4abceedb1f6c735a8bc91bc811ce4438", "text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.", "title": "" }, { "docid": "1384bc0c18a47630707dfebc036d8ac0", "text": "Recent research has demonstrated the important of ontology and its applications. For example, while designing adaptive learning materials, designers need to refer to the ontology of a subject domain. Moreover, ontology can show the whole picture and core knowledge about a subject domain. Research from literature also suggested that graphical representation of ontology can reduce the problems of information overload and learning disorientation for learners. 
However, ontology constructions used to rely on domain experts in the past; it is a time consuming and high cost task. Ontology creation for emerging new domains like e-learning is even more challenging. The aim of this paper is to construct e-learning domain concept maps, an alternative form of ontology, from academic articles. We adopt some relevant journal articles and conferences papers in e-learning domain as data sources, and apply text-mining techniques to automatically construct concept maps for e-learning domain. The constructed concept maps can provide a useful reference for researchers, who are new to e-leaning field, to study related issues, for teachers to design adaptive courses, and for learners to understand the whole picture of e-learning domain knowledge", "title": "" }, { "docid": "f2e62e761c357c8490f1b53f125f8f28", "text": "The credit crisis and the ongoing European sovereign debt crisis have highlighted the native form of credit risk, namely the counterparty risk. The related Credit Valuation Adjustment (CVA), Debt Valuation Adjustment (DVA), Liquidity Valuation Adjustment (LVA) and Replacement Cost (RC) issues, jointly referred to in this paper as Total Valuation Adjustment (TVA), have been thoroughly investigated in the theoretical papers Crépey (2012a, 2012b). The present work provides an executive summary and numerical companion to these papers, through which the TVA pricing problem can be reduced to Markovian pre-default TVA BSDEs. The first step consists in the counterparty clean valuation of a portfolio of contracts, which is the valuation in a hypothetical situation where the two parties would be risk-free and funded at a risk-free rate. In the second step, the TVA is obtained as the value of an option on the counterparty clean value process called Contingent Credit Default Swap (CCDS). Numerical results are presented for interest rate swaps in the Vasicek, as well as in the inverse Gaussian Hull-White short rate model, also allowing one to assess the related model risk issue.", "title": "" }, { "docid": "d71e9063c8ac026f1592d8db4d927edc", "text": "With the advancement of power electronics, new materials and novel bearing technologies, there has been an active development of high speed machines in recent years. The simple rotor structure makes switched reluctance machines (SRM) candidates for high speed operation. This paper has presents the design of a low power, 50,000 RPM 6/4 SRM having a toroidally wound stator. Finite element analysis (FEA) shows an equivalence to conventionally wound SRMs in terms of torque capability. With the conventional asymmetric converter and classic angular control, this toroidal-winding SRM (TSRM) is able to produce 233.20 W mechanical power with an efficiency of 75% at the FEA stage. Considering the enhanced cooling capability as the winding is directly exposed to air, the toroidal-winding is a good option for high-speed SRM.", "title": "" }, { "docid": "875548b7dc303bef8efa8284216e010d", "text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. 
Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.", "title": "" }, { "docid": "2b1a9f7131b464d9587137baf828cd3a", "text": "The description of the spatial characteristics of twoand three-dimensional objects, in the framework of MPEG-7, is considered. The shape of an object is one of its fundamental properties, and this paper describes an e$cient way to represent the coarse shape, scale and composition properties of an object. This representation is invariant to resolution, translation and rotation, and may be used for both two-dimensional (2-D) and three-dimensional (3-D) objects. This coarse shape descriptor will be included in the eXperimentation Model (XM) of MPEG-7. Applications of such a description to search object databases, in particular the CAESAR anthropometric database are discussed. ( 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "439540480944799e93717d78fc298e68", "text": "Group equivariant and steerable convolutional neural networks (regular and steerable G-CNNs) have recently emerged as a very effective model class for learning from signal data such as 2D and 3D images, video, and other data where symmetries are present. In geometrical terms, regular G-CNNs represent data in terms of scalar fields (“feature channels”), whereas the steerable G-CNN can also use vector or tensor fields (“capsules”) to represent data. In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in Steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to intertwine between the induced representations associated with its input and output space. In this paper we present a general mathematical framework for G-CNNs on homogeneous spaces like Euclidean space or the sphere. We show, using elementary methods, that the layers of an equivariant network are convolutional if and only if the input and output feature spaces transform according to an induced representation. This result, which follows from G.W. Mackey’s abstract theory on induced representations, establishes G-CNNs as a universal class of equivariant network architectures, and generalizes the important recent work of Kondor & Trivedi on the intertwiners between regular representations. In order for a convolution layer to be equivariant, the filter kernel needs to satisfy certain linear equivariance constraints. The space of equivariant kernels has a rich and interesting structure, which we expose using direct calculations. 
Additionally, we show how this general understanding can be used to compute a basis for the space of equivariant filter kernels, thereby providing a straightforward path to the implementation of G-CNNs for a wide range of groups and manifolds. 1 ar X iv :1 80 3. 10 74 3v 2 [ cs .L G ] 3 0 M ar 2 01 8", "title": "" }, { "docid": "6620aa5b1ecaac765112f0f1f15ef920", "text": "In this paper we present the tangible 3D tabletop and discuss the design potential of this novel interface. The tangible 3D tabletop combines tangible tabletop interaction with 3D projection in such a way that the tangible objects may be augmented with visual material corresponding to their physical shapes, positions, and orientation on the tabletop. In practice, this means that both the tabletop and the tangibles can serve as displays. We present the basic design principles for this interface, particularly concerning the interplay between 2D on the tabletop and 3D for the tangibles, and present examples of how this kind of interface might be used in the domain of maps and geolocalized data. We then discuss three central design considerations concerning 1) the combination and connection of content and functions of the tangibles and tabletop surface, 2) the use of tangibles as dynamic displays and input devices, and 3) the visual effects facilitated by the combination of the 2D tabletop surface and the 3D tangibles.", "title": "" }, { "docid": "d98b97dae367d57baae6b0211c781d66", "text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.", "title": "" }, { "docid": "e70f261ba4bfa47b476d2bbd4abd4982", "text": "A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this isn’t possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). 
We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them. Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford CA 94305 ([email protected]) Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford,CA 94305 ([email protected]) Department of Electrical Engineering, University of California, Los Angeles, CA 90095 ([email protected]) Clear Shape Technologies, Inc., Sunnyvale, CA 94086 ([email protected])", "title": "" }, { "docid": "d3324f45ec730b5dc088cdd49bed7a8e", "text": "Social media use is a global phenomenon, with almost two billion people worldwide regularly using these websites. As Internet access around the world increases, so will the number of social media users. Neuroscientists can capitalize on the ubiquity of social media use to gain novel insights about social cognitive processes and the neural systems that support them. This review outlines social motives that drive people to use social media, proposes neural systems supporting social media use, and describes approaches neuroscientists can use to conduct research with social media. We close by noting important directions and ethical considerations of future research with social media.", "title": "" }, { "docid": "8ab51537f15c61f5b34a94461b9e0951", "text": "An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time.", "title": "" }, { "docid": "968965ddb9aa26b041ea688413935e86", "text": "Lightweight photo sharing, particularly via mobile devices, is fast becoming a common communication medium used for maintaining a presence in the lives of friends and family. 
How should such systems be designed to maximize this social presence while maintaining simplicity? An experimental photo sharing system was developed and tested that, compared to current systems, offers highly simplified, group-centric sharing, automatic and persistent people-centric organization, and tightly integrated desktop and mobile sharing and viewing. In an experimental field study, the photo sharing behaviors of groups of family or friends were studied using their normal photo sharing methods and with the prototype sharing system. Results showed that users found photo sharing easier and more fun, shared more photos, and had an enhanced sense of social presence when sharing with the experimental system. Results are discussed in the context of design principles for the rapidly increasing number of lightweight photo sharing systems.", "title": "" }, { "docid": "f7f1deeda9730056876db39b4fe51649", "text": "Fracture in bone occurs when an external force exercised upon the bone is more than what the bone can tolerate or bear. As, its consequence structure and muscular power of the bone is disturbed and bone becomes frail, which causes tormenting pain on the bone and ends up in the loss of functioning of bone. Accurate bone structure and fracture detection is achieved using various algorithms which removes noise, enhances image details and highlights the fracture region. Automatic detection of fractures from x-ray images is considered as an important process in medical image analysis by both orthopaedic and radiologic aspect. Manual examination of x-rays has multitude drawbacks. The process is time consuming and subjective. In this paper we discuss several digital image processing techniques applied in fracture detection of bone. This led us to study techniques that have been applied to images obtained from different modalities like x-ray, CT, MRI and ultrasound. Keywords— Fracture detection, Medical Imaging, Morphology, Tibia, X-ray image", "title": "" }, { "docid": "f93ebf9beefe35985b6e31445044e6d1", "text": "Recent genetic studies have suggested that the colonization of East Asia by modern humans was more complex than a single origin from the South, and that a genetic contribution via a Northern route was probably quite substantial. Here we use a spatially-explicit computer simulation approach to investigate the human migration hypotheses of this region based on one-route or two-route models. We test the likelihood of each scenario by using Human Leukocyte Antigen (HLA) − A, −B, and − DRB1 genetic data of East Asian populations, with both selective and demographic parameters considered. The posterior distribution of each parameter is estimated by an Approximate Bayesian Computation (ABC) approach. Our results strongly support a model with two main routes of colonization of East Asia on both sides of the Himalayas, with distinct demographic histories in Northern and Southern populations, characterized by more isolation in the South. In East Asia, gene flow between populations originating from the two routes probably existed until a remote prehistoric period, explaining the continuous pattern of genetic variation currently observed along the latitude. 
A significant although dissimilar level of balancing selection acting on the three HLA loci is detected, but its effect on the local genetic patterns appears to be minor compared to those of past demographic events.", "title": "" }, { "docid": "fe947a8e35bce2b3ebd479f1eab2eb99", "text": "Deep networks often perform well on the data manifold on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points from off of the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. We propose Manifold Mixup which encourages the network to produce more reasonable and less confident predictions at points with combinations of attributes not seen in the training set. This is accomplished by training on convex combinations of the hidden state representations of data samples. Using this method, we demonstrate improved semi-supervised learning, learning with limited labeled data, and robustness to novel transformations of the data not seen during training. Manifold Mixup requires no (significant) additional computation. We also discover intriguing properties related to adversarial examples and generative adversarial networks. Analytical experiments on both real data and synthetic data directly support our hypothesis for why the Manifold Mixup method improves results.", "title": "" }, { "docid": "71e640caa999167a3df19eca5df2bf7f", "text": "Grid-tie inverters are used to convert DC power into AC power for connection to an existing electrical grid and are key components in a microgrid system. This paper discusses the design and implementation of a grid-tie inverter for connecting renewable resources such as solar arrays, wind turbines, and energy storage to the AC grid, in a laboratory microgrid system while also controlling real and reactive power flows. The Atmel EVK1100 with an AVR32UC3A0512 microcontroller, will be used to coordinate all of the different functions of this grid-tie inverter. The EVK1100 will communicate with Rockwell PLCs via Ethernet. The PLCs are part of the communication, control and sensing network of the microgrid system.", "title": "" }, { "docid": "ac808ecd75ccee74fff89d03e3396f26", "text": "This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and volume is performed accordingly, and classification is done according to size and location of the flowers. 
1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB Cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted with the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords—Agricultural engineering, computer vision, image processing, flower detection.", "title": "" }, { "docid": "049c1597f063f9c5fcc098cab8885289", "text": "When one captures images in low-light conditions, the images often suffer from low visibility. This poor quality may significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a very simple and effective method, named as LIME, to enhance low-light images. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G and B channels. Further, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging real-world low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts.", "title": "" } ]
scidocsrr
c1f02e25a9e97206b807844b752a6ae5
SIRIUS-LTG-UiO at SemEval-2018 Task 7: Convolutional Neural Networks with Shortest Dependency Paths for Semantic Relation Extraction and Classification in Scientific Papers
[ { "docid": "7927dffe38cec1ce2eb27dbda644a670", "text": "This paper describes our system for SemEval-2010 Task 8 on multi-way classification of semantic relations between nominals. First, the type of semantic relation is classified. Then a relation typespecific classifier determines the relation direction. Classification is performed using SVM classifiers and a number of features that capture the context, semantic role affiliation, and possible pre-existing relations of the nominals. This approach achieved an F1 score of 82.19% and an accuracy of 77.92%.", "title": "" }, { "docid": "6e8cf6a53e1a9d571d5e5d1644c56e57", "text": "Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-art results.", "title": "" } ]
[ { "docid": "c9af9d5f461cb0aa196221c926ac4252", "text": "The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires quite some effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with ca. 70.000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not to be validated independently. Based on our statistical basis, we identify correlation between several metrics from well-known object-oriented metrics suites. Besides, we present early results of typical metrics values and possible thresholds.", "title": "" }, { "docid": "cf5452e43b6141728da673892c680b6e", "text": "This paper presents another approach of Thai word segmentation, which is composed of two processes : syllable segmentation and syllable merging. Syllable segmentation is done on the basis of trigram statistics. Syllable merging is done on the basis of collocation between syllables. We argue that many of word segmentation ambiguities can be resolved at the level of syllable segmentation. Since a syllable is a more well-defined unit and more consistent in analysis than a word, this approach is more reliable than other approaches that use a wordsegmented corpus. This approach can perform well at the level of accuracy 81-98% depending on the dictionary used in the segmentation.", "title": "" }, { "docid": "12a3e52c3af78663698e7b907f6ee912", "text": "A novel graph-based language-independent stemming algorithm suitable for information retrieval is proposed in this article. The main features of the algorithm are retrieval effectiveness, generality, and computational efficiency. We test our approach on seven languages (using collections from the TREC, CLEF, and FIRE evaluation platforms) of varying morphological complexity. Significant performance improvement over plain word-based retrieval, three other language-independent morphological normalizers, as well as rule-based stemmers is demonstrated.", "title": "" }, { "docid": "481931c78a24020a02245075418a26c3", "text": "Bayesian optimization has been successful at global optimization of expensiveto-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledgegradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting. d-KG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. 
We show d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.", "title": "" }, { "docid": "dd4edd271de8483fc3ce25f16763ffd1", "text": "Computer vision is a rapidly evolving discipline. It includes methods for acquiring, processing, and understanding still images and video to model, replicate, and sometimes, exceed human vision and perform useful tasks.\n Computer vision will be commonly used for a broad range of services in upcoming devices, and implemented in everything from movies, smartphones, cameras, drones and more. Demand for CV is driving the evolution of image sensors, mobile processors, operating systems, application software, and device form factors in order to meet the needs of upcoming applications and services that benefit from computer vision. The resulting impetus means rapid advancements in:\n • visual computing performance\n • object recognition effectiveness\n • speed and responsiveness\n • power efficiency\n • video image quality improvement\n • real-time 3D reconstruction\n • pre-scanning for movie animation\n • image stabilization\n • immersive experiences\n • and more...\n Comprised of innovation leaders of computer vision, this panel will cover recent developments, as well as how CV will be enabled and used in 2016 and beyond.", "title": "" }, { "docid": "c77042cb1a8255ac99ebfbc74979c3c6", "text": "Machine translation systems require semantic knowledge and grammatical understanding. Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.", "title": "" }, { "docid": "70e80f9546215593862063af3fcf4a34", "text": "1 Corresponding Author 2 The two lead authors made substantially similar contributions to this paper. First authorship was determined by rotation among papers.", "title": "" }, { "docid": "d593f5205c84536ea1dfc4a561b86fca", "text": "State of the art approaches for visual-inertial sensor fusion use filter-based or optimization-based algorithms. Due to the nonlinearity of the system, a poor initialization can have a dramatic impact on the performance of these estimation methods. Recently, a closed-form solution providing such an initialization was derived in [1]. That solution determines the velocity (angular and linear) of a monocular camera in metric units by only using inertial measurements and image features acquired in a short time interval. In this letter, we study the impact of noisy sensors on the performance of this closed-form solution. We show that the gyroscope bias, not accounted for in [1], significantly affects the performance of the method. Therefore, we introduce a new method to automatically estimate this bias. Compared to the original method, the new approach now models the gyroscope bias and is robust to it. 
The performance of the proposed approach is successfully demonstrated on real data from a quadrotor MAV.", "title": "" }, { "docid": "c052c9e920ae871fbf20a8560b87d887", "text": "This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described with emphasis on the analogy and the differences between results in the two settings.", "title": "" }, { "docid": "7579b5cb9f18e3dc296bcddc7831abc5", "text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.", "title": "" }, { "docid": "14c3d8cee12007dc8af75c7e0df77f00", "text": "A modular magic sudoku solution is a sudoku solution with symbols in {0, 1, ..., 8} such that rows, columns, and diagonals of each subsquare add to zero modulo nine. We count these sudoku solutions by using the action of a suitable symmetry group and we also describe maximal mutually orthogonal families.", "title": "" }, { "docid": "930f368fd668bb98527d60c526b4c991", "text": "Limited research efforts have been made for Mobile CrowdSensing (MCS) to address quality of the recruited crowd, i.e., quality of services/data each individual mobile user and the whole crowd are potentially capable of providing, which is the main focus of the paper. Moreover, to improve flexibility and effectiveness, we consider fine-grained MCS, in which each sensing task is divided into multiple subtasks and a mobile user may make contributions to multiple subtasks. In this paper, we first introduce mathematical models for characterizing the quality of a recruited crowd for different sensing applications. Based on these models, we present a novel auction formulation for quality-aware and fine-grained MCS, which minimizes the expected expenditure subject to the quality requirement of each subtask. Then we discuss how to achieve the optimal expected expenditure, and present a practical incentive mechanism to solve the auction problem, which is shown to have the desirable properties of truthfulness, individual rationality and computational efficiency. We conducted trace-driven simulation using the mobility dataset of San Francisco taxies. 
Extensive simulation results show the proposed incentive mechanism achieves noticeable expenditure savings compared to two well-designed baseline methods, and moreover, it produces close-to-optimal solutions.", "title": "" }, { "docid": "f955d211ee27ac428e54116667913975", "text": "The authors are collaborating with a manufacturer of custom built steel frame modular units which are then transported for rapid erection onsite (volumetric building system). As part of its strategy to develop modular housing, Enemetric, is taking the opportunity to develop intelligent buildings, integrating a wide range of sensors and control systems for optimising energy efficiency and directly monitoring structural health. Enemetric have recently been embracing Building Information Modeling (BIM) to improve workflow, in particular cost estimation and to simplify computer aided manufacture (CAM). By leveraging the existing data generated during the design phases, and projecting it to all other aspects of construction management, less errors are made and productivity is significantly increased. Enemetric may work on several buildings at once, and scheduling and priorities become especially important for effective workflow, and implementing Enterprise Resource Planning (ERP). The parametric nature of BIM is also very useful for improving building management, whereby real-time data collection can be logically associated with individual components of the BIM stored in a local Building Management System performing structural health monitoring and environmental monitoring and control. BIM reuse can be further employed in building simulation tools, to apply simulation assisted control strategies, in order to reduce energy consumption, and increase occupant comfort. BIM Integrated Workflow Management and Monitoring System for Modular Buildings", "title": "" }, { "docid": "52dbfe369d1875c402220692ef985bec", "text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.", "title": "" }, { "docid": "dda2fdd40378ba3340354f836e6cd131", "text": "Successful face analysis requires robust methods. It has been hard to compare the methods due to different experimental setups. We carried out a comparison study for the state-of-the-art gender classification methods to find out their actual reliability. The main contributions are comprehensive and comparable classification results for the gender classification methods combined with automatic real-time face detection and, in addition, with manual face normalization. 
We also experimented by combining gender classifier outputs arithmetically. This lead to increased classification accuracies. Furthermore, we contribute guidelines to carry out classification experiments, knowledge on the strengths and weaknesses of the gender classification methods, and two new variants of the known methods. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dc48b68a202974f62ae63d1d14002adf", "text": "In the speed sensorless vector control system, the amended method of estimating the rotor speed about model reference adaptive system (MRAS) based on radial basis function neural network (RBFN) for PMSM sensorless vector control system was presented. Based on the PI regulator, the radial basis function neural network which is more prominent learning efficiency and performance is combined with MRAS. The reference model and the adjust model are the PMSM itself and the PMSM current, respectively. The proposed scheme only needs the error signal between q axis estimated current and q axis actual current. Then estimated speed is gained by using RBFN regulator which adjusted error signal. Comparing study of simulation and experimental results between this novel sensorless scheme and the scheme in reference literature, the results show that this novel method is capable of precise estimating the rotor position and speed under the condition of high or low speed. It also possesses good performance of static and dynamic.", "title": "" }, { "docid": "639729ba7b21f8b73e6dc363fe0f217f", "text": "Various magnetic nanoparticles have been extensively investigated as novel magnetic resonance imaging (MRI) contrast agents owing to their unique characteristics, including efficient contrast effects, biocompatibility, and versatile surface functionalization capability. Nanoparticles with high relaxivity are very desirable because they would increase the accuracy of MRI. Recent progress in nanotechnology enables fine control of the size, crystal structure, and surface properties of iron oxide nanoparticles. In this tutorial review, we discuss how MRI contrast effects can be improved by controlling the size, composition, doping, assembly, and surface properties of iron-oxide-based nanoparticles.", "title": "" }, { "docid": "5a4d0254c1331f8577c462343a8cfb0a", "text": "In this paper, we address the problem of realizing a human following task in a crowded environment. We consider an active perception system, consisting of a camera mounted on a pan-tilt unit and a 360◦ RFID detection system, both embedded on a mobile robot. To perform such a task, it is necessary to efficiently track humans in crowds. In a first step, we have dealt with this problem using the particle filtering framework because it enables the fusion of heterogeneous data, which improves the tracking robustness. In a second step, we have considered the problem of controlling the robot motion to make the robot follow the person of interest. To this aim, we have designed a multisensor-based control strategy based on the tracker outputs and on the RFID data. Finally, we have implemented the tracker and the control strategy on our robot. The obtained experimental results highlight the relevance of the developed perceptual functions. 
Possible extensions of this work are discussed at the end of the article.", "title": "" }, { "docid": "9f04ac4067179aadf5e429492c7625e9", "text": "We provide a model that links an asset’s market liquidity — i.e., the ease with which it is traded — and traders’ funding liquidity — i.e., the ease with which they can obtain funding. Traders provide market liquidity, and their ability to do so depends on their availability of funding. Conversely, traders’ funding, i.e., their capital and the margins they are charged, depend on the assets’ market liquidity. We show that, under certain conditions, margins are destabilizing and market liquidity and funding liquidity are mutually reinforcing, leading to liquidity spirals. The model explains the empirically documented features that market liquidity (i) can suddenly dry up, (ii) has commonality across securities, (iii) is related to volatility, (iv) is subject to “flight to quality”, and (v) comoves with the market, and it provides new testable predictions.", "title": "" }, { "docid": "37927017353dc0bab9c081629d33d48c", "text": "Generating a secret key between two parties by extracting the shared randomness in the wireless fading channel is an emerging area of research. Previous works focus mainly on single-antenna systems. Multiple-antenna devices have the potential to provide more randomness for key generation than single-antenna ones. However, the performance of key generation using multiple-antenna devices in a real environment remains unknown. Different from the previous theoretical work on multiple-antenna key generation, we propose and implement a shared secret key generation protocol, Multiple-Antenna KEy generator (MAKE) using off-the-shelf 802.11n multiple-antenna devices. We also conduct extensive experiments and analysis in real indoor and outdoor mobile environments. Using the shared randomness extracted from measured Received Signal Strength Indicator (RSSI) to generate keys, our experimental results show that using laptops with three antennas, MAKE can increase the bit generation rate by more than four times over single-antenna systems. Our experiments validate the effectiveness of using multi-level quantization when there is enough mutual information in the channel. Our results also show the trade-off between bit generation rate and bit agreement ratio when using multi-level quantization. We further find that even if an eavesdropper has multiple antennas, she cannot gain much more information about the legitimate channel.", "title": "" } ]
scidocsrr
5dd623ca5cc151e4f047f67a3e4c3cfa
Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders
[ { "docid": "3f46d98f695da70d75cefdeefe6b9a15", "text": "Our RMSE=0.8643 solution is a linear blend of over 100 results. Some of them are new to this year, whereas many others belong to the set that was reported a year ago in our 2007 Progress Prize report [3]. This report is structured accordingly. In Section 2 we detail methods new to this year. In general, our view is that those newer methods deliver a superior performance compared to the methods we used a year ago. Throughout the description of the methods, we highlight the specific predictors that participated in the final blended solution. Nonetheless, the older methods still play a role in the blend, and thus in Section 3 we list those methods repeated from a year ago. Finally, we conclude with general thoughts in Section 4.", "title": "" }, { "docid": "e49aa0d0f060247348f8b3ea0a28d3c6", "text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "title": "" }, { "docid": "21384ea8d80efbf2440fb09a61b03be2", "text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.", "title": "" }, { "docid": "d77fcf9947573c228ac1e000f29153d7", "text": "Our final solution (RMSE=0.8712) consists of blending 107 individual results. Since many of these results are close variants, we first describe the main approaches behind them. Then, we will move to describing each individual result. The core components of the solution are published in our ICDM'2007 paper [1] (or, KDD-Cup'2007 paper [2]), and also in the earlier KDD'2007 paper [3]. We assume that the reader is familiar with these works and our terminology there. A movie-oriented k-NN approach was thoroughly described in our KDD-Cup'2007 paper [kNN]. We apply it as a post-processor for most other models. Interestingly, it was most effective when applied on residuals of RBMs [5], thereby driving the Quiz RMSE from 0.9093 to 0.8888. 
An earlier k-NN approach was described in the KDD'2007 paper ([3], Sec. 3) [Slow-kNN]. It appears that this earlier approach can achieve slightly more accurate results than the newer one, at the expense of a significant increase in running time. Consequently, we dropped the older approach, though some results involving it survive within the final blend. We also tried more naïve k-NN models, where interpolation weights are based on pairwise similarities between movies (see [2], Sec. 2.2). Specifically, we based weights on corr^2/(1-corr^2) [Corr-kNN], or on mse-10 [MSE-kNN]. Here, corr is the Pearson correlation coefficient between the two respective movies, and mse is the mean squared distance between two movies (see definition of s_ij in Sec. 4.1 of [2]). We also tried taking the interpolation weights as the \"support-based similarities\", which will be defined shortly [Supp-kNN]. Other variants that we tried for computing the interpolation coefficients are: (1) using our KDD-Cup'2007 [2] method on a binary user-movie matrix, which replaces every rating with \"1\", and sets non-rated user-movie pairs to \"0\" [Bin-kNN]. (2) Taking results of factorization, and regressing the factors associated with the target movie on the factors associated with its neighbors. Then, the resulting regression coefficients are used as interpolation weights [Fctr-kNN]. As explained in our papers, we also tried user-oriented k-NN approaches. Either in a profound way (see: [1], Sec. 4.3; [3], Sec. 5) [User-kNN], or by just taking weights as pairwise similarities among users [User-MSE-kNN], which is the user-oriented parallel of the aforementioned [MSE-kNN]. Prior to computing interpolation weights, one has to choose the set of neighbors. We find the most similar neighbors based on an appropriate similarity measure. In …", "title": "" } ]
[ { "docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2", "text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.", "title": "" }, { "docid": "c8f6eac662b30768b2e64b3bd3502e73", "text": "This paper discusses the use of genetic programming (GP) and genetic algorithms (GA) to evolve solutions to a problem in robot control. GP is seen as an intuitive evolutionary method while GAs require an extra layer of human intervention. The infrastructures for the different evolutionary approaches are compared.", "title": "" }, { "docid": "abf91984fd590173faf616bbcb806d92", "text": "As high performance clusters continue to grow in size, the mean time between failures shrinks. Thus, the issues of fault tolerance and reliability are becoming one of the challenging factors for application scalability. The traditional disk-based method of dealing with faults is to checkpoint the state of the entire application periodically to reliable storage and restart from the recent checkpoint. The recovery of the application from faults involves (often manually) restarting applications on all processors and having it read the data from disks on all processors. The restart can therefore take minutes after it has been initiated. Such a strategy requires that the failed processor can be replaced so that the number of processors at checkpoint-time and recovery-time are the same. We present FTC-Charms ++, a fault-tolerant runtime based on a scheme for fast and scalable in-memory checkpoint and restart. At restart, when there is no extra processor, the program can continue to run on the remaining processors while minimizing the performance penalty due to losing processors. The method is useful for applications whose memory footprint is small at the checkpoint state, while a variation of this scheme - in-disk checkpoint/restart can be applied to applications with large memory footprint. The scheme does not require any individual component to be fault-free. We have implemented this scheme for Charms++ and AMPI (an adaptive version of MPl). This work describes the scheme and shows performance data on a cluster using 128 processors.", "title": "" }, { "docid": "0db229bd2dfd325c0f23bc9437141e69", "text": "The emergence of Infrastructure as a Service framework brings new opportunities, which also accompanies with new challenges in auto scaling, resource allocation, and security. A fundamental challenge underpinning these problems is the continuous tracking and monitoring of resource usage in the system. 
In this paper, we present ATOM, an efficient and effective framework to automatically track, monitor, and orchestrate resource usage in an Infrastructure as a Service (IaaS) system that is widely used in cloud infrastructure. We use novel tracking method to continuously track important system usage metrics with low overhead, and develop a Principal Component Analysis (PCA) based approach to continuously monitor and automatically find anomalies based on the approximated tracking results. We show how to dynamically set the tracking threshold based on the detection results, and further, how to adjust tracking algorithm to ensure its optimality under dynamic workloads. Lastly, when potential anomalies are identified, we use introspection tools to perform memory forensics on VMs guided by analyzed results from tracking and monitoring to identify malicious behavior inside a VM. We demonstrate the extensibility of ATOM through virtual machine (VM) clustering. The performance of our framework is evaluated in an open source IaaS system.", "title": "" }, { "docid": "a880c96ff3fc3c52af2be7374b7d9fed", "text": "Researchers have studied how people use self-tracking technologies and discovered a long list of barriers including lack of time and motivation as well as difficulty in data integration and interpretation. Despite the barriers, an increasing number of Quantified-Selfers diligently track many kinds of data about themselves, and some of them share their best practices and mistakes through Meetup talks, blogging, and conferences. In this work, we aim to gain insights from these \"extreme users,\" who have used existing technologies and built their own workarounds to overcome different barriers. We conducted a qualitative and quantitative analysis of 52 video recordings of Quantified Self Meetup talks to understand what they did, how they did it, and what they learned. We highlight several common pitfalls to self-tracking, including tracking too many things, not tracking triggers and context, and insufficient scientific rigor. We identify future research efforts that could help make progress toward addressing these pitfalls. We also discuss how our findings can have broad implications in designing and developing self-tracking technologies.", "title": "" }, { "docid": "2899b31339acbd774aff53fc99590a45", "text": "An ultra-wideband patch antenna is presented for K-band communication. The antenna is designed by employing stacked geometry and aperture-coupled technique. The rectangular patch shape and coaxial fed configuration is used for particular design. The ultra-wideband characteristics are achieved by applying a specific surface resistance of 75Ω/square to the upper rectangular patch and it is excited through a rectangular slot made on the lower patch element (made of copper). The proposed patch antenna is able to operate in the frequency range of 12-27.3 GHz which is used in radar and satellite communication, commonly named as K-band. By employing a technique of thicker substrate and by applying a specific surface resistance to the upper patch element, an impedance bandwidth of 77.8% is achieved having VSWR ≤ 2. It is noted that the gain of proposed antenna is linearly increased in the frequency range of 12-26 GHz and after that the gain is decreased up to 6 dBi. 
Simulation results are presented to demonstrate the performance of proposed ultra-wideband microstrip patch antenna.", "title": "" }, { "docid": "e743bfe8c4f19f1f9a233106919c99a7", "text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.", "title": "" }, { "docid": "f1b32219b6cd38cf8514d3ae2e926612", "text": "Creativity refers to the potential to produce novel ideas that are task-appropriate and high in quality. Creativity in a societal context is best understood in terms of a dialectical relation to intelligence and wisdom. In particular, intelligence forms the thesis of such a dialectic. Intelligence largely is used to advance existing societal agendas. Creativity forms the antithesis of the dialectic, questioning and often opposing societal agendas, as well as proposing new ones. Wisdom forms the synthesis of the dialectic, balancing the old with the new. Wise people recognize the need to balance intelligence with creativity to achieve both stability and change within a societal context.", "title": "" }, { "docid": "20c6b7417a31aceb39bcf1b1fa3fce4b", "text": "In the process of dealing with the cutting calculation of Multi-axis CNC Simulation, the traditional Voxel Model not only will cost large computation time when judging whether the cutting happens or not, but also the data points may occupy greater storage space. So it cannot satisfy the requirement of real-time emulation, In the construction method of Compressed Voxel Model, it can satisfy the need of Multi-axis CNC Simulation, and storage space is relatively small. Also the model reconstruction speed is faster, but the Boolean computation in the cutting judgment is very complex, so it affects the real-time of CNC Simulation indirectly. Aimed at the shortcomings of these methods, we propose an improved solid modeling technique based on the Voxel model, which can meet the demand of real-time in cutting computation and Graphic display speed.", "title": "" }, { "docid": "9458b13e5a87594140d7ee759e06c76c", "text": "Digital ecosystem, as a neoteric terminology, has emerged along with the appearance of Business Ecosystem which is a form of naturally existing business network of small and medium enterprises. However, few researches have been found in the field of defining digital ecosystem. 
In this paper, by means of ontology technology as our research methodology, we propose to develop a conceptual model for digital ecosystem. By introducing an innovative ontological notation system, we create the hierarchical framework of digital ecosystem form up to down, based on the related theories form Digital ecosystem and business intelligence institute.", "title": "" }, { "docid": "e1be36e185b024561190bcf85ab4c756", "text": "Molecular (nucleic acid)-based diagnostics tests have many advantages over immunoassays, particularly with regard to sensitivity and specificity. Most on-site diagnostic tests, however, are immunoassay-based because conventional nucleic acid-based tests (NATs) require extensive sample processing, trained operators, and specialized equipment. To make NATs more convenient, especially for point-of-care diagnostics and on-site testing, a simple plastic microfluidic cassette (\"chip\") has been developed for nucleic acid-based testing of blood, other clinical specimens, food, water, and environmental samples. The chip combines nucleic acid isolation by solid-phase extraction; isothermal enzymatic amplification such as LAMP (Loop-mediated AMPlification), NASBA (Nucleic Acid Sequence Based Amplification), and RPA (Recombinase Polymerase Amplification); and real-time optical detection of DNA or RNA analytes. The microfluidic cassette incorporates an embedded nucleic acid binding membrane in the amplification reaction chamber. Target nucleic acids extracted from a lysate are captured on the membrane and amplified at a constant incubation temperature. The amplification product, labeled with a fluorophore reporter, is excited with a LED light source and monitored in situ in real time with a photodiode or a CCD detector (such as available in a smartphone). For blood analysis, a companion filtration device that separates plasma from whole blood to provide cell-free samples for virus and bacterial lysis and nucleic acid testing in the microfluidic chip has also been developed. For HIV virus detection in blood, the microfluidic NAT chip achieves a sensitivity and specificity that are nearly comparable to conventional benchtop protocols using spin columns and thermal cyclers.", "title": "" }, { "docid": "89dd97465c8373bb9dabf3cbb26a4448", "text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. 
A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.", "title": "" }, { "docid": "b5cb64a0a17954310910d69c694ad786", "text": "This paper proposes a hybrid of handcrafted rules and a machine learning method for chunking Korean. In the partially free word-order languages such as Korean and Japanese, a small number of rules dominate the performance due to their well-developed postpositions and endings. Thus, the proposed method is primarily based on the rules, and then the residual errors are corrected by adopting a memory-based machine learning method. Since the memory-based learning is an efficient method to handle exceptions in natural language processing, it is good at checking whether the estimates are exceptional cases of the rules and revising them. An evaluation of the method yields the improvement in F-score over the rules or various machine learning methods alone.", "title": "" }, { "docid": "727e4b745037587df8e9789f978e0db4", "text": "There is a growing number of courses delivered using elearning environments and their online discussions play an important role in collaborative learning of students. Even in courses with a few number of students, there could be thousands of messages generated in a few months within these forums. Manually evaluating the participation of students in such case is a significant challenge, considering the fact that current e-learning environments do not provide much information regarding the structure of interactions between students. There is a recent line of research on applying social network analysis (SNA) techniques to study these interactions.\n Here we propose to exploit SNA techniques, including community mining, in order to discover relevant structures in social networks we generate from student communications but also information networks we produce from the content of the exchanged messages. 
With visualization of these discovered relevant structures and the automated identification of central and peripheral participants, an instructor is provided with better means to assess participation in the online discussions. We implemented these new ideas in a toolbox, named Meerkat-ED, which automatically discovers relevant network structures, visualizes overall snapshots of interactions between the participants in the discussion forums, and outlines the leader/peripheral students. Moreover, it creates a hierarchical summarization of the discussed topics, which gives the instructor a quick view of what is under discussion. We believe exploiting the mining abilities of this toolbox would facilitate fair evaluation of students' participation in online courses.", "title": "" }, { "docid": "aa7fe787492aa8aa3d50f748b2df17cb", "text": "Smart Contracts sind rechtliche Vereinbarungen, die sich IT-Technologien bedienen, um die eigene Durchsetzbarkeit sicherzustellen. Es werden durch Smart Contracts autonom Handlungen initiiert, die zuvor vertraglich vereinbart wurden. Beispielsweise können vereinbarte Zahlungen von Geldbeträgen selbsttätig veranlasst werden. Basieren Smart Contracts auf Blockchains, ergeben sich per se vertrauenswürdige Transaktionen. Eine dritte Instanz zur Sicherstellung einer korrekten Transaktion, beispielsweise eine Bank oder ein virtueller Marktplatz, wird nicht benötigt. Echte Peer-to-Peer-Verträge sind möglich. Ein weiterer Anwendungsfall von Smart Contracts ist denkbar. Smart Contracts könnten statt Vereinbarungen von Vertragsparteien gesetzliche Regelungen ausführen. Beispielsweise die Regelungen des Patentgesetzes könnten durch einen Smart Contract implementiert werden. Die Verwaltung von IPRs (Intellectual Property Rights) entsprechend den gesetzlichen Regelungen würde dadurch sichergestellt werden. Bislang werden Spezialisten, beispielsweise Patentanwälte, benötigt, um eine akkurate Administration von Schutzrechten zu gewährleisten. Smart Contracts könnten die Dienstleistungen dieser Spezialisten auf dem Gebiet des geistigen Eigentums obsolet werden lassen.", "title": "" }, { "docid": "bd6c2c591cd5fe1493968b98746175c0", "text": "In this paper we investigate mapping stream programs (i.e., programs written in a streaming style for streaming architectures such as Imagine and Raw) onto a general-purpose CPU. We develop and explore a novel way of mapping these programs onto the CPU. We show how the salient features of stream programming such as computation kernels, local memories, and asynchronous bulk memory loads and stores can be easily mapped by a simple compilation system to CPU features such as the processor caches, simultaneous multi-threading, and fast inter-thread communication support, resulting in an executable that efficiently uses CPU resources. We present an evaluation of our mapping on a hyperthreaded Intel Pentium 4 CPU as a canonical example of a general-purpose processor. We compare the mapped stream program against the same program coded in a more conventional style for the general-purpose processor. Using both micro-benchmarks and scientific applications we show that programs written in a streaming style can run comparably to equivalent programs written in a traditional style. 
Our results show that coding programs in a streaming style can improve performance on today¿s machines and smooth the way for significant performance improvements with the deployment of streaming architectures.", "title": "" }, { "docid": "986a2771edc62a5658c0099e5cc0a920", "text": "Very-low-energy diets (VLEDs) and ketogenic low-carbohydrate diets (KLCDs) are two dietary strategies that have been associated with a suppression of appetite. However, the results of clinical trials investigating the effect of ketogenic diets on appetite are inconsistent. To evaluate quantitatively the effect of ketogenic diets on subjective appetite ratings, we conducted a systematic literature search and meta-analysis of studies that assessed appetite with visual analogue scales before (in energy balance) and during (while in ketosis) adherence to VLED or KLCD. Individuals were less hungry and exhibited greater fullness/satiety while adhering to VLED, and individuals adhering to KLCD were less hungry and had a reduced desire to eat. Although these absolute changes in appetite were small, they occurred within the context of energy restriction, which is known to increase appetite in obese people. Thus, the clinical benefit of a ketogenic diet is in preventing an increase in appetite, despite weight loss, although individuals may indeed feel slightly less hungry (or more full or satisfied). Ketosis appears to provide a plausible explanation for this suppression of appetite. Future studies should investigate the minimum level of ketosis required to achieve appetite suppression during ketogenic weight loss diets, as this could enable inclusion of a greater variety of healthy carbohydrate-containing foods into the diet.", "title": "" }, { "docid": "2672e9f29c0c54d09758dd10dc7441f4", "text": "An examination of test manuals and published research indicates that widely used memory tests (e.g., Verbal Paired Associates and Word List tests of the Wechsler Memory Scale, Rey Auditory Verbal Learning Test, and California Verbal Learning Test) are afflicted by severe ceiling effects. In the present study, the true extent of memory ability in healthy young adults was tested by giving 208 college undergraduates verbal paired-associate and verbal learning tests of various lengths; the findings demonstrate that healthy adults can remember much more than is suggested by the normative data for the memory tests just mentioned. The findings highlight the adverse effects of low ceilings in memory assessment and underscore the severe consequences of ceiling effects on score distributions, means, standard deviations, and all variability-dependent indices, such as reliability, validity, and correlations with other tests. The article discusses the optimal test lengths for verbal paired-associate and verbal list-learning tests, shows how to identify ceiling-afflicted data in published research, and explains how proper attention to this phenomenon can improve future research and clinical practice.", "title": "" }, { "docid": "5932b3f1f0523f07190855e51abc04b9", "text": "This paper proposes an optimization algorithm based on how human fight and learn from each duelist. Since this algorithm is based on population, the proposed algorithm starts with an initial set of duelists. The duel is to determine the winner and loser. The loser learns from the winner, while the winner try their new skill or technique that may improve their fighting capabilities. A few duelists with highest fighting capabilities are called as champion. 
Each champion trains a new duelist with fighting capabilities similar to its own. The new duelist then joins the tournament as a representative of its champion. All duelists are re-evaluated, and the duelists with the worst fighting capabilities are eliminated to keep the number of duelists constant. The proposed algorithm is applied to two optimization problems, together with the genetic algorithm, particle swarm optimization, and the imperialist competitive algorithm. The results show that the proposed algorithm is able to find a better global optimum in fewer iterations. Keywords—Optimization; global; algorithm; duelist; fighting", "title": "" }, { "docid": "c8a9aff29f3e420a1e0442ae7caa46eb", "text": "Four new species of Ixora (Rubiaceae, Ixoreae) from Brazil are described and illustrated, and their relationships to morphologically similar species as well as their conservation status are discussed. The new species, Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla, are endemic to the Atlantic Forest of southern Bahia and Espirito Santo. Four new species of Ixora (Rubiaceae, Ixoreae) are described and illustrated for Brazil, and their morphological relationships to the most similar species, as well as their conservation status, are discussed. The new species Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla are endemic to the Atlantic Forest, in the stretch between southern Bahia state and Espírito Santo state.", "title": "" } ]
scidocsrr
fee80fd95587516e29635959b2d2fe5c
SALICON: Reducing the Semantic Gap in Saliency Prediction by Adapting Deep Neural Networks
[ { "docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8", "text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.", "title": "" }, { "docid": "704d068f791a8911068671cb3dca7d55", "text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.", "title": "" }, { "docid": "a0437070b667281f6cbb657815d7f5c8", "text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: a b s t r a c t a r t i c l e i n f o This paper presents a novel approach to visual saliency that relies on a contextually adapted representation produced through adaptive whitening of color and scale features. Unlike previous models, the proposal is grounded on the specific adaptation of the basis of low level features to the statistical structure of the image. Adaptation is achieved through decorrelation and contrast normalization in several steps in a hierarchical approach, in compliance with coarse features described in biological visual systems. Saliency is simply computed as the square of the vector norm in the resulting representation. The performance of the model is compared with several state-of-the-art approaches, in predicting human fixations using three different eye-tracking datasets. Referring this measure to the performance of human priority maps, the model proves to be the only one able to keep the same behavior through different datasets, showing free of biases. 
Moreover, it is able to predict a wide set of relevant psychophysical observations, to our knowledge, not reproduced together by any other model before. Research on the estimation of visual saliency has experienced an increasing activity in the last years from both computer vision and neuro-science perspectives, giving rise to a number of improved approaches. Furthermore, a wide diversity of applications based on saliency are being proposed that range from image retargeting [1] to human-like robot surveillance [2], object learning and recognition [3–5], objectness definition [6], image processing for retinal implants [7], and many others. Existing approaches to visual saliency have adopted a number of quite different strategies. A first group, including many early models, is very influenced by psychophysical theories supporting a parallel processing of several feature dimensions. Models in this group are particularly concerned with biological plausibility in their formulation, and they resort to the modeling of visual functions. Outstanding examples can be found in [8] or in [9]. Most recent models are in a second group that broadly aims to estimate the inverse of the probability density of a set of low level features by different procedures. In this kind of models, low level features are usually …", "title": "" } ]
[ { "docid": "f23bde650be816fdca4594c180c47309", "text": "Indian economy highly depends on agricultural productivity. An important role is played by the detection of disease to obtain a perfect results in agriculture, and it is natural to have disease in plants. Proper care should be taken in this area for product quality and quantity. To reduce the large amount of monitoring in field automatic detection techniques can be used. This paper discuss different processes for segmentation technique which can be applied for different lesion disease detection. Thresholding and K-means cluster algorithms are done to detect different diseases in plant leaf.", "title": "" }, { "docid": "65385d7aee49806476dc913f6768fc43", "text": "Software developers spend a significant portion of their resources handling user-submitted bug reports. For software that is widely deployed, the number of bug reports typically outstrips the resources available to triage them. As a result, some reports may be dealt with too slowly or not at all. \n We present a descriptive model of bug report quality based on a statistical analysis of surface features of over 27,000 publicly available bug reports for the Mozilla Firefox project. The model predicts whether a bug report is triaged within a given amount of time. Our analysis of this model has implications for bug reporting systems and suggests features that should be emphasized when composing bug reports. \n We evaluate our model empirically based on its hypothetical performance as an automatic filter of incoming bug reports. Our results show that our model performs significantly better than chance in terms of precision and recall. In addition, we show that our modelcan reduce the overall cost of software maintenance in a setting where the average cost of addressing a bug report is more than 2% of the cost of ignoring an important bug report.", "title": "" }, { "docid": "c65f050e911abb4b58b4e4f9b9aec63b", "text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.", "title": "" }, { "docid": "8d570c7d70f9003b9d2f9bfa89234c35", "text": "BACKGROUND\nThe targeting of the prostate-specific membrane antigen (PSMA) is of particular interest for radiotheragnostic purposes of prostate cancer. 
Radiolabeled PSMA-617, a 1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid (DOTA)-functionalized PSMA ligand, revealed favorable kinetics with high tumor uptake, enabling its successful application for PET imaging (68Ga) and radionuclide therapy (177Lu) in the clinics. In this study, PSMA-617 was labeled with cyclotron-produced 44Sc (T 1/2 = 4.04 h) and investigated preclinically for its use as a diagnostic match to 177Lu-PSMA-617.\n\n\nRESULTS\n44Sc was produced at the research cyclotron at PSI by irradiation of enriched 44Ca targets, followed by chromatographic separation. 44Sc-PSMA-617 was prepared under standard labeling conditions at elevated temperature resulting in a radiochemical purity of >97% at a specific activity of up to 10 MBq/nmol. 44Sc-PSMA-617 was evaluated in vitro and compared to the 177Lu- and 68Ga-labeled match, as well as 68Ga-PSMA-11 using PSMA-positive PC-3 PIP and PSMA-negative PC-3 flu prostate cancer cells. In these experiments it revealed similar in vitro properties to that of 177Lu- and 68Ga-labeled PSMA-617. Moreover, 44Sc-PSMA-617 bound specifically to PSMA-expressing PC-3 PIP tumor cells, while unspecific binding to PC-3 flu cells was not observed. The radioligands were investigated with regard to their in vivo properties in PC-3 PIP/flu tumor-bearing mice. 44Sc-PSMA-617 showed high tumor uptake and a fast renal excretion. The overall tissue distribution of 44Sc-PSMA-617 resembled that of 177Lu-PSMA-617 most closely, while the 68Ga-labeled ligands, in particular 68Ga-PSMA-11, showed different distribution kinetics. 44Sc-PSMA-617 enabled distinct visualization of PC-3 PIP tumor xenografts shortly after injection, with increasing tumor-to-background contrast over time while unspecific uptake in the PC-3 flu tumors was not observed.\n\n\nCONCLUSIONS\nThe in vitro characteristics and in vivo kinetics of 44Sc-PSMA-617 were more similar to 177Lu-PSMA-617 than to 68Ga-PSMA-617 and 68Ga-PSMA-11. Due to the almost four-fold longer half-life of 44Sc as compared to 68Ga, a centralized production of 44Sc-PSMA-617 and transport to satellite PET centers would be feasible. These features make 44Sc-PSMA-617 particularly appealing for clinical application.", "title": "" }, { "docid": "f873e55f76905f465e17778f25ba2a79", "text": "PURPOSE\nThe purpose of this study is to develop an automatic human movement classification system for the elderly using single waist-mounted tri-axial accelerometer.\n\n\nMETHODS\nReal-time movement classification algorithm was developed using a hierarchical binary tree, which can classify activities of daily living into four general states: (1) resting state such as sitting, lying, and standing; (2) locomotion state such as walking and running; (3) emergency state such as fall and (4) transition state such as sit to stand, stand to sit, stand to lie, lie to stand, sit to lie, and lie to sit. To evaluate the proposed algorithm, experiments were performed on five healthy young subjects with several activities, such as falls, walking, running, etc.\n\n\nRESULTS\nThe results of experiment showed that successful detection rate of the system for all activities were about 96%. To evaluate long-term monitoring, 3 h experiment in home environment was performed on one healthy subject and 98% of the movement was successfully classified.\n\n\nCONCLUSIONS\nThe results of experiment showed a possible use of this system which can monitor and classify the activities of daily living. 
For further improvement of the system, it is necessary to include more detailed classification algorithm to distinguish several daily activities.", "title": "" }, { "docid": "59b1cbd4f94c231c7d5a1f06672c3faf", "text": "Life stress is a major predictor of the course of bipolar disorder. Few studies have used laboratory paradigms to examine stress reactivity in bipolar disorder, and none have assessed autonomic reactivity to laboratory stressors. In the present investigation we sought to address this gap in the literature. Participants, 27 diagnosed with bipolar I disorder and 24 controls with no history of mood disorder, were asked to complete a complex working memory task presented as \"a test of general intelligence.\" Self-reported emotions were assessed at baseline and after participants were given task instructions; autonomic physiology was assessed at baseline and continuously during the stressor task. Compared to controls, individuals with bipolar disorder reported greater increases in pretask anxiety from baseline and showed greater cardiovascular threat reactivity during the task. Group differences in cardiovascular threat reactivity were significantly correlated with comorbid anxiety in the bipolar group. Our results suggest that a multimethod approach to assessing stress reactivity-including the use of physiological parameters that differentiate between maladaptive and adaptive profiles of stress responding-can yield valuable information regarding stress sensitivity and its associations with negative affectivity in bipolar disorder. (PsycINFO Database Record (c) 2015 APA, all rights reserved).", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "5837606de41a0ed39c093d8f65a9176c", "text": "Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. 
Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, \"Geminoid F\", a typical humanoid robot with less facial degrees of freedom, \"Robovie R2\", and a robot with a 3-axis rotatable neck and movable lips, \"Telenoid R2\"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.", "title": "" }, { "docid": "decd813dfea894afdceb55b3ca087487", "text": "BACKGROUND\nAddiction to smartphone usage is a common worldwide problem among adults, which might negatively affect their wellbeing. This study investigated the prevalence and factors associated with smartphone addiction and depression among a Middle Eastern population.\n\n\nMETHODS\nThis cross-sectional study was conducted in 2017 using a web-based questionnaire distributed via social media. Responses to the Smartphone Addiction Scale - Short version (10-items) were rated on a 6-point Likert scale, and their percentage mean score (PMS) was commuted. Responses to Beck's Depression Inventory (20-items) were summated (range 0-60); their mean score (MS) was commuted and categorized. Higher scores indicated higher levels of addiction and depression. Factors associated with these outcomes were identified using descriptive and regression analyses. Statistical significance was set at P < 0.05.\n\n\nRESULTS\nComplete questionnaires were 935/1120 (83.5%), of which 619 (66.2%) were females and 316 (33.8%) were males. The mean ± standard deviation of their age was 31.7 ± 11  years. Majority of participants obtained university education 766 (81.9%), while 169 (18.1%) had school education. The PMS of addiction was 50.2 ± 20.3, and MS of depression was 13.6 ± 10.0. A significant positive linear relationship was present between smart phone addiction and depression (y = 39.2 + 0.8×; P < 0.001). Significantly higher smartphone addiction scores were associated with younger age users, (β = - 0.203, adj. P = 0.004). Factors associated with higher depression scores were school educated users (β = - 2.03, adj. P = 0.01) compared to the university educated group and users with higher smart phone addiction scores (β =0.194, adj. P < 0.001).\n\n\nCONCLUSIONS\nThe positive correlation between smartphone addiction and depression is alarming. Reasonable usage of smart phones is advised, especially among younger adults and less educated users who could be at higher risk of depression.", "title": "" }, { "docid": "a61dd2408c467513b1f1d27c5de9a7ea", "text": "This paper presents a new class of wideband 90° hybrid coupler with an arbitrary coupling level. The physical size of the proposed coupler is close to that of a conventional two-section branch-line coupler, but it has an additional phase inverter. 
The impedance bandwidth of the proposed coupler is close to that of a four-section branch-line coupler. The proposed coupler is a backward-wave coupler with a port assignment different from that of a conventional branch-line coupler. The design formulas of the proposed coupler are proved based on its even- and odd-mode half structures. We demonstrated three couplers at the center frequency of 2 GHz with different design parameters.", "title": "" }, { "docid": "40fda9cba754c72f1fba17dd3a5759b2", "text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.", "title": "" }, { "docid": "7b220c4e424abd4c6a724c7d0b45c0f4", "text": "Text in video is a very compact and accurate clue for video indexing and summarization. Most video text detection and extraction methods hold assumptions on text color, background contrast, and font style. Moreover, few methods can handle multilingual text well since different languages may have quite different appearances. This paper performs a detailed analysis of multilingual text characteristics, including English and Chinese. Based on the analysis, we propose a comprehensive, efficient video text detection, localization, and extraction method, which emphasizes the multilingual capability over the whole processing. The proposed method is also robust to various background complexities and text appearances. The text detection is carried out by edge detection, local thresholding, and hysteresis edge recovery. The coarse-to-fine localization scheme is then performed to identify text regions accurately. The text extraction consists of adaptive thresholding, dam point labeling, and inward filling. Experimental results on a large number of video images and comparisons with other methods are reported in detail.", "title": "" }, { "docid": "baba2dc1de14cc70f88284d3e7d2c41b", "text": "Deep generative models have achieved remarkable success in various data domains, including images, time series, and natural languages. There remain, however, substantial challenges for combinatorial structures, including graphs. One of the key challenges lies in the difficulty of ensuring semantic validity in context. For example, in molecular graphs, the number of bonding-electron pairs must not exceed the valence of an atom; whereas in protein interaction networks, two proteins may be connected only when they belong to the same or correlated gene ontology terms. These constraints are not easy to be incorporated into a generative model. In this work, we propose a regularization framework for variational autoencoders as a step toward semantic validity. We focus on the matrix representation of graphs and formulate penalty terms that regularize the output distribution of the decoder to encourage the satisfaction of validity constraints. 
Experimental results confirm a much higher likelihood of sampling valid graphs in our approach, compared with others reported in the literature.", "title": "" }, { "docid": "0fbc38c8a8c4171785902382e8d43762", "text": "Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was setup to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge where the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms where significantly better than all other algorithms in the challenge (p<0.05) and had an efficient implementation with a run time of 8min and 3s per case respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/.", "title": "" }, { "docid": "d18d67949bae399cdc148f2ded81903a", "text": "Stock market news and investing tips are popular topics in Twitter. In this paper, first we utilize a 5-year financial news corpus comprising over 50,000 articles collected from the NASDAQ website for the 30 stock symbols in Dow Jones Index (DJI) to train a directional stock price prediction system based on news content. Then we proceed to prove that information in articles indicated by breaking Tweet volumes leads to a statistically significant boost in the hourly directional prediction accuracies for the prices of DJI stocks mentioned in these articles. 
Secondly, we show that using document-level sentiment extraction does not yield to a statistically significant boost in the directional predictive accuracies in the presence of other 1-gram keyword features.", "title": "" }, { "docid": "41b7e610e0aa638052f71af1902e92d5", "text": "This work investigates how social bots can phish employees of organizations, and thus endanger corporate network security. Current literature mostly focuses on traditional phishing methods (through e-mail, phone calls, and USB sticks). We address the serious organizational threats and security risks caused by phishing through online social media, specifically through Twitter. This paper first provides a review of current work. It then describes our experimental development, in which we created and deployed eight social bots on Twitter, each associated with one specific subject. For a period of four weeks, each bot published tweets about its subject and followed people with similar interests. In the final two weeks, our experiment showed that 437 unique users could have been phished, 33 of which visited our website through the network of an organization. Without revealing any sensitive or real data, the paper analyses some findings of this experiment and addresses further plans for research in this area.", "title": "" }, { "docid": "22b47cfd0170734f5f3e3fd2b5230bce", "text": "We present a synthesis method for communication protocols for active safety applications that satisfy certain formal specifications on quality of service requirements. The protocols are developed to provide reliable communication services for automobile active safety applications. The synthesis method transforms a specification into a distributed implementation of senders and receivers that together satisfy the quality of service requirements by transmitting messages over an unreliable medium. We develop a specification language and an execution model for the implementations, and demonstrate the viability of our method by developing a protocol for a traffic scenario in which a car runs a red light at a busy intersection.", "title": "" }, { "docid": "2ec14d4544d1fcc6591b6f31140af204", "text": "To better understand the molecular and cellular differences in brain organization between human and nonhuman primates, we performed transcriptome sequencing of 16 regions of adult human, chimpanzee, and macaque brains. Integration with human single-cell transcriptomic data revealed global, regional, and cell-type–specific species expression differences in genes representing distinct functional categories. We validated and further characterized the human specificity of genes enriched in distinct cell types through histological and functional analyses, including rare subpallial-derived interneurons expressing dopamine biosynthesis genes enriched in the human striatum and absent in the nonhuman African ape neocortex. Our integrated analysis of the generated data revealed diverse molecular and cellular features of the phylogenetic reorganization of the human brain across multiple levels, with relevance for brain function and disease.", "title": "" }, { "docid": "6e9edeffb12cf8e50223a933885bcb7c", "text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. 
During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.", "title": "" }, { "docid": "0f7a4ddeb2627b8815175aea809a1ca3", "text": "A deep community in a graph is a connected component that can only be seen after removal of nodes or edges from the rest of the graph. This paper formulates the problem of detecting deep communities as multi-stage node removal that maximizes a new centrality measure, called the local Fiedler vector centrality (LFVC), at each stage. The LFVC is associated with the sensitivity of algebraic connectivity to node or edge removals. We prove that a greedy node/edge removal strategy, based on successive maximization of LFVC, has bounded performance loss relative to the optimal, but intractable, combinatorial batch removal strategy. Under a stochastic block model framework, we show that the greedy LFVC strategy can extract deep communities with probability one as the number of observations becomes large. We apply the greedy LFVC strategy to real-world social network datasets. Compared with conventional community detection methods we demonstrate improved ability to identify important communities and key members in the network.", "title": "" } ]
scidocsrr
171473280b389a1bc36a5ecbbeebe02e
SIFT Hardware Implementation for Real-Time Image Feature Extraction
[ { "docid": "c797b2a78ea6eb434159fd948c0a1bf0", "text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.", "title": "" }, { "docid": "744519470178d9e53f8e4a06a4c4fdb3", "text": "Detecting and matching image features is a fundamental task in video analytics and computer vision systems. It establishes the correspondences between two images taken at different time instants or from different viewpoints. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a new FPGA-based embedded system architecture for feature detection and matching. It consists of scale-invariant feature transform (SIFT) feature detection, as well as binary robust independent elementary features (BRIEF) feature description and matching. It is able to establish accurate correspondences between consecutive frames for 720-p (1280x720) video. It optimizes the FPGA architecture for the SIFT feature detection to reduce the utilization of FPGA resources. Moreover, it implements the BRIEF feature description and matching on FPGA. Due to these contributions, the proposed system achieves feature detection and matching at 60 frame/s for 720-p video. Its processing speed can meet and even exceed the demand of most real-life real-time video analytics applications. Extensive experiments have demonstrated its efficiency and effectiveness.", "title": "" }, { "docid": "90378605e6ee192cfedf60d226f8cacf", "text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. 
The generation of these feature vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale, and changes in the lighting conditions of the processed images. With the presented methods, feature detection and matching can be performed at frame rates exceeding 100 frames per second for 640×480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.", "title": "" } ]
[ { "docid": "c35306b0ec722364308d332664c823f8", "text": "The uniform asymmetrical microstrip parallel coupled line is used to design the multi-section unequal Wilkinson power divider with high dividing ratio. The main objective of the paper is to increase the trace widths in order to facilitate the construction of the power divider with the conventional photolithography method. The separated microstrip lines in the conventional Wilkinson power divider are replaced with the uniform asymmetrical parallel coupled lines. An even-odd mode analysis is used to calculate characteristic impedances and then the per-unit-length capacitance and inductance parameter matrix are used to calculate the physical dimension of the power divider. To clarify the advantages of this method, two three-section Wilkinson power divider with an unequal power-division ratio of 1 : 2.5 are designed and fabricated and measured, one in the proposed configuration and the other in the conventional configuration. The simulation and the measurement results show that not only the specified design goals are achieved, but also all the microstrip traces can be easily implemented in the proposed power divider.", "title": "" }, { "docid": "ff3867a1c0ee1d3f1e61cb306af37bb1", "text": "Introduction: The mucocele is one of the most common benign soft tissue masses that occur in the oral cavity. Mucoceles (mucus and coele cavity), by definition, are cavities filled with mucus. Two types of mucoceles can appear – extravasation type and retention type. Diagnosis is mostly based on clinical findings. The common location of the extravasation mucocele is the lower lip and the treatment of choice is surgical removal. This paper gives an insight into the phenomenon and a case report has been presented. Case report: Twenty five year old femalepatient reported with chief complaint of small swelling on the left side of the lower lip since 2 months. The swelling was diagnosed as extravasation mucocele after history and clinical examination. The treatment involved surgical excision of tissue and regular follow up was done to check for recurrence. Conclusion: The treatment of lesion such as mucocele must be planned taking into consideration the various clinical parameters and any oral habits as these lesions have a propensity of recurrence.", "title": "" }, { "docid": "6f674570fce0c7070b3b1df83ce9da6a", "text": "Monitoring of the network performance in highspeed Internet infrastructure is a challenging task, as the requirements for the given quality level are service-dependent. Backbone QoS monitoring and analysis in Multi-hop Networks requires therefore knowledge about types of applications forming current network traffic. To overcome the drawbacks of existing methods for traffic classification, usage of C5.0 Machine Learning Algorithm (MLA) was proposed. On the basis of statistical traffic information received from volunteers and C5.0 algorithm we constructed a boosted classifier, which was shown to have ability to distinguish between 7 different applications in test set of 76,632-1,622,710 unknown cases with average accuracy of 99.3-99.9%. This high accuracy was achieved by using high quality training data collected by our system, a unique set of parameters used for both training and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. 
We performed subsequent tries using different sets of parameters and both training and classification options. This paper shows how we collected accurate traffic data, presents arguments used in classification process, introduces the C5.0 classifier and its options, and finally evaluates and compares the obtained results.", "title": "" }, { "docid": "a717222db438adc4be0fd82f916bacdc", "text": "This paper presents MalwareVis, a utility that provides security researchers a method to browse, filter, view and compare malware network traces as entities.\n Specifically, we propose a cell-like visualization model to view the network traces of a malware sample's execution. This model is a intuitive representation of the heterogeneous attributes (protocol, host ip, transmission size, packet number, duration) of a list of network streams associated with a malware instance. We encode these features into colors and basic geometric properties of common shapes. The list of streams is organized circularly in a clock-wise fashion to form an entity. Our design takes into account of the sparse and skew nature of these attributes' distributions and proposes mapping and layout strategies to allow a clear global view of a malware sample's behaviors.\n We demonstrate MalwareVis on a real-world corpus of malware samples and display their individual activity patterns. We show that it is a simple to use utility that provides intriguing visual representations that facilitate user interaction to perform security analysis.", "title": "" }, { "docid": "8fd90f5904e6bd9738840bdaf8014372", "text": "We present analytical formulations, based on a coulombian approach, of the magnetic field created by permanent-magnet rings. For axially magnetized magnets, we establish the expressions for the three components. We also give the analytical 3-D formulation of the created magnetic field for radially magnetized rings. We compare the results determined by a 2-D analytical approximation to those for the 3-D analytical formulation, in order to determine the range of validity of the 2-D approximation.", "title": "" }, { "docid": "13a23fe61319bc82b8b3e88ea895218c", "text": "A new generation of robots is being designed for human occupied workspaces where safety is of great concern. This research demonstrates the use of a capacitive skin sensor for collision detection. Tests demonstrate that the sensor reduces impact forces and can detect and characterize collision events, providing information that may be used in the future for force reduction behaviors. Various parameters that affect collision severity, including interface friction, interface stiffness, end tip velocity and joint stiffness irrespective of controller bandwidth are also explored using the sensor to provide information about the contact force at the site of impact. Joint stiffness is made independent of controller bandwidth limitations using passive torsional springs of various stiffnesses. Results indicate a positive correlation between peak impact force and joint stiffness, skin friction and interface stiffness, with implications for future skin and robot link designs and post-collision behaviors.", "title": "" }, { "docid": "8741e414199ecfbbf4a4c16d8a303ab5", "text": "In ophthalmic artery occlusion by hyaluronic acid injection, the globe may get worse by direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential for restoration of retinal perfusion and minimizing the risk of phthisis bulbi. 
The study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was curvedly incised to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100 % and 90 % of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform compared with the supraorbital approach due to a trend toward a short procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortious course, easily retrieved and cannulated, with a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route implication for hyaluronidase injection into the ophthalmic artery to salvage central retinal occlusion following hyaluronic acid injection. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .", "title": "" }, { "docid": "b5c27fa3dbcd917f7cdc815965b22a67", "text": "Our aim is to provide a pixel-wise instance-level labeling of a monocular image in the context of autonomous driving. We build on recent work [32] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [32] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [15]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [32].", "title": "" }, { "docid": "84f6bc32035aab1e490d350c687df342", "text": "Popularity bias is a phenomenon associated with collaborative filtering algorithms, in which popular items tend to be recommended over unpopular items. As the appropriate level of item popularity differs depending on individual users, a user-level modification approach can produce diverse recommendations while improving the recommendation accuracy. However, there are two issues with conventional user-level approaches. 
First, these approaches do not isolate users’ preferences from their tendencies toward item popularity clearly. Second, they do not consider temporal item popularity, although item popularity changes dynamically over time in reality. In this paper, we propose a novel approach to counteract the popularity bias, namely, matrix factorization based collaborative filtering incorporating individual users’ tendencies toward item popularity. Our model clearly isolates users’ preferences from their tendencies toward popularity. In addition, we consider the temporal item popularity and incorporate it into our model. Experimental results using a real-world dataset show that our model improve both accuracy and diversity compared with a baseline algorithm in both static and time-varying models. Moreover, our model outperforms conventional approaches in terms of accuracy with the same diversity level. Furthermore, we show that our proposed model recommends items by capturing users’ tendencies toward item popularity: it recommends popular items for the user who likes popular items, while recommending unpopular items for those who don’t like popular items.", "title": "" }, { "docid": "f43ae2f0002343deeb0987d19e6a425e", "text": "Recent state-of-the-art approaches automatically generate regular expressions from natural language specifications. Given that these approaches use only synthetic data in both training datasets and validation/test datasets, a natural question arises: are these approaches effective to address various real-world situations? To explore this question, in this paper, we conduct a characteristic study on comparing two synthetic datasets used by the recent research and a real-world dataset collected from the Internet, and conduct an experimental study on applying a state-of-the-art approach on the real-world dataset. Our study results suggest the existence of distinct characteristics between the synthetic datasets and the real-world dataset, and the state-of-the-art approach (based on a model trained from a synthetic dataset) achieves extremely low effectiveness when evaluated on real-world data, much lower than the effectiveness when evaluated on the synthetic dataset. We also provide initial analysis on some of those challenging cases and discuss future directions.", "title": "" }, { "docid": "9c2c74da1e0f5ea601e50f257015c5b3", "text": "We present a new lock-based algorithm for concurrent manipulation of a binary search tree in an asynchronous shared memory system that supports search, insert and delete operations. Some of the desirable characteristics of our algorithm are: (i) a search operation uses only read and write instructions, (ii) an insert operation does not acquire any locks, and (iii) a delete operation only needs to lock up to four edges in the absence of contention. Our algorithm is based on an internal representation of a search tree and it operates at edge-level (locks edges) rather than at node-level (locks nodes); this minimizes the contention window of a write operation and improves the system throughput. 
Our experiments indicate that our lock-based algorithm outperforms existing algorithms for a concurrent binary search tree for medium-sized and larger trees, achieving up to 59% higher throughput than the next best algorithm.", "title": "" }, { "docid": "7dd3c935b6a5a38284b36ddc1dc1d368", "text": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology: This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "c2e92f8289ebf50ca363840133dc2a43", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.08.042 ⇑ Address: WOLNM & ESIME Zacatenco, Instituto Politécnico Nacional, U. Profesional Adolfo López Mateos, Edificio Z-4, 2do piso, cubiculo 6, Miguel Othón de Mendizábal S/N, La Escalera, Gustavo A. Madero, D.F., C.P. 07320, Mexico. Tel.: +52 55 5694 0916/+52 55 5454 2611 (cellular); fax: +52 55 5694 0916. E-mail address: [email protected] URL: http://www.wolnm.org/apa 1 AIWBES: adaptive and intelligent web-based educational systems; BKT: Bayesian knowledge tracing; CBES: computer-based educational systems; CBIS: computerbased information system,; DM: data mining; DP: dynamic programming; EDM: educational data mining; EM: expectation maximization; HMM: hidden Markov model; IBL: instances-based learning; IRT: item response theory; ITS: intelligent tutoring systems; KDD: knowledge discovery in databases; KT: knowledge tracing; LMS: learning management systems; SNA: social network analysis; SWOT: strengths, weakness, opportunities, and threats; WBC: web-based courses; WBES: web-based educational systems. Alejandro Peña-Ayala ⇑", "title": "" }, { "docid": "333b15d94a2108929a8f6c18ef460ff4", "text": "Inferring the latent emotive content of a narrative requires consideration of para-linguistic cues (e.g. pitch), linguistic content (e.g. vocabulary) and the physiological state of the narrator (e.g. heart-rate). In this study we utilized a combination of auditory, text, and physiological signals to predict the mood (happy or sad) of 31 narrations from subjects engaged in personal story-telling. We extracted 386 audio and 222 physiological features (using the Samsung Simband) from the data. A subset of 4 audio, 1 text, and 5 physiologic features were identified using Sequential Forward Selection (SFS) for inclusion in a Neural Network (NN). These features included subject movement, cardiovascular activity, energy in speech, probability of voicing, and linguistic sentiment (i.e. negative or positive). We explored the effects of introducing our selected features at various layers of the NN and found that the location of these features in the network topology had a significant impact on model performance. 
To ensure the real-time utility of the model, classification was performed over 5 second intervals. We evaluated our model’s performance using leave-one-subject-out crossvalidation and compared the performance to 20 baseline models and a NN with all features included in the input layer.", "title": "" }, { "docid": "b4d85eae82415b0a8dcd5e9f6eadbc6f", "text": "We compared the effects of children’s reading of an educational electronic storybook on their emergent literacy with those of being read the same story in its printed version by an adult. We investigated 128 5to 6-year-old kindergarteners; 64 children from each of two socio-economic status (SES) groups: low (LSES) and middle (MSES). In each group, children were randomly assigned to one of three subgroups. The two intervention groups included three book reading sessions each; children in one group individually read the electronic book; in the second group, the children were read the same printed book by an adult; children in the third group, which served as a control, received the regular kindergarten programme. Preand post-intervention emergent literacy measures included vocabulary, word recognition and phonological awareness. Compared with the control group, the children’s vocabulary scores in both intervention groups improved following reading activity. Children from both interventions groups and both SES groups showed a similarly good level of story comprehension. In both SES groups, compared with the control group, children’s phonological awareness and word recognition did not improve following both reading interventions. Implications for future research and for education are discussed.", "title": "" }, { "docid": "6773b060fd16b6630f581eb65c5c6488", "text": "Proximity detection is one of the most common location-based applications in daily life when users intent to find their friends who get into their proximity. Studies on protecting user privacy information during the detection process have been widely concerned. In this paper, we first analyze a theoretical and experimental analysis of existing solutions for proximity detection, and then demonstrate that these solutions either provide a weak privacy preserving or result in a high communication and computational complexity. Accordingly, a location difference-based proximity detection protocol is proposed based on the Paillier cryptosystem for the purpose of dealing with the above shortcomings. The analysis results through an extensive simulation illustrate that our protocol outperforms traditional protocols in terms of communication and computation cost.", "title": "" }, { "docid": "6d3e19c44f7af5023ef991b722b078c5", "text": "Volatile substances are commonly misused with easy-to-obtain commercial products, such as glue, shoe polish, nail polish remover, butane lighter fluid, gasoline and computer duster spray. This report describes a case of sudden death of a 29-year-old woman after presumably inhaling gas cartridge butane from a plastic bag. Autopsy, pathological and toxicological analyses were performed in order to determine the cause of death. Pulmonary edema was observed pathologically, and the toxicological study revealed 2.1μL/mL of butane from the blood. The causes of death from inhalation of volatile substances have been explained by four mechanisms; cardiac arrhythmia, anoxia, respiratory depression, and vagal inhibition. In this case, the cause of death was determined to be asphyxia from anoxia. 
Additionally, we have gathered fatal butane inhalation cases with quantitative analyses of butane concentrations, and reviewed other reports describing volatile substance abuse worldwide.", "title": "" }, { "docid": "70e2716835f789398e6d7a50aed9df46", "text": "Human spatial behavior and experience cannot be investigated independently from the shape and configuration of environments. Therefore, comparative studies in architectural psychology and spatial cognition would clearly benefit from operationalizations of space that provide a common denominator for capturing its behavioral and psychologically relevant properties. This paper presents theoretical and methodological issues arising from the practical application of isovist-based graphs for the analysis of architectural spaces. Based on recent studies exploring the influence of spatial form and structure on behavior and experience in virtual environments, the following topics are discussed: (1) the derivation and empirical verification of meaningful descriptor variables on the basis of classic qualitative theories of environmental psychology relating behavior and experience to spatial properties; (2) methods to select reference points for the analysis of architectural spaces at a local level; furthermore, based on two experiments exploring the phenomenal conception of the spatial structure of architectural environments, formalized strategies for (3) the selection of reference points at a global level, and for (4), their integration into a sparse yet plausible comprehensive graph structure, are proposed. Taken together, a well formalized and psychologically oriented methodology for the efficient description of spatial properties of environments at the architectural scale level is outlined. This method appears useful for a wide range of applications, ranging from abstract architectural analysis over behavioral experiments to studies on mental representations in cognitive science. doi:10.1068/b33050 }Formerly also associated to Cognitive Neuroscience, Department of Zoology, University of Tu« bingen. Currently at the Centre for Cognitive Science, University of Freiburg, Friedrichstrasse 50, 79098 Freiburg, Germany. because, in reality, various potentially relevant factors coexist. In order to obtain better predictions under such complex conditions, either a comprehensive model or at least additional knowledge on the relative weights of individual factors and their potential interactions is required. As an intermediate step towards such more comprehensive approaches, existing theories have to be formulated qualitatively and translated to a common denominator. In this paper an integrative framework for describing the shape and structure of environments is outlined that allows for a quantitative formulation and test of theories on behavioral and emotional responses to environments. It is based on the two basic elements isovist and place graph. This combination appears particularly promising, since its sparseness allows an efficient representation of both geometrical and topological properties at a wide range of scales, and at the same time it seems capable and flexible enough to retain a substantial share of psychologically and behaviorally relevant detail features. Both the isovist and the place graph are established analysis techniques within their scientific communities of space syntax and spatial cognition respectively. 
Previous combinations of graphs and isovists (eg Batty, 2001; Benedikt, 1979; Turner et al, 2001) were based on purely formal criteria, whereas many placegraph applications made use of their inherent flexibility but suffered from a lack of formalization (cf Franz et al, 2005a). The methodology outlined in this paper seeks to combine both approaches by defining well-formalized rules for flexible graphs based on empirical findings on the human conception of the spatial structure. In sections 3 and 4, methodological issues of describing local properties on the basis of isovists are discussed. This will be done on the basis of recent empirical studies that tested the behavioral relevance of a selection of isovist measurands. The main issues are (a) the derivation of meaningful isovist measurands, based on classic qualitative theories from environmental psychology, and (b) strategies to select reference points for isovist analysis in environments consisting of few subspaces. Sections 5 and 6 then discuss issues arising when using an isovist-based description system for operationalizing larger environments consisting of multiple spaces: (c) on the basis of an empirical study in which humans identified subspaces by marking their centers, psychologically plausible selection criteria for sets of reference points are proposed and formalized; (d) a strategy to derive a topological graph on the basis of the previously identified elements is outlined. Taken together, a viable methodology is proposed which describes spatial properties of environments efficiently and comprehensively in a psychologically and behaviorally plausible manner.", "title": "" }, { "docid": "0b407f1f4d771a34e6d0bc59bf2ef4c4", "text": "Social advertisement is one of the fastest growing sectors in the digital advertisement landscape: ads in the form of promoted posts are shown in the feed of users of a social networking platform, along with normal social posts; if a user clicks on a promoted post, the host (social network owner) is paid a fixed amount from the advertiser. In this context, allocating ads to users is typically performed by maximizing click-through-rate, i.e., the likelihood that the user will click on the ad. However, this simple strategy fails to leverage the fact the ads can propagate virally through the network, from endorsing users to their followers. In this paper, we study the problem of allocating ads to users through the viral-marketing lens. Advertisers approach the host with a budget in return for the marketing campaign service provided by the host. We show that allocation that takes into account the propensity of ads for viral propagation can achieve significantly better performance. However, uncontrolled virality could be undesirable for the host as it creates room for exploitation by the advertisers: hoping to tap uncontrolled virality, an advertiser might declare a lower budget for its marketing campaign, aiming at the same large outcome with a smaller cost. This creates a challenging trade-off: on the one hand, the host aims at leveraging virality and the network effect to improve advertising efficacy, while on the other hand the host wants to avoid giving away free service due to uncontrolled virality. We formalize this as the problem of ad allocation with minimum regret, which we show is NP-hard and inapproximable w.r.t. any factor. However, we devise an algorithm that provides approximation guarantees w.r.t. the total budget of all advertisers. 
We develop a scalable version of our approximation algorithm, which we extensively test on four real-world data sets, confirming that our algorithm delivers high quality solutions, is scalable, and significantly outperforms several natural baselines.", "title": "" }, { "docid": "4ade01af5fd850722fd690a5d8f938f4", "text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.", "title": "" } ]
scidocsrr
4782c26ce3008bd97d53b89ff607dfe5
Longitudinal Analysis of Android Ad Library Permissions
[ { "docid": "103d6713dd613bfe5a768c60d349bb4a", "text": "Mobile phones and tablets can be considered as the first incarnation of the post-PC era. Their explosive adoption rate has been driven by a number of factors, with the most signifcant influence being applications (apps) and app markets. Individuals and organizations are able to develop and publish apps, and the most popular form of monetization is mobile advertising.\n The mobile advertisement (ad) ecosystem has been the target of prior research, but these works typically focused on a small set of apps or are from a user privacy perspective. In this work we make use of a unique, anonymized data set corresponding to one day of traffic for a major European mobile carrier with more than 3 million subscribers. We further take a principled approach to characterize mobile ad traffic along a number of dimensions, such as overall traffic, frequency, as well as possible implications in terms of energy on a mobile device.\n Our analysis demonstrates a number of inefficiencies in today's ad delivery. We discuss the benefits of well-known techniques, such as pre-fetching and caching, to limit the energy and network signalling overhead caused by current systems. A prototype implementation on Android devices demonstrates an improvement of 50 % in terms of energy consumption for offline ad-sponsored apps while limiting the amount of ad related traffic.", "title": "" } ]
[ { "docid": "d506267b7b3eed0227d7c5e14b095223", "text": "Analytic tools are beginning to be largely employed, given their ability to rank, e.g., the visibility of social media users. Visibility that, in turns, can have a monetary value, since social media popular people usually either anticipate or establish trends that could impact the real world (at least, from a consumer point of view). The above rationale has fostered the flourishing of private companies providing statistical results for social media analysis. These results have been accepted, and largely diffused, by media without any apparent scrutiny, while Academia has moderately focused its attention on this phenomenon. In this paper, we provide evidence that analytic results provided by field-flagship companies are questionable (at least). In particular, we focus on Twitter and its \"fake followers\". We survey popular Twitter analytics that count the fake followers of some target account. We perform a series of experiments aimed at verifying the trustworthiness of their results. We compare the results of such tools with a machine-learning classifier whose methodology bases on scientific basis and on a sound sampling scheme. The findings of this work call for a serious re-thinking of the methodology currently used by companies providing analytic results, whose present deliveries seem to lack on any reliability.", "title": "" }, { "docid": "ee08d4723ebf030bb79c3c1a18d27ee3", "text": "In this work we present a new method for the modeling and simulation study of a photovoltaic grid connected system and its experimental validation. This method has been applied in the simulation of a grid connected PV system with a rated power of 3.2 Kwp, composed by a photovoltaic generator and a single phase grid connected inverter. First, a PV module, forming part of the whole PV array is modeled by a single diode lumped circuit and main parameters of the PV module are evaluated. Results obtained for the PV module characteristics have been validated experimentally by carrying out outdoor I–V characteristic measurements. To take into account the power conversion efficiency, the measured AC output power against DC input power is fitted to a second order efficiency model to derive its specific parameters. The simulation results have been performed through Matlab/Simulink environment. Results has shown good agreement with experimental data, whether for the I–V characteristics or for the whole operating system. The significant error indicators are reported in order to show the effectiveness of the simulation model to predict energy generation for such PV system. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1f2430896a3859fb752c99cbe8037acf", "text": "Diode junction photovoltaic (PV) generators exhibit nonlinear V-I characteristics and the maximum power extractable varies with the intensity of solar radiation, temperature and load conditions. A maximum power point tracking (MPPT) controller is therefore usually employed in PV-generator applications to automatically extract maximum power irrespective of the instantaneous conditions of the PV system. This paper presents a fuzzy logic control (FLC) scheme for extracting the maximum power from a stand-alone PV generator for use in a water pumping system. The PV-generator system comprises a solar panel, DC-DC buck chopper, fuzzy MPP tracker and permanent DC-motor driving a centrifugal pump. 
The fuzzy controller generates a control signal for the pulse-width-modulation generator which in turn adjusts the duty ratio of the buck chopper to match the load impedance to the PV generator, and consequently maximizes the motor speed and the water discharge rate of a coupled centrifugal pump. The control method has been modelled in Matlab/Simulink and simulation results are presented to confirm its significantly improved power extraction performance under different sunlight conditions, when compared with a directly-connected PV-generator energized pumping system operating.", "title": "" }, { "docid": "6610f89ba1776501d6c0d789703deb4e", "text": "REVIEW QUESTION/OBJECTIVE\nThe objective of this review is to identify the effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospitalized patient care settings.\n\n\nBACKGROUND\nNursing professionals face extraordinary stressors in the medical environment. Many of these stressors have always been inherent to the profession: long work hours, dealing with pain, loss and emotional suffering, caring for dying patients and providing support to families. Recently nurses have been experiencing increased stress related to other factors such as staffing shortages, increasingly complex patients, corporate financial constraints and the increased need for knowledge of ever-changing technology. Stress affects high-level cognitive functions, specifically attention and memory, and this increases the already high stakes for nurses. Nurses are required to cope with very difficult situations that require accurate, timely decisions that affect human lives on a daily basis.Lapses in attention increase the risk of serious consequences such as medication errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Research has also shown that the stress inherent to health care occupations can lead to depression, reduced job satisfaction, psychological distress and disruptions to personal relationships. These outcomes of stress are factors that create scenarios for risk of patient harm.There are three main effects of stress on nurses: burnout, depression and lateral violence. Burnout has been defined as a syndrome of depersonalization, emotional exhaustion, and a sense of low personal accomplishment, and the occurrence of burnout has been closely linked to perceived stress. Shimizu, Mizoue, Mishima and Nagata state that nurses experience considerable job stress which has been a major factor in the high rates of burnout that has been recorded among nurses. Zangaro and Soeken share this opinion and state that work related stress is largely contributing to the current nursing shortage. They report that work stress leads to a much higher turnover, especially during the first year after graduation, lowering retention rates in general.In a study conducted in Pennsylvania, researchers found that while 43% of the nurses who reported high levels of burnout indicated their intent to leave their current position, only 11% of nurses who were not burned out intended to leave in the following 12 months. In the same study patient-to-nurse ratios were significantly associated with emotional exhaustion and burnout. An increase of one patient per nurse assignment to a hospital's staffing level increased burnout by 23%.Depression can be defined as a mood disorder that causes a persistent feeling of sadness and loss of interest. 
Wang found that high levels of work stress were associated with higher risk of mood and anxiety disorders. In Canada one out of every 10 nurses have shown depressive symptoms; compared to the average of 5.1% of the nurses' counterparts who do not work in healthcare. High incidences of depression and depressive symptoms were also reported in studies among Chinese nurses (38%) and Taiwanese nurses (27.7%). In the Taiwanese study the occurrence of depression was significantly and positively correlated to job stress experienced by the nurses (p<0.001).In a multivariate logistic regression, Ohler, Kerr and Forbes also found that job stress was significantly correlated to depression in nurses. The researchers reported that nurses who experienced a higher degree of job stress were 80% more likely to have suffered a major depressive episode in the previous year. A further finding in this study revealed that 75% of the participants also suffered from at least one chronic disease revealing a strong association between depression and other major health issues.A stressful working environment, such as a hospital, could potentially lead to lateral violence among nurses. Lateral violence is a serious occupational health concern among nurses as evidenced by extensive research and literature available on the topic. The impact of lateral violence has been well studied and documented over the past three decades. Griffin and Clark state that lateral violence is a form of bullying grounded in the theoretical framework of the oppression theory. The bullying behaviors occur among members of an oppressed group as a result of feeling powerless and having a perceived lack of control in their workplace. Griffin identified the ten most common forms of lateral violence among nurses as \"non-verbal innuendo, verbal affront, undermining activities, withholding information, sabotage, infighting, scape-goating, backstabbing, failure to respect privacy, and broken confidences\". Nurse-to-nurse lateral violence leads to negative workplace relationships and disrupts team performance, creating an environment where poor patient outcomes, burnout and high staff turnover rates are prevalent.Work-related stressors have been indicated as a potential cause of lateral violence. According to the Effort Reward Imbalance model (ERI) developed by Siegrist, work stress develops when an imbalance exists between the effort individuals put into their jobs and the rewards they receive in return. The ERI model has been widely used in occupational health settings based on its predictive power for adverse health and well-being outcomes. The model claims that both high efforts with low rewards could lead to negative emotions in the exposed employees. Vegchel, van Jonge, de Bosma & Schaufeli state that, according to the ERI model, occupational rewards mostly consist of money, esteem and job security or career opportunities. A survey conducted by Reineck & Furino indicated that registered nurses had a very high regard for the intrinsic rewards of their profession but that they identified workplace relationships and stress issues as some of the most important contributors to their frustration and exhaustion. Hauge, Skogstad & Einarsen state that work-related stress further increases the potential for lateral violence as it creates a negative environment for both the target and the perpetrator.Mindfulness based programs have proven to be a promising intervention in reducing stress experienced by nurses. 
Mindfulness was originally defined by Jon Kabat-Zinn in 1979 as \"paying attention on purpose, in the present moment, and nonjudgmentally, to the unfolding of experience moment to moment\". The Mindfulness Based Stress Reduction (MBSR) program is an educationally based program that focuses on training in the contemplative practice of mindfulness. It is an eight-week program where participants meet weekly for two-and-a-half hours and join a one-day long retreat for six hours. The program incorporates a combination of mindfulness meditation, body awareness and yoga to help increase mindfulness in participants. The practice is meant to facilitate relaxation in the body and calming of the mind by focusing on present-moment awareness. The program has proven to be effective in reducing stress, improving quality of life and increasing self-compassion in healthcare professionals.Researchers have demonstrated that mindfulness interventions can effectively reduce stress, anxiety and depression in both clinical and non-clinical populations. In a meta-analysis of seven studies conducted with healthy participants from the general public, the reviewers reported a significant reduction in stress when the treatment and control groups were compared. However, there have been limited studies to date that focused specifically on the effectiveness of mindfulness programs to reduce stress experienced by nurses.In addition to stress reduction, mindfulness based interventions can also enhance nurses' capacity for focused attention and concentration by increasing present moment awareness. Mindfulness techniques can be applied in everyday situations as well as stressful situations. According to Kabat-Zinn, work-related stress influences people differently based on their viewpoint and their interpretation of the situation. He states that individuals need to be able to see the whole picture, have perspective on the connectivity of all things and not operate on automatic pilot to effectively cope with stress. The goal of mindfulness meditation is to empower individuals to respond to situations consciously rather than automatically.Prior to the commencement of this systematic review, the Cochrane Library and JBI Database of Systematic Reviews and Implementation Reports were searched. No previous systematic reviews on the topic of reducing stress experienced by nurses through mindfulness programs were identified. Hence, the objective of this systematic review is to evaluate the best research evidence available pertaining to mindfulness-based programs and their effectiveness in reducing perceived stress among nurses.", "title": "" }, { "docid": "ef1d28df2575c2c844ca2fa109893d92", "text": "Measurement of the quantum-mechanical phase in quantum matter provides the most direct manifestation of the underlying abstract physics. We used resonant x-ray scattering to probe the relative phases of constituent atomic orbitals in an electronic wave function, which uncovers the unconventional Mott insulating state induced by relativistic spin-orbit coupling in the layered 5d transition metal oxide Sr2IrO4. A selection rule based on intra-atomic interference effects establishes a complex spin-orbital state represented by an effective total angular momentum = 1/2 quantum number, the phase of which can lead to a quantum topological state of matter.", "title": "" }, { "docid": "4480840e6dbab77e4f032268ea69bff1", "text": "This chapter provides a critical survey of emergence definitions both from a conceptual and formal standpoint. 
The notions of downward / backward causation and weak / strong emergence are specifically discussed, with application to complex social systems with cognitive agents. Particular attention is devoted to the formal definitions introduced by (Müller 2004) and (Bonabeau & Dessalles, 1997), which are operative in multi-agent frameworks and make sense from both cognitive and social points of view. A diagrammatic 4-Quadrant approach allows us to understand complex phenomena along both the interior/exterior and the individual/collective dimensions.", "title": "" }, { "docid": "230d380cbe134f01f3711309d8cc8e35", "text": "For privacy concerns to be addressed adequately in today’s machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data.", "title": "" }, { "docid": "ff9d798b270af1971e8c5431bf9a9812", "text": "Observing actions and understanding sentences about actions activates corresponding motor processes in the observer-comprehender. In 5 experiments, the authors addressed 2 novel questions regarding language-based motor resonance. The 1st question asks whether visual motion that is associated with an action produces motor resonance in sentence comprehension. The 2nd question asks whether motor resonance is modulated during sentence comprehension. The authors' experiments provide an affirmative response to both questions. A rotating visual stimulus affects both actual manual rotation and the comprehension of manual rotation sentences. Motor resonance is modulated by the linguistic input and is a rather immediate and localized phenomenon. The results are discussed in the context of theories of action observation and mental simulation.", "title": "" }, { "docid": "226f84ed038a4509d9f3931d7df8b977", "text": "Physically Asynchronous/Logically Synchronous (PALS) is an architecture pattern that allows developers to design and verify a system as though all nodes executed synchronously. The correctness of the PALS protocol was formally verified. However, the implementation of PALS adds additional code that is otherwise not needed. In our case, we have a middleware (PALSWare) that supports PALS systems. In this paper, we introduce a verification framework that shows how we can apply Software Model Checking (SMC) to verify a PALS system at the source code level. SMC is an automated and exhaustive source code checking technology. Compared to verifying (hardware or software) models, verifying the actual source code is more useful because it minimizes any chance of false interpretation and eliminates the possibility of missing software bugs that were absent in the model but introduced during implementation. In other words, SMC reduces the semantic gap between what is verified and what is executed. Our approach is compositional, i.e., the verification of PALSWare is done separately from applications. Since PALSWare is inherently concurrent, to verify it via SMC we must overcome the statespace explosion problem, which arises from concurrency and asynchrony. To this end, we develop novel simplification abstractions, prove their soundness, and then use these abstractions to reduce the verification of a system with many threads to verifying a system with a relatively small number of threads. 
When verifying an application, we leverage the (already verified) synchronicity guarantees provided by the PALSWare to reduce the verification complexity significantly. Thus, our approach uses both “abstraction” and “composition”, the two main techniques to reduce statespace explosion. This separation between verification of PALSWare and applications also provides better management against upgrades to either. We validate our approach by verifying the current PALSWare implementation, and several PALSWare-based distributed real-time applications.", "title": "" }, { "docid": "5fe472c30e1dad99628511e03a707aac", "text": "An automatic program that generates constant profit from the financial market is lucrative for every market practitioner. Recent advances in deep reinforcement learning provide a framework toward end-to-end training of such a trading agent. In this paper, we propose a Markov Decision Process (MDP) model suitable for the financial trading task and solve it with the state-of-the-art deep recurrent Q-network (DRQN) algorithm. We propose several modifications to the existing learning algorithm to make it more suitable under the financial trading setting, namely 1. We employ a substantially small replay memory (only a few hundreds in size) compared to ones used in modern deep reinforcement learning algorithms (often millions in size). 2. We develop an action augmentation technique to mitigate the need for random exploration by providing extra feedback signals for all actions to the agent. This enables us to use a greedy policy over the course of learning and shows strong empirical performance compared to more commonly used epsilon-greedy exploration. However, this technique is specific to financial trading under a few market assumptions. 3. We sample a longer sequence for recurrent neural network training. A side product of this mechanism is that we can now train the agent every T steps. This greatly reduces training time since the overall computation is down by a factor of T. We combine all of the above into a complete online learning algorithm and validate our approach on the spot foreign exchange market.", "title": "" }, { "docid": "6b6fb43a134f3677e0cbfabc10fa8b54", "text": "The role of leadership as a core management function becomes extremely important when rapid changes occur in the market and, in turn, within an organization that must adapt to them. Leadership thus becomes a central topic of study within the field of management. The terms manager and leader are not equivalent and do not have the same meaning: a manager may be the person who operates in a stable business environment, whereas a leader is needed under uncertainty, identifying new opportunities for the company in a dynamic business environment. Leadership, charisma, the ability to inspire employees, and the use of power are therefore becoming the key to a company's success in its market and among its competitors. There is no dilemma about whether leadership is crucial to success; its importance is unquestioned, and the study of leadership as a management tool is important for the success of the business. Leadership skill also drives employees' satisfaction with their work. A company with no leader will end up with poor results and unmotivated, disgruntled employees, while an organization built on knowledge and expertise in management will be successful in its own business domain. 
Because of its importance in achieving the goals set by managers and organizations, the purpose of this paper is to examine the effects of leadership on the effectiveness of employees in enterprises. The results show that leadership skills affect the efficiency of enterprises and employee motivation, and that such skills are becoming a key success factor in business and in achieving the organization's objectives.", "title": "" }, { "docid": "76738e6a05b147a349d90eae1cde00e7", "text": "In this work we introduce a new framework for performing temporal predictions in the presence of uncertainty. It is based on a simple idea of disentangling components of the future state which are predictable from those which are inherently unpredictable, and encoding the unpredictable components into a low-dimensional latent variable which is fed into a forward model. Our method uses a supervised training objective which is fast and easy to train. We evaluate it in the context of video prediction on multiple datasets and show that it is able to consistently generate diverse predictions without the need for alternating minimization over a latent space or adversarial training.", "title": "" }, { "docid": "d6496dd2c1e8ac47dc12fde28c83a3d4", "text": "We describe a natural extension of the banker’s algorithm for deadlock avoidance in operating systems. Representing the control flow of each process as a rooted tree of nodes corresponding to resource requests and releases, we propose a quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before the control leaves a region. Also, information on the maximum resource claims for each of the regions can be extracted prior to process execution. By inserting operating system calls when entering a new region for each process at runtime, and applying the original banker’s algorithm for deadlock avoidance, this method has the potential to achieve better resource utilization because information on the “localized approximate maximum claims” is used for testing system safety.", "title": "" }, { "docid": "d242ef5126dfb2db12b54c15be61367e", "text": "RankNet is one of the widely adopted ranking models for web search tasks. However, adapting a generic RankNet for personalized search is little studied. In this paper, we first continue-trained a variety of RankNets with different numbers of hidden layers and network structures over a previously trained global RankNet model, and observed that a deep neural network with five hidden layers gives the best performance. To further improve the performance of adaptation, we propose a set of novel methods categorized into two groups. In the first group, three methods are proposed to properly assess the usefulness of each adaptation instance and only leverage the most informative instances to adapt a user-specific RankNet model. These assessments are based on KL-divergence, click entropy or a heuristic to ignore top clicks in adaptation queries. In the second group, two methods are proposed to regularize the training of the neural network in RankNet: one of these methods regularizes the error back-propagation via a truncated gradient approach, while the other method limits the depth of the back propagation when adapting the neural network. We empirically evaluate our approaches using a large-scale real-world data set. 
Experimental results exhibit that our methods all give significant improvements over a strong baseline ranking system, and the truncated gradient approach gives the best performance, significantly better than all others.", "title": "" }, { "docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "a5255efa61de43a3341473facb4be170", "text": "Differentiation of 3T3-L1 preadipocytes can be induced by a 2-d treatment with a factor \"cocktail\" (DIM) containing the synthetic glucocorticoid dexamethasone (dex), insulin, the phosphodiesterase inhibitor methylisobutylxanthine (IBMX) and fetal bovine serum (FBS). We temporally uncoupled the activities of the four DIM components and found that treatment with dex for 48 h followed by IBMX treatment for 48 h was sufficient for adipogenesis, whereas treatment with IBMX followed by dex failed to induce significant differentiation. Similar results were obtained with C3H10T1/2 and primary mesenchymal stem cells. The 3T3-L1 adipocytes differentiated by sequential treatment with dex and IBMX displayed insulin sensitivity equivalent to DIM adipocytes, but had lower sensitivity to ISO-stimulated lipolysis and reduced triglyceride content. The nondifferentiating IBMX-then-dex treatment produced transient expression of adipogenic transcriptional regulatory factors C/EBPbeta and C/EBPdelta, and little induction of terminal differentiation factors C/EBPalpha and PPARgamma. Moreover, the adipogenesis inhibitor preadipocyte factor-1 (Pref-1) was repressed by DIM or by dex-then-IBMX, but not by IBMX-then-dex treatment. We conclude that glucocorticoids drive preadipocytes to a novel intermediate cellular state, the dex-primed preadipocyte, during adipogenesis in cell culture, and that Pref-1 repression may be a cell fate determinant in preadipocytes.", "title": "" }, { "docid": "5e18a7f3eb71f20e3905a17de5e0077c", "text": "Research Article Nancy K. Lankton Marshall University [email protected] Harrison D. McKnight Michigan State University [email protected] Expectation disconfirmation theory (EDT) posits that expectations, disconfirmation, and performance influence customer satisfaction. While information systems researchers have adopted EDT to explain user information technology (IT) satisfaction, they often use various EDT model subsets. Leaving out one or more key variables, or key relationships among the variables, can reduce EDT’s explanatory potential. 
It can also suggest an intervention for practice that is very different from (and inferior to) the intervention suggested by a more complete model. Performance is an especially beneficial but largely neglected EDT construct in IT research. Using EDT theory from the marketing literature, this paper explains and demonstrates the incremental value of using the complete IT EDT model with performance versus the simplified model without it. Studying software users, we find that the complete model with performance both reveals assimilation effects for less experienced users and uncovers asymmetric effects not found in the simplified model. We also find that usefulness performance more strongly influences usage continuance intention than does any other EDT variable. We explain how researchers and practitioners can take full advantage of the predictive and explanatory power of the complete IT EDT model.", "title": "" }, { "docid": "0291c29ea44cd4131d93f726b05b62c8", "text": "Scheduling policies for general purpose multiprogrammed multiprocessors are not well understood. This paper examines various policies to determine which properties of a scheduling policy are the most significant determinants of performance. We compare a more comprehensive set of policies than previous work, including one important scheduling policy that has not previously been examined. We also compare the policies under workloads that we feel are more realistic than previous studies have used. Using these new workloads, we arrive at different conclusions than reported in earlier work. In particular, we find that the “smallest number of processes first” (SNPF) scheduling discipline performs poorly, even when the number of processes in a job is positively correlated with the total service demand of the job. We also find that policies that allocate an equal fraction of the processing power to each job in the system perform better, on the whole, than policies that allocate processing power unequally. Finally, we find that for lock access synchronization, dividing processing power equally among all jobs in the system is a more effective property of a scheduling policy than the property of minimizing synchronization spin-waiting, unless demand for synchronization is extremely high. (The latter property is implemented by coscheduling processes within a job, or by using a thread management package that avoids preemption of processes that hold spinlocks.) Our studies are done by simulating abstract models of the system and the workloads.", "title": "" }, { "docid": "6821d4c1114e007453578dd90600db15", "text": "Our goal is to assess the strategic and operational benefits of electronic integration for industrial procurement. We conduct a field study with an industrial supplier and examine the drivers of performance of the procurement process. Our research quantifies both the operational and strategic impacts of electronic integration in a B2B procurement environment for a supplier. Additionally, we show that the customer also obtains substantial benefits from efficient procurement transaction processing. We isolate the performance impact of technology choice and ordering processes on both the trading partners. A significant finding is that the supplier derives large strategic benefits when the customer initiates the system and the supplier enhances the system’s capabilities. 
With respect to operational benefits, we find that when suppliers have advanced electronic linkages, the order-processing system significantly increases benefits to both parties. (Business Value of IT; Empirical Assessment; Electronic Integration; Electronic Procurement; B2B; Strategic IT Impact; Operational IT Impact)", "title": "" }, { "docid": "06cfb7d14b50c24dc84ae14be8d525d1", "text": "Distributed round-wire windings are usually manufactured using the insertion technology. If the needle winding technology is applied instead the end windings have to be conducted in a three-layer axial arrangement. This leads to differing coil lengths and thus to a phase asymmetry which is much more distinct than the one resulting from the insertion technology. In addition, it is possible that the first phase exhibits a higher end winding leakage inductance than the other phases if the distance between the first phase and the front end side of the stator core is too short. In this case the magnetic flux lines of the end windings partially close across the stator core producing an increase of the end winding leakage inductance. Therefore, in this paper the impact of the needle winding technology on the operational behavior of an asynchronous machine is investigated. For this purpose a needle wound electrical machine with three-layer end windings is compared to an electrical machine with very symmetric windings built up using manual insertion. By the use of the no load and blocked rotor test as well as a static stator measurement the machine parameters are determined and the impact of the phase asymmetry is investigated. In addition, load measurements are conducted in order to quantify the impact of the production related differences.", "title": "" } ]
scidocsrr