abstract | authors | title | __index_level_0__
---|---|---|---|
We propose a data structure that stores previously observed vehicle paths in a given area in order to predict the forward trajectory of an observed vehicle at any stage. Incomplete vehicle trajectories are conditioned against in a Past Tree to predict future trajectories in another tree structure, a Future Tree. Many use cases in transportation simulation benefit from higher validity by considering historical paths in determining how to route vehicle entities. Instead of assigning static and independent turn probabilities at intersections, the storage and retrieval of historical path information can give a more accurate picture of future traffic trends and enhance the capabilities of real-time simulations to, say, inform mobile phone users of expected traffic jams along certain segments, direct the search efforts of law enforcement personnel, or allow more effective synchronization of traffic signals. | ['Philip Pecher', 'Michael Hunter', 'Richard M. Fujimoto'] | Past and future trees: structures for predicting vehicle trajectories in real-time | 208,427 |
Fluorescence microscopy is a useful tool for building quantitative profiles of drug effects. Although features with rich information can be extracted from fluorescence microscopy images, most current profiling methods build profiles from the extracted features using either univariate or non-automated methods. We propose a new multivariate, automated and scalable method for building drug profiles by using a decision hyperplane. The method was evaluated by using 23 compounds belonging to four groups of known mechanisms. We produced quantitative profiles that group drugs with similar mechanisms together, and separate drugs with dissimilar mechanisms from each other. These profiles resulted in better characterizations of the drug effects than profiles obtained from a previous univariate method. | ['Lit-Hsin Loo', 'Lani F. Wu', 'Steven J. Altschuler'] | Automated multivariate profiling of drug effects from fluorescence microscopy images | 118,364 |
We propose a scheme for efficient set similarity joins on Graphics Processing Units (GPUs). Due to the rapid growth and diversification of data, there is an increasing demand for fast execution of set similarity joins in applications that vary from data integration to plagiarism detection. To tackle this problem, our solution takes advantage of the massive parallel processing offered by GPUs. Additionally, we employ MinHash to estimate the similarity between two sets in terms of Jaccard similarity. By exploiting the high parallelism of GPUs and the space efficiency provided by MinHash, we can achieve high performance without renouncing accuracy. Experimental results show that our proposed method is more than two orders of magnitude faster than the serial CPU implementation, and 25 times faster than the parallel CPU implementation, while generating highly precise query results. | ['Mateus Silqueira Hickson Cruz', 'Yusuke Kozawa', 'Toshiyuki Amagasa', 'Hiroyuki Kitagawa'] | GPU Acceleration of Set Similarity Joins | 602,918 |
This paper proposes a simple packet rate estimator that can be very useful in predicting the rate of network traffic. The quality and performance of the estimator is evaluated and compared with three popular rate estimators that were originally designed for estimating bit rate. The proposed estimator is highly cost effective as its computation is not carried out upon the arrival of each incoming packet. In addition, the computation is simple and does not depend on the measurement of interarrival times of packets. We evaluate and compare the quality and performance in terms of agility, stability, accuracy and cost. The performance evaluation is conducted using discrete-event simulation that produces synthesized bursty traffic with empirical packet sizes. | ['Khaled Salah', 'F. Haidari'] | On the performance of a simple packet rate estimator | 137,346 |
Comments on “Green IT: A Matter of Business and Information Systems Engineering?” | ['Andreas Gadatsch'] | Comments on “Green IT: A Matter of Business and Information Systems Engineering?” | 85,598 |
Automatic presentations, also called FA-presentations, were introduced to extend finite model theory to infinite structures whilst retaining the solubility of interesting decision problems. A particular focus of research has been the classification of those structures of some species that admit automatic presentations. Whilst some successes have been obtained, this appears to be a difficult problem in general. A restricted problem, also of significant interest, is to ask this question for unary automatic presentations: automatic presentations over a one-letter alphabet. This paper studies unary FA-presentable semigroups. We prove the following: Every unary FA-presentable structure admits an injective unary automatic presentation where the language of representatives consists of every word over a one-letter alphabet. Unary FA-presentable semigroups are locally finite, but non-finitely generated unary FA-presentable semigroups may be infinite. Every unary FA-presentable semigroup satisfies some Burnside identity. We describe the Green's relations in unary FA-presentable semigroups. We investigate the relationship between the class of unary FA-presentable semigroups and various semigroup constructions. A classification is given of the unary FA-presentable completely simple semigroups. | ['Alan J. Cain', 'Nikola Ruskuc', 'Richard M. Thomas'] | UNARY FA-PRESENTABLE SEMIGROUPS | 485,026 |
Since the problem of textual entailment recognition requires capturing semantic relations between diverse expressions of language, linguistic and world knowledge play an important role. In this article, we explore the effectiveness of different types of currently available resources including synonyms, antonyms, hypernym-hyponym relations, and lexical entailment relations for the task of textual entailment recognition. In order to do so, we develop an entailment relation recognition system which utilizes diverse linguistic analyses and resources to align the linguistic units in a pair of texts and identifies entailment relations based on these alignments. We use the Japanese subset of the NTCIR-9 RITE-1 dataset for evaluation and error analysis, conducting ablation testing and evaluation on hand-crafted alignment gold standard data to evaluate the contribution of individual resources. Error analysis shows that existing knowledge sources are effective for RTE, but that their coverage is limited, especially for domain-specific and other low-frequency expressions. To increase alignment coverage on such expressions, we propose a method of alignment inference that uses syntactic and semantic dependency information to identify likely alignments without relying on external resources. Evaluation adding alignment inference to a system using all available knowledge sources shows improvements in both precision and recall of entailment relation recognition. | ['Yotaro Watanabe', 'Junta Mizuno', 'Eric Nichols', 'Katsuma Narisawa', 'Keita Nabeshima', 'Naoaki Okazaki', 'Kentaro Inui'] | Leveraging Diverse Lexical Resources for Textual Entailment Recognition | 137,755 |
We discuss the physical generation process of images as a combination of basic operations: occlusions, transparencies and contrast changes. These operations generate the essential singularities which we call junctions. We deduce a mathematical and computational model for image analysis according to which the "atoms" of the image must be "pieces of level lines joining junctions", fitting the phenomenological description of Gaetano Kanizsa (1990). A parameter-free junction detection algorithm is proposed for the computation of the previously defined "atoms". Then we propose an adequate modification of the morphological filtering algorithms so that they smooth the "atoms" without altering the junctions. Finally, we give some experiments on real and synthetic images. | ['Vicent Caselles', 'Bartomeu Coll', 'Jean-Michel Morel'] | Junction detection and filtering: a morphological approach | 62,886 |
Most automatic fingerprint identification systems identify a person using minutiae. However, minutiae depend almost entirely on the quality of the fingerprint images that are captured. Therefore, it is important that the matching step uses only reliable minutiae. The quality estimation algorithm deduces the availability of the extracted minutiae and allows for a matching step that will use only reliable minutiae. We propose a model-based quality estimation of fingerprint images. We assume that the ideal structure of a fingerprint image takes the shape of a sinusoidal wave consisting of ridges and valleys. To determine the quality of a fingerprint image, the similarity between the sinusoidal wave and the input fingerprint image is measured. The proposed method uses the 1-dimensional (1D) probability density function (PDF) obtained by projecting the 2-dimensional (2D) gradient vectors of the ridges and valleys in the orthogonal direction to the local ridge orientation. Quality measurement is then calculated as the similarity between the 1D probability density functions of the sinusoidal wave and the input fingerprint image. In our experiments, we compared the proposed method and other conventional methods using FVC-2002 DB I, III procedures. The performance of verification and the separability between good and bad regions were tested. | ['Sanghoon Lee', 'Chulhan Lee', 'Jaihie Kim'] | Model-based quality estimation of fingerprint images | 828,756 |
Consider a Gaussian multiple-input multiple-output (MIMO) multiple-access channel (MAC) with channel matrix H and a Gaussian MIMO broadcast channel (BC) with channel matrix H⊺. For the MIMO MAC, the integer-forcing architecture consists of first decoding integer-linear combinations of the transmitted codewords, which are then solved for the original messages. For the MIMO BC, the integer-forcing architecture consists of pre-inverting the integer-linear combinations at the transmitter so that each receiver can obtain its desired codeword by decoding an integer-linear combination. In both cases, integer-forcing offers higher achievable rates than zero-forcing. In recent work, we established an uplink-downlink duality relationship for integer-forcing, i.e., we showed that any rate tuple that is achievable via integer-forcing on the MIMO MAC can be achieved via integer-forcing on the MIMO BC with the same sum power and vice versa. It has also been shown that integer-forcing for the MIMO MAC can be enhanced via successive cancellation. Here, we introduce dirty-paper integer-forcing for the MIMO BC and establish uplink-downlink duality with successive integer-forcing for the MIMO MAC. | ['Wenbo He', 'Bobak Nazer', 'Shlomo Shamai'] | Dirty-paper integer-forcing | 709,878 |
Single-Pass Rendering of Day and Night Sky Phenomena | ['Daniel Müller', 'Juri Engel', 'Jürgen Döllner'] | Single-Pass Rendering of Day and Night Sky Phenomena | 644,288 |
The use of multiple antennas for wireless communication systems has gained overwhelming interest during the last decade - both in academia and industry. Multiple antennas can be utilized in order to accomplish a multiplexing gain, a diversity gain, or an antenna gain, thus enhancing the bit rate, the error performance, or the signal-to-noise-plus-interference ratio of wireless systems, respectively. With an enormous amount of yearly publications, the field of multiple-antenna systems, often called multiple-input multiple-output (MIMO) systems, has evolved rapidly. To date, there are numerous papers on the performance limits of MIMO systems, and an abundance of transmitter and receiver concepts has been proposed. The objective of this literature survey is to provide non-specialists working in the general area of digital communications with a comprehensive overview of this exciting research field. To this end, the last ten years of research efforts are recapitulated, with focus on spatial multiplexing and spatial diversity techniques. In particular, topics such as transmitter and receiver structures, channel coding, MIMO techniques for frequency-selective fading channels, diversity reception and space-time coding techniques, differential and non-coherent schemes, beamforming techniques and closed-loop MIMO techniques, cooperative diversity schemes, as well as practical aspects influencing the performance of multiple-antenna systems are addressed. Although the list of references is certainly not intended to be exhaustive, the publications cited will serve as a good starting point for further reading. | ['Jan Mietzner', 'Robert Schober', 'Lutz Lampe', 'Wolfgang H. Gerstacker', 'Peter Adam Hoeher'] | Multiple-antenna techniques for wireless communications - a comprehensive literature survey | 415,604 |
The use of parallelism for protocol processing in a parallel architecture of an internetworking unit is presented. This architecture consists of pipelines and arrays of processors and supports multiple memory concepts (local and global memory). A high-performance parallel implementation of the internetworking protocol in a gateway is discussed, and selected performance results are presented. The results show that requirements of high-speed networks with throughputs of more than 100 Mb/s can be fulfilled with the proposed parallel architecture and implementation. | ['Torsten Braun', 'Martina Zitterbart'] | High performance internetworking protocol | 319,601 |
In this paper, inspired by the society of animals, we study the coalition formation of robots for detecting intrusions using game theory. We consider coalition formation in a group of three robots that detect and capture intrusions in a closed curve loop. In our analytical model, individuals seek alliances if they think that their detect regions are too short to gain an intrusion capturing probability larger than their own. We assume that coalition seeking has an investment cost and that the formation of a coalition determines the outcomes of parities, with the detect length of a coalition simply being the sum of those of separate coalition members. We derive that, for any cost, always detecting alone is an evolutionarily stable strategy (ESS), and that, if the cost is below a threshold, always trying to form a coalition is an ESS (thus a three-way coalition arises). | ['Xiannuan Liang', 'Yang Xiao'] | Studying Bio-Inspired Coalition Formation of Robots for Detecting Intrusions Using Game Theory | 248,058 |
It is now possible to embed radio-frequency identification tags into almost any physical device. However, issues regarding privacy remain a concern and limit their widespread use. We propose a scalable anonymous radio-frequency identification authentication protocol using what we refer to as anonymous tickets. These tickets uniquely identify tags and are reusable. This considerably strengthens its non-traceability and requires just O(1) search/query time, with minimal storage overhead on the back-end system. We formally prove the protocol and compare the performance of the proposed protocol with selected works found in the literature. Copyright © 2013 John Wiley & Sons, Ltd. | ['Mahdi Asadpour', 'Mohammad Torabi Dashti'] | Scalable, privacy preserving radio-frequency identification protocol for the internet of things | 340,873 |
We present an implementation of goroutines and channels on the SCC. Goroutines and channels are the building blocks for writing concurrent programs in the Go programming language. Both Go and the SCC share the same basic idea--the use of messages for communication and synchronization. Our implementation of goroutines on top of tasks reuses existing runtime support for scheduling and load balancing. Channels, which permit goroutines to communicate by sending and receiving messages, can be implemented efficiently using the on-die message passing buffers. We demonstrate the use of goroutines and channels with a parallel genetic algorithm that can utilize all cores of the SCC. | ['Andreas Prell', 'Thomas Rauber'] | Go's Concurrency Constructs on the SCC | 793,226 |
A triplex target DNA site (TTS), a stretch of DNA that is composed of polypurines, is able to form a triple-helix (triplex) structure with triplex-forming oligonucleotides (TFOs) and is able to influence the site-specific modulation of gene expression and/or the modification of genomic DNA. The co-localization of a genomic TTS with gene regulatory signals and functional genome structures suggests that TFOs could potentially be exploited in antigene strategies for the therapy of cancers and other genetic diseases. Here, we present the TTS Mapping and Integration (TTSMI; http://ttsmi.bii.a-star.edu.sg) database, which provides a catalog of unique TTS locations in the human genome and tools for analyzing the co-localization of TTSs with genomic regulatory sequences and signals that were identified using next-generation sequencing techniques and/or predicted by computational models. TTSMI was designed as a user-friendly tool that facilitates (i) fast searching/filtering of TTSs using several search terms and criteria associated with sequence stability and specificity, (ii) interactive filtering of TTSs that co-localize with gene regulatory signals and non-B DNA structures, (iii) exploration of dynamic combinations of the biological signals of specific TTSs and (iv) visualization of a TTS simultaneously with diverse annotation tracks via the UCSC genome browser. | ['Piroon Jenjaroenpun', 'Chee Siang Chew', 'Tai Pang Yong', 'Kiattawee Choowongkomon', 'Wimada Thammasorn', 'Vladimir A. Kuznetsov'] | The TTSMI database: a catalog of triplex target DNA sites associated with genes and regulatory elements in the human genome | 95,397 |
Fluctuations in protein abundance among single cells are primarily due to the inherent stochasticity in transcription and translation processes; such stochasticity can often confer phenotypic heterogeneity among isogenic cells. It has been proposed that expression noise can be triggered as an adaptation to environmental stresses and genetic perturbations, and as a mechanism to facilitate gene expression evolution. Thus, elucidating the relationship between expression noise, measured at the single-cell level, and expression variation, measured on populations of cells, can improve our understanding of the variability and evolvability of gene expression. Here, we showed that noise levels are significantly correlated with conditional expression variations. We further demonstrated that expression variations are highly predictive for noise level, especially in TATA-box containing genes. Our results suggest that expression variabilities can serve as a proxy for noise level, suggesting that these two properties share the same underlying mechanism, e.g. chromatin regulation. Our work paves the way for the study of stochastic noise in other single-cell organisms. | ['Dong Dong', 'Xiaojian Shao', 'Naiyang Deng', 'Zhaolei Zhang'] | Gene expression variations are predictive for stochastic noise. | 471,462 |
This research introduces our work on developing Krylov subspace and AMG solvers on NVIDIA GPUs. As SpMV is a crucial part of these iterative methods, SpMV algorithms for a single GPU and for multiple GPUs are implemented. A HEC matrix format and a communication mechanism are established. In addition, a set of specific algorithms for solving preconditioned systems in parallel environments is designed, including ILU(k), RAS and parallel triangular solvers. Based on this work, several Krylov solvers and AMG solvers are developed. According to numerical experiments, favorable acceleration performance is achieved by our Krylov and AMG solvers under various parameter conditions. | ['Bo Yang', 'Hui Liu', 'Zhangxin Chen'] | Development of Krylov and AMG linear solvers for large-scale sparse matrices on GPUs | 814,747 |
Entity Extraction from Social Media using Machine Learning Approaches. | ['Sombuddha Choudhury', 'Somnath Banerjee', 'Sudip Kumar Naskar', 'Paolo Rosso', 'Sivaji Bandyopadhyay'] | Entity Extraction from Social Media using Machine Learning Approaches. | 985,676 |
As wireless networks have been increasingly deployed, the need for quality measurement has become essential, since network operators want to control their network resources while maintaining user satisfaction. More importantly, measurement of technical parameters fails to give an account of the user experience, what could be named QoE (Quality of Experience). Therefore, many techniques have been developed in order to assess this perceptual quality as accurately as possible. To investigate QoE measurement, this paper presents three approaches, namely the subjective approach, the objective approach, and the hybrid approach. It also presents a performance evaluation of these approaches for assessing QoE in video streaming applications over wireless networks under different network conditions (varying the loss rate and its distribution). We focus more specifically on a hybrid approach called Pseudo Subjective Quality Assessment (PSQA) that keeps the advantages of both subjective and objective schemes while minimizing their drawbacks. We demonstrate that this approach provides good estimations compared to the well-known objective metric called Peak Signal to Noise Ratio (PSNR). We also observe that PSQA gives similar results compared to subjective tests evaluated by human observers in most of the cases. Moreover, one objective of this evaluation is to validate PSQA for QoE measurement, which will facilitate the use of QoE as a metric for resource management in the future. To that end, we also give some possible directions allowing us to manage network resources using this metric. | ['Kandaraj Piamrat', 'César Viho', 'Jean-Marie Bonnin', 'Adlen Ksentini'] | Quality of Experience Measurements for Video Streaming over Wireless Networks | 427,770 |
Emergency medical services are an important component of public safety and preventive health care. The work of paramedics and emergency physicians is characterized by numerous daily routine missions (patient transports, individual medical emergencies) on the one hand and exceptional deployment situations during a mass-casualty incident (German: Massenanfall von Verletzten, MANV) on the other. To document relevant patient and treatment data, numerous paper forms are currently still in general use. Mobile computer-based tools are used only sporadically in regular service and have not yet been established for mass-casualty incidents. To design a usable solution whose operation is efficient for routine tasks and effective under extreme conditions, this article presents extended radial menus (marking menus) combined with pen-based touch control as a potential alternative to established interaction concepts and applies them exemplarily to the emergency-services context. The usability of the design is finally examined through a formative evaluation, from which further development potential is derived. | ['Tilo Mentler', 'René Kutschke', 'Michael Herczeg', 'Martin Christof Kindsmüller'] | Marking Menus im sicherheitskritischen mobilen Kontext am Beispiel des Rettungsdienstes. | 764,824 |
Certificateless public key cryptography was introduced to remove the use of certificates to ensure the authenticity of the user's public key in traditional certificate-based public key cryptography, and to overcome the key escrow problem in identity-based public key cryptography. Concurrent signatures were introduced as an alternative approach to solving the problem of fair exchange of signatures. Combining the concept of certificateless cryptography with the concept of concurrent signatures, in this paper we present a notion of certificateless concurrent signature with a formal security model and propose a provably secure scheme assuming the hardness of the computational Diffie-Hellman problem. | ['Zhenjie Huang', 'Xuanzhi Lin', 'Rufen Huang'] | Certificateless Concurrent Signature Scheme | 456,005 |
We derive a first-order approximation of the density of maximum entropy for a continuous 1-D random variable, given a number of simple constraints. This results in a density expansion which is somewhat similar to the classical polynomial density expansions by Gram-Charlier and Edgeworth. Using this approximation of density, an approximation of 1-D differential entropy is derived. The approximation of entropy is both more exact and more robust against outliers than the classical approximation based on the polynomial density expansions, without being computationally more expensive. The approximation has applications, for example, in independent component analysis and projection pursuit. | ['Aapo Hyvärinen'] | New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit | 237,809 |
ABSTRACTThis paper reports a study of the use of activity theory in human–computer interaction (HCI) research. We analyse activity theory in HCI since its first appearance about 25 years ago. Through an analysis and meta-synthesis of 109 selected HCI activity theory papers, we created a taxonomy of 5 different ways of using activity theory: (1) analysing unique features, principles, and problematic aspects of the theory; (2) identifying domain-specific requirements for new theoretical tools; (3) developing new conceptual accounts of issues in the field of HCI; (4) guiding and supporting empirical analyses of HCI phenomena; and (5) providing new design illustrations, claims, and guidelines. We conclude that HCI researchers are not only users of imported theory, but also theory-makers who adapt and develop theory for different purposes. | ['Torkil Clemmensen', 'Victor Kaptelinin', 'Bonnie A. Nardi'] | Making HCI theory work: an analysis of the use of activity theory in HCI research | 724,969 |
We suggest a novel approach for compressing images of text documents based on building up a simple derived font from patterns in the image, and present the results of a prototype implementation based on our approach. Our prototype achieves better compression than most alternative systems, and the decompression time appears substantially shorter than other methods with the same compression rate. The method has other advantages, such as a straightforward extension to a lossy scheme that allows one to control the lossiness introduced in a well-defined manner. We believe our approach will be applicable in other domains as well. | ['Andrei Z. Broder', 'Michael Mitzenmacher'] | Pattern-based compression of text images | 441,525 |
Transitions-Based System. | ['Thomas Raimbault', 'David Genest', 'Stéphane Loiseau'] | Transitions-Based System. | 737,833 |
Flash Device Support for Database Management | ['Philippe Bonnet', 'Luc Bouganim'] | Flash Device Support for Database Management | 654,170 |
The localization of dipolar sources in the brain based on electroencephalography (EEG) or magnetoencephalography (MEG) data is a frequent problem in the neurosciences. Deterministic standard approaches such as the Levenberg-Marquardt (LM) method often have problems in finding the global optimum of the associated nonlinear optimization function when two or more dipoles are to be reconstructed. In such cases, probabilistic approaches turned out to be superior, but their applicability in neuromagnetic source localizations is not yet satisfactory. The objective of this study was to find probabilistic optimization strategies that perform better in such applications. Thus, hybrid and nested evolution strategies (NES), which both realize a combination of global and local search by means of multilevel optimizations, were newly designed. The new methods were benchmarked and compared to the established evolution strategies (ES), to fast evolution strategies (FES), and to the deterministic LM method by conducting a two-dipole fit with MEG data sets from neuropsychological experiments. The best results were achieved with NES. | ['Roland Eichardt', 'Jens Haueisen', 'Thomas R. Knösche', 'Ernst Günter Schukat-Talamazzini'] | Reconstruction of Multiple Neuromagnetic Sources Using Augmented Evolution Strategies— A Comparative Study | 306,911 |
Among the key elements of mature engineering is automated production: we understand the technical problems and we understand their solutions; our goal is to automate production as much as possible to increase product quality, reduce costs and time-to-market, and be adept at creating new products quickly and cheaply. Automated production is a technological statement of maturity: "We've built these products so often by hand that we've gotten it down to a Science". Models of automated production are indeed the beginnings of a Science of Automated Design (SOAD). Feature Oriented Software Development (FOSD) will play a fundamental role in SOAD, and I believe also play a fundamental role in the future of software engineering. In this presentation, I explain what distinguishes FOSD from other software design disciplines and enumerate key technical barriers that lie ahead for FOSD and SOAD. | ['Don S. Batory'] | On the importance and challenges of FOSD | 258,917 |
Data exchange is an old problem that was first studied from a theoretical point of view only in 2003. Since then, many approaches have been considered for the language describing the relationship between the source and the target schema. These approaches focus on what makes a target instance a "good" solution for data exchange. In this paper we propose the inference-based semantics, which solves many certain-answer anomalies existing in current data-exchange semantics. To this end, we introduce a new mapping language between the source and the target schema based on annotated bidirectional dependencies (abd) and, consequently, define the semantics for this new language. It is shown that the ABD-semantics can properly represent the inference-based semantics for any source-to-target mappings. We discovered three dichotomy results under the new semantics for the solution-existence, solution-check and UCQ evaluation problems. These results rely on two factors describing the annotation used in the mappings (density and cardinality). Finally, we also investigate the certain-answers evaluation problem under the ABD-semantics and discover many tractable classes for non-UCQ queries, even for a subclass of CQ¬. | ['Adrian Onet'] | Inference-based semantics in Data Exchange | 727,331 |
In this paper, we consider the problem of representing a multiresolution geometric model, called a Simplicial Multi-Complex (SMC), in a compact way. We present encoding schemes for both two- and three-dimensional SMCs built through a vertex insertion (removal) simplification strategy. We show that a good compression ratio is achieved not only with respect to a general-purpose data structure for a SMC, but also with respect to just encoding the complex at the maximum resolution. | ['Emanuele Danovaro', 'Leila De Floriani', 'Paola Magillo', 'Enrico Puppo'] | Representing vertex-based simplicial multi-complexes | 555,364 |
An Empirical Comparison of Methods for Multi-label Data Stream Classification | ['Konstantina Karponi', 'Grigorios Tsoumakas'] | An Empirical Comparison of Methods for Multi-label Data Stream Classification | 900,162 |
The ability to recognize emotion is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued to be even more important than mathematical and verbal intelligences. This paper proposes that machine intelligence needs to include emotional intelligence and demonstrates results toward this goal: developing a machine's ability to recognize the human affective state given four physiological signals. We describe difficult issues unique to obtaining reliable affective data and collect a large set of data from a subject trying to elicit and experience each of eight emotional states, daily, over multiple weeks. This paper presents and compares multiple algorithms for feature-based recognition of emotional state from this data. We analyze four physiological signals that exhibit problematic day-to-day variations: The features of different emotions on the same day tend to cluster more tightly than do the features of the same emotion on different days. To handle the daily variations, we propose new features and algorithms and compare their performance. We find that the technique of seeding a Fisher Projection with the results of sequential floating forward search improves the performance of the Fisher Projection and provides the highest recognition rates reported to date for classification of affect from physiology: 81 percent recognition accuracy on eight classes of emotion, including neutral. | ['Rosalind W. Picard', 'Elias Vyzas', 'Jennifer Healey'] | Toward machine emotional intelligence: analysis of affective physiological state | 29,920 |
The paper introduces an innovative implementation of dynamic hard disk power management based on renewal theory, covering the construction of the system model, the relevant background on renewal theory, and how to integrate the two to achieve PM (Power Management) optimization. It also presents an experiment performed on a physical hard disk to demonstrate the feasibility of the implementation for the purpose of saving energy consumption on hard disks. | ['Fagui Liu', 'Zexiang Wu', 'Weipeng Mai'] | Renewal-Theory-Based Hard Disk Power Management Strategy Optimization | 500,049 |
We describe a simple, fast, and easy-to-implement method for finding relatively good clusterings of software systems. Our method relies on the ability to compute the strength of an edge in a graph by applying a straightforward metric defined in terms of the neighborhoods of its end vertices. The metric is used to identify the weak edges of the graph, which are temporarily deleted to break it into several components. We study the quality metric MQ introduced by S. Mancoridis et al. (1998) and exhibit mathematical properties that make it a good measure of clustering quality. Letting the threshold weakness of edges vary defines a path, i.e. a sequence of clusterings in the solution space (of all possible clusterings of the graph). This path is described in terms of a curve linking MQ to the weakness of the edges in the graph. | ['Yves Chiricota', 'Fabien Jourdan', 'Guy Melançon'] | Software components capture using graph clustering | 453,287 |
This paper highlights key opportunities for technology design for informal caregivers who provide long-term in-home care. For this purpose, a study with informal caregivers was conducted, including interviews (N=4) and online questionnaires (N=34) based on holistic analysis of supportive technologies. These investigations provide a deeper understanding of the key opportunities in the design of technologies to support the caregiver, namely (1) making caregivers better informed and more aware of existing solutions (2) increasing awareness of the caregivers' own wellness; (3) cherishing the valuable, positive moments of caregiving (e.g. by capturing precious moments) and (4) encouraging meaningful social interactions among caregivers for strengthening social ties. | ['Lilian Bosch', 'Marije Kanis'] | Design Opportunities for Supporting Informal Caregivers | 725,459 |
In this paper, we address the rate control problem in a multi-hop random access wireless network, with the objective of achieving proportional fairness amongst the end-to-end sessions. The problem is considered in the framework of nonlinear optimization. Compared to its counterpart in a wired network where link capacities are assumed to be fixed, rate control in a multi-hop random access network is much more complex and requires joint optimization at both the transport layer and the link layer. This is due to the fact that the attainable throughput on each link in the network is `elastic' and is typically a non-convex and non-separable function of the transmission attempt rates. Two cross-layer algorithms, a dual based algorithm and a primal based algorithm, are proposed in this paper to solve the rate control problem in a multi-hop random access network. Both algorithms can be implemented in a distributed manner, and work at the link layer to adjust link attempt probabilities and at the transport layer to adjust session rates. We prove rigorously that the two proposed algorithms converge to the globally optimal solutions. Simulation results are provided to support our conclusions. | ['Xin Wang', 'Koushik Kar'] | Cross-layer rate control for end-to-end proportional fairness in wireless networks with random access | 103,204 |
Proper tokenization of biomedical text is a non-trivial problem. Problematic characteristics of current biomedical tokenizers include idiosyncratic tokenizer output and poor tokenizer extensibility and reuse. To address these problematic characteristics, we identified and completed a novel tokenizer design pattern for biomedical tokenizers. We separated a tokenizer into three components: a token lattice and lattice constructor, a best lattice-path chooser and token transducers. Token transducers create tokens from text. These tokens are assembled into a token lattice by the lattice constructor. The best path (tokenization) is selected from the token lattice, tokenizing the text. We applied our design pattern and our token transducer identification guidelines in the creation of a tokenizer for SNOMED CT concept descriptions and compared our tokenizer to three other tokenizer methods. MedPost and our adapted Viterbi tokenizer perform best, with 90.1% and 93.7% accuracy respectively. | ['Neil Barrett', 'Jens H. Weber-Jahnke'] | Building a Biomedical Tokenizer Using the Token Lattice Design Pattern and the Adapted Viterbi Algorithm | 269,908 |
In this paper a modular approach of gradual confidence for facial feature extraction over real video frames is presented. The problem is dealt with under general imaging conditions and soft presumptions. The proposed methodology copes with large variations in the appearance of diverse subjects, as well as of the same subject in various instances within real video sequences. Areas of the face that statistically seem to be outstanding form an initial set of regions that are likely to include information about the features of interest. Enhancement of these regions produces closed objects, which reveal, through the use of a fuzzy system, a dominant angle, i.e. the facial rotation angle. The object set is restricted using the dominant angle. An exhaustive search is performed among all candidate objects, matching a pattern that models the relative position of the eyes and the mouth. Labeling of the winner features can be used to evaluate the features extracted and provide feedback in an iterative framework. A subset of the MPEG-4 facial definition or facial animation parameter set can be obtained. This gradual feature revelation is performed under optimization for each step, producing a posteriori knowledge about the face and leading to a step-by-step visualization of the face in search. | ['George N. Votsis', 'Athanasios I. Drosopoulos', 'Stefanos D. Kollias'] | A modular approach to facial feature segmentation on real sequences | 451,304 |
The current mobile service market in China displays highly dynamic competition between two major operators, China Mobile and China Unicom. Their market share in terms of the number of subscribers is influenced by the subscriber base, service quality, pricing policy, etc. Current research and analyses are mostly direct comparisons of the relative advantage of the two operators, in which the weights of every item being compared are chosen almost arbitrarily and the dynamic relationships between these items are usually ignored. This research takes the view of China Mobile, the bigger and earlier operator, to explore the reason for its shrinking market share after SMS (Short Message Service) became popular in China. The dynamics between service quality, exchange capacity, and price in the mobile service are highlighted to show how these factors influence the growth of the total mobile market and the relative advantage in the duopoly as well. Long-run and short-run effects of these factors are differentiated so as to make it possible to suggest some proper policies for China Mobile to keep its leadership in the market. | ['Yueping Chen', 'Yongguang Zhong'] | Investigating Rivalry in Tele-mobile Service Industry in China: A System Dynamics Approach | 378,305 |
URINE OUTPUT MONITORING - A Simple and Reliable Device for Monitoring Critical Patients’ Urine Output | ['Abraham Otero', 'Teodor Akinfiev', 'Andrey Apalkov', 'Francisco Palacios', 'J. Presedo'] | URINE OUTPUT MONITORING - A Simple and Reliable Device for Monitoring Critical Patients’ Urine Output | 802,221 |
Robot positioning is an important function of autonomous intelligent robots. However, the application of external forces to a robot can disrupt its normal operation and cause localisation errors. We present a novel approach for detecting external disturbances based on optic flow without the use of egomotion information. Even though this research moderately validates the efficacy of the model, we argue that its application is plausible for a large number of robotic systems. | ['Leonidas Georgopoulos', 'Gillian M. Hayes', 'George Konidaris'] | A forward model of optic flow for detecting external forces | 7,412 |
For the past several years, a team in the Department of Electrical Engineering (EE), National Chung Cheng University, Taiwan, has been establishing a pedagogical approach to embody embedded systems in the context of robotics. To alleviate the burden on students in the robotics curriculum in their junior and senior years, a training platform on embedded systems with co-design in hardware and software has been developed and fabricated as a supplement for these students. This general-purpose platform has several advantages over commercial training kits for embedded systems. For instance, the programming layer has been brought onto an open-source platform ported by Linux and μC/OS-II such that it is mostly hardware-independent. Meanwhile, in addition to linking to fundamental library functions provided for robotics, users can program the codes not only in C language, but also through visual programming by means of a graphic interface developed along with the platform, allowing users to concentrate on higher-level robot function design. In other words, the platform facilitates rapid prototyping in robotics design. Meanwhile, a tailored laboratory manual associated with the platform has been designed and used in classes. Based on assessments and evaluation on the students who have completed this course, the curricular training is satisfactory and largely meets the requirements established at the design stage. | ['Kao-Shing Hwang', 'Wen-Hsu Hsiao', 'Gaung-Ting Shing', 'Kim-Joan Chen'] | Rapid Prototyping Platform for Robotics Applications | 153,439 |
Potato (Solanum tuberosum L.) is an important crop worldwide, with a total world production of about 360 million metric tons. Potato yield and quality are very dependent on an adequate supply of nitrogen. The relatively shallow root system of the potato crop, coupled with its large nitrogen (N) requirement and sensitivity to water stress on coarse-textured soil, increases the risk of nitrate (NO3-N) leaching. Therefore, precise N management for potato is important, both for maximizing production and for minimizing N loss to groundwater. Given this dilemma, efficient monitoring of plant N status and appropriate N fertilizer management are essential to balance the increasing cost of N fertilizer, demand by the crop, and the need to minimize environmental damage, especially to water quality. | ['Feng Li', 'V. Alchanatis'] | The potential of airborne hyperspectral images to detect leaf nitrogen content in potato fields | 933,371 |
This paper describes a state feed-back tracking controller with parameter uncertainties for vehicle dynamics with a four-wheel active steering system as well as an active suspension system. The objectives of the proposed controller are to improve the vehicle behavior by forcing the lateral dynamics and the load transfer ratio to track the desired vehicle behavior in critical situations. The Takagi-Sugeno (TS) representation has been used in order to take into account the non-linearity of the cornering forces. Moreover, the variation of the tire cornering stiffness has been considered through parameter uncertainties. Based on the obtained uncertain fuzzy model, the controller design has been formulated in terms of Linear Matrix Inequality (LMI) constraints. The proposed techniques have been evaluated through an obstacle avoidance test conducted in MATLAB/Simulink®. | ['H. Dahmani', 'O. Pages', 'A. El Hajjaji'] | Robust control with parameter uncertainties for vehicle chassis stability in critical situations | 655,672 |
Query execution using link-traversal is a promising approach for retrieving and accessing data on the web. However, this approach finds its limitation when it comes to query patterns such as ?s rdf:type ex:Employee, where one does not know the subject URI. Such queries are quite useful for different application needs. In this paper, we conduct an empirical analysis on the use of such patterns in SPARQL query logs. We present different solution approaches to extend the current Linked Open Data principles with the ability for inverse link traversal. We discuss the advantages and disadvantages of the different approaches. | ['Stefan Scheglmann', 'Ansgar Scherp'] | Will Linked Data Benefit from Inverse Link Traversal | 680,203 |
The authors define basic units of computation in distributed systems, whether communicating synchronously or asynchronously, as comprising indivisible logical units of computation that take the system from one ground state to another. It is explained how a computation can be viewed as a partial order over the basic units of the computation. The problem of detecting the basic units is considered. One algorithm for creating ground states during a computation in an asynchronously communicating system with FIFO channels is given, and an existing algorithm that implicitly creates ground states in a synchronously communicating system is referenced. The significance of the basic unit is explained, and its applications are given. | ['Mohan Ahuja', 'Ajay D. Kshemkalyani', 'Timothy J. Carlson'] | A basic unit of computation in distributed systems | 225,231 |
The two existing approaches to detecting cyber attacks on computers and networks, signature recognition and anomaly detection, have shortcomings related to the accuracy and efficiency of detection. This paper describes a new approach to cyber attack (intrusion) detection that aims to overcome these shortcomings through several innovations. We call our approach attack-norm separation. The attack-norm separation approach engages in the scientific discovery of data, features and characteristics for cyber signal (attack data) and noise (normal data). We use attack profiling and analytical discovery techniques to generalize the data, features and characteristics that exist in cyber attack and norm data. We also leverage well-established signal detection models in the physical space (e.g., radar signal detection), and verify them in the cyberspace. With this foundation of information, we build attack-norm separation models that incorporate both attack and norm characteristics. This enables us to take the least amount of relevant data necessary to achieve detection accuracy and efficiency. The attack-norm separation approach considers not only activity data, but also state and performance data along the cause-effect chains of cyber attacks on computers and networks. This enables us to achieve some detection adequacy lacking in existing intrusion detection systems. | ['Nong Ye', 'Toni Farley', 'Deepak Lakshminarasimhan'] | An attack-norm separation approach for detecting cyber attacks | 472,780 |
Typical grinding operations in batch production are characterized by multiple data streams sampled at distinct intervals. A unique estimation strategy is proposed for integrating rapidly sampled sensor signals with postprocess inspection data from a series of grinding cycles. After a nonlinear state-space model is derived from existing analytical models, system observability is tested for various combinations of sensors and measurement settings. A multirate simultaneous state and parameter estimation scheme is developed based on extended Kalman filters for real-time estimation of the model parameters and part quality. Results from case studies demonstrate that the proposed scheme enables challenging estimation tasks to be undertaken that cannot be performed using traditional approaches. | ['Cheol W. Lee'] | Estimation Strategy for a Series of Grinding Cycles in Batch Production | 409,100 |
Estimation of the vocal tract shape of nasals using a Bayesian scheme. | ['Christian H. Kasess', 'Wolfgang Kreuzer', 'Ewald Enzinger', 'Nadja Kerschhofer-Puhalo'] | Estimation of the vocal tract shape of nasals using a Bayesian scheme. | 753,106 |
A paired-dominating set of a graph G is a dominating set of vertices whose induced subgraph has a perfect matching, while the paired-domination number is the minimum cardinality of a paired-dominating set in the graph, denoted by \(\gamma _{pr}(G)\). Let G be a connected \(\{K_{1,3}, K_{4}-e\}\)-free cubic graph of order n. We show that \(\gamma _{pr}(G)\le \frac{10n+6}{27}\) if G is \(C_{4}\)-free and that \(\gamma _{pr}(G)\le \frac{n}{3}+\frac{n+6}{9(\lceil \frac{3}{4}(g_o+1)\rceil +1)}\) if G is \(\{C_{4}, C_{6}, C_{10}, \ldots , C_{2g_o}\}\)-free for an odd integer \(g_o\ge 3\); the extremal graphs are characterized; we also show that if G is 2-connected, then \(\gamma _{pr}(G) = \frac{n}{3}\). Furthermore, if G is a connected \((2k+1)\)-regular \(\{K_{1,3}, K_4-e\}\)-free graph of order n, then \(\gamma _{pr}(G)\le \frac{n}{k+1} \), with equality if and only if \(G=L(F)\), where \(F\cong K_{1, 2k+2}\), or k is even and \(F\cong K_{k+1,k+2}\). | ['Wei Yang', 'Xinhui An', 'Baoyindureng Wu'] | Paired-domination number of claw-free odd-regular graphs | 767,497 |
In this paper, a method of copyright monitoring or tracing using invariants of contents is proposed. We first define certain feature parameters of color images which are invariant under arbitrary bicontinuous or smooth transforms, called topological or differentially topological invariants. To extract the topological invariants robustly, we introduce scale-space of invariant features. By selecting topologically stable features with respect to scale transform, noise-robust invariants of images are obtained. These invariants can be applied to copyright tracing or monitoring and protection due to their robustness against various deformation attacks. | ['Jinhui Chao', 'Shintaro Suzuki', 'Jongdae Kim'] | Copyright tracing using invariants of contents | 428,830 |
Profilers play an important role in the development of efficient programs. Profiling techniques developed for traditional languages are inadequate for logic programming languages, for a number of reasons: first, the flow of control in logic programming languages, involving backtracking and failure, is significantly more complex than in traditional languages; secondly, the time taken by a unification operation, the principal primitive operation of such languages, cannot be predicted statically because it depends on the size of the input; and finally, programs may change at run-time because clauses may be added or deleted using primitives such as assert and retract. This paper describes a simple profiler for Prolog. The ideas outlined here may be used either to implement a simple interactive profiler, or integrated into Prolog compilers. | ['Saumya K. Debray'] | Profiling Prolog programs | 147,698 |
This article addresses the extension of the classical problem of discovering rules of the type "if a then almost b" to the search for generalized rules of the type R⇒R', where the premises R and the conclusions R' may themselves be rules. Within the framework of statistical implicative analysis initially developed by Gras [GRA 79], [GRA 96], a first formalization based on the notion of "oriented hierarchy" was recently proposed [GRA 01]. Strongly inspired by agglomerative hierarchical clustering, the approach consists in "aggregating" rules with one another through an incremental mechanism. We propose here a new formalization of the model that more clearly highlights the structures involved. We also justify the use of the term hierarchy, until now used metaphorically, by showing that the measure constructed to index it satisfies the properties of an ultrametric. The approach is illustrated on a real-world data corpus from a survey of secondary-school mathematics teachers on the objectives assigned to their teaching. | ['Régis Gras', 'Pascale Kuntz', 'Henri Briand'] | Hiérarchie orientée de règles généralisées en analyse implicative. | 754,627 |
We describe the Sage project, a new approach to software engineering for (fault-tolerant) distributed applications. Sage uses the modal logic of knowledge and applies theoretical results detailing how processes learn facts about each other's state to derive the minimal communication graph for a wide range of coordination problems. The specification interface is controlled, yet expressive enough to capture important distributed coordination problems and weaker variants appropriate for wide-area applications. The resulting graphical display shows programmers which messages must be received. Sage allows users to experiment on the derived protocol by crashing processes, reordering events, losing messages, and partitioning the network. If a solution still exists, Sage regenerates the communication graph. This animates the effects of unpredictable system events on distributed applications, and separates the issues in testing a protocol's behavior in the face of failures, from the effects background system conditions can have on the testing procedure itself. | ['Aleta Ricciardi'] | The Sage project: a new approach to software engineering for distributed applications | 544,191 |
The need to rank and order data is pervasive, and many algorithms are fundamentally dependent upon sorting and partitioning operations. Prior to this work, GPU stream processors have been perceived as challenging targets for problems with dynamic and global data-dependences such as sorting. This paper presents: (1) a family of very efficient parallel algorithms for radix sorting; and (2) our allocation-oriented algorithmic design strategies that match the strengths of GPU processor architecture to this genre of dynamic parallelism. We demonstrate multiple factors of speedup (up to 3.8x) compared to state-of-the-art GPU sorting. We also reverse the performance differentials observed between GPU and multi/many-core CPU architectures by recent comparisons in the literature, including those with 32-core CPU-based accelerators. Our average sorting rates exceed 1B 32-bit keys/sec on a single GPU microprocessor. Our sorting passes are constructed from a very efficient parallel prefix scan "runtime" that incorporates three design features: (1) kernel fusion for locally generating and consuming prefix scan data; (2) multi-scan for performing multiple related, concurrent prefix scans (one for each partitioning bin); and (3) flexible algorithm serialization for avoiding unnecessary synchronization and communication within algorithmic phases, allowing us to construct a single implementation that scales well across all generations and configurations of programmable NVIDIA GPUs. | ['Duane Merrill', 'Andrew S. Grimshaw'] | HIGH PERFORMANCE AND SCALABLE RADIX SORTING: A CASE STUDY OF IMPLEMENTING DYNAMIC PARALLELISM FOR GPU COMPUTING | 449,547 |
Non-value adding activities which consume time and/or resources without increasing value, have been considered as main contributors to schedule delays and cost overruns in design and construction projects. While these activities are mainly triggered and proliferated by errors and changes, traditional construction management approaches have not explicitly addressed the impact of errors and changes on non-value adding activities. To capture non-value adding activities due to errors and changes, a system dynamics based simulation model is developed and presented in this paper wherein the impact of non-value adding activities are intuitively visualized in a colored bar chart. The developed model is applied to a bridge project in Massachusetts. The simulation results show that errors and changes resulted in 26.1% of non-value adding activities and 171 days of schedule delays in this project. Based on these simulation results, it is concluded that the developed simulation model holds significant potential to aid better decision-making for controlling non-value adding activities in design and construction projects. | ['Sangwon Han', 'SangHyun Lee', 'Mani Golparvar Fard', 'Feniosky Peña-Mora'] | Modeling and representation of non-value adding activities due to errors and changes in design and construction projects | 141,028 |
An active real-time database system (ARTDBS) is designed to provide timely response to the critical situations that are defined on database states. Several studies have already addressed various issues in ARTDBSs. The distinctive features of our work are to describe a detailed performance model of a distributed ARTDBS and investigate various performance issues in time-cognizant transaction processing in ARTDBSs. | ['O. Ulusoy'] | Performance issues in processing active real-time transactions | 822,455 |
The problem of automatic classification of scientific texts is considered. Methods based on statistical analysis of probabilistic distributions of scientific terms in texts are discussed. The procedures for selecting the most informative terms and the method of making use of auxiliary information related to the terms positions are presented. The results of experimental evaluation of proposed algorithms and procedures over real-world data are reported. | ['Vaidas Balys', 'Rimantas Rudzkis'] | Statistical Classification of Scientific Publications | 208,689 |
A code C ⊆ Z₂ⁿ, where Z₂ = {0,1}, has unidirectional covering radius R if R is the smallest integer so that any word in Z₂ⁿ can be obtained from at least one codeword c ∈ C by replacing either 1s by 0s in at most R coordinates or 0s by 1s in at most R coordinates. The minimum cardinality of such a code is denoted by E(n,R). Upper bounds on this function are here obtained by constructing codes using tabu search; lower bounds, on the other hand, are mainly obtained by integer programming and exhaustive search. Best known bounds on E(n,R) for n ≤ 13 and R ≤ 6 are tabulated. | ['Patric R. J. Östergård', 'Esa Antero Seuranen'] | Unidirectional covering codes | 187,758 |
Multi-camera systems are more and more used in vision-based robotics. An accurate extrinsic calibration is usually required. In most cases, this task is done by matching features through different views of the same scene. However, if the cameras' fields of view do not overlap, such a matching procedure is no longer feasible. This article deals with a simple and flexible extrinsic calibration method for a non-overlapping camera rig. The aim is the calibration of non-overlapping cameras embedded on a vehicle, for visual navigation purposes in urban environments. The cameras do not see the same area at the same time. The calibration procedure consists in manoeuvring the vehicle while each camera observes a static scene. The main contributions are a study of the singular motions and a specific bundle adjustment which both reconstructs the scene and calibrates the cameras. Solutions to handle the singular configurations, such as planar motions, are presented. The proposed approach has been validated with synthetic and real data. | ['Pierre Lebraly', 'Eric Royer', 'Omar Ait-Aider', 'Michel Dhome'] | Calibration of Non-Overlapping Cameras - Application to Vision-Based Robotics | 295,064 |
Data base design is currently a costly and time consuming activity. Part of this overall design is concerned with the logic of the underlying network structure, and this part is commonly called logical design. Logical design involves a tedium of calculations which can be automated in a program and used as a design tool. The basic approach is applicable to a wide variety of data base handlers, such as IMS, the DBTG proposal, CIS, and others. The approach has been prototyped and a version suitable for IMS is now being used (DBDA) as a program product. This paper describes the basic concepts and how they can be applied to IMS, DBTG or relational implementations. The data structure needed to support a particular application program is called the local view, and input to the design tool is the collection of all local views. Local views are constructed using certain primitives which support the integration of the local views. The diagnostics of the design tool program will partially depend on the data base handler. Each handler (IMS, DBTG, etc.) has different network restrictions which limit the local views which can be generated from the network. Different network restrictions result in different diagnostics. There is no relational data base handler to evaluate for network restrictions. | ['George U. Hubbard', 'N. Raver'] | Automating logical file design | 306,617 |
Query Answering in the Semantic Social Web: An Argumentation-Based Approach | ['Maria Vanina Martinez', 'Sebastián Gottifredi'] | Query Answering in the Semantic Social Web: An Argumentation-Based Approach | 748,191 |
Recent advances in speech coding have made wideband coding feasible at bit-rates sufficient for mobile communication. Here we propose a novel hybrid harmonic Code Excited Linear Prediction (CELP) scheme for highband coding of a band-split scalable wideband codec, where the low-band (0-4 kHz) is critically subsampled and coded selectively using existing narrowband codecs such as 5.4 kbps and 6.3 kbps G.723.1, 8 kbps G.729, and 11.8 kbps G.729E. The high-band signal is divided into stationary mode (SM) and non-stationary mode (NSM) components based on its unique characteristics. In the SM portion, the high-band signal is compressed using a multi-stage coding that combines the sinusoidal model and CELP. The first stage coding applies the damping factor matching pursuit (MP) algorithm without either the Overlap-Add (OLA) or smoothly interpolative synthesis schemes, and the second stage utilizes CELP with the circular codebook. In the NSM portion, the high-band signals are coded by CELP with both pulse and circular codebooks by applying the complexity-reduced algorithm. To ensure scalability in highband coding, two enhancement layers are used to increase the number of pulses and control the number of quantized sinusoidal parameters. This paper describes the new algorithm and discusses novel techniques for efficient bandwidth-scalable wideband speech coding and subjective quality performance. For efficient bit allocation and enhanced performance, the pitch of the high-band codec is estimated using the quantized pitch parameter in the low-band codec. An informal listening test rated the subjective speech quality as comparable to that obtainable with G.722.2 as the fullband wideband codec and G.722.2 as the highband codec, the recently standardized band-split wideband codec. | ['Gyuhyeok Jeong', 'Sang-Wook Sohn', 'Jong-Ha Lim', 'Bonam Kim', 'In-Sung Lee'] | Embedded bandwidth scalable wideband codec using hybrid matching pursuit harmonic/CELP scheme | 203,597 |
Automatic Switched Optical Network (ASON), standardized by ITU-T, is emerging as a major technology choice for inter-operational and intervendor optical backbone transportation. With the ability to flexibly and automatically establish and maintain connection, ASON promises an IP-traffic tolerant, end-to-end QoS guaranteed transportation mechanism within which both bandwidth-consuming stream media traffic and traditional web traffic can be relayed smoothly. In this paper, we look into the traffic aggregation effect of TCP traffic in an optical switched network environment modeled under the 3TNet network architecture. We provide a simplified simulation model, which shows how large-scale TCP traffic aggregation can be leveraged by ASON switching ability and meanwhile proves the superiority of this architecture. | ['Hua Wang', 'Xin Wang', 'Xiangyang Xue'] | Simulating Large-Scale Traffic Aggregation in an Automatic Switched Optical Network | 121,642 |
In this paper, we design and fabricate a voltage booster circuit, aiming at the realization of a monolithic system LSI that utilizes an on-chip solar cell to eliminate the need for an external voltage supply. Since the voltage that can be obtained from an on-chip solar cell is about −0.5V, we need an efficient voltage booster to obtain a voltage of around +4.0V if we need to program a non-volatile memory on the same chip. First, it is confirmed by circuit simulation that a 10-stage cross-coupled charge pump circuit using 0.18µm process technology can generate +4.0V from a −0.5V input. A ring oscillator using current-starved inverters with a 2R-1T bias circuit is developed, whose frequency is successfully compensated to within ±10% for supply voltages ranging between 0.45V and 0.60V. From measurements of a test chip of the proposed circuits fabricated in Rohm 0.18µm CMOS process technology, it is confirmed that the voltage booster TEG with a 2-stage cross-coupled charge pump circuit generates a voltage beyond +0.8V from a −0.5V input with about 40% efficiency, even in a bright environment of 6.1 klux illumination. In addition, it is demonstrated that, by implementing an on-chip solar cell, the voltage booster, and a standard-cell-based digital circuit in a single chip, an external piezoelectric diaphragm makes a sound using only the power from the on-chip solar cell. | ['Tomoya Kimura', 'Hiroyuki Ochi'] | A −0.5V-input voltage booster circuit for on-chip solar cells in 0.18µm CMOS technology | 717,790
Implicit Discourse Relation Recognition with Context-aware Character-enhanced Embeddings. | ['Lianhui Qin', 'Zhisong Zhang', 'Hai Zhao'] | Implicit Discourse Relation Recognition with Context-aware Character-enhanced Embeddings. | 992,890 |
Predicting Subcellular Localization of Multiple Sites Proteins | ['Dong Wang', 'Wenzheng Bao', 'Yuehui Chen', 'Wenxing He', 'Luyao Wang', 'Yuling Fan'] | Predicting Subcellular Localization of Multiple Sites Proteins | 840,245 |
The main goal of this paper is to study the delay evolution for future technology nodes (32 nm and beyond) using electrical circuit predictive simulations. With this aim, two SPICE predictive models, directly based on ITRS data, are developed for devices and for interconnects, respectively. The generation of the predictive SPICE models is presented and validated against 45 nm silicon data. The predictive delay evaluation is performed with simulations of buffered interconnect lines. The simulation results show that the critical interconnect length should be on the order of 10 µm for the 2020 generation. Moreover, in forthcoming technologies, driver resizing and systematic buffer insertion will no longer be sufficient to systematically limit wire delay increase. | ['Manuel Sellier', 'Jean Michel Portal', 'B. Borot', 'Steve Colquhoun', 'Richard Ferrant', 'F. Boeuf', 'A. Farcy'] | Predictive Delay Evaluation on Emerging CMOS Technologies: A Simulation Framework | 207,101
In traditional nonlinear programming, the technique of converting a problem with inequality constraints into a problem containing only equality constraints, by the addition of squared slack variables, is well known. Unfortunately, it is considered to be an avoided technique in the optimization community, since the advantages usually do not compensate for the disadvantages, like the increase in the dimension of the problem, the numerical instabilities, and the singularities. However, in the context of nonlinear second-order cone programming, the situation changes, because the reformulated problem with squared slack variables no longer has conic constraints. This fact allows us to solve the problem by using a general-purpose nonlinear programming solver. The objective of this work is to establish the relation between Karush–Kuhn–Tucker points of the original and the reformulated problems by means of the second-order sufficient conditions and regularity conditions. We also present some preliminary numerical experiments. | ['Ellen H. Fukuda', 'Masao Fukushima'] | The Use of Squared Slack Variables in Nonlinear Second-Order Cone Programming | 658,215
CrowdTravel: Leveraging Heterogeneous Crowdsourced Data for Scenic Spot Profiling and Recommendation | ['Tong Guo', 'Bin Guo', 'J. Zhang', 'Zhiwen Yu', 'Xingshe Zhou'] | CrowdTravel: Leveraging Heterogeneous Crowdsourced Data for Scenic Spot Profiling and Recommendation | 939,720 |
STiki is an anti-vandalism tool for Wikipedia. Unlike similar tools, STiki does not rely on natural language processing (NLP) over the article or diff text to locate vandalism. Instead, STiki leverages spatio-temporal properties of revision metadata. The feasibility of utilizing such properties was demonstrated in our prior work, which found they perform comparably to NLP-efforts while being more efficient, robust to evasion, and language independent. STiki is a real-time, on-Wikipedia implementation based on these properties. It consists of, (1) a server-side processing engine that examines revisions, scoring the likelihood each is vandalism, and, (2) a client-side GUI that presents likely vandalism to end-users for definitive classification (and if necessary, reversion on Wikipedia). Our demonstration will provide an introduction to spatio-temporal properties, demonstrate the STiki software, and discuss alternative research uses for the open-source code. | ['Andrew G. West', 'Sampath Kannan', 'Insup Lee'] | STiki: an anti-vandalism tool for Wikipedia using spatio-temporal analysis of revision metadata | 7,285 |
NoC performance parameters estimation at design stage | ['Nadezhda Matveeva', 'Elena Suvorova'] | NoC performance parameters estimation at design stage | 409,668 |
The grid is considered a crucial technology for the future knowledge-based economy and science. The Wisdom Grid project (a joint research effort of the University of Vienna and the Vienna University of Technology) aims to be the first research effort to cover all aspects of the knowledge life cycle on the grid - from discovery in grid data repositories, to processing, sharing, and finally reusing knowledge as input for a new discovery. This paper first outlines the architecture of the Wisdom Grid infrastructure and then focuses on the kernel architecture component called GridMiner, which realizes knowledge discovery based on data mining and on-line analytical processing (OLAP) over grid repositories. A running GridMiner prototype is already available to the scientific community as an open service system. | ['Peter Brezany', 'Ivan Janciak', 'A Min Tjoa'] | GridMiner: A Fundamental Infrastructure for Building Intelligent Grid Systems | 506,547
This paper deals with the problem of modeling Internet images and associated texts for cross-modal retrieval such as text-to-image retrieval and image-to-text retrieval. We start with deep canonical correlation analysis (DCCA), a deep approach for mapping text and image pairs into a common latent space. We first propose a novel progressive framework and embed DCCA in it. In our progressive framework, a linear projection loss layer is inserted before the nonlinear hidden layers of a deep network. The training of linear projection and the training of nonlinear layers are combined to ensure that the linear projection is well matched with the nonlinear processing stages and good representations of the input raw data are learned at the output of the network. Then we introduce a hypergraph semantic embedding (HSE) method, which extracts latent semantics from texts, into DCCA to regularize the latent space learned by image view and text view. In addition, a search-based similarity measure is proposed to score relevance of image-text pairs. Based on the above ideas, we propose a model, called DCCA-PHS, for cross-modal retrieval. Experiments on three publicly available data sets show that DCCA-PHS is effective and efficient, and achieves state-of-the-art performance for unsupervised scenario. | ['Jie Shao', 'Leiquan Wang', 'Zhicheng Zhao', 'Fei Su', 'Anni Cai'] | Deep canonical correlation analysis with progressive and hypergraph learning for cross-modal retrieval | 837,953 |
Many applications of high societal relevance -- e.g., transportation and traffic management, disaster remediation, location-aware social networking, (tourist) recommendation systems, military logistics (to name but a few) -- rely on some kind of Location Based Services (LBS). The crucial components to support such services, in turn, rely on efficient techniques for managing the data capturing the information pertaining to the whereabouts in time of the moving entities -- storing, retrieving and querying such data. Traditionally, such topics were subjects of the fields called Spatial/Spatio-Temporal Databases, Moving Objects Databases (MOD) and Geographic Information Systems (GIS) [2, 5, 11]. To give an intuitive idea about the magnitude -- according to a McKinsey survey from 2011 [9], the volume of location-in-time data exceeds the order of petabytes per year just from smartphones -- and this is only the "pure" GPS (Global Positioning System) data. Including the cell-tower location data would boost the size by two orders of magnitude -- however, this is not even close to the full magnitude of the variety of location-related data contained in numerous tweets and other social-network-based communications (which is of interest for applications such as behavioral marketing). | ['Goce Trajcevski'] | Fusion of uncertain location data from heterogeneous sources | 720,651
Tetrahedral robots come from a family of crawling and tumbling robots. They operate by changing their shape, which can be a more functional way to move than a wheeled rover because they can crawl or tumble over rough terrain and obstacles. The current tetrahedral robot (an 8-TET), developed by our team, moves by tumbling or rolling. This is a problem because it is constantly changing its orientation as it rolls, and each part is upside down at some point. A new robot was to be designed that does not change its orientation. The idea of the Tetrahedral Worm (TET Worm) came about as a possible alternative to the current robot. The TET Worm moves by crawling rather than tumbling, and by doing so it holds a constant orientation. The new robot could be designed, tested, and controlled using SimMechanics, a package of MATLAB. After the robot is designed and a control system is constructed in the program, several gaits (the ways that the robot will move) must be designed for the TET Worm. The gaits can then be compared to one another to test which is best for controllability, force on the struts, and speed. | ['Korey Cook', 'Miguel Abrahantes'] | Gait design for a Tetrahedral Worm | 873,858
As the practical use of answer set programming (ASP) has grown with the development of efficient solvers, we expect a growing interest in extensions of ASP as their semantics stabilize and solvers supporting them mature. Epistemic Specifications, which adds modal operators K and M to the language of ASP, is one such extension. We call a program in this language an epistemic logic program (ELP). Solvers have thus far been practical for only the simplest ELPs due to exponential growth of the search space. We describe a solver that is able to solve harder problems better (e.g., without exponentially-growing memory needs w.r.t. K and M occurrences) and faster than any other known ELP solver. | ['Patrick Thor Kahl', 'Anthony P. Leclerc', 'Tran Cao Son'] | A Parallel Memory-efficient Epistemic Logic Program Solver: Harder, Better, Faster | 884,096 |
I/O-efficient algorithms take advantage of the large capacities of external memories to verify huge state spaces even on a single machine with low-capacity RAM. On the other hand, parallel algorithms are used to accelerate the computation, and their use may significantly increase the amount of available RAM if clusters of computers are involved. Since both a large amount of memory and high-speed computation are desired in the verification of large-scale industrial systems, extending I/O-efficient model checking to work over a network of computers can bring substantial benefits. In this paper we propose an explicit-state cluster-based I/O-efficient LTL model checking algorithm that is capable of verifying systems with approximately $10^{10}$ states within hours. | ['Jiri Barnat', 'Luboš Brim', 'Pavel Šimeček'] | Cluster-Based I/O-Efficient LTL Model Checking | 327,040
Many optimization algorithms have been developed by drawing inspiration from swarm intelligence (SI). These SI-based algorithms can have some advantages over traditional algorithms. In this paper, we carry out a critical analysis of these SI-based algorithms by analyzing their ways to mimic evolutionary operators. We also analyze the ways of achieving exploration and exploitation in algorithms by using mutation, crossover and selection. In addition, we also look at algorithms using dynamic systems, self-organization and Markov chain framework. Finally, we provide some discussions and topics for further research. | ['Xin-She Yang'] | Swarm Intelligence Based Algorithms: A Critical Analysis | 139,235 |
On-line pen input benefits greatly from mode detection when the user is in a free writing situation, where they are allowed to write, to draw, and to generate gestures. Mode detection is performed before recognition to restrict the classes that a classifier has to consider, thereby increasing overall recognition performance. In this paper we present a hybrid system which is able to achieve a mode detection performance of 95.6% on seven classes: handwriting, lines, arrows, ellipses, rectangles, triangles, and diamonds. The system consists of three kNN classifiers, which use global and structural features of the pen trajectory, and a fitting algorithm for verifying the different geometrical objects. Results are presented on a significant amount of data, acquired in different contexts such as scribble matching and design applications. | ['D.J.M. Willems', 'Stéphane Rossignol', 'L.G. Vuurpijl'] | Mode detection in on-line pen drawing and handwriting recognition | 404,260
This paper presents an alternative computationally efficient approach to the thermal design of compact wound components. The method is based on the use of anisotropic lumped regions within 3-D thermal finite-element analyses. The lumped regions replicate the multimaterial composites used in the construction of wound components. Material data for these lumped regions are obtained experimentally, accounting for the thermal anisotropy. Input loss data for the analysis were derived by combining electromagnetic finite-element iron loss calculations with experimental ac copper loss correlations. The technique is applied to the design of a high-energy-density filter inductor. Thermal measurements from prototype inductors are compared with the theoretical predictions, showing good agreement. | ['Rafal Wrobel', 'Phil Mellor'] | Thermal Design of High-Energy-Density Wound Components | 125,252
One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, i.e., defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization is defined as the objective function as a criterion for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance is known to have many advantages in defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the assessment of boundary uncertainty, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function make the algorithm output sensitive to the variations of grade, domain boundaries, and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms proved that, for the presented case, particle swarm optimization is more appropriate than simulated annealing. 
Definition of a new objective function for locating additional boreholes. Application of metaheuristic methods for objective function minimization. Comparison of results from the proposed objective function to conventional ones. Validation of the algorithm in the Esfordi phosphate mine. | ['Saeed Soltani-Mohammadi', 'Mohammad Ahmadi Safa', 'Hadi Mokhtari'] | Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization | 843,620
Authentication and key agreement protocols play an important role in wireless sensor networks. Recently, Xue et al. suggested a key agreement protocol for WSNs, which, as we show in this paper, has some security flaws. We also introduce an enhanced authentication and key agreement protocol suitable for WSNs. | ['Majid Bayat', 'Mohammad Reza Aref'] | A Secure and efficient elliptic curve based authentication and key agreement protocol suitable for WSN. | 315,531
Large-scale cloud networks are constantly driven by the need for improved performance in communication between datacenters. Indeed, such back-office communication makes up a large fraction of traffic in many cloud environments. This communication often occurs frequently, carrying control messages, coordination and load balancing information, and customer data. However, ensuring such inter-datacenter traffic is delivered efficiently requires optimizing connections over large physical distances, which is non-trivial. Worse still, many large cloud networks are subject to complex configuration and administrative restrictions, limiting the types of solutions that can be implemented. In this paper, we propose improving the efficiency of datacenter to datacenter communication by learning the congestion level of links in between. We then use this knowledge to inform new connections made between the relevant datacenters, allowing us to eliminate the overhead associated with traditional slow-start processes in new connections. We further present Riptide, a tool which implements this approach. We present the design and implementation details of Riptide, showing that it can be easily executed on modern Linux servers deployed in the real world. We further demonstrate that it successfully reduces total transfer times in a production global-scale content delivery network (CDN), providing up to a 30% decrease in tail latency. We further show that Riptide is simple to deploy and easy to maintain within a complex existing network. | ['Marcel Flores', 'Amir R. Khakpour', 'Harkeerat Singh Bedi'] | Riptide: Jump-Starting Back-Office Connections in Cloud Systems | 871,162 |
This paper proposes an accurate and dense wide-baseline stereo matching method using Scaled Window Phase-Only Correlation (SW-POC). The wide-baseline setting of the stereo camera can improve the accuracy of the 3D reconstruction compared with the short-baseline setting. However, it is difficult to find accurate and dense correspondence from wide-baseline stereo images due to their large perspective distortion. Addressing this problem, we employ SW-POC, which is a correspondence matching method using 1D POC with the concept of Scaled Window Matching (SWM). The use of SW-POC makes it possible to find accurate and dense correspondence from a wide-baseline stereo image pair with low computational cost. We also apply the proposed method to 3D reconstruction using a moving and uncalibrated consumer digital camera. | ['Shuji Sakai', 'Koichi Ito', 'Takafumi Aoki', 'Hiroki Unten'] | Accurate and dense wide-baseline stereo matching using SW-POC | 922,728
Predicting Seat-Off and Detecting Start-of-Assistance Events for Assisting Sit-to-Stand With an Exoskeleton | ['Kevin Tanghe', 'Anna Harutyunyan', 'Erwin Aertbeliën', 'Friedl De Groote', 'Joris De Schutter', 'Peter Vrancx', 'Ann Nowé'] | Predicting Seat-Off and Detecting Start-of-Assistance Events for Assisting Sit-to-Stand With an Exoskeleton | 665,723 |
Screen content coding (SCC) has evolved into an extension of High Efficiency Video Coding (HEVC). Low-latency, real-time transport between devices in the form of screen content video is becoming popular in many applications. However, the complexity of the encoder is still very high for intra prediction in HEVC-based SCC. This paper proposes a fast intra prediction method based on content property analysis for HEVC-based SCC. First, coding units (CUs) are classified into natural content CUs (NCCUs) and screen content CUs (SCCUs), based on the statistical characteristics of the content. For NCCUs, the newly adopted prediction modes, including the intra block copy mode and the palette mode, are skipped if the DC or PLANAR mode is the best mode after testing the traditional intra prediction rough modes. In addition, the quadtree partition process is terminated early, since a homogeneous and smooth block usually chooses a large CU size. For SCCUs, a rank-based decision strategy is introduced to terminate the splitting process of the current CU. For all CUs, the bits per pixel of the current CU are used to make a CU size decision. Meanwhile, the depth information of neighboring CUs and the co-located CU is utilized to further improve the performance. Experimental results show that the proposed algorithm can save 44.92% encoding time on average with negligible loss of video quality. | ['Jianjun Lei', 'Dongyang Li', 'Zhaoqing Pan', 'Zhenyan Sun', 'Sam Kwong', 'Chunping Hou'] | Fast Intra Prediction Based on Content Property Analysis for Low Complexity HEVC-Based Screen Content Coding | 943,046
With the widespread development of biometric systems, concerns about security and privacy are increasing. An active area of research is template protection technology, which aims to protect registered biometric data. We focus on a homomorphic encryption approach, which enables building a "cryptographically-secure" system. In DPM 2013, Yasuda et al. proposed an efficient template protection system, using the homomorphic encryption scheme proposed by Brakerski and Vaikuntanathan. In this work, we improve and fortify their system to withstand impersonation attacks such as replay and spoofing attacks. We introduce a challenge-response authentication mechanism in their system and design a practical distributed architecture where computation and authentication are segregated. Our comprehensive system would be useful to build a large-scale and secure biometric system such as secure remote authentication over public networks. | ['Avradip Mandal', 'Arnab Roy', 'Masaya Yasuda'] | Comprehensive and Improved Secure Biometric System Using Homomorphic Encryption | 723,324 |
A balanced V-shape is a polygonal region in the plane contained in the union of two crossing equal-width strips. It is delimited by two pairs of parallel rays that emanate from two points $x, y$, are contained in the strip boundaries, and are mirror-symmetric with respect to the line $xy$. The width of a balanced V-shape is the width of the strips. We first present an $O(n^2 \log n)$ time algorithm to compute, given a set of $n$ points $P$, a minimum-width balanced V-shape covering $P$. We then describe a PTAS for computing a $(1+\varepsilon)$-approximation of this V-shape in time $O((n/\varepsilon)\log n + (n/\varepsilon^{3/2})\log^2(1/\varepsilon))$. | ['Boris Aronov', 'Muriel Dulieu'] | How to cover a point set with a V-shape of minimum width | 970,488
For a simple bipartite graph and an integer t ≥ 2, we consider the problem of finding a minimum-weight $K_{t,t}$-free t-factor, which is a t-factor containing no complete bipartite graph $K_{t,t}$ as a subgraph. When t = 2, this problem amounts to the square-free 2-factor problem in a bipartite graph. For the unweighted square-free 2-factor problem, a combinatorial algorithm is given by Hartvigsen, and the weighted version of the problem is NP-hard. For general t, Pap designed a combinatorial algorithm for the unweighted version, and Makai gave a dual integral description of $K_{t,t}$-free t-matchings for a certain case where the weight vector is vertex-induced on any subgraph isomorphic to $K_{t,t}$. For this class of weight vectors, we propose a strongly polynomial algorithm to find a minimum-weight $K_{t,t}$-free t-factor. The algorithm adapts the unweighted algorithms of Hartvigsen and Pap and a primal-dual approach to the minimum-cost flow problem. The algorithm is fully combinatorial and thus provides a dual integrality theorem, which is tantamount to Makai's. | ['Kenjiro Takazawa'] | A Weighted $K_{t,t}$-Free t-Factor Algorithm for Bipartite Graphs | 416,481
Elders & Families Rely On Social Networks For Aging-Related Information: Implications For Informaticians. | ['Bradley H. Crotty', 'Janice Walker', "Jacqueline O'Brien", 'Lewis A. Lipsitz', 'Meghan Dierks', 'Charles Safran'] | Elders & Families Rely On Social Networks For Aging-Related Information: Implications For Informaticians. | 986,700 |
A coding technique for improving the reliability of digital transmission over noisy partial-response channels with characteristics $(1 \pm D^m)$, $m = 1, 2$, where the channel input symbols are constrained to be $\pm 1$, is presented. In particular, the application of a traditional modulation code as an inner code of a concatenated coding scheme, in which the outer code is designed for maximum (free) Hamming distance, is considered. A performance comparison is made between the concatenated scheme and a coding technique presented by Wolf and G. Ungerboeck (see ibid., vol. COM-34, p. 765-773, Aug. 1986) for the dicode channel with transfer function $(1 - D)$. | ['Kees A. Schouhamer Immink'] | Coding techniques for partial-response channels | 511,871
Expressing speaker's intentions through sentence-final intonations for Japanese conversational speech synthesis | ['Kazuhiko Iwata', 'Tetsunori Kobayashi'] | Expressing speaker's intentions through sentence-final intonations for Japanese conversational speech synthesis | 794,144 |
Online Speaker Adaptation with Pre-Computed FMLLR Transformations. | ['Volker Fischer', 'Siegfried Kunzmann'] | Online Speaker Adaptation with Pre-Computed FMLLR Transformations. | 759,982 |
With the explosion in the amount of semi-structured data users access and store, there is a need for complex search tools to retrieve often very heterogeneous data in a simple and efficient way. Existing tools usually index text content, allowing for some IR-style ranking on the textual part of the query, but only consider structure (e.g., file directory) and metadata (e.g., date, file type) as filtering conditions. We propose a novel multidimensional querying approach to semi-structured data searches in personal information systems by allowing users to provide fuzzy structure and metadata conditions in addition to traditional keyword conditions. The provided query interface is more comprehensive than content-only searches as it considers three query dimensions (content, structure, metadata) in the search. We have implemented our proposed approach in the Wayfinder file system. In this demo, we will use this implementation to both present an overview of the unified scoring framework underlying the fuzzy multi-dimensional querying approach and demonstrate its potential in improving search results. | ['Christopher Peery', 'Wei Wang', 'Amélie Marian', 'Thu D. Nguyen'] | Fuzzy Multi-Dimensional Search in the Wayfinder File System | 405,392 |
Network operators expect a coordinated handling of parameter changes submitted to the operating network's configuration management entity by closed-loop self-organizing network (SON) techniques. For this reason, a major research goal for emerging SON technologies is to achieve coordinated results out of a plethora of independently or even concurrently running use-case implementations. In this paper, we extend current frameworks to compute desirable user associations by an interference model that explicitly takes base-station loads into account. With the aid of this model, we are able to make considerably more accurate estimations and predictions of cell loads compared with established methods. Based on the ability to predict cell loads, we derive algorithms that jointly adapt user-association policies and antenna-tilt settings for multiple cells. We demonstrate by detailed numerical evaluations of realistic networks that these algorithms can be applied to capacity and coverage optimization, mobility load balancing, and cell outage compensation use cases. As a result, rather than requiring any coordination beforehand or afterwards, the joint technique inherently comprises all three use cases, making their coordination redundant. For all scenarios studied, the joint optimization of tilts and user association improves quality of service in terms of the fifth percentile of user throughput compared with state-of-the-art techniques. The proposed models and techniques can be straightforwardly extended to other physical and soft parameters. | ['Albrecht J. Fehske', 'Henrik Klessig', 'Jens Voigt', 'Gerhard Fettweis'] | Concurrent Load-Aware Adjustment of User Association and Antenna Tilts in Self-Organizing Radio Networks | 98,336
Adaptive information filtering is an emerging filtering technology that can learn the user's interest/topic automatically during the filtering process and adjust its output accordingly. It provides better performance and broader applicability than traditional filtering technology, and is therefore useful on the Internet for managing sensitive information and presenting personalized content to Web users. In this paper we propose a new framework for online adaptive filtering, in which two different scoring/weighting and feedback mechanisms are implemented. Based on them, an incremental profile training method is introduced for locating user interest accurately, and a profile self-learning algorithm is also developed for adjusting user focus during test filtering. Experiments on Reuters online news show that our system performs better than existing systems in profile training and overall filtering results. | ['Liang Ma', 'Qunxiu Chen', 'Lianhong Cai'] | An Improved Framework for Online Adaptive Information Filtering | 405,441
An energy harvesting communication system enables energy to be dynamically harvested from natural resources and stored in capacitated batteries to be used for future data transmission. In such a system, the amount of future energy to harvest is uncertain and the battery capacity is limited. As a consequence, battery overflow and energy dropping may happen, causing energy underutilization. To maximize the data throughput by using the energy efficiently, a rate-adaptive transmission schedule must address the trade-off between a high-rate transmission, which avoids energy overflow, and a low-rate transmission, which avoids energy shortage. In this paper, we study an online throughput maximization problem without knowing future information. To the best of our knowledge, this is the first work studying the fully-online transmission rate scheduling problem for battery-capacitated energy harvesting communication systems. We consider the problem under two models of the communication channel: a static channel model that assumes the channel status is stable, and a fading channel model that assumes the channel status varies. For the former, we develop an online algorithm that approximates the offline optimal solution within a constant factor for all possible inputs. For the latter, where the channel gains vary in the range $[h_{min},h_{max}]$, we propose an online algorithm with a proven $\Theta (\log (\frac{h_{max}}{h_{min}}))$-competitive ratio. Our simulation results further validate the efficiency of the proposed online algorithms. | ['Weiwei Wu', 'Jianping Wang', 'Xiumin Wang', 'Feng Shan', 'Junzhou Luo'] | Online Throughput Maximization for Energy Harvesting Communication Systems with Battery Overflow | 704,036