Columns: abstract (string, 8 to 10.1k chars) | authors (string, 9 to 1.96k chars) | title (string, 6 to 367 chars) | __index_level_0__ (int64, 13 to 1,000k)
The Difficulty of Path Traversal in Information Networks
['Frank W. Takes', 'Walter A. Kosters']
The Difficulty of Path Traversal in Information Networks
792,954
This paper deals with a resource sharing problem in wireless sensor networks (WSNs). The problem calls for identifying k data collection trees that can be managed by independent users to run applications requiring area coverage. The formalized problem, called k-balanced area coverage slices (k-BACS), calls for identifying an ensemble of k trees that share the sink node only (and no other node) in a given WSN. To avoid nodal congestion, each tree is required to satisfy constraints on the maximum degree of its nodes. The objective is to maximize the minimum total area covered by any tree in the ensemble. Existing results in the literature show that the k-BACS problem is NP-complete even if k = 2. Thus, effective heuristic algorithms are needed. In this paper, we present and compare the performance of two efficient algorithms for solving the problem. Our results show that the devised algorithms produce well-balanced trees. In addition, the combined use of the computed partitions and the PEAS energy conservation protocol can produce a competitive lifetime for networks where a prescribed level of area coverage is required for successful operation.
['Mohamed H. Shazly', 'Ehab S. Elmallah', 'Janelle J. Harms']
Balancing area coverage in partitioned wireless sensor networks
4,133
We analyze and solve a complex optimal control problem in microeconomics which has been investigated earlier in the literature. The complexity of the control problem originates from four control variables appearing linearly in the dynamics and several state inequality constraints. Thus, the control problem offers a considerable challenge to the numerical analyst. We implement a hybrid optimization approach which combines two direct optimization methods. The first step consists in solving the discretized control problem by nonlinear programming methods. The second step is a refinement step where, in addition to the discretized control and state variables, the junction times between bang–bang, singular and boundary subarcs are optimized. The computed solutions are shown to satisfy precisely the necessary optimality conditions of the Maximum Principle where the state constraints are directly adjoined to the Hamiltonian. Despite the complexity of the control structure, we are able to verify sufficient optimality conditions which are based on the concavity of the maximized Hamiltonian.
['Helmut Maurer', 'Hans Josef Pesch']
Direct optimization methods for solving a complex state-constrained optimal control problem in microeconomics
514,039
Firms must strike a delicate balance between the exploitation of well-known business models and the exploration of risky, untested approaches. In this paper, we study financial contracting between an investor and a firm with private information about its returns from exploration and exploitation. The investor-optimal mechanism offers contracts with different tolerance for failures to screen returns from exploitation, and with different exposure to the project's revenues to screen returns from exploration. We derive necessary and sufficient conditions for private information about returns from exploration to have zero value to the firm. When these conditions fail, private information about exploration may even decrease the firm's payoff.
['Rv Gomes', 'Daniel Gottlieb', 'Lucas Maestri']
Experimentation and project selection: Screening and learning
647,706
The ability to reuse or transfer knowledge from one task to another in lifelong learning problems, such as Minecraft, is one of the major challenges faced in AI. Reusing knowledge across tasks is crucial to solving tasks efficiently with lower sample complexity. We provide a Reinforcement Learning agent with the ability to transfer knowledge by learning reusable skills, a type of temporally extended action (also known as Options (Sutton et al. 1999)). The agent learns reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks (DSNs), are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture. The H-DRLN, a hierarchical extension of Deep Q-Networks, learns to efficiently solve tasks by reusing knowledge from previously learned DSNs. The DSNs are incorporated into the H-DRLN using two techniques: (1) a DSN array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the H-DRLN to scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity (by taking advantage of temporally extended actions) compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft. We also show the potential to transfer knowledge between related Minecraft tasks without any additional learning.
['Chen Tessler', 'Shahar Givony', 'Tom Zahavy', 'Daniel J. Mankowitz', 'Shie Mannor']
A Deep Hierarchical Approach to Lifelong Learning in Minecraft
720,102
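The skill-distillation idea in the abstract above can be illustrated with a policy-distillation-style loss; the sketch below is a minimal PyTorch version assuming a temperature-softened KL objective (the temperature value, batch shapes, and the exact KL direction are assumptions, not details taken from the paper).

    import torch
    import torch.nn.functional as F

    def skill_distill_loss(student_logits, teacher_logits, T=1.0):
        # Soften both action distributions with temperature T, then move the
        # student toward the pre-trained skill teacher via KL divergence.
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

    # Hypothetical usage: a batch of 32 states, 6 discrete actions.
    student = torch.randn(32, 6, requires_grad=True)
    teacher = torch.randn(32, 6)
    loss = skill_distill_loss(student, teacher, T=2.0)
    loss.backward()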
Multispectral image acquisition considerably improves color accuracy in comparison to RGB technology. A common multispectral camera design concept features a filter-wheel consisting of six or more optical bandpass filters. By shifting the filters sequentially into the optical path, the electromagnetic spectrum is acquired through the channels, thus making an approximate reconstruction of the spectrum feasible. However, since the optical filters exhibit different thicknesses, refraction indices and may not be aligned in a perfectly coplanar manner, geometric distortions occur in each spectral channel: The reconstructed RGB images thus show rainbow-like color fringes. To compensate for these, we analyze the optical path and derive a mathematical model of the distortions. Based on this model we present two different algorithms for compensation and show that the color fringes vanish completely after application of our algorithms. We also evaluate our compensation algorithms in terms of accuracy and execution time.
['Johannes Brauers', 'Nils Schulte', 'Til Aach']
Multispectral Filter-Wheel Cameras: Geometric Distortion Model and Compensation Algorithms
148,684
A state-of-the-art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, mel-frequency cepstral coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new set of features using a complementary filter bank structure which improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature improves the performance baseline of the MFCC-based system. The proposition is validated by experiments conducted on two different kinds of databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with two different classifier paradigms, namely Gaussian Mixture Models (GMM) and the Polynomial Classifier (PC), and for various model orders.
['Sandipan Chakroborty', 'Anindya Lal Roy', 'Sourav R. Majumdar', 'Goutam Saha']
Capturing Complementary Information via Reversed Filter Bank and Parallel Implementation with MFCC for Improved Text-Independent Speaker Identification
386,030
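One simple way to realize a complementary filter bank like the one described in the abstract above is to flip the mel-spaced band edges so the filters become dense at high frequencies; the numpy sketch below follows this construction under the assumption that the paper's reversed filter bank is mel-like but mirrored, which may differ from the authors' exact design.

    import numpy as np

    def mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def inv_mel(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def triangular_filterbank(edges_hz, n_fft, sr):
        # One triangular filter per interior band edge, on FFT-bin centres.
        bins = np.floor((n_fft + 1) * edges_hz / sr).astype(int)
        fb = np.zeros((len(edges_hz) - 2, n_fft // 2 + 1))
        for i in range(1, len(edges_hz) - 1):
            l, c, r = bins[i - 1], bins[i], bins[i + 1]
            fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
            fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
        return fb

    sr, n_fft, n_filt = 16000, 512, 20
    edges = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filt + 2))  # dense at low f
    rev_edges = sr / 2 - edges[::-1]                               # dense at high f
    mel_fb = triangular_filterbank(edges, n_fft, sr)
    rev_fb = triangular_filterbank(rev_edges, n_fft, sr)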
A frequent complaint about neural net models is that they fail to explain their results in any useful way. The problem is not a lack of information, but an abundance of information that is difficult to interpret. When trained, neural nets will provide a predicted output for a posited input, and they can provide additional information in the form of interelement connection strengths. This latter information is of little use to analysts and managers who wish to interpret the results they have been given. We develop a measure of the relative importance of the various input elements and hidden layer elements, and we use this to interpret the contribution of these components to the outputs of the neural net.
['Brenda Mak', 'Robert W. Blanning']
An empirical measure of element contribution in neural networks
692,836
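For a single-hidden-layer network, a weight-based importance measure in the spirit of the abstract above can be sketched as follows; this is a Garson-style heuristic offered as an illustration, not the measure the authors actually develop.

    import numpy as np

    def input_importance(W_in, W_out):
        # W_in:  (n_inputs, n_hidden) input-to-hidden weights
        # W_out: (n_hidden,)          hidden-to-output weights
        # Share of each input within every hidden unit, weighted by how
        # strongly that hidden unit drives the output.
        share = np.abs(W_in) / np.abs(W_in).sum(axis=0)
        importance = share @ np.abs(W_out)
        return importance / importance.sum()

    rng = np.random.default_rng(1)
    W_in, W_out = rng.normal(size=(5, 8)), rng.normal(size=8)
    print(input_importance(W_in, W_out))  # one share per input, sums to 1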
Digital rights management (DRM) is an important research topic in knowledge rights protection, and multi-agent technology is a primary technology for DRM systems. Digital watermarking has been employed widely in copyright protection, and the integration of multi-agent technology and digital watermarking has been intensively investigated in recent years. An intelligent DRM system based on multi-agent technology is presented in this paper. The agent roles are classified into five types. The proposed solution can reduce the load of the server and the network because the watermark is detected on the remote hosts. The system model is described and analyzed.
['Liu Quan', 'Liu Hong']
An Intelligent Digital Right Management System Based on Multi-agent
377,637
It is well known that the standard (linear) knapsack problem can be solved exactly by dynamic programming in 𝒪(nc) time, where n is the number of items and c is the capacity of the knapsack. The quadratic knapsack problem, on the other hand, is NP-hard in the strong sense, which makes it unlikely that it can be solved in pseudo-polynomial time. We show, however, that the dynamic programming approach to the linear knapsack problem can be modified to yield a highly effective constructive heuristic for the quadratic version. In our experiments, the lower bounds obtained by our heuristic were consistently within a fraction of a percent of optimal. Moreover, the addition of a simple local search step enabled us to obtain the optimal solution of all instances considered.
['Franklin Djeumou Fomeni', 'Adam N. Letchford']
A Dynamic Programming Heuristic for the Quadratic Knapsack Problem
60,504
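For reference, the O(nc) dynamic program for the linear knapsack problem mentioned in the abstract above looks like this in Python; the quadratic-knapsack heuristic the paper builds on top of it is not reproduced here.

    def knapsack(values, weights, capacity):
        # dp[w] = best total value achievable with total weight <= w.
        dp = [0] * (capacity + 1)
        for v, wt in zip(values, weights):
            # Iterate weights downwards so each item is used at most once.
            for w in range(capacity, wt - 1, -1):
                dp[w] = max(dp[w], dp[w - wt] + v)
        return dp[capacity]

    # Items (value, weight): (6, 2), (10, 3), (12, 4); capacity 5 -> 16.
    print(knapsack([6, 10, 12], [2, 3, 4], 5))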
Software visualisation is the process of modelling software systems for comprehension. The comprehension of software systems both during and after development is a crucial component of the software process. The complex interactions inherent in the object-oriented paradigm make visualisation a particularly appropriate comprehension technique, and the large volume of information typically generated during visualisation necessitates tool support. In order to address the disadvantages of current visualisation techniques, an approach is proposed that integrates abstraction, structural and behavioural perspectives, and statically and dynamically extracted information. The aim of this research is to improve the effectiveness of visualisation techniques for large-scale software understanding based on the use of abstraction, interrelated facets and the integration of statically and dynamically extracted information.
['Michael Pacione']
Software Visualisation for Object-Oriented Program Comprehension
330,023
Video background recovery is a very important task in computer vision applications. Recent research offers robust principal component analysis (RPCA) as a promising approach for solving video background recovery. RPCA works by decomposing a data matrix into a low-rank matrix and a sparse matrix. Our previous work shows that when the desired rank of the low-rank matrix is known, fixing the rank in the algorithm, called FrALM (fixed-rank ALM), yields more robust and accurate results than existing RPCA algorithms. However, application of RPCA to video background recovery requires that each frame in the video be encoded as a column in the data matrix. This is impractical in real applications because the videos can easily be larger than the amount of memory in a computer. This paper presents an algorithm called iFrALM (incremental fixed-rank ALM) that computes fixed-rank RPCA incrementally by splitting the video frames into an initial batch and an incremental batch. Comprehensive tests show that iFrALM uses less memory and time compared to FrALM. Moreover, the initial batch size and batch quality can be carefully selected to ensure that iFrALM reduces memory and time complexity without sacrificing accuracy.
['Jian Lai', 'Wee Kheng Leow', 'Terence Sim']
Incremental Fixed-Rank Robust PCA for Video Background Recovery
675,717
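The low-rank-plus-sparse decomposition at the heart of FrALM/iFrALM can be illustrated with a much simplified fixed-rank alternation in numpy: project onto the fixed-rank set by truncated SVD, then soft-threshold the residual. The update rules and the choice of lam below are illustrative assumptions, not the authors' exact ALM scheme.

    import numpy as np

    def fixed_rank_rpca(D, rank, lam=None, n_iter=100):
        # Decompose D ~ L + S with rank(L) fixed and S sparse.
        D = np.asarray(D, dtype=float)
        if lam is None:
            lam = 1.0 / np.sqrt(max(D.shape))
        S = np.zeros_like(D)
        for _ in range(n_iter):
            # L-step: best rank-r approximation of D - S (truncated SVD).
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # S-step: soft-threshold the residual.
            R = D - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
        return L, S

    rng = np.random.default_rng(0)
    background = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))  # rank 3
    foreground = (rng.random((60, 40)) < 0.05) * 5.0                  # sparse
    L, S = fixed_rank_rpca(background + foreground, rank=3)
    print(np.linalg.norm(L - background) / np.linalg.norm(background))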
This paper proposes a novel generalized selection combining (GSC)/maximal ratio combining (MRC) switching technique based on error detection in cooperative communications with amplify-and-forward (AF) relays. In order to reduce the excessive burden of MRC with all diversity paths at the destination node, the destination node decides whether it performs GSC with order N (< K) or MRC with order K + 1 based on the error detection, where K is the number of relay nodes. Our analytical and simulation results show that the proposed GSC/MRC switching outperforms the conventional MRC in terms of outage probability in AF based cooperative communications since the proposed scheme effectively reduces the spectral efficiency loss with the help of error detection codes. I. INTRODUCTION. In AF based cooperative communications, the relayed signals are combined at a destination node since AF relays cannot decode the signal from a source and decide whether to forward the signal to a destination node or not. Even though the performance of diversity combining is generally proportional to the number of independent diversity paths, combining all the paths from relays might not be optimal in cooperative communications since more relays require more time-slotted orthogonal channels, which causes a spectral efficiency loss due to the transmit duty cycle. In this context, this paper proposes a generalized selection combining (GSC)/maximal ratio combining (MRC) switching technique based on error detection at a destination node. The idea is motivated by [14], where error detection was employed for diversity combining at RAKE receivers. When time-slotted orthogonal channels are adopted, the destination node first performs maximal ratio combining only with the earliest N branches (an S-D link and N − 1 R-D links), i.e., GSC with order N, and checks if any error occurs. If no error is detected, the destination node broadcasts the success to relays to prevent the remaining relays from transmitting their signals in their time slots/frames. Otherwise, the destination node waits until it gathers all the diversity paths from the K relays and performs MRC with the signals from all the K relay nodes and the source node. Our outage probability analysis shows that the proposed GSC/MRC switching based on error detection improves the cooperative diversity performance in AF based cooperative relay communications since it can effectively reduce the spectral efficiency loss by virtue of error detection codes, despite a slight loss in spectral efficiency from sending the codes. The performance improvement in terms of outage probability becomes more significant as the number of relays K increases. Our analytical and simulation results indicate that a proper selection of N for the given error detection code can further improve the diversity performance of AF based relay communications.
['Wan Choi', 'Jun-Pyo Hong', 'Dong In Kim', 'Byounghoon Kim']
An Error Detection Aided GSC/MRC Switching Scheme in AF based Cooperative Communications
170,892
IMSI (International Mobile Station Identity) is a unique number associated with every GSM and UMTS mobile phone user. IMSI filtering, a form of prefix filtering, is an important function in a 3G firewall. It is an indefinite filtering operation that affects the efficiency of the security device. This paper brings forward a parallel processing method that uses vector coding and a Bloom filter structure. The method is realized on an Intel network processor, the IXP 2850, taking advantage of the processor's parallel multithreading and hash units. The experiments show the excellent performance of the method in a 3G firewall.
['ZhenYu Liu', 'Shengli Xie', 'Yue Lai']
A Fast Suffix Matching Method in Network Processor
312,425
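A Bloom filter of the kind the abstract above relies on can be sketched in a few lines; the hash construction, sizes, and the longest-prefix loop below are illustrative assumptions, not the vector-coding IXP 2850 implementation.

    import hashlib

    class BloomFilter:
        def __init__(self, n_bits=1 << 20, n_hashes=4):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, key):
            # Derive n_hashes bit positions from salted SHA-256 digests.
            for i in range(self.n_hashes):
                h = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.n_bits

        def add(self, key):
            for p in self._positions(key):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, key):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(key))

    # Longest-prefix match of an IMSI against a set of filtered prefixes.
    bf = BloomFilter()
    for prefix in ("26201", "310410"):
        bf.add(prefix)
    imsi = "310410123456789"
    hit = next((imsi[:k] for k in range(len(imsi), 0, -1) if imsi[:k] in bf), None)
    print(hit)  # "310410", subject to Bloom-filter false positives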
VISUAL3, a highly interactive environment for the visualization of 3D volumetric scientific data, is described. The volume can be broken up in a structured or unstructured manner, and the problem can be static or unsteady in time. Because the data are volumetric and all the information can be changing, traditional CAD techniques are not appropriate. Therefore, VISUAL3 was developed using immediate-mode rendering methods. A unique aspect of VISUAL3 is the dimensional windowing approach coupled with cursor mapping, which allows efficient pointing in 3D space. VISUAL3 is composed of a large number of visualization tools that can be generally classified into identification, scanning, and probing techniques.
['Robert Haimes', 'David L. Darmofal']
Visualization in computational fluid dynamics: a case study
95,368
Energy-aware Server Selection Algorithms for Storage and Computation Processes
['Atsuhiro Sawada', 'Hiroki Kataoka', 'Dilawaer Duolikun', 'Tomoya Enokido', 'Makoto Takizawa']
Energy-aware Server Selection Algorithms for Storage and Computation Processes
910,082
Cyclic codes have efficient encoding and decoding algorithms over finite fields, so they have practical applications in communication systems, consumer electronics and data storage systems. The objective of this paper is to give eight new classes of optimal ternary cyclic codes with parameters $[3^m-1,3^m-1-2m,4]$, based on a result on the non-existence of solutions to a certain equation over $F_{3^m}$. It is worth noticing that some recent conclusions on such optimal ternary cyclic codes are special cases of our work. More importantly, three of the nine open problems proposed by Ding and Helleseth in [8] are solved completely. In addition, progress is also made on another of the nine open problems.
['Lanqiang Li', 'Li Liu', 'Shixin Zhu']
Several classes of optimal ternary cyclic codes
995,991
In artificial vision applications, such as tracking, a large amount of data captured by sensors is transferred to processors to extract information relevant to the task at hand. Smart vision sensors offer a means to reduce the computational burden of visual processing pipelines by placing more processing capabilities next to the sensor. In this work, we use a vision-chip in which a small processor with memory is located next to each photosensitive element. The architecture of this device is optimized to perform local operations. To perform a task like tracking, we implement a neuromorphic approach using a Dynamic Neural Field (DNF), which allows the system to segregate, memorize, and track objects. Our system, consisting of the vision-chip running the DNF, outputs only the activity that corresponds to the tracked objects. These outputs reduce the bandwidth needed to transfer information as well as further post-processing, since computation happens at the pixel level.
['Julien N. P. Martel', 'Yulia Sandamirskaya']
A Neuromorphic Approach for Tracking using Dynamic Neural Fields on a Programmable Vision-chip
875,907
A Modular Petri Net Used in Synchronous Communication of Sequential Processes.
['Xiaowei Huang', 'Jie Meng']
A Modular Petri Net Used in Synchronous Communication of Sequential Processes.
743,654
This paper describes the development of an active UHF RFID localization system based on small RFID transponders with an integrated GPS module, for saving fawns during pasture mowing. The localization of a conventional active UHF RFID transponder has obvious drawbacks in the hardware needed on the reader side: the transponder sends only its label, and no additional position information is available at the reader. Therefore, a direction finding system and a received signal strength (RSS) detector for distance estimation are needed in addition to the reader itself. The large hardware effort required on the reader side can be avoided by developing a transponder with an integrated GPS module for localization. Thus, the focus is on a small transponder design, which leads to a trade-off between the requirements of the desired performance and the realization of the localization system, consisting of an active GPS-UHF transponder and the corresponding reader unit. In particular, the conditions for the communication between the transponder and the reader under a limited power supply have to be considered. Furthermore, the transponder realization is shown and the design considerations are demonstrated by measurement.
['Alois Ascher', 'Michael Eberhardt', 'Markus Lehner', 'Benedikt Lippert', 'Erwin M. Biebl']
A small UHF-RFID transponder with integrated GPS for localization applications
197,871
Bioinformatics is the computing response to the molecular revolution in biology. This revolution has reshaped the life sciences and given us a deep understanding of DNA sequences, RNA synthesis and the generation of proteins. This process can be represented as gene expression of molecular autoregulatory feedback loop systems. In this paper, the annealing robust fuzzy basis function (ARFBF) is proposed to overcome the problems of fuzzy basis functions for modeling gene expression of molecular autoregulatory feedback loop systems with outliers. Firstly, a support vector regression (SVR) approach is proposed to determine the initial structure of the ARFBF. Because the SVR approach is equivalent to solving a linearly constrained quadratic programming problem under a fixed structure of the SVR, the number of hidden nodes, the initial parameters and the initial weights of the ARFBF are easily obtained via the SVR approach. Secondly, the results of the SVR are used as the initial structure of the ARFBF. At the same time, an annealing robust learning algorithm (ARLA) is used as the learning algorithm for the ARFBF and applied to adjust its parameters as well as its weights. That is, the ARLA is proposed to overcome the problems of initialization and of the cut-off points in the robust learning algorithm. Hence, when an initial structure of the ARFBF is determined by the SVR approach, the ARFBF with the ARLA has fast convergence speed and is robust against outliers for the modeling of molecular autoregulatory feedback loop systems.
['Jin-Tsong Jeng', 'Chen-Chia Chuang']
Annealing Robust Fuzzy Basis Function for Modeling of Molecular Autoregulatory Feedback Loop Systems with Outliers
117,610
An optimization algorithm for the design of combinational circuits that are robust to single-event upsets (SEUs) is described. A simple, highly accurate model for the SEU robustness of a logic gate is developed. This model is integrated with area and performance constraints into an optimization framework based on geometric programming for design space exploration. Simulation results demonstrate the design tradeoffs that can be achieved with this approach.
['Quming Zhou', 'Mihir R. Choudhury', 'Kartik Mohanram']
Design optimization for robustness to single-event upsets
106,169
The paper discusses the issue of time slips in software development. Increasing time sacrifices toward work constitute an important part of the modern organizational environment. In fact, the reign over time is a crucial element in controlling the labor process. Yet there remains a lack of cultural studies covering different approaches to this issue, particularly those focusing on high-skilled salaried workers. This article is a small attempt to fill this gap, based on an analysis of unstructured qualitative interviews with high-tech professionals from a B2B software company. It focuses on the issue of timing in IT projects, as perceived by software engineers. The findings indicate that managerial interruptions in work play an important part in the social construction of delays. However, interruptions from peer software engineers are not perceived as disruptive. This leads to the conclusion that time is used in a symbolic way, both for organizational domination and for solidarity rituals. The use of time as a symbolic currency in knowledge-work rites is presented as often influencing the very process of labor and schedules. It is revealed to be the dominant evaluation factor, replacing the officially used measures such as efficiency or quality.
['Dariusz Jemielniak']
Time as symbolic currency in knowledge work
23,769
Aerobics is one of the best exercises for a full-body or functional workout. However, the flexibility of aerobics also makes it difficult to estimate calorie consumption. The single-accelerometer method developed in [20] failed to provide an accurate model due to the complicated movements of aerobics. This manuscript proposes a method to accurately estimate calorie consumption. The proposed method takes advantage of the Microsoft Kinect to track 10 body joints of exercisers. Since each tracked joint can be considered a 3-axis accelerometer mounted on the corresponding joint, a multiple regression model is exploited to form an estimation model. The accuracy and robustness of the estimation model were evaluated in experimental studies. The results indicate that the calorie consumption of aerobics can be estimated accurately with 14 training repetitions.
['Pei-Fu Tsou', 'Chao-Cheng Wu']
Estimation of Calories Consumption for Aerobics Using Kinect Based Skeleton Tracking
602,843
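The estimation step in the record above reduces to an ordinary multiple regression from per-joint motion features to measured calories; the numpy sketch below uses synthetic data, and the feature definition (mean acceleration magnitude per joint) is an assumption about the setup rather than the paper's exact feature set.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sessions, n_joints = 40, 10
    # Feature: mean acceleration magnitude of each tracked joint per session.
    X = rng.uniform(0.5, 3.0, size=(n_sessions, n_joints))
    true_w = rng.uniform(5, 20, size=n_joints)
    y = X @ true_w + rng.normal(0, 5, size=n_sessions)  # measured kcal

    # Ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(n_sessions), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
    print(f"in-sample RMSE: {rmse:.2f} kcal")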
Regression Model and Query Expansion for NTCIR-2 Ad Hoc Retrieval Task.
['Kazuaki Kishida']
Regression Model and Query Expansion for NTCIR-2 Ad Hoc Retrieval Task.
803,343
A novel approach to pose estimation is proposed for objects with a surface of revolution (SOR). The silhouette of the object is the only information necessary for this method, and no cross-section circle (latitude circle) is needed. In this article, we explain the property of the tangent circle and use it to establish a constraint between two images of an object in different poses. This constraint can help solve for the pose of the object in both images. We test our method in a simulation experiment and use it to estimate the pose of both rigid bodies and articulated objects.
['Ming Zhang', 'Yinqiang Zheng', 'Yuncai Liu']
Using silhouette for pose estimation of object with surface of revolution
46,329
Based on successive hypothesis testing, we propose an approach for sparse signal recovery and apply it to random access to detect multiple block-sparse signals over frequency-selective fading channels. By introducing the sparsity variable, the proposed approach decides the presence or absence of the signal in each stage. To mitigate the error propagation, adaptive ordering is also employed as a greedy algorithm. From simulation results, it is shown that the proposed approach performs better than the block orthogonal matching pursuit algorithm, which is a well-known greedy compressive sensing algorithm for compressive random access.
['Jinho Choi']
Successive Hypothesis Testing Based Sparse Signal Recovery and Its Application to MUD in Random Access
974,425
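For context on the record above, the block orthogonal matching pursuit baseline that the proposed detector is compared against can be sketched as follows in numpy; the block layout and stopping rule are simplified assumptions, and the paper's successive-hypothesis-testing recovery itself is not reproduced.

    import numpy as np

    def block_omp(A, y, block_size, n_active):
        # Greedily pick the block whose columns correlate most with the
        # residual, then re-fit by least squares on all selected blocks.
        n = A.shape[1]
        blocks = [list(range(b, b + block_size)) for b in range(0, n, block_size)]
        chosen, x = [], np.zeros(n)
        residual = y.astype(float).copy()
        for _ in range(n_active):
            energy = [np.linalg.norm(A[:, blk].T @ residual)
                      if blk not in chosen else -np.inf for blk in blocks]
            chosen.append(blocks[int(np.argmax(energy))])
            cols = [i for blk in chosen for i in blk]
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            x[:] = 0.0
            x[cols] = coef
            residual = y - A @ x
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 80)) / np.sqrt(40)
    x_true = np.zeros(80); x_true[8:12] = 1.0; x_true[36:40] = -2.0
    x_hat = block_omp(A, y=A @ x_true, block_size=4, n_active=2)
    print(np.linalg.norm(x_hat - x_true))  # near zero in the noiseless case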
Encoding sensor observations across time is a critical component in the ability to model cognitive processes. All biological cognitive systems receive sensory stimuli as continuous streams of observed data over time. Therefore, the perceptual grounding of all biological cognitive processing is in temporal semantic encodings, where the particular grounding semantics are sensor modalities. We introduce a technique that encodes temporal semantic data as temporally integrated patterns stored in Adaptive Resonance Theory (ART) modules.
['Shawn E. Taylor', 'Michael L. Bernard', 'Stephen J. Verzi', 'James Dan Morrow', 'Craig M. Vineyard', 'Michael J. Healy', 'Thomas P. Caudell']
Temporal semantics: An Adaptive Resonance Theory approach
488,233
These lectures deal with the problem of inductive inference, that is, the problem of reasoning under conditions of incomplete information. Is there a general method for handling uncertainty? Or, at least, are there rules that could in principle be followed by an ideally rational mind when discussing scientific matters? What makes one statement more plausible than another? How much more plausible? And then, when new information is acquired how do we change our minds? Or, to put it differently, are there rules for learning? Are there rules for processing information that are objective and consistent? Are they unique? And, come to think of it, what, after all, is information? It is clear that data contains or conveys information, but what does this precisely mean? Can information be conveyed in other ways? Is information physical? Can we measure amounts of information? Do we need to? Our goal is to develop the main tools for inductive inference--probability and entropy--from a thoroughly Bayesian point of view and to illustrate their use in physics with examples borrowed from the foundations of classical statistical physics.
['Ariel Caticha']
Lectures on Probability, Entropy, and Statistical Physics
74,749
The rate of deployment and the adoption issues of new network technologies, IPv6 in particular, have recently been hotly debated in the research community. However, the question of how protocols migrate to new paradigms, and especially the dynamics of migration, is still largely open. In this paper, we address the issue from a game-theoretic point of view. We model and analyze the profit-maximizing strategies of Autonomous Systems (ASes); both the properties of ASes and the topology of the Internet are considered. The contribution of our work is threefold. First, we propose an economic model of the ASes and their relations from the IPv4-IPv6 migration viewpoint. Second, we apply the findings of evolutionary dynamics to the problem of migration by incorporating Internet-specific properties into the evolutionary model, namely the size of the ASes and the cost of migration. The analyses show that even if IPv6 has a higher payoff than IPv4, the migration as a whole does not always happen quickly. Finally, extensive simulations are carried out based on the proposed models to illustrate the impacts of different parameters on the IPv6 migration dynamics in realistic scenarios.
['Tuan Anh Trinh', 'László Gyarmati', 'Gyula Sallai']
Migrating to IPv6: A game-theoretic perspective
518,285
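The evolutionary-dynamics flavour of the model above can be conveyed with a toy replicator equation in which an AS's payoff from IPv6 grows with adoption (a network effect) but is reduced by a migration cost; the payoff numbers below are invented for illustration and are not the paper's parameterization.

    def step(x, payoff_v6=1.2, payoff_v4=1.0, migration_cost=0.15, dt=0.1):
        # x: fraction of ASes running IPv6. Replicator dynamics:
        # dx/dt = x * (fitness_v6 - average_fitness).
        f6 = payoff_v6 * x - migration_cost   # network effect minus cost
        f4 = payoff_v4 * (1 - x)
        avg = x * f6 + (1 - x) * f4
        return x + dt * x * (f6 - avg)

    for x0 in (0.4, 0.6):
        x = x0
        for _ in range(2000):
            x = step(x)
        print(f"initial IPv6 share {x0:.1f} -> long-run share {x:.3f}")
    # The tipping point sits where f6 = f4; below it migration stalls,
    # above it the whole population eventually migrates.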
Twenty-five years ago, Horace Newcomb and Paul Hirsch proposed a model for studying television as a cultural forum, as the most common reference point for public issues and concerns, particularly in American society. Over the last decade, the internet has emerged as a new communicative infrastructure and cultural forum on a global scale. Revisiting and reworking Newcomb and Hirsch's classic contribution, this article: first, advances a model of the internet as a distinctive kind of medium comprising different communicative genres - one-to-one, one-to-many as well as many-to-many communication; and, second, the article presents an empirical baseline study of their current prevalence. The findings suggest that while blogs, social network sites and other recent genres have attracted much public as well as scholarly attention, ordinary media users may still be more inclined to engage in good old-fashioned broadcasting and interpersonal interaction. Despite a constant temptation to commit prediction, future research is well advised to ask how old communicative practices relate to new media.
['Klaus Bruhn Jensen', 'Rasmus Helles']
The internet as a cultural forum: Implications for research
446,860
In this paper, we present a novel method for view-independent recognition of simple human activities from video frames. To tackle the problem, we reduce the number of frames produced by a video sequence, since activities can be identified from a sparsely sampled sequence of body poses. We then use a cooperative pair of formal languages, named SOMA and KINISIS. The SOMA language represents information about the human body state in a frame and assigns a unique timestamp to it, while the KINISIS language is a sequence-based formal language that uses the information extracted by SOMA to segment simple activities into separate actions and then correctly identify each one of them.
['A. Angeleas', 'Nikolaos G. Bourbakis']
A Two Formal Languages Based Model for Representing Human Activities
994,115
We consider the problem of through-the-wall radar imaging (TWRI), in which polarimetric imaging is used for automatic target detection. Two generalized statistical detectors are proposed which perform joint detection and fusion of a set of multipolarization radar images. The first detector is an extension of a previously proposed iterative target detector for multiview TWRI. This extension allows the detector to automatically adapt to statistics that may vary, depending on target locations and electromagnetic-wave polarizations. The second detector is based on Bayes' test and is of interest when target pixel occupancies are known from, e.g., secondary data. Properties of the proposed detectors are delineated and demonstrated by real data measurements using wideband sum-and-delay beamforming, acquired in a semicontrolled lab environment. We examine the performance of the proposed detectors when imaging both metal objects and humans.
['Christian Debes', 'Abdelhak M. Zoubir', 'Moeness G. Amin']
Enhanced Detection Using Target Polarization Signatures in Through-the-Wall Radar Imaging
101,806
Random resolution, defined by Buss, Kolodziejczyk and Thapen (JSL, 2014), is a sound propositional proof system that extends the resolution proof system by the possibility to augment any set of initial clauses by a set of randomly chosen clauses (modulo a technical condition). We show how to apply the general feasible interpolation theorem for semantic derivations of Krajicek (JSL, 1997) to random resolution. As a consequence we get a lower bound for random resolution refutations of the clique-coloring formulas.
['Jan Krajíček']
A feasible interpolation for random resolution
719,579
This article presents a review of the contemporary robotics research with respect to making robots and human–robot interaction (HRI) useful for autism intervention in clinical settings. Robotics research over the past decade has demonstrated that many children with autism spectrum disorders (ASDs) have a strong interest in robots and robot toys and can connect with a robot significantly better than with a human. Despite showing great promise, research in this direction has made minimal progress in advancing robots as clinically useful for ASD intervention. Moreover, the clinicians are generally not convinced about the potential of robots. A major reason behind this is that a vast majority of HRI studies on robot-mediated intervention (RMI) do not follow any standard research design and, consequently, the data produced by these studies is minimally appealing to the clinical community. In clinical research on ASD intervention, a systematic evaluation of the evidence found from a study is performed to determine the effectiveness of an experimental intervention (e.g., a RMI). An intervention that produces a stable positive effect is considered as an evidence-based practice (EBP) in autism. EBPs enable clinicians to choose the best available treatments for an individual with ASD. The ultimate goal of RMI, therefore, is to be considered as an EBP so that they can actually be used for treating autism. There are several criteria to measure the strength of evidence, and they are mostly geared toward rigorous research design. The research on RMI, therefore, needs to follow standard research design to be acceptable by the clinical community. This paper reviews the contemporary literature on robotics and autism to understand the status of RMI with respect to being an EBP in autism treatment. First, a set of guidelines is reported which is considered as a benchmark for research design in clinical research on ASD intervention and can easily be adopted in HRI studies on RMI. The existing literature on RMI is then reviewed with respect to these guidelines. We hope that the guidelines reported in this paper will help the robotics community to design user studies on RMI that meet clinical standards and thereby produce results that can lead RMI toward being considered as an EBP in autism. Note that the paper is exclusively focused on the role of robots in ASD intervention/therapy. Reviews on the use of robots in ASD diagnosis are beyond the scope of this paper.
['Momotaz Begum', 'Richard W. Serna', 'Holly A. Yanco']
Are Robots Ready to Deliver Autism Interventions? A Comprehensive Review
691,272
Summarization of Twitter Microblogs
['Beaux Sharifi', 'David I. Inouye', 'Jugal K. Kalita']
Summarization of Twitter Microblogs
47,218
Reconstruction of three-dimensional images from serial cross-sections is described using structural techniques. Regions derived from segmentation of cross-sections are linked or grown by application of similarity measures based on geometric properties of image regions. Data structures required for reconstruction and display include the region adjacency graph and the region boundary partition.
['John S. Todhunter', 'C. C. Li']
Reconstruction and display of three dimensional images from serial cross-sections: geometric theory for data structures and software
163,575
Optimal Anycast Technique for Delay-Sensitive Energy-Constrained Asynchronous Sensor Networks
['Joohwan Kim', 'Xiaojun Lin', 'Ness B. Shroff']
Optimal Anycast Technique for Delay-Sensitive Energy-Constrained Asynchronous Sensor Networks
121,582
Real-time control of a DNN-based articulatory synthesizer for silent speech conversion: a pilot study.
['Florent Bocquelet', 'Thomas Hueber', 'Laurent Girin', 'Christophe Savariaux', 'Blaise Yvert']
Real-time control of a DNN-based articulatory synthesizer for silent speech conversion: a pilot study.
731,054
We present a novel algorithm, named the 2D-FFAST, to compute a sparse 2D Discrete Fourier Transform (2D-DFT) featuring both low sample complexity and low computational complexity. The proposed algorithm is based on mixed concepts from signal processing (sub-sampling and aliasing), coding theory (sparse-graph codes) and number theory (the Chinese remainder theorem) and generalizes the 1D-FFAST algorithm recently proposed by Pawar and Ramchandran [1] to the 2D setting. Concretely, our proposed 2D-FFAST algorithm computes a k-sparse 2D-DFT, with a uniformly random support, of size N = N_x × N_y using O(k) noiseless spatial-domain measurements in O(k log k) computational time. Our results are attractive when the sparsity is sub-linear with respect to the signal dimension, that is, when k → ∞ and k/N → 0. For the case when the spatial-domain measurements are corrupted by additive noise, our 2D-FFAST framework extends to a noise-robust version running in sub-linear time of O(k log^4 N) using O(k log^3 N) measurements. Simulation results, on synthetic images as well as real-world magnetic resonance images, are provided in Section VII and demonstrate the empirical performance of the proposed 2D-FFAST algorithm.
['Frank Ong', 'Sameer Pawar', 'Kannan Ramchandran']
Fast and Efficient Sparse 2D Discrete Fourier Transform using Sparse-Graph Codes
606,947
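The sub-sampling-plus-aliasing ingredient that FFAST-style algorithms build on is easy to see numerically: keeping every d-th sample folds each DFT coefficient into bin k mod (N/d), scaled by 1/d, so a sparse spectrum leaves most bins empty. The toy sizes below are illustrative; resolving bins where several frequencies collide is exactly what the sparse-graph decoding stages (with co-prime subsampling factors) are for.

    import numpy as np

    N, d = 24, 3                         # signal length, subsampling factor
    M = N // d                           # number of aliased bins
    support = [2, 9, 20]                 # sparse spectrum; no collisions mod 8
    X = np.zeros(N, dtype=complex)
    X[support] = [1.0, 2.0, 3.0]
    x = np.fft.ifft(X)                   # spatial-domain signal

    bins = np.fft.fft(x[::d])            # DFT of every d-th sample, length M
    for k in support:
        # Frequency k folds into bin k mod M with amplitude X[k] / d.
        print(k, "->", k % M, np.round(bins[k % M] * d, 6))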
The growing complexity of today's applications favors the development of self-organizing multi-agent systems possessing self-* properties. These autonomous systems exhibit interesting capabilities for managing the endogenous and exogenous dynamics of the applications under study. New criteria must be studied in order to characterize and evaluate the contribution of these self-* properties and their influence on system performance. In this article, different categories grouping the main evaluation criteria are described in order to guide the evaluation of this type of system from the design phases through to the execution phases: evaluation of the running system, intrinsic characteristics, and design methodology.
['Elsy Kaddoum', 'Marie Pierre Gleizes', 'Jean-Pierre Georgé', 'Pierre Glize', 'Gauthier Picard']
Analyse des critères d'évaluation de systèmes multi-agents adaptatifs
798,497
We propose a novel end-to-end framework for abstractive meeting summarization. We cluster sentences in the input into communities and build an entailment graph over the sentence communities to identify and select the most relevant sentences. We then aggregate those selected sentences by means of a word graph model. We exploit a ranking strategy to select the best path in the word graph as an abstract sentence. Despite not relying on the syntactic structure, our approach significantly outperforms previous models for meeting summarization in terms of informativeness. Moreover, the longer sentences generated by our method are competitive with shorter sentences generated by the previous word graph model in terms of grammaticality.
['Yashar Mehdad', 'Giuseppe Carenini', 'Frank Wm. Tompa', 'Raymond T. Ng']
Abstractive Meeting Summarization with Entailment and Fusion
616,447
An approach to the minimization of multiple-valued relations that is based on a state-of-the-art paradigm for the two-level minimization of functions is presented. Some special properties of relations, in contrast to functions, that must be carefully considered in realizing a high-quality procedure for solving the minimization problem are clarified. An efficient heuristic method to find an optimal sum-of-products representation for a multiple-valued relation is proposed and implemented in the program GYOCRO. It uses multiple-valued decision diagrams (MDDs) to represent the characteristic functions of the relations. Experimental results are presented and compared with previous exact and heuristic Boolean relation minimizers to demonstrate the effectiveness of the proposed method.
['Yosinori Watanabe', 'Robert K. Brayton']
Heuristic minimization of multiple-valued relations
212,522
Analytical models exist for evaluating gossip-based information propagation. Up to now these models were developed only for fully connected networks. We provide analytical models for information propagation of a push-pull gossiping protocol in a wireless mesh network. The underlying topology is abstracted away by assuming that the wireless nodes are uniformly deployed. We compare our models with simulation results for different topologies.
['Abolhassan Shamsaie', 'Wan Fokkink', 'Jafar Habibi']
Analysis of gossip-based information propagation in wireless mesh networks
612,612
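A push-pull round on an arbitrary topology is simple to simulate, which is handy for checking analytical propagation models like those in the record above; the grid stand-in for a mesh and the immediate (rather than strictly synchronous) in-round updates are simplifying assumptions.

    import random

    def push_pull_rounds(adjacency, source):
        # Each round, every node contacts one random neighbour; the update
        # spreads if either endpoint is already informed (push or pull).
        informed = {source}
        rounds = 0
        while len(informed) < len(adjacency):
            rounds += 1
            for node in list(adjacency):
                peer = random.choice(adjacency[node])
                if node in informed or peer in informed:
                    informed.update((node, peer))
        return rounds

    def grid(n):
        # n x n grid graph as a crude stand-in for a wireless mesh.
        adj = {}
        for i in range(n):
            for j in range(n):
                adj[(i, j)] = [(i + di, j + dj)
                               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                               if 0 <= i + di < n and 0 <= j + dj < n]
        return adj

    print(push_pull_rounds(grid(4), (0, 0)))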
Underwater acoustic sensor networks (UW-ASNs) have been attracting more and more research interests recently due to their various promising applications. For the applications such as the environmental monitoring and event detection services, there will be a large amount of data generated in UW-ASNs. To reliably store and efficiently retrieve the generated data, data-centric storage which stores the same type of data in a specific region and facilitates data retrieval is regarded as an attractive solution. In this paper, we focus on the cross-layer design for the data transmissions of data-centric storage in UW-ASNs. In order to minimize the network transmission delay in a specific data-centric storage procedure, we introduce the spatial reuse concept into the cross-layer design, where multiple communication links are permitted to perform data transmission simultaneously with efficient interference management. Then, we propose a spatial reuse TDMA based cross-layer (SRC-TDMA) protocol by jointly taking data routing and resource scheduling into consideration. Simulation results verify that our proposed SRC-TDMA protocol can significantly reduce the network transmission delay compared with the traditional TDMA protocol and the interference graph based TDMA (IG-TDMA) protocol.
['Rongqing Zhang', 'Xilin Cheng', 'Xiang Cheng', 'Liuqing Yang']
A Cross-Layer Design for Data-Centric Storage in Underwater Acoustic Sensor Networks
704,716
Guarding against channel errors in wireless networks has been a challenging research problem, especially when transmitting time-constrained content such as streaming video or images. Source diversity in the form of Multiple Description Coding (MDC) has been studied for guarding against wireless channel errors. In MDC, an image is broken into several equally important descriptions which can be sent over multiple paths to the destination. In this work, we leverage the routing scheme to exploit path diversity in such a way that multiple descriptions are sent over different paths in the network and are merged at intermediate nodes for possible recovery of corrupted descriptions. The routing uses the idea that when multiple descriptions join at intermediate points, link error information can help in the partial recovery of lost or corrupted descriptions, thereby improving the overall image quality at the destination. In order to achieve a high gain in terms of peak signal-to-noise ratio (PSNR) with a minimum split into descriptions, we investigate an optimum spatial-interleaving-based MD coding scheme that maximizes the intermediate recovery possibility for a given recovery filter design. We also explore the choice of an optimum number of descriptions at an intermediate merging point that maximizes the PSNR gain. Our simulation study shows that it is possible to achieve a PSNR gain of around 5–6 dB using our coding and routing strategy.
['Pradipta De', 'Nilanjan Banerjee', 'Swades De']
Error-resilient image transmission over multihop wireless networks
470,946
While purchasing a product, consumers often rely on specifications as well as online reviews of the product for decision-making. When comparing products, one often has in mind a specific aspect or a set of aspects of interest. Previous work has used comparative sentences, where two entities are compared directly in a single sentence by the review author, for the comparison task. In this paper, we extend the existing model by incorporating the feature specifications of the products, which are easily available, and learn the importance to be associated with each of them. To test the validity of these product ranking measures, we comprehensively test them on a digital camera dataset from Amazon.com, and the results show that they empirically outperform state-of-the-art baselines.
['Abhishek Sikchi', 'Pawan Goyal', 'Samik Datta']
PEQ: An Explainable, Specification-based, Aspect-oriented Product Comparator for E-commerce
908,819
Solar power forecasting using weather type clustering and ensembles of neural networks
['Mashud Rana', 'Irena Koprinska', 'Vassilios G. Agelidis']
Solar power forecasting using weather type clustering and ensembles of neural networks
944,847
An Adaptive Controller Using Wavelet Network for Five-Bar Manipulators with Deadzone Inputs
['Tien Dung Le', 'Hee-Jun Kang']
An Adaptive Controller Using Wavelet Network for Five-Bar Manipulators with Deadzone Inputs
503,394
Extended Access Structures and Their Cryptographic Applications.
['Vanesa Daza', 'Javier Herranz', 'Paz Morillo', 'Carla Ràfols']
Extended Access Structures and Their Cryptographic Applications.
793,142
We present a method for summarizing the collection of tweets related to a business. Our procedure aggregates tweets into subtopic clusters which are then ranked and summarized by a few representative tweets from each cluster. Central to our approach is the ability to group diverse tweets into clusters. The broad clustering is induced by first learning a small set of business-related concepts automatically from free text and then subdividing the tweets into these concepts. Cluster ranking is performed using an importance score which combines topic coherence and sentiment value of the tweets. We also discuss alternative methods to summarize these tweets and evaluate the approaches using a small user study. Results show that the concept-based summaries are ranked favourably by the users.
['Annie Louis', 'Todd Newman']
Summarization of Business-Related Tweets: A Concept-Based Approach
614,237
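The cluster-then-pick-representatives skeleton of a summarizer like the one above fits in a few lines of scikit-learn; the toy tweets, TF-IDF features, and plain k-means below stand in for the paper's learned business concepts and sentiment-weighted ranking, which are not reproduced.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    tweets = [
        "Great coffee and friendly staff at the downtown store",
        "Staff were super friendly, coffee was excellent",
        "App keeps crashing when I try to pay",
        "Payment in the app failed again today",
        "New seasonal menu looks amazing",
    ]
    X = TfidfVectorizer(stop_words="english").fit_transform(tweets)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    for c in range(3):
        idx = np.where(km.labels_ == c)[0]
        # Representative tweet: the member closest to the cluster centroid.
        dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
        print(f"cluster {c}: {tweets[idx[np.argmin(dists)]]}")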
In data-intensive cluster computing platforms such as Hadoop YARN, performance and fairness are two important factors for system design and optimization. Many previous studies target either performance or fairness alone, without considering the tradeoff between the two. Recent studies observe that there is a tradeoff between performance and fairness because of resource contention between users/jobs. However, their scheduling algorithms for bi-criteria optimization between performance and fairness are static, without considering the impact of different workload characteristics on the tradeoff. In this paper, we propose an adaptive scheduler called Gemini for Hadoop YARN. We first develop a model with a regression approach to estimate the performance improvement and the fairness loss under shared computation compared to the exclusive non-sharing scenario. Next, we leverage the model to guide the resource allocation for pending tasks to optimize the performance of the cluster given a user-defined fairness level. Instead of using a static scheduling policy, Gemini adaptively decides the proper scheduling policy according to the currently running workload. We implement Gemini in Hadoop YARN. Experimental results show that Gemini outperforms the state-of-the-art approach in two respects: (1) for the same fairness loss, Gemini improves performance by up to 225% and 200% in real deployment and in large-scale simulation, respectively; (2) for the same performance improvement, Gemini reduces the fairness loss by up to 70% and 62.5% in real deployment and in large-scale simulation, respectively.
['Zhaojie Niu', 'Shanjiang Tang', 'Bingsheng He']
Gemini: An Adaptive Performance-Fairness Scheduler for Data-Intensive Cluster Computing
637,028
An implementation of the 64-bit PowerPC Architecture optimized for the IBM AS/400 commercial environment is described in this paper. This 64-bit BiCMOS semi-custom implementation runs at a clock rate of 170 MHz. The processor features a 4-way superscalar pipelined fixed-point unit which can dispatch and execute up to 4 instructions each cycle, a floating-point unit with a peak rate of 500 MFLOPS, an 8-Kbyte L0 instruction cache, a 256-Kbyte L1 cache, and support for 64 Gbytes of main storage. A 4-way tightly-coupled symmetric multiprocessor system is one of several configurations supported by this implementation.
['John M. Borkenhagen', 'Glen H. Handlogten', 'John D. Irish', 'Sheldon Bernard Levenstein']
AS/400™ 64-bit PowerPC™-Compatible Processor Implementation
261,857
This paper explores the adage that good privacy is good business. Businesses, like social networks, often seek to create value from personal information and monetize it. Unlocking and harvesting value embedded in personal information can lead to disclosure of private and sensitive information, and subsequent harm. Personal information management practices can be a means to competitive and strategic advantage; however, they are also subject to privacy law. We explore the underlying tension between transparency and disclosure in the arena of privacy versus business strategy in the pursuit of innovation, and argue that in order to achieve sustained innovation, next-generation applications and services will require a fresh, imaginative and strategic privacy-by-design approach. Personal information management is a complex task and cannot be adequately achieved without significant attention and commitment to privacy requirements in systems analysis and design. Due to the potential power, magnitude, complexity and scope of web technologies, there is a pressing need to understand privacy requirements better, and to invest in developing tools and techniques for modeling, analyzing, designing and building more effective personal information management systems that seek consent where appropriate and that offer users natural choices and sophisticated mechanisms for controlling their personal information.
['Mary-Anne Williams']
Privacy Management, the Law & Business Strategies: A Case for Privacy Driven Design
402,188
Experiments with Multiple BDI Agents with Dynamic Learning Capabilities
['Amelia Bădică', 'Costin Bădică', 'Maria Ganzha', 'Mirjana Ivanović', 'Marcin Paprzycki']
Experiments with Multiple BDI Agents with Dynamic Learning Capabilities
834,051
There has been an increase in domestic energy storage system installations during the last years. Those systems serve only local self-consumption. The implementation of other grid- or market-related services, such as frequency and voltage control, would allow for tighter system integration and a possible improvement in economic feasibility. Providing multiple services in one system leads to issues of interoperability, flexibility and complexity (overlapping of functions). In order to cope with those issues, a framework for rapid prototyping of control applications focused on energy storage systems is proposed. In this context, it can be a useful tool to aid component manufacturers in the development of novel storage applications. Smart grid system integrators and energy utilities can also profit from using this kind of tool support. The proposed approach uses model-driven engineering together with well-known smart grid and industrial standards (i.e., IEC 61850, CIM, and IEC 61499). Its applicability is demonstrated on an illustrative example.
['Claudia Zanabria', 'Filip Andren', 'Johannes Kathan', 'Thomas Strasser']
Towards an integrated development of control applications for multi-functional energy storages
944,104
Nonvolatile logic-in-memory architecture, where nonvolatile memory elements are distributed over a logic-circuit plane, is expected to realize both ultra-low power and reduced interconnection delay. This paper presents novel nonvolatile logic circuits based on logic-in-memory architecture using magnetic tunnel junctions (MTJs) in combination with MOS transistors. Since the MTJ with a spin-injection write capability is the only device that has all of the following superior features: a large resistance ratio, virtually unlimited endurance, fast read/write accessibility, scalability, complementary MOS (CMOS)-process compatibility, and nonvolatility, it is very well suited to implementing the MOS/MTJ-hybrid logic circuit with logic-in-memory architecture. A concrete nonvolatile logic-in-memory circuit is designed and fabricated using a 0.18 µm CMOS/MTJ process, and its future prospects and issues are discussed.
['Shoun Matsunaga', 'Jun Hayakawa', 'Shoji Ikeda', 'Katsuya Miura', 'Tetsuo Endoh', 'Hideo Ohno', 'Takahiro Hanyu']
MTJ-based nonvolatile logic-in-memory circuit, future prospects and issues
297,465
The general linear model provides the most widely applied statistical framework for analyzing functional MRI (fMRI) data. With the increasing temporal resolution of recent scanning protocols, and more elaborate data preprocessing schemes, data independency is no longer a valid assumption. In this paper, we revise the statistical background of the general linear model in the presence of temporal autocorrelations. First, when detecting the activation signal, we explicitly account for the temporal autocorrelation structure, which yields a generalized F-test and the associated corrected (or effective) degrees of freedom (DOF). The proposed approach is data driven and thus independent of any specific preprocessing method. Then, for event-related protocols, we propose a new model for the temporal autocorrelations ("damped oscillator" model) and compare this model to another previously used in the field (first-order autoregressive model, or AR(1) model). In the case of long fMRI time series, an efficient approximation for the number of effective DOF is provided for both models. Finally, the validity of our approach is assessed using simulated and real fMRI data and is compared with more conventional methods.
['Frithjof Kruggel', 'Mélanie Pélégrini-Issac', 'Habib Benali']
Estimating the effective degrees of freedom in univariate multiple regression analysis.
168,677
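To see why autocorrelation forces a DOF correction at all, the standard effective-sample-size formula for AR(1) noise is a useful reference point; note this classical approximation is illustrative and is not the paper's damped-oscillator or generalized F-test machinery.

    # Effective number of independent samples for an AR(1) process with
    # lag-one autocorrelation rho: n_eff = n * (1 - rho) / (1 + rho).
    def effective_n(n, rho):
        return n * (1 - rho) / (1 + rho)

    for rho in (0.0, 0.2, 0.5):
        print(f"rho = {rho:.1f}: n_eff = {effective_n(200, rho):.1f} of 200 scans")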
An "adaptive" variant of Ruppert's Algorithm for producing quality triangular planar meshes is introduced. The algorithm terminates for arbitrary Planar Straight Line Graph (PSLG) input. The algorithm outputs a Delaunay mesh where no triangle has minimum angle smaller than about 26.45° except "across" from small angles of the input. No angle of the output mesh is smaller than arctan [(sin θ*)/(2-cos θ*)] where θ* is the minimum input angle. Moreover no angle of the mesh is larger than about 137°, independent of small input angles. The adaptive variant is unnecessary when θ* is larger than 36.53°, and thus Ruppert's Algorithm (with concentric shell splitting) can accept input with minimum angle as small as 36.53°. An argument is made for why Ruppert's Algorithm can terminate when the minimum output angle is as large as 30°.
['Gary L. Miller', 'Steven E. Pav', 'Noel J. Walkington']
WHEN AND WHY DELAUNAY REFINEMENT ALGORITHMS WORK
52,988
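The output-angle guarantee quoted in the abstract above is a closed-form function of the minimum input angle θ*, so it can be evaluated directly; a small sketch (function name ours):

```python
import math

def min_output_angle(theta_star_deg):
    """Lower bound on output angles 'across' from small input angles:
    arctan(sin(theta*) / (2 - cos(theta*))), returned in degrees."""
    t = math.radians(theta_star_deg)
    return math.degrees(math.atan(math.sin(t) / (2.0 - math.cos(t))))

# A 60-degree minimum input angle gives the familiar 30-degree bound;
# a 20-degree input angle still guarantees roughly 17.9 degrees.
print(min_output_angle(60.0), min_output_angle(20.0))
```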
Sponsored advertising has generated strong advertising revenues for Facebook in recent years. As sponsored ads are built on an interactive platform that could be seen as invasive to user privacy, the growth of this advertising platform has important implications for consumers and advertisers alike. As little research is available on consumer response to sponsored advertising as an interactive technology innovation, the current study assesses the effects of user perceptions of privacy risk, intrusiveness concerns and utilities of sponsored advertising on consumer attitudes and purchase intent. Testing a model derived from the technology acceptance model (TAM), the study found that privacy and intrusiveness concerns are both valid antecedent variables to perceived usefulness but not perceived ease of use of sponsored advertising. While both antecedent variables also influence consumer attitudes toward sponsored advertising, only privacy concerns have an impact on product purchase intentions. The hypothesized relations between perceived usefulness, ease of use, attitudes and purchase intentions were also validated.
['Carolyn A. Lin', 'Tonghoon Kim']
Predicting user response to sponsored advertising on social media via the technology acceptance model
845,738
To enable differential detection of coincident, hard-limited, and reduced-sidelobe QPSK-type signals, we introduce asymmetrical pulse shapes. This asymmetry in the I and Q baseband channels leads to coincident (unstaggered) QPSK system applications. We consider the bit error probability performance of these modulation schemes on a hard-limited channel in the presence of uplink and downlink additive Gaussian noise. It is found that the unstaggered QPSK modulation with the asymmetrical pulse has better bit error probability performance than QPSK and staggered QORC.
['Iwao Sasase', 'Kamilo Feher', 'Shinsaku Mori']
Asymmetrical Pulse Shaped QPSK Modulation Systems
238,861
With the growing awareness that individual hardware cores will not continue to produce the same level of performance improvement, there is a need to develop an integrated approach to performance optimization. In this paper we present a paradigm for continuous program optimization (CPO), whereby automatic agents monitor and optimize application and system performance. The monitoring data is used to analyze and create models of application and system behavior. Using this analysis, we describe how CPO agents can improve the performance of both the application and the underlying system. Using the CPO paradigm, we implemented cooperating page size optimization agents that automatically optimize large page usage. An offline agent uses vertically integrated performance data to produce a page size benefit analysis for different categories of data structures within an application. We show how an online CPO agent can use the results of the predictive analysis to automatically improve application performance. We validate that the predictions made by the CPO agent reflect the actual performance gains of up to 60% across a range of scientific applications including the SPEC-cpu2000 floating point benchmarks and two large high performance computing (HPC) applications.
['Calin Cascaval', 'Evelyn Duesterwald', 'Peter F. Sweeney', 'Robert W. Wisniewski']
Multiple page size modeling and optimization
212,427
A video personalization and summarization system is designed and implemented to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. These semantic metadata are provided through the use of our VideoAnnEx MPEG-7 video annotation tool. When the user initiates a request for content, the client communicates the user request and usage environment descriptions to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue summarization on usage environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd editing and composition tool. Finally, a personalization and summarization system is demonstrated on the IBM Websphere Portal Server for PCs and pervasive devices.
['Belle L. Tseng', 'Ching-Yung Lin', 'John R. Smith']
Video personalization and summarization system
292,866
A flexible endoscope could reach the potential surgical site via a single small incision on the patient or even through natural orifices, making it a very promising platform for surgical procedures. However, endoscopic surgery has strict spatial constraints on both tool-channel size and surgical site volume. It is therefore very challenging to deploy and control dexterous robotic instruments to conduct surgical procedures endoscopically. Pioneering endoscopic surgical robots have already been introduced, but the performance is limited by the flexible neck of the robot that passes through the endoscope tool channel. In this article we present a series of new developments to improve the performance of the robot: a force transmission model to address flexibility, elongation study for precise position control, and tissue property modeling for haptic feedback. Validation experiment results are presented for each sector. An integrated control architecture of the robot system is given in the end.
['Zheng Wang', 'Zhenglong Sun', 'Soo Jay Phee']
Haptic feedback and control of a flexible surgical endoscopic robot
318,600
We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.
['Chris Buehler', 'Michael Bosse', 'Leonard McMillan', 'Steven J. Gortler', 'Michael F. Cohen']
Unstructured lumigraph rendering
600,334
People suffering from mild cognitive impairments and early stages of dementia may notice a deterioration in their memory functions. Consequently, disorientation or wandering-like behaviours might occur in their daily activities. Detecting wandering in trajectories is a complex task, highly influenced by the technologies used and the context in which people move. Hence, there is no commonly accepted technique to detect wandering automatically. Since technology is a key factor, Smart Cities can open the door to new opportunities to approach the problem. In this article, we briefly summarise the state of the art of wandering detection techniques, describe some of the benefits that Smart Cities will contribute, and provide a preliminary proposal of a new wandering detection method.
['Edgar Batista', 'Fran Casino', 'Agusti Solanas']
Wandering detection methods in smart cities: Current and new approaches
578,323
A binary vertex coloring (labeling) f : V(G) → Z_2 of a graph G is said to be friendly if the number of vertices labeled 0 is almost the same as the number of vertices labeled 1. This friendly labeling induces an edge labeling f* : E(G) → Z_2 defined by f*(uv) = f(u)f(v) for all uv ∈ E(G). Let e_f(i) = |{uv ∈ E(G) : f*(uv) = i}| be the number of edges of G that are labeled i. The product-cordial index of the labeling f is the number pc(f) = |e_f(0) − e_f(1)|. The product-cordial set of the graph G, denoted by PC(G), is defined by PC(G) = {pc(f) : f is a friendly labeling of G}. In this paper, we will determine the product-cordial sets of long grids P_m × P_n, introduce a class of fully product-cordial trees and suggest new research directions in this topic.
['Ebrahim Salehi', 'Yaroslav Mukhin']
Product Cordial Sets of Long Grids
670,108
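Since the definitions in the abstract above are fully constructive, the product-cordial set of a small graph can be computed by brute force; a sketch (ours, exponential in the number of vertices, so only for small examples):

```python
from itertools import combinations

def product_cordial_set(vertices, edges):
    """Brute-force PC(G): enumerate all friendly labelings f (the counts of
    0- and 1-labeled vertices differ by at most one) and collect
    pc(f) = |e_f(0) - e_f(1)| under the induced labeling f*(uv) = f(u)f(v)."""
    n = len(vertices)
    pcs = set()
    for k in range(n // 2, (n + 1) // 2 + 1):   # allowed sizes of the 1-class
        for ones in combinations(vertices, k):
            one_set = set(ones)
            e1 = sum(1 for u, v in edges if u in one_set and v in one_set)
            e0 = len(edges) - e1
            pcs.add(abs(e0 - e1))
    return pcs

# Example: the path P4 (a degenerate 1 x 4 "grid") has PC = {1, 3}.
print(product_cordial_set("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))
```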
We present a probability-based DTW for gesture segmentation. We present the BoVDW framework for gesture classification. New VFHCRH descriptor for depth images. We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion form. The method is integrated in a Human Gesture Recognition pipeline, together with a novel probability-based Dynamic Time Warping (PDTW) algorithm which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian Mixture Model driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline in a public data set show better performance in comparison to both standard BoVW model and DTW approach.
['Antonio Hernández-Vela', 'Miguel Ángel Bautista', 'Xavier Perez-Sala', 'Víctor Ponce-López', 'Sergio Escalera', 'Xavier Baró', 'Oriol Pujol', 'Cecilio Angulo']
Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D
164,394
The First International Workshop on Behavior Change Support Systems attracted great research interest. The selected papers focused on the abstraction, implementation and evaluation of Behavior Change Support Systems. The workshop is evidence that researchers from around the globe have their own perspectives on behavior change interventions. In this abstract, we have attempted to outline core issues that can enhance the persuasiveness of such support systems. Finally, we highlight important research questions relating to the development of effective Behavior Change Support Systems.
['Julia E.W.C. van Gemert-Pijnen', 'Wolfgang Reitberger', 'Sitwat Langrial', 'Bernd Ploderer', 'Harri Oinas-Kukkonen']
Expanding the Research Area of Behavior Change Support Systems
220,475
We present a more generalized model for the bandwidth packing problem with queuing delays under congestion than available in the extant literature. The problem, under Poisson call arrivals and general service times, is set up as a network of spatially distributed independent M/G/1 queues. We further present two exact solution approaches to solve the resulting nonlinear integer programming model. The first method, called the finite linearization method, is a conventional Big-M based linearization, resulting in a finite number of constraints, and hence can be solved using an off-the-shelf MIP solver. The second method, called the constraint generation method, is based on approximating the nonlinear delay terms using supporting hyperplanes, which are generated as needed. Based on our computational study, the constraint generation method outperforms the finite linearization method. Further comparisons of results of our proposed constraint generation method with the Lagrangean relaxation based solution method reported in the literature for the special case of exponential service times clearly demonstrate that our approach outperforms the latter, both in terms of the quality of solution and computation times.
['Navneet Vidyarthi', 'Sachin Jayaswal', 'Vikranth Babu Tirumala Chetty']
Bandwidth packing problem with queueing delays: modelling and exact solution approach
646,156
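The nonlinear delay terms being linearized above come from standard M/G/1 queueing theory; as a reference point, a sketch of the Pollaczek-Khinchine mean sojourn time for a single link (function and parameter names ours):

```python
def mg1_sojourn_time(arrival_rate, mean_service, second_moment_service):
    """Mean time in an M/G/1 queue (Pollaczek-Khinchine): waiting time
    lam * E[S^2] / (2 * (1 - rho)) plus the service time itself."""
    rho = arrival_rate * mean_service
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be below 1")
    waiting = arrival_rate * second_moment_service / (2.0 * (1.0 - rho))
    return waiting + mean_service

# Exponential service with mean 1/mu has E[S^2] = 2/mu^2, so the result
# reduces to the M/M/1 value 1/(mu - lam) = 5.0 here.
lam, mu = 0.8, 1.0
print(mg1_sojourn_time(lam, 1 / mu, 2 / mu**2))
```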
Technical security metrics provide measurements for ensuring the effectiveness of technical security controls or the technology devices/objects used to protect information systems. However, a lack of understanding of, and methods for, developing technical security metrics may lead to unachievable security control objectives and incompetent implementation. This paper proposes a model of technical security metrics to measure the effectiveness of network security management. The measurement is based on the effectiveness of security performance for (1) network security controls such as firewalls, Intrusion Detection Prevention Systems (IDPS), switches, wireless access points, wireless controllers and network architecture; and (2) network services such as Hypertext Transfer Protocol Secure (HTTPS) and virtual private networks (VPN). We use the Goal-Question-Metric (GQM) paradigm [1], which links measurement goals to measurement questions and produces metrics that can easily be interpreted in compliance with the requirements. The outcome of this research method is the introduction of a network security management metric as an attribute of the Technical Security Metric (TSM) model. The proposed TSM model may provide guidance for organizations in complying with the effective measurement requirements of the ISO/IEC 27001 Information Security Management System (ISMS) standard. The proposed model will provide comprehensive measurement and guidance to support the use of the ISO/IEC 27004 ISMS Measurement template.
['Rabiah Ahmad', 'Shahrin Sahib', "Muhamad Pahri Nor'Azuwa"]
Effective Measurement Requirements for Network Security Management
607,641
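A hypothetical illustration of the Goal-Question-Metric chain the paper applies, for one of the listed controls; the structure and the example metric are our assumptions, not the paper's actual template:

```python
# Hypothetical GQM-style definition for a firewall effectiveness metric.
firewall_gqm = {
    "goal": "Ensure the firewall effectively blocks unauthorized traffic",
    "questions": [
        {
            "question": "What fraction of unauthorized connection "
                        "attempts does the firewall block?",
            "metric": lambda blocked, attempts: blocked / attempts,
        },
    ],
}

metric = firewall_gqm["questions"][0]["metric"]
print(f"block rate: {metric(9_870, 10_000):.1%}")   # -> block rate: 98.7%
```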
This paper describes a unified approach to the training of weighted finite-state automata (WFSA) that is based on a generic interface. Regardless of their internal structure, any automata implementing a simple interface can be managed by the system, not only for decoding but also for training purposes. This novel approach drastically simplifies the effort to incorporate new formalisms into a pattern recognition engine. This methodology has been integrated into Sautrela, a highly modular and pluggable open source package for generic purpose signal processing, focused on speech recognition.
['Mikel Peñagarikano', 'Germán Bordel', 'Luis Javier Rodríguez']
UNIFIED TRAINING OF WFSA THROUGH A GENERIC INTERFACE
488,251
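A sketch of what a formalism-agnostic WFSA interface of this kind might look like; the method names are our assumptions and not Sautrela's actual API:

```python
from abc import ABC, abstractmethod
from typing import Hashable, Iterable, Tuple

class WeightedAutomaton(ABC):
    """Hypothetical minimal WFSA interface: any model exposing these three
    methods could be plugged into a generic decoder or trainer, regardless
    of its internal structure."""

    @abstractmethod
    def initial_states(self) -> Iterable[Tuple[Hashable, float]]:
        """(state, weight) pairs to start decoding from."""

    @abstractmethod
    def arcs(self, state, symbol) -> Iterable[Tuple[Hashable, float]]:
        """(next_state, weight) pairs consuming `symbol` from `state`."""

    @abstractmethod
    def final_weight(self, state) -> float:
        """Weight of accepting in `state` (e.g. -inf if non-final)."""
```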
This paper deals with the design, control, and implementation of a three-phase ground power-supply unit for aircraft servicing. Instead of a classical back-to-back converter configuration, a three-phase direct ac-ac (matrix) converter has been used as the power conditioning core of the power supply, working in conjunction with input and output LC filters. An optimized control system in the ABC frame employing a repetitive controller has been successfully implemented, taking into account both the transient and steady-state performance targets together with the system effectiveness under extreme unbalanced conditions. Extensive experimental tests on a 7.5-kVA prototype prove the efficiency of the designed system in meeting the high demanding civil and military international standards requirements.
['Saul Lopez Arevalo', 'Pericle Zanchetta', 'Patrick Wheeler', 'Andrew Trentin', 'Lee Empringham']
Control and Implementation of a Matrix-Converter-Based AC Ground Power-Supply Unit for Aircraft Servicing
263,371
Completely Regular Bishop Spaces
['Iosif Petrakis']
Completely Regular Bishop Spaces
631,204
Much interest exists in broadband network services to deliver a variety of products to consumers, such as Internet access, telephony, interactive TV, and video on demand. Due to its cost efficiency, Hybrid Fiber Coaxial (HFC) technology is currently being considered by most Telcos and cable companies as the technology to deliver these products. The topological HFC network design problem as implemented by several major companies is a form of the capacitated tree-star network design problem. We propose a new formulation for this problem and present a heuristic based on hierarchical decomposition of the problem. The proposed solution methodology exploits an Adaptive Reasoning Technique (ART), embedded as a meta-heuristic over specialized heuristics for the subproblems. In this context, we demonstrate the dynamic use of an exact solution technique within ART. The generalizability of the proposed solution methodology is demonstrated by applying it to a second problem, the Traveling Salesman Problem (TSP). Computational results are presented for both the HFC network design problem and the TSP, indicating high-quality solutions expending a very modest computational effort. The proposed solution method is found to be effective, and is shown to be easily adaptable to new problems without much crafting, and as such, has a broad appeal to the general operations research community.
['Raymond A. Patterson', 'Erik Rolland']
Hybrid Fiber Coaxial Network Design
78,490
Ubiquitous computing, as the integration of sensors, smart devices, and intelligent technologies to form a smart space environment, relies on the development of both middleware and networking technologies. To realize such environments, it is important to reduce the cost of developing various pervasive computing applications by encapsulating complex issues in middleware infrastructures. We propose a multi-agent-based middleware infrastructure suitable for the smart space, MASBM (Multi-Agent Service Bundle Middleware), which makes it easy to develop pervasive computing applications. We conclude with the initial implementation results and lessons learned from MASBM.
['Minwoo Son', 'Dongkyoo Shin', 'Dongil Shin']
Research on multi-agent service bundle middleware for smart space
822,269
This paper proposes and evaluates hardware mechanisms for supporting prescient instruction prefetch, an approach to improving single-threaded application performance by using helper threads to perform instruction prefetch. We demonstrate the need for enabling store-to-load communication and selective instruction execution when directly pre-executing future regions of an application that suffer I-cache misses. Two novel hardware mechanisms, safe-store and YAT-bits, are introduced that help satisfy these requirements. This paper also proposes and evaluates finite state machine recall, a technique for limiting pre-execution to branches that are hard to predict by leveraging a counted I-prefetch mechanism. On a research Itanium® SMT processor with next-line and streaming I-prefetch mechanisms that incurs latencies representative of next generation processors, prescient instruction prefetch can improve performance by an average of 10.0% to 22% on a set of SPEC 2000 benchmarks that suffer significant I-cache misses. Prescient instruction prefetch is found to be competitive against even the most aggressive research hardware instruction prefetch technique: fetch directed instruction prefetch.
['Tor M. Aamodt', 'Paul Chow', 'Per Hammarlund', 'Hong Wang', 'John Paul Shen']
Hardware Support for Prescient Instruction Prefetch
912,159
In this paper we describe improvements to the techniques used to cryptanalyze SHA-0 and introduce the first results on SHA-1. The results include a generic multi-block technique that uses near-collisions in order to find collisions, and a four-block collision of SHA-0 found using this technique with complexity 2^51. Then, extensions of this and prior techniques are presented that allow us to find collisions of reduced versions of SHA-1. We give collisions of variants with up to 40 rounds, and show the complexities of longer variants. These techniques show that collisions up to about 53-58 rounds can still be found faster than by birthday attacks.
['Eli Biham', 'Rafi Chen', 'Antoine Joux', 'Patrick Carribault', 'Christophe Lemuet', 'William Jalby']
Collisions of SHA-0 and reduced SHA-1
123,245
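As a back-of-the-envelope comparison with the generic baseline mentioned above: a birthday attack on an n-bit digest costs about 2^(n/2) evaluations, so the reported 2^51 four-block SHA-0 collision is roughly 2^29 times cheaper than the 2^80 birthday cost of a 160-bit hash (function name ours):

```python
def birthday_cost_bits(digest_bits):
    """log2 of the expected work of a generic birthday collision search."""
    return digest_bits / 2

# SHA-0/SHA-1 digests are 160 bits, so generic collisions cost ~2^80;
# the reported 2^51 attack is about 2^29 times cheaper.
print(2 ** (birthday_cost_bits(160) - 51))
```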
Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we have used a simulation containing 12000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computation properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum and that these simulations can be used guide future studies on cerebellar mediated tasks and on machine learning problems.
['Wen-Ke Li', 'Matthew J. Hausknecht', 'Peter Stone', 'Michael D. Mauk']
Using a million cell simulation of the cerebellum: Network scaling and task generality
508,948
Maligner: a fast ordered restriction map aligner.
['Lee Mendelowitz', 'David C. Schwartz', 'Mihai Pop']
Maligner: a fast ordered restriction map aligner.
647,245
A phonetic description of self-initiated self-repair sequences involving the repetition of words in German spontaneous speech is presented. Data are drawn from the Kiel Corpus of Spontaneous Speech. The description is primarily impressionistic auditory, but it also employs acoustic records to verify and objectify the impressionistic findings. A number of different patterns around cut-off are identified. The comparison of phonetic differences between reparandum and repair tokens is used to argue that repair sequences can also provide an interesting insight into the way in which fluent stretches of spontaneous speech are phonetically organized.
['Ramona Benkenstein', 'Adrian P. Simpson']
Phonetic correlates of self-repair involving word repetition in German spontaneous speech
566,173
Traditional epidemiologic methods test hypotheses focusing on individual risk factors for studying the disease of interest. However, complex diseases are triggered and progress due to complicated interactions among both genetic and environmental risk factors. In this paper, we propose a network-based approach by integration of pairwise synergistic interactions to identify potential risk factors and their interactions in disease development. Specifically, we study immunologic and metabolic indices that may provide prognostic and diagnostic information regarding the development of Type-1 Diabetes (T1D) by analyzing measurements from oral glucose tolerance tests (OGTTs) and intravenous glucose tolerance tests (IVGTTs) in subjects with high risk from the Diabetes Prevention Trial-Type 1 (DPT-1) study. Performance comparison of our network-based method with individual-factor based analysis demonstrates that the systematic analysis of all potential factors by considering their synergistic relationships helps predict the development of clinical T1D better.
['Amin Ahmadi Adl', 'Xiaoning Qian', 'Ping Xu', 'Kendra Vehik', 'Jeffrey P. Krischer']
Feature ranking based on synergy networks to identify prognostic markers in DPT-1
42,076
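One common way to score a pairwise synergistic interaction, which may differ from the paper's exact measure, is the extra information two discretized risk factors carry about the outcome jointly versus separately; a sketch using scikit-learn (function name ours):

```python
from sklearn.metrics import mutual_info_score

def pairwise_synergy(x1, x2, y):
    """Synergy(X1, X2; Y) = I((X1, X2); Y) - I(X1; Y) - I(X2; Y)
    for discrete (e.g. binned OGTT/IVGTT) measurements.
    Positive values suggest the pair is more informative jointly."""
    joint = [f"{a},{b}" for a, b in zip(x1, x2)]
    return (mutual_info_score(joint, y)
            - mutual_info_score(x1, y)
            - mutual_info_score(x2, y))

# XOR-like toy data: each factor alone is uninformative, the pair is not.
x1 = [0, 0, 1, 1]; x2 = [0, 1, 0, 1]; y = [0, 1, 1, 0]
print(pairwise_synergy(x1, x2, y))   # > 0
```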
Most general purpose proof assistants support versions of typed higher order logic. Experience has shown that these logics are capable of representing most of the mathematical models needed in Computer Science. However, perhaps there exist applications where ZF-style set theory is more natural, or even necessary. Examples may include Scott's classical inverse-limit construction of a model of the untyped lambda-calculus (D_inf) and the semantics of parts of the Z specification notation. This paper compares the representation and use of ZF set theory within both HOL and Isabelle. The main case study is the construction of D_inf. The advantages and disadvantages of higher-order set theory versus first-order set theory are explored experimentally. This study also provides a comparison of the proof infrastructure of HOL and Isabelle.
['Sten Agerholm', 'Michael J. C. Gordon']
Experiments with ZF Set Theory in HOL and Isabelle
776,041
Security analysis and design of an efficient ECC‐based two‐factor password authentication scheme
['Tanmoy Maitra', 'Mohammad S. Obaidat', 'Sk Hafizul Islam', 'Debasis Giri', 'Ruhul Amin']
Security analysis and design of an efficient ECC‐based two‐factor password authentication scheme
867,266
This paper describes the system implemented by Fundació Barcelona Media (FBM) for classifying the polarity of opinion expressions in tweets and SMSs, which is supported by a UIMA pipeline for rich linguistic and sentiment annotations. FBM participated in the SemEval 2013 Task 2 on polarity classification. It ranked 5th in Task A (constrained track) using an ensemble system combining ML algorithms with dictionary-based heuristics, and 7th (Task B, constrained) using an SVM classifier with features derived from the linguistic annotations and some heuristics.
['Carlos Rodriguez-Penagos', 'Jordi Atserias Batalla', 'Joan Codina-Filbà', 'David García-Narbona', 'Jens Grivolla', 'Patrik Lambert', 'Roser Saurí']
FBM: Combining lexicon-based ML and heuristics for Social Media Polarities
615,627
On the verge of the convergence between high performance computing (HPC) and Big Data processing, it has become increasingly prevalent to deploy large-scale data analytics workloads on high-end supercomputers. Such applications often come in the form of complex workflows with various different components, assimilating data from scientific simulations as well as from measurements streamed from sensor networks, such as radars and satellites. For example, as part of the next generation flagship post-K supercomputer project of Japan, RIKEN is investigating the feasibility of a highly accurate weather forecasting system that would provide a real-time outlook for severe guerrilla rainstorms. One of the main performance bottlenecks of this application is the lack of efficient communication among workflow components, which currently takes place over the parallel file system. In this paper, we present an initial study of a direct communication framework designed for complex workflows that eliminates unnecessary file I/O among components. Specifically, we propose an I/O arbitrator layer that provides direct parallel data transfer among job components that rely on the netCDF interface for performing I/O operations, with only minimal modifications to application code. We present the design and an early evaluation of the framework on the K Computer using up to 4800 nodes running RIKEN's experimental weather forecasting workflow as a case study.
['Jianwei Liao', 'Balazs Gerofi', 'Guo-Yuan Lien', 'Seiya Nishizawa', 'Takemasa Miyoshi', 'Hirofumi Tomita', 'Yutaka Ishikawa']
Toward a General I/O Arbitration Framework for netCDF Based Big Data Processing
895,416
This paper presents an algorithm to ensure the atomicity of a distributed storage that can be read and written by any number of clients. In failure-free and synchronous situations, and even if there is contention, our algorithm has a high write throughput and a read throughput that grows linearly with the number of available servers. The algorithm is devised with a homogeneous cluster of servers in mind. It organizes servers around a ring and assumes point-to-point communication. It is resilient to the crash failure of any number of readers and writers as well as to the crash failure of all but one server. We evaluated our algorithm on a cluster of 24 nodes with dual fast ethernet network interfaces (100 Mbps). We achieve 81 Mbps of write throughput and 8 x 90 Mbps of read throughput (with up to 8 servers) which conveys the linear scalability with the number of servers.
['Rachid Guerraoui', 'Dejan Kostic', 'Ron R. Levy', 'Vivien Quéma']
A High Throughput Atomic Storage Algorithm
262,076
This tutorial paper reviews existing concepts and future directions in selected areas related to simulation of large scale networks. It covers specifically topics in traffic modeling, simulation of routing, network emulation, and real time simulation.
['David M. Nicol', 'Michael Liljenstam', 'Jason Liu']
Advanced concepts in large-scale network simulation
408,972
An annual sequence of wages in England starting in 1245 is used. It is shown that a standard AK-type growth model with capital externality and stochastic productivity shocks is unable to explain important features of the data. Random returns to scale are then considered. Moderate episodes of increasing returns to scale and growth are shown to be compatible with convergence of wage's process towards a unique stationary distribution. This holds true for other relevant values such as GDP and/or capital stock. Furthermore, random returns to scale generate heteroskedasticity, a feature common to macroeconomic time series. Finally, the limit distribution of real wages displays fat tails if returns to scale are episodically increasing. Several inference results supporting randomness of returns to scale are provided.
['Stéphane Auray', 'Aurélien Eyquem', 'Frédéric Jouneau-Sion']
Modeling tails of aggregate economic processes in a stochastic growth model
93,931
For a graph G(V,E) and a vertex s in V, a weighting scheme (w : E -> N) is called a min-unique (resp. max-unique) weighting scheme if for any vertex v of the graph G, there is a unique path of minimum (resp. maximum) weight from s to v. Instead, if the number of paths of minimum (resp. maximum) weight is bounded by n^c for some constant c, then the weighting scheme is called a min-poly (resp. max-poly) weighting scheme. In this paper, we propose an unambiguous non-deterministic log-space (UL) algorithm for the problem of testing reachability in layered directed acyclic graphs (DAGs) augmented with a min-poly weighting scheme. This improves the result due to Reinhardt and Allender [Reinhardt/Allender, SIAM J. Comp., 2000] where a UL algorithm was given for the case when the weighting scheme is min-unique. Our main technique is a triple inductive counting, which generalizes the techniques of [Immerman, SIAM J. Comp., 1988; Szelepcsenyi, Acta Inf., 1988] and [Reinhardt/Allender, SIAM J. Comp., 2000], combined with a hashing technique due to [Fredman et al., J. ACM, 1984] (also used in [Garvin et al., Comp. Compl., 2014]). We combine this with a complementary unambiguous verification method to give the desired UL algorithm. At the other end of the spectrum, we propose a UL algorithm for testing reachability in layered DAGs augmented with max-poly weighting schemes. To achieve this, we first reduce reachability in DAGs to the longest path problem for DAGs with a unique source, such that the reduction also preserves the max-poly property of the graph. Using our techniques, we generalize the double inductive counting method in [Limaye et al., CATS, 2009] where UL algorithms were given for the longest path problem on DAGs with a unique sink and augmented with a max-unique weighting scheme. An important consequence of our results is that, to show NL = UL, it suffices to design log-space computable min-poly (or max-poly) weighting schemes for DAGs.
['Anant Dhayal', 'Jayalal Sarma', 'Saurabh Sawlani']
Polynomial Min/Max-weighted Reachability is in Unambiguous Log-space
638,764
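The min-poly condition above bounds the number of minimum-weight s-to-v paths; a simple polynomial-time dynamic program over a layered DAG computes exactly this quantity (the UL algorithm itself certifies it in logarithmic space via inductive counting; this sketch and its names are ours):

```python
def min_weight_path_counts(layers, edges, s):
    """For each vertex v of a layered DAG, compute the weight of the
    lightest s->v path and how many distinct paths achieve it.
    `layers` is a list of vertex lists; `edges[u]` is a list of (v, w)."""
    INF = float("inf")
    dist = {v: INF for layer in layers for v in layer}
    count = {v: 0 for layer in layers for v in layer}
    dist[s], count[s] = 0, 1
    for layer in layers[:-1]:               # relax layer by layer
        for u in layer:
            if dist[u] == INF:
                continue
            for v, w in edges.get(u, []):
                if dist[u] + w < dist[v]:
                    dist[v], count[v] = dist[u] + w, count[u]
                elif dist[u] + w == dist[v]:
                    count[v] += count[u]
    return dist, count

# Two equally light paths from s to t (weight 2) -> count is 2.
layers = [["s"], ["a", "b"], ["t"]]
edges = {"s": [("a", 1), ("b", 1)], "a": [("t", 1)], "b": [("t", 1)]}
print(min_weight_path_counts(layers, edges, "s")[1]["t"])
```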
The wide adoption of the Z39.50 protocol by libraries exposes their ability to participate in a distributed environment. In spite of the unified global access mechanism specified by the Z39.50 protocol, unsupported Access Points result in query failures and/or inconsistent answers. One way to address this issue is to substitute an unsupported Access Point with others, so that semantics as close as possible to those of the original Access Point can be obtained. In this paper we present zSAPN (Z39.50 Semantic Access Point Network), a system which enhances the interoperability of library search systems by exploiting the semantics from the Bib-1 Access Point official specification of the Z39.50 information retrieval protocol. zSAPN substitutes each unsupported Access Point with a set of other supported ones, whose appropriate combination would either broaden or narrow the initial semantics, according to the user's choice.
['Michalis Sfakakis', 'Sarantos Kapidakis']
Enhance the interoperability of the library search systems with zSAPN
33,019
We consider how to allocate bandwidth in a multicast tree so as to optimize some global measure of performance. In our model each receiver has a budget to be used for bandwidth reservation on links along its path from the source, and each link has a cost function depending on the amount of total bandwidth reserved at the link by all receivers using that link. We formulate and solve a problem of allocating bandwidth in the multicast tree such that the sum of link costs is minimized.
['Murali S. Kodialam', 'Steven H. Low']
Resource allocation in a multicast tree
59,762
Distributed Detection of Multi-Hop Information Flows With Fusion Capacity Constraints
['Ameya Agaskar', 'Ting He', 'Lang Tong']
Distributed Detection of Multi-Hop Information Flows With Fusion Capacity Constraints
605,495
In this paper we propose a novel fusion strategy which fuses information from multiple physical traits via a cascading verification process. In the proposed system, users are verified by each individual module sequentially, in the order face, voice and iris, and are accepted once verified by one of the modules, without performing the remaining verifications. Through adjusting the thresholds for each module, the proposed approach exhibits different behavior with respect to security and user convenience. We provide a criterion to select thresholds for different requirements, and we also design a user interface which helps users find the personalized thresholds intuitively. The proposed approach is verified with experiments on our in-house face-voice-iris database. The experimental results indicate that besides the flexibility between security and convenience, the proposed system also achieves better accuracy than its most accurate module.
['Ping-Han Lee', 'Lu-Jong Chu', 'Yi-Ping Hung', 'Sheng-Wen Shih', 'Chu-Song Chen', 'Hsin-Min Wang']
Cascading Multimodal Verification using Face, Voice and Iris Information
369,712
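The accept-on-first-success cascade described above is easy to state in code, assuming each module returns a similarity score compared against its own tunable threshold (a sketch, names ours):

```python
def cascade_verify(scores, thresholds):
    """Sequential face -> voice -> iris verification: accept as soon as one
    module's score clears its threshold; reject only if all modules fail.
    Lower thresholds favor convenience, higher ones favor security."""
    for module, (score, threshold) in enumerate(zip(scores, thresholds)):
        if score >= threshold:
            return True, module      # accepted after `module + 1` checks
    return False, None

# Face is borderline, voice clears its threshold -> accepted at module 1,
# and the iris check is never run.
print(cascade_verify(scores=(0.62, 0.91, 0.50),
                     thresholds=(0.70, 0.80, 0.75)))
```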
Non-coding RNAs (ncRNAs), especially microRNAs (miRNAs), have been widely studied as crucial negative regulatory molecules. Long non-coding RNAs (lncRNAs) have also attracted the attention of researchers due to their potential contribution to multiple essential biological processes. To understand the potential interactions between miRNAs, lncRNAs and mRNAs and their potential roles in tumorigenesis, we report an integrative analysis to predict clustered ncRNA-mRNA with consistent functions, and predict clusters at the single molecule level. The method aims to discover those potential clusters of coding-non-coding RNAs that may contribute to the occurrence and development of human diseases. Based on expression profiles and abnormal expression profiles of miRNAs, lncRNAs and mRNAs, co-expression network analysis can be performed at the level of single and multiple RNA molecules, respectively. Some clustered RNAs at the single RNA molecule level can be obtained, and these members always have consistent functions. Although these non-coding RNAs or coding RNAs are analyzed at the single molecule level, they have close functional relationships, especially between miRNAs and their target mRNAs. Therefore, based on their potential functional and sequence relationships, a further coding-non-coding co-expression network can be constructed based on integrative expression and functional analysis across different molecule levels. The comparison analysis of the single and multiple molecules will provide more information to predict interactions between miRNAs and lncRNAs, ncRNAs and mRNAs. Furthermore, based on special miRNA group
['Li Guo', 'Yang Zhao', 'Sheng Yang', 'Hui Zhang', 'Feng Chen']
An Integrative Analysis of ncRNA-mRNA Using Co-expression Network to Discover Potential Contributions of Coding-non-coding RNA Clusters.
769,697
Mobile applications are becoming very complex as business applications increasingly move to mobile platforms. Hence the same problem of code maintenance and comprehension of poorly documented apps as in the desktop world happens on mobile today. One technique to help with code comprehension is to reverse engineer the application. Specifically, we are interested in the functional structure of the app, i.e. how the classes that implement the use cases interact. We therefore adapted to the iPhone the code analysis technique we developed for desktop applications. In this paper we present the reverse engineering process and tools we used to reverse engineer the code of an iPhone app and show, in a case study, how these tools are used.
['Philippe Dugerdil', 'Roland Sako']
Reverse engineering an iPhone applications using dynamic analysis
682,421
The coupling of highly turbulent convection with rotation within a full spherical shell geometry, such as in the solar convection zone, can be studied with the new anelastic spherical harmonic (ASH) code developed to exploit massively parallel architectures. Inter-processor transposes are used to ensure data locality in spectral transforms, a sophisticated load balancing algorithm is implemented and the Legendre transforms, which dominate the workload for large problems, are highly optimized by exploiting the features of cache memory and instruction pipelines. As a result, the ASH code achieves around 120 Mflop/s per node on the Cray T3E and scales nearly linearly for adequately large problem sizes.
['Gary A. Glatzmaier', 'Thomas Clune', 'Jeff Elliott', 'Mark S. Miesch', 'Juri Toomre']
Computational aspects of a code to study rotating turbulent convection in spherical shells
385,473
With the explosive growth of data, distributed databases are widely used in various applications, including e-commerce, social networking, recommendation systems and location-based services. Among them, HBase is the most commonly used. However, it does not natively support multi-dimensional queries, and existing multi-dimensional indexes built on HBase have disadvantages such as a lack of support for floating-point numbers and inefficient range queries. In this paper, we propose a hybrid index for multi-dimensional query in HBase to address these issues. To build the index, we use the z-ordering curve to divide the multi-dimensional space into grids, then adopt the bit interleaving technique to generate a GridID, and within each grid we follow the code generation method of the Pyramid index. Combining the z-ordering curve with Pyramid technology, our index supports floating-point numbers, efficient multi-dimensional data processing and range queries. Besides, we implement the index structure on HBase and run experiments on real data. The experimental results show that the index achieves efficient range queries and outperforms other index structures.
['Xiaosheng Tang', 'Boda Han', 'Han Chen']
A hybrid index for multi-dimensional query in HBase
959,945
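The GridID construction above rests on bit interleaving along a z-ordering curve; a two-dimensional sketch of the Morton code (the paper's index is multi-dimensional and additionally applies a Pyramid-style code inside each grid cell; function name ours):

```python
def morton_code_2d(x, y, bits=16):
    """Interleave the bits of two grid coordinates into one z-order key,
    so that nearby cells tend to receive nearby keys."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x occupies the even bits
        z |= ((y >> i) & 1) << (2 * i + 1)   # y occupies the odd bits
    return z

# Adjacent grid cells map to close keys: (2, 3) -> 14, (3, 3) -> 15.
print(morton_code_2d(2, 3), morton_code_2d(3, 3))
```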
Incremental data can be defined as dynamic data that changes as time advances. Mining frequent patterns from such data is costly, as most approaches need repetitive scanning and generate a large number of candidate keys. It is important to develop an efficient approach to enhance the performance of mining. This paper proposes a novel tree-based data structure for mining frequent patterns of incremental data, called Tree for Incremental Mining of Frequent Pattern (TIMFP), which is compact as well as almost balanced. TIMFP is also suitable for interactive mining (build once and mine many). We have compared TIMFP with the canonical-order tree (CanTree), Compressed and Arranged Transaction Sequences (CATS) Tree and Incremental Mining Binary Tree (IMBT). The experimental results show that the proposed work has better performance than the other data structures compared in the paper in terms of the time required for constructing the tree as well as mining frequent patterns from the tree.
['Rajni Jindal', 'Malaya Dutta Borah']
A novel approach for mining frequent patterns from incremental data
887,155
A Euromet international laboratory comparison (Project 393) has been carried out between 14 national standards laboratories. Thermistor mounts were used, equipped with PC7 as well as Type N connectors. The comparison is carried out using the normal equipment of the laboratory for high-quality external calibration. The results show good agreement in measuring the calibration coefficient of the thermistor mounts within the claimed expanded uncertainty (typically between 1% and 2%). It confirms the equivalence of national standards for RF power up to 18 GHz. In one case, corrective action is proposed.
['J.P.M. de Vreede', 'W. Korfage', 'P. Persson', 'L. Brunetti', 'V. Lopez', 'I. Petras', 'Philippe Morard', 'F. Hejsek', 'A. Torok', 'J. Ruhaak', 'J. Ascroft', 'Edward T. Dressler', 'M. Celep', 'R. Lapuh', 'J. Achkar']
International comparison for RF power in the frequency range up to 18 GHz
354,599