In the field of human-computer interaction, reports of the involvement of its practitioners in system development projects are rarely available for general scrutiny. The paper draws upon the experience of an HCI team at work within a large collaborative software development project. This experience of four years of HCI practice suggests three key, interdependent factors that are central to the effectiveness of HCI input. The factors are influence, discretion and time available, and are discussed in the context of other reports of the role of HCI practitioners in the field. A number of issues are identified about the nature and scope of HCI in practice. The experience reported is relevant to software development in general, particularly where there are several groups working, sometimes in different sites, towards a unified outcome.
['Nick Rousseau', 'Linda Candy', 'Ernest A. Edmonds']
Influence, discretion and time available: a case study of HCI practice in software development
88,209
This study examines the impact of social media use on participation in large-scale protest campaigns that feature a range of participation opportunities. It develops a theoretical model which distinguishes between support generation and behavior activation effects, differentiates collective action, digital, and personalized action participation, and posits social media use as a mediator between social psychological predictors of protest behavior and actual participation. The empirical analysis focuses on Hong Kong’s Umbrella Movement in 2014. Analyzing a probability sample of university students (N = 795), the findings show that sharing political information and direct connections with political actors via social media have significant impact on both support for and participation in the Umbrella Movement. Social media use has effects on each dependent variable in the causal chain even after all the immediate causes are controlled. Social media use also mediates part of the impact of general political awareness, efficacy, and grievances on movement support and participation.
['Francis L. F. Lee', 'Hsuan-Ting Chen', 'Michael Chan']
Social media use and university students’ participation in a large-scale protest campaign: The case of Hong Kong’s Umbrella Movement
869,527
In this paper, we compare the radiation response of GPUs executing matrix multiplication and FFT algorithms. The provided experimental results demonstrate that for both algorithms, in the majority of cases, the output is affected by multiple errors. The architectural and code analysis highlight that multiple errors are caused by shared resources corruption or thread dependencies. The experimental data and analytical studies can be fruitfully employed to evaluate the expected error rate of GPUs in realistic applications and to design specific and optimized software-based hardening procedures.
['Paolo Rech', 'Laércio Lima Pilla', 'Francesco Silvestri', 'Philippe Olivier Alexandre Navaux', 'Luigi Carro']
Neutron sensitivity and software hardening strategies for matrix multiplication and FFT on graphics processing units
466,174
Spatio-temporal alignment of electronic slides with corresponding presentation video opens up a number of possibilities for making the instructional content more accessible and understandable, such as video quality improvement, better content analysis and novel compression approaches for low bandwidth access. However, these applications require finding accurate transformations between slides and video frames, which is quite challenging in capture settings using pan-tilt-zoom (PTZ) cameras. In this paper we present a nonlinear optimization approach for accurate registration of slide images to video frames. Instead of estimating the projective transformation (i.e., homography) between a single pair of slide and frame images, we solve a set of homographies jointly in a frame sequence that is associated with a given slide. Quantitative evaluation confirms that this substantially improves alignment accuracy.
['Quanfu Fan', 'Kobus Barnard', 'Arnon Amir', 'Alon Efrat']
Accurate alignment of presentation slides with educational video
250,315
Breaking away from traditional attempts at coreference resolution from discourse-only inputs, we try to do the same by constructing rich verb semantics from perceptual data, viz. a 2-D video. Using a bottom-up dynamic attention model and relative-motion features between agents in the video, transitive verbs, their argument ordering etc. are learned through association with co-occurring adult commentary. This leads to learning of synonymous NP phrases as well as anaphora such as “it”, “each other” etc. This preliminary demonstration argues for a new approach to developmental NLP, with multi-modal semantics as the basis for computational language learning.
['Amitabha Mukerjee', 'Kruti Neema', 'Sushobhan Nayak']
Discovering coreference using image-grounded verb models
613,323
We consider a multiuser two-way relay network where multiple pairs of users exchange information with the assistance of a relay node, using orthogonal channels per pair. For a variety of two-way relaying mechanisms, such as decode-and-forward (DF), amplify-and-forward (AF) and compress-and-forward (CF), we investigate the problem of optimally allocating the relay's power among the user pairs it assists such that an arbitrary weighted sum rate of all users is maximized, and solve the problem as one or a set of convex problems for each relaying scheme. Numerical results are presented to demonstrate the performance of the optimum relay power allocation as well as the comparison among different two-way relaying schemes.
['Min Chen', 'Aylin Yener']
Power allocation for F/TDMA multiuser two-way relay networks
143,963
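The per-pair allocation described above reduces, in its simplest form, to a weighted water-filling problem. The sketch below is an illustrative simplification rather than the paper's method: the weights `w`, effective gains `g`, and the generic log-rate objective are hypothetical stand-ins for the actual DF/AF/CF rate expressions, and the dual variable is found by bisection.

```python
import numpy as np

def waterfill(w, g, P):
    """Maximize sum_k w_k * log(1 + g_k * P_k) s.t. sum_k P_k = P, P_k >= 0.

    Hypothetical stand-in for the paper's per-scheme convex problem:
    bisection on the dual variable lam of the power budget, with the
    KKT-optimal allocation P_k = max(w_k/lam - 1/g_k, 0).
    """
    def alloc(lam):
        return np.maximum(w / lam - 1.0 / g, 0.0)

    lo, hi = 1e-12, max(w) * max(g) / 1e-12  # bracket the dual variable
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > P:   # too much power used: raise lam
            lo = lam
        else:                      # budget not exhausted: lower lam
            hi = lam
    return alloc(0.5 * (lo + hi))

w = np.array([1.0, 1.0, 1.0])   # hypothetical pair weights
g = np.array([1.0, 2.0, 4.0])   # hypothetical effective channel gains
P_k = waterfill(w, g, 10.0)     # relay power split across the three pairs
```

With equal weights, the pair with the better channel receives more power, the usual water-filling behavior.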
Weighted low-rank approximation (WLRA), a dimensionality reduction technique for data analysis, has been successfully used in several applications, such as in collaborative filtering to design recommender systems or in computer vision to recover structure from motion. In this paper, we prove that computing an optimal WLRA is NP-hard, already when a rank-one approximation is sought. In fact, we show that it is hard to compute approximate solutions to the WLRA problem with some prescribed accuracy. Our proofs are based on reductions from the maximum-edge biclique problem and apply to strictly positive weights as well as to binary weights (the latter corresponding to low-rank matrix approximation with missing data).
['Nicolas Gillis', 'François Glineur']
Low-Rank Matrix Approximation with Weights or Missing Data Is NP-Hard
444,100
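To make the problem concrete, here is a minimal alternating-minimization heuristic for the rank-one case, the very case the paper proves NP-hard. It is a local method with no optimality guarantee (consistent with the hardness result), and the zero-weight entry models missing data as in the binary-weight setting; the warm start and data are invented for illustration.

```python
import numpy as np

def wlra_rank1(M, W, iters=100):
    """Alternating minimization for rank-one WLRA:
    minimize sum_ij W_ij * (M_ij - u_i * v_j)^2.

    Only a local heuristic: the problem is NP-hard, so no efficient
    method can guarantee the global optimum in general.
    """
    # Warm-start from the unweighted SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    u, v = U[:, 0] * s[0], Vt[0].copy()
    for _ in range(iters):
        # Each update is a weighted least-squares problem with a closed form.
        u = (W * M) @ v / np.maximum(W @ (v * v), 1e-12)
        v = (W * M).T @ u / np.maximum(W.T @ (u * u), 1e-12)
    return u, v

# Rank-one data with one "missing" entry (zero weight = binary-weight case).
M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
W = np.ones_like(M)
W[0, 0] = 0.0
u, v = wlra_rank1(M, W)
err = np.sum(W * (M - np.outer(u, v)) ** 2)   # weighted fit error
```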
As we live in the Internet age, we face high threats of data leakage, identity theft, and inconvenience over authenticating ourselves online. Safe and simple digital identification is crucial in the digital realm. In order to solve the above issues, a mediating digital assistive device could possibly act between the user and computer system in order to replace the current identification system. In this paper I present Veri-Pen, a stylus that provides digital identification through the natural extraction of a signature and fingerprint. The proposed concept aims to deliver simple and secure pen-based online identification. The prototype, built upon user case studies, was evaluated in a simulated scenario of digital authentication in comparison to conventional ID-password identification. The user evaluation confirmed that the pen-based identification tool with biometrics delivers a simple and trustworthy experience to users during the procedure of authentication.
['Ji-Hoon Suh']
Veri-Pen: A Pen-based Identification Through Natural Biometrics Extraction
725,563
Representation and query functionality in multimedia information systems.
['Norbert Fuhr']
Repräsentation und Anfragefunktionalität in multimedialen Informationssystemen.
754,290
Domain-Specific Modeling Languages (DSMLs) play a key role in the development of Safety Critical Systems to model system requirements and implementation. They often need to integrate property and query sub-languages. As a standardized modeling language, OCL can play a key role in their definition, as they can rely both on its concepts and on its textual syntax, which are well known in the Model Driven Engineering community. For example, most DSMLs are defined using MOF for their abstract syntax and OCL for their static semantics, as a metamodeling DSML. OCLinEcore in the Eclipse platform is an example of such a metamodeling DSML integrating OCL as a language component in order to benefit from its property and query facilities. DSMLs for Safety Critical Systems usually provide formal model verification activities for checking model completeness or consistency, and implementation correctness with respect to requirements. This contribution describes a framework to ease the definition of such formal verification tools by relying on a common translation from a subset of OCL to the Why3 verification toolset. This subset was selected to enable efficient automated verification. The framework is illustrated using a block specification language for data flow languages where a subset of OCL is used as a component language.
['Arnaud Dieumegard', 'Marc Pantel', 'Guillaume Babin', 'Martin Carton']
Tool Paper: A Lightweight Formal Encoding of a Constraint Language for DSMLs
665,646
There are many different types of video coding standards which can only be played with specific decoders. The lack of a decoder requires a user to download and install the proper decoder in order to decode a particular video stream. This however may not be preferable for real-time applications due to real-time constraints. We propose active techniques to dynamically inject video coding software into the transmitted video packets so that users are capable of playing video encoded in any format without having the decoding software pre-installed. The proposed techniques encapsulate the encoded video stream along with the appropriate video decoding software into active packets, and transmit the active packets to the receiving terminal. The receiver then only needs to extract the code from the active packets to get the software to decode the encoded video. The process of creating and extracting active packets is referred to as active activation. Depending upon the nature of the video stream, there are different active techniques to encapsulate and restore the video stream. This paper details these active techniques.
['Jyh-Cheng Chen', 'Prathima Agrawal']
Active techniques for real-time video transmission and playback
297,883
This paper addresses the robust performance issue in the presence of controller uncertainty for single-input single-output LTI systems. All system uncertainties are modeled as additive, and the final result is a frequency-domain upper bound for the controller uncertainty weighting function which acts as a sufficient condition for maintaining the system's robust performance. The bound is calculated in a simple way which makes it suitable for real-time applications needing multiple controller design attempts due to changes in system components or environmental conditions.
['Vahid R. Dehkordi', 'Benoit Boulet']
Frequency-domain robust performance condition for plant and controller uncertainty in SISO LTI systems
376,318
The Cluster Variation Method is a class of approximation methods containing the Bethe and Kikuchi approximations as special cases. We derive two novel iteration schemes for the Cluster Variation Method. One is a fixed-point iteration scheme which gives a significant improvement over loopy BP, mean-field and TAP methods on directed graphical models. The other is a gradient-based method that is guaranteed to converge and is shown to give useful results on random graphs with mild frustration. We conclude that the methods are of significant practical value for large inference problems.
['Hilbert J. Kappen', 'Wim Wiegerinck']
Novel iteration schemes for the Cluster Variation Method
186,437
With the advances in cloud computing and virtualization technologies, Software-Defined Networking (SDN) has become a fertile ground for building network applications for management and security using the OpenFlow protocol, which gives access to the forwarding plane. This paper presents an analysis and evaluation of OpenFlow message usage for supporting network security applications. After describing the considered security attacks, we present mitigation and defence strategies that are currently used in SDN environments to tackle them. We then analyze the dependencies of these mechanisms on the OpenFlow messages that support their instantiation. Finally, we conduct a series of experiments on software and hardware OpenFlow switches in order to validate our analysis and quantify the limits of current security mechanisms with different OpenFlow implementations.
['Sebastian Seeber', 'Gabi Dreo Rodosek', 'Gaetan Hurel', 'Remi Badonnel']
Analysis and Evaluation of OpenFlow Message Usage for Security Applications
848,264
Soft constraints are introduced in an iterative projection approach, in order to make the set of constraints compatible for the case of noisy measurements. The degrees of freedom in the design are then used to arrive at a computationally simple form of the soft constraint algorithm. Simulation shows that the true solution is still feasible under noisy conditions, a property lost with the use of hard constraint algorithms.
['A.A. Beex']
Soft constraint iterative reconstruction from noisy projections
45,015
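The soft-constraint idea above can be sketched as relaxed projections: instead of projecting onto each hyperplane exactly, project only when the residual exceeds a tolerance, so noisy constraints remain mutually compatible. The hyperplanes, tolerances, and 2-D example below are hypothetical illustrations, not the algorithm's actual design.

```python
import numpy as np

def soft_pocs(x0, constraints, eps, iters=100):
    """Iterative projections with soft hyperplane constraints.

    Each hard constraint a . x = b is relaxed to the slab |a.x - b| <= e,
    and we project onto the nearest face of the slab only when violated.
    """
    x = x0.astype(float).copy()
    for _ in range(iters):
        for (a, b), e in zip(constraints, eps):
            r = a @ x - b
            if abs(r) > e:
                # Move just far enough to land on the slab boundary.
                x = x - (r - np.sign(r) * e) * a / (a @ a)
    return x

# Noisy, mutually inconsistent hard constraints around x ~ [1, 1]:
# the first two hyperplanes cannot both hold, but their slabs intersect.
constraints = [(np.array([1.0, 0.0]), 1.05),
               (np.array([1.0, 0.0]), 0.95),
               (np.array([0.0, 1.0]), 1.02)]
x = soft_pocs(np.zeros(2), constraints, eps=[0.1, 0.1, 0.1])
```

With hard constraints the first two projections would oscillate forever; the slabs make the feasible set nonempty, matching the compatibility property the abstract describes.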
Alginate nanocomposite hydrogels with incorporated electrochemically synthesized silver nanoparticles were investigated regarding cytotoxicity in vitro. Direct contact test of Ag/alginate discs was applied in 2D monolayer cultures of bovine calf chondrocytes while a 3D culture of bovine articular cartilage explants pressed by the discs was established in a biomimetic bioreactor with dynamic compression in the physiological regime (10 % strain, 0.84 Hz frequency, 1 h on / 1 h off). Moderate cytotoxicity was observed in 2D cell cultures as opposed to findings in 3D explant cultures, which were not affected by the Ag/alginate discs despite the compression.
['Jovana Zvicer', 'M. Samardzic', 'V.B. Mišković-Stanković', 'Bojana Obradovic']
Cytotoxicity studies of Ag/alginate nanocomposite hydrogels in 2D and 3D cultures
581,765
In this paper we present a clustering based approach to partition software systems into meaningful subsystems. In particular, the approach uses lexical information extracted from four zones in Java classes, which may provide a different contribution towards software systems partitioning. To automatically weigh these zones, we introduced a probabilistic model, and applied the Expectation-Maximization (EM) algorithm. To group classes according to the considered lexical information, we customized the well-known K-Medoids algorithm. To assess the approach and the implemented supporting system, we have conducted a case study on six open source software systems.
['Anna Corazza', 'Sergio Di Martino', 'Giuseppe Scanniello']
A Probabilistic Based Approach towards Software System Clustering
204,820
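A minimal version of the K-Medoids step described above (without the paper's probabilistic zone weighting or lexical distances) might look like this; the 1-D example data stand in for whatever distance matrix the lexical information produces.

```python
import numpy as np

def k_medoids(D, k, iters=100, seed=0):
    """Basic K-Medoids on a precomputed n x n distance matrix D.

    Generic sketch of the algorithm the paper customizes: alternate
    assignment to the nearest medoid with re-election of each cluster's
    medoid as the member minimizing total within-cluster distance.
    """
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            costs = D[np.ix_(members, members)].sum(axis=1)
            new[c] = members[np.argmin(costs)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

# Two well-separated 1-D groups as a stand-in for class dissimilarities.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
D = np.abs(x[:, None] - x[None, :])
medoids, labels = k_medoids(D, 2)
```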
An improved SINR metric is proposed for the random beamforming scheme introduced by Sharif and Hassibi, when the channel observation used to compute the SINR is known to be noisy or outdated. The effect of noise on the MIMO channel estimate is accounted for using results on the perturbation of the eigenspaces of Hermitian matrices. The new metric, designed as a conservative estimate of the real SINR, is based on expectations of bounds on the signal and interference power. It is shown through simulations that it can noticeably reduce the outage probability, in the realistic setting of a 4 × 2 antenna system, at the cost of a minor reduction of the achievable sum-rate.
['Roland Tresch', 'Maxime Guillaud']
SINR Estimation in Random Beamforming with Noisy MIMO Channel Measurements
120,339
Even though hierarchical group communication is a prominent communication model for a variety of applications, featured by hierarchical communication rules, it has not been sufficiently investigated in the security literature. In this paper, we introduce private hierarchical group communication, we determine its specific confidentiality requirements, and then we propose an efficient key management protocol satisfying those requirements. This work is done in the frame of a national French project whose consortium includes the international telecom company EADS, INRIA, CNRS and ENST-Paris. The project is called SafeCast and deals with group communication in PMR networks that are used mainly by security forces (police, fire fighters, soldiers, and so forth) in areas where it is difficult to have network infrastructure, such as battlefields or the aftermath of a natural disaster (earthquake, tsunami, tornado, or similar).
['Hani Ragab Hassan', 'Abdelmadjid Bouabdallah', 'Hatem Bettahar', 'Yacine Challal']
An Efficient Key Management Algorithm for Hierarchical Group Communication
453,192
aHead: Considering the Head Position in a Multi-sensory Setup of Wearables to Recognize Everyday Activities with Intelligent Sensor Fusions
['Marian Haescher', 'John Trimpop', 'Denys J. C. Matthies', 'Gerald Bieber', 'Bodo Urban', 'Thomas Kirste']
aHead: Considering the Head Position in a Multi-sensory Setup of Wearables to Recognize Everyday Activities with Intelligent Sensor Fusions
640,625
Background: There are many methods for analyzing microarray data that group together genes having similar patterns of expression over all conditions tested. However, in many instances the biologically important goal is to identify relatively small sets of genes that share coherent expression across only some conditions, rather than all or most conditions as required in traditional clustering; e.g. genes that are highly up-regulated and/or down-regulated similarly across only a subset of conditions. Equally important is the need to learn which conditions are the decisive ones in forming such gene sets of interest, and how they relate to diverse conditional covariates, such as disease diagnosis or prognosis.
['Joseph Roden', 'Brandon King', 'Diane Trout', 'Ali Mortazavi', 'Barbara J. Wold', 'Christopher E. Hart']
Mining gene expression data by interpreting principal components
294,937
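Although only the background of the abstract survives here, the technique named in the title can be sketched generically: PCA via SVD on a genes-by-conditions matrix, with the component loadings indicating which conditions drive each gene set. The synthetic data below are invented for illustration and are not from the paper.

```python
import numpy as np

def principal_components(X, k=2):
    """PCA via SVD on a centered genes x conditions matrix.

    Returns gene scores, condition loadings, and the fraction of
    variance explained per component.
    """
    Xc = X - X.mean(axis=0, keepdims=True)   # center each condition (column)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                # gene coordinates on the PCs
    loadings = Vt[:k]                        # condition weights per PC
    explained = (s ** 2) / np.sum(s ** 2)
    return scores, loadings, explained

rng = np.random.default_rng(1)
# 50 genes, 6 conditions; a shared signal lives in the first 3 conditions,
# mimicking a gene set coherent across only a subset of conditions.
signal = rng.standard_normal((50, 1)) @ np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0]])
X = signal + 0.05 * rng.standard_normal((50, 6))
scores, loadings, explained = principal_components(X)
```

The first component's loadings concentrate on the three signal-bearing conditions, which is the kind of interpretation the title refers to.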
Traffic congestion has already become a distinctly serious problem in both developed and developing countries. Among the proposed methods to solve the traffic congestion problem, variable speed limit (VSL) is considered one of the most promising. But in traditional VSL, the speed limit sign and the control distance are fixed, which makes VSL lack deployment and control flexibility. In connected vehicle (CV) systems, a crossing field of intelligent transportation systems (ITS) and the internet of things (IoT), both control and deployment flexibility can be achieved. Furthermore, a big data environment is formed in CV to solve traffic problems. In this paper, connected vehicle-based variable speed limit (CV-VSL) is proposed, and a simulation platform, SimIVC, is used to study the influence of control distance. The results show that the improvement in traffic performance is 0.72% greater with a control distance of 270 metres than with 250 metres, which means that the traffic performance...
['Lu Pu', 'Xiaowei Xu', 'Han He', 'Hanqing Zhou', 'Zhijun Qiu', 'Yu Hu']
A flexible control study of variable speed limit in connected vehicle systems
16,069
This paper demonstrates a novel approach for motion classification and analysis using pressure sensors worn by a person. The pressure signal is analysed to search for features corresponding to the motion states, and matched against a typical human walking pattern. A prototype system is developed which provides motion classification results in real time. The motion classification results consist of the number of steps taken by the participant together with the corresponding motion state. The system distinguishes the states associated with a person travelling in a lift, walking on stairs, walking on flat ground, and resting. Data from several participants were collected in a measurement campaign using pressure sensors only, which shows a precision rate of over 90% and a recall rate between 89% and 96% for the states associated with the movement of the participant.
['Birendra Ghimire', 'Christian Nickel', 'Jochen Seitz']
Pedestrian motion state classification using pressure sensors
939,392
On the fusion of coalgebraic logics
['Fredrik Dahlqvist', 'Dirk Pattinson']
On the fusion of coalgebraic logics
606,775
Recognition and Localization Method of Overlapping Apples for Apple Harvesting Robot.
['Tian Shen', 'Dean Zhao', 'Weikuan Jia', 'Yu Chen']
Recognition and Localization Method of Overlapping Apples for Apple Harvesting Robot.
940,607
Understanding the internal process of ConvNets is commonly done using visualization techniques. However, these techniques do not usually provide a tool for estimating the stability of a ConvNet against noise. In this paper, we show how to analyze a ConvNet in the frequency domain using a 4-dimensional visualization technique. Using the frequency-domain analysis, we show why a ConvNet might be sensitive to very low magnitude additive noise. Our experiments on a few ConvNets trained on different datasets revealed that the convolution kernels of a trained ConvNet usually pass most frequencies and are not able to effectively eliminate the effect of high frequencies. Our next experiments show that a convolution kernel which has a more concentrated frequency response can be more stable. Finally, we show that fine-tuning a ConvNet using a training set augmented with noisy images can produce more stable ConvNets.
['Elnaz Jahani Heravi', 'Hamed Habibi Aghdam', 'Domenec Puig']
Analyzing Stability of Convolutional Neural Networks in the Frequency Domain
625,282
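The frequency-domain inspection of convolution kernels described above can be sketched with a plain zero-padded 2-D FFT; the averaging kernel and pad size below are hypothetical examples, not the paper's setup or its 4-dimensional visualization.

```python
import numpy as np

def kernel_frequency_response(kernel, size=64):
    """Magnitude of the 2-D frequency response of a convolution kernel.

    Zero-pad the kernel to `size` x `size`, take the 2-D FFT, and shift
    so the zero frequency sits at the center of the array.
    """
    return np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(size, size))))

# A 3x3 averaging (blur) kernel: concentrated low-pass response,
# the kind of kernel the paper's analysis would call more stable.
blur = np.ones((3, 3)) / 9.0
resp = kernel_frequency_response(blur)
dc = resp[32, 32]     # zero frequency after fftshift
corner = resp[0, 0]   # highest spatial frequencies
```

A kernel that passed all frequencies would show `corner` comparable to `dc`; the blur kernel attenuates the corner strongly.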
In this paper, we propose an algorithm for reducing the number of unknown words in blog documents by replacing peculiar expressions with formal expressions. Japanese blog documents contain many peculiar expressions regarded as unknown sequences by morphological analyzers. Reducing these unknown sequences improves the accuracy of morphological analysis for blog documents. Manual registration of peculiar expressions in the morphological dictionaries is a conventional solution, which is costly and requires specialized knowledge. In our algorithm, substitution candidates for peculiar expressions are automatically retrieved from formally written documents such as newspapers and stored as substitution rules. For correct replacement, a substitution rule is selected based on three criteria: its appearance frequency in the retrieval process, the edit distance between the substituted sequence and the original text, and the estimated accuracy improvement of word segmentation after the substitution. Experimental results show our algorithm reduces the number of unknown words by 30.3%, twice the reduction rate of conventional methods, while maintaining the same segmentation accuracy.
['Kazushi Ikeda', 'Tadashi Yanagihara', 'Kazunori Matsumoto', 'Yasuhiro Takishima']
Unsupervised Text Normalization Approach for Morphological Analysis of Blog Documents
284,720
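One of the three selection criteria, the edit distance between a substituted sequence and the original text, has a standard dynamic-programming form. This is generic Levenshtein distance, not the paper's exact implementation.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b.

    Classic DP over a rolling row: prev[j] holds the distance between
    a-prefix of length i-1 and b-prefix of length j.
    """
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```

For ranking substitution rules, a smaller distance means the formal replacement stays closer to the original peculiar expression.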
While textual reviews have become prominent in many recommendation-based systems, automated frameworks to provide relevant visual cues against text reviews where pictures are not available is a new form of task confronted by data mining and machine learning researchers. Suggestions of pictures that are relevant to the content of a review could significantly benefit the users by increasing the effectiveness of a review. We propose a deep learning-based framework to automatically: (1) tag the images available in a review dataset, (2) generate a caption for each image that does not have one, and (3) enhance each review by recommending relevant images that might not be uploaded by the corresponding reviewer. We evaluate the proposed framework using the Yelp Challenge Dataset. While a subset of the images in this particular dataset are correctly captioned, the majority of the pictures do not have any associated text. Moreover, there is no mapping between reviews and images. Each image has a corresponding business-tag where the picture was taken, though. The overall data setting and unavailability of crucial pieces required for a mapping make the problem of recommending images for reviews a major challenge. Qualitative and quantitative evaluations indicate that our proposed framework provides high quality enhancements through automatic captioning, tagging, and recommendation for mapping reviews and images.
['Roberto Camacho Barranco', 'Laura M. Rodriguez', 'Rebecca Urbina', 'M. Shahriar Hossain']
Is a Picture Worth Ten Thousand Words in a Review Dataset
822,457
It has been shown that convergence to a solution can be significantly accelerated for a number of iterative image reconstruction algorithms, including simultaneous Cimmino-type algorithms, the "expectation maximization" method for maximizing likelihood (EMML) and the simultaneous multiplicative algebraic reconstruction technique (SMART), through the use of rescaled block-iterative (BI) methods. These BI methods involve partitioning the data into disjoint subsets and using only one subset at each step of the iteration. One drawback of these methods is their failure to converge to an approximate solution in the inconsistent case, in which no image consistent with the data exists; they are always observed to produce limit cycles (LCs) of distinct images, through which the algorithm cycles. No one of these images provides a suitable solution, in general. The question that arises then is whether or not these LC vectors retain sufficient information to construct from them a suitable approximate solution; we show that they do. To demonstrate that, we employ a "feedback" technique in which the LC vectors are used to produce a new "data" vector, and the algorithm restarted. Convergence of this nested iterative scheme to an approximate solution is then proven. Preliminary work also suggests that this feedback method may be incorporated in a practical reconstruction method.
['Charles L. Byrne']
Convergent block-iterative algorithms for image reconstruction from inconsistent data
214,516
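A minimal block-iterative scheme of the kind discussed above can be sketched as follows; the tiny system and block partition are invented, and the consistent example converges to a solution rather than producing the limit cycle the paper analyzes for inconsistent data.

```python
import numpy as np

def block_art(A, b, blocks, sweeps=200, relax=1.0):
    """Block-iterative ART sketch: each step uses only one block of rows
    (one data subset), cycling through the blocks.

    The step is a relaxed Landweber update on the current block, scaled
    by the block's squared spectral norm. With inconsistent data this
    cycling is what produces the limit cycles the paper studies.
    """
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for rows in blocks:
            Ab, bb = A[rows], b[rows]
            x = x + relax * Ab.T @ (bb - Ab @ x) / np.linalg.norm(Ab, 2) ** 2
    return x

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true                          # consistent data: a solution exists
x = block_art(A, b, blocks=[[0, 1], [2]])
```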
This paper describes recent work on the DynDial project towards incremental semantic interpretation in dialogue. We outline our domain-general grammar-based approach, using a variant of Dynamic Syntax integrated with Type Theory with Records and a Davidsonian event-based semantics. We describe a Java-based implementation of the parser, used within the Jindigo framework to produce an incremental dialogue system capable of handling inherently incremental phenomena such as split utterances, adjuncts, and mid-sentence clarification requests or backchannels.
['Matthew Purver', 'Arash Eshghi', 'Julian Hough']
Incremental semantic construction in a dialogue system
618,942
This paper presents the characterization of planar microfabricated coils designed for an electromagnetic system which is realized in batch-type wafer technology. The challenge is to fabricate good coils in the simplest and most economical way. The process flow used for their fabrication in the clean room as well as the manufacturing results are discussed. After the magnetic simulations and the electric characterization, the thermal behaviour of the microfabricated coils is observed to determine the maximum allowed current density and the corresponding heating. These are the main characteristics to know for the sizing of an electromagnetic system.
['Sebastiano Merzaghi', 'Pascal Meyer', 'Yves Perriard']
Development of Planar Microcoils for an Electromagnetic Linear Actuator Fabricated in Batch-Type Wafer Technology
470,502
This paper presents a task description language (TDL) for underwater robots. General primitives of TDL were designed by considering common primitives of conventional computer languages, and underwater-robot-specific command primitives were designed by modifying the turtle graphics commands of Logo. Alias and procedure commands make the TDL intuitive and modular. Concurrent processing and sensor-based event handling commands are also available in TDL. An example of a survey task with a lawnmower pattern shows the feasibility and ease of use of TDL.
['Tae Won Kim', 'Junku Yuh']
Task description language for underwater robots
364,813
The digits of π have intrigued both the public and research mathematicians from the beginning of time. This article briefly reviews the history of this venerable constant, and then describes some recent research on the question of whether π is normal, or, in other words, whether its digits are statistically random in a specific sense.
['David H. Bailey', 'Jonathan M. Borwein']
Pi day is upon us again and we still do not know if Pi is normal
566,382
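The notion of digit-level statistical randomness can be made concrete by tallying digit frequencies: base-10 normality would imply (among much stronger properties) that these frequencies approach uniformity as more digits are taken. The 100-digit prefix below is the standard decimal expansion of π; the tally itself is just an illustration, not a test of normality.

```python
# First 100 decimal digits of pi, after the decimal point, in groups of 10.
PI_DIGITS = (
    "1415926535" "8979323846" "2643383279" "5028841971" "6939937510"
    "5820974944" "5923078164" "0628620899" "8628034825" "3421170679"
)

def digit_counts(digits):
    """Frequency of each decimal digit in the given string."""
    return {d: digits.count(d) for d in "0123456789"}

counts = digit_counts(PI_DIGITS)
```

Even in this short prefix every digit appears, though far from uniformly (9 occurs 14 times); normality is the assertion that such imbalances vanish in the limit, which remains unproven.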
Reliability and timeliness are two essential requirements for successful detection of critical events in Wireless Sensor Networks (WSNs). The base station (BS) is particularly interested in reliable and timely collection of data sent by the nodes close to the ongoing event, and at that time, the data sent by other nodes have little importance. In this paper, we propose the Congestion and Delay Aware Routing (CODAR) protocol that routes data in congestion- and delay-aware manners. If congestion occurs, it also mitigates congestion through accurate data-rate adjustment. Each node collects control information from neighbours and works in a distributed manner. CODAR also puts emphasis on successful collection of this control information, which eventually provides the desired performance. Experimental results show that CODAR is capable of avoiding and mitigating congestion effectively, and performs better than similar known techniques in terms of reliable and timely event detection.
['Mohammad Masumuzzaman Bhuiyan', 'Iqbal Gondal', 'Joarder Kamruzzaman']
CODAR: Congestion and Delay Aware Routing to detect time critical events in WSNs
183,356
Atomic force microscopes (AFMs) are used for sample imaging and characterization at nanometer scale. In this work, we consider a metrological AFM, which is used for the calibration of transfer standards for commercial AFMs. The metrological AFM uses a three-degree-of-freedom (DOF) stage to move the sample with respect to the probe of the AFM. The repetitive sample topography introduces repetitive disturbances in the system. To suppress these disturbances, repetitive control (RC) is applied to the imaging axis. A rotated sample orientation with respect to the actuation axes introduces a nonrepetitiveness in the originally fully repetitive errors and yields a deteriorated performance of RC. Directional repetitive control (DRC) is introduced to align the axes of the scanning movement with the sample orientation under the microscope. Experiments show that the proposed directional repetitive controller significantly reduces the tracking error as compared to standard repetitive control.
['Roel Merry', 'Michael Ronde', 'René van de Molengraft', 'Kathelijne Koops', 'Maarten Steinbuch']
Directional Repetitive Control of a Metrological AFM
144,535
Organizations have been using decision support systems to help them understand and predict interesting business opportunities over their huge databases, also known as data marts. OLAP tools have been widely used for retrieving information in a summarized, cube-like way by employing customized cubing methods. The majority of these cubing methods suffer from being purely data-driven rather than discovery-driven. Data marts grow quite fast, so an incremental OLAP mining process is a required and desirable solution for mining evolving cubes. In order to present a solution that covers the previously mentioned issues, we propose a cube-based mining method which can compute an incremental cube, handle concept hierarchy modeling, and incrementally mine multidimensional and multilevel association rules. The evaluation study using real and synthetic datasets demonstrates that our approach is an effective OLAP mining method for evolving data marts.
['Ronnie Alves', 'Orlando Belo', 'Fabio Monteiro da Costa']
Effective OLAP Mining of Evolving Data Marts
139,254
With the adoption of statistical static timing analysis (SSTA), the characterization of standard cell libraries for delay variations and output transition time (output slew) variations, referred to as statistical characterization, is becoming essential. Statistical characterization of intra-cell mismatch variations as well as inter-chip variations needs to be performed efficiently with acceptable accuracy as a function of process parameter variations. The conventional approach to this problem is to model these mismatch variations by characterizing each device variation separately. However, this entails a cost proportional to the product of the number of devices (n_d) in the cell and the number of local statistical parameters (n_p), and characterization becomes infeasible. In this work, we propose an improved transient sensitivity analysis to accelerate statistical characterization. We compute sensitivities of node voltages with respect to any process/design parameters. These sensitivities are used to extract the sensitivities of delays and transition times. It is critical to note the sparsity of the circuit's dependence on the statistical parameters (i.e., any given parameter directly impacts only a small portion of the circuit, sometimes only one device). By exploiting this sparsity we obtain a method that is O(n_p), compared to O(n_p × n_d) for the conventional approach. As an example, for an AOI cell with 40 devices, the sensitivity analysis, compared to the standard approach using multiple simulations, results in more than 18X runtime improvement with better accuracy.
['Ben Gu', 'Kiran Kumar Gullapalli', 'Yun Zhang', 'Savithri Sundareswaran']
Faster statistical cell characterization using adjoint sensitivity analysis
353,322
In today's competitive markets, business success requires fully understanding customers, striving to maximally satisfy their desires and preferences, and on this basis building a solid, long-term and fruitful relationship with them. This is the core of customer relationship management. Good customer understanding is the basis for increasing customer lifetime value, which encompasses customer segmentation. The goal of customer segmentation is to group customers by common characteristics so that the created segments are profitable and growing, enabling companies to target each segment with specific offerings. This cannot be done without intelligent methods and techniques for data analysis. The focus of this research is on business-strategy-driven customer segmentation, in an attempt to maximize the potential of customers, the most important resource in business, with a focus on the credit users' segmentation task in the banking industry. The presented case study illustrates the use of a multilayer feed-forward neural network to segment bank customers into two groups: customers who have and who have not had problems with payments.
['Zita Bosnjak', 'Olivera Grljevic']
Credit users segmentation for improved customer relationship management in banking
29,527
With the continuous expansion of the scope of traffic sensor networks, traffic sensory data is widely available and continuously being produced. Traffic sensory data gathered by large numbers of sensors exhibit massive, continuous, streaming and spatio-temporal characteristics compared to traditional traffic data. In order to satisfy the requirements of different applications with these data, we need the capability to process both real-time traffic sensory data in a streaming way and historical traffic sensory data in large volumes. In this paper, we present an approach and a corresponding system for traffic sensory data processing, designed to combine spatio-temporal data partitioning, parallel pipeline processing and stream computing to support traffic sensory data processing in a scalable architecture with real-time guarantees. Three types of applications in a real project are also described in detail to show the significant gains of the proposed approach and system. Numerical evaluations based on experiment results also show that the system achieves high performance in terms of the processing time of traffic sensory data streams.
['Zhuofeng Zhao', 'Weiling Ding', 'Yanbo Han', 'Jianwu Wang']
A Spatio-temporal Parallel Processing System for Traffic Sensory Data
916,669
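A minimal sketch of what spatio-temporal data partitioning can look like: each sensory reading is mapped to a grid cell in space plus a time window, so that readings landing in the same partition can be processed by the same pipeline stage. The cell size and window length below are hypothetical parameters, not values from the paper:

```python
def st_partition(lon, lat, ts, cell=0.01, window=60):
    """Assign a traffic sensory reading to a spatio-temporal partition key:
    (spatial grid cell in lon/lat, time window index in seconds).
    Readings sharing a key can be routed to the same parallel worker."""
    return (int(lon // cell), int(lat // cell), int(ts // window))

# Two readings close in space and time fall into the same partition:
key_a = st_partition(116.397, 39.908, 125)
key_b = st_partition(116.398, 39.909, 130)
```

Keys like these are a common way to shard a stream across workers while keeping spatially and temporally adjacent data together.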
Formal techniques allow exhaustive verification on circuit design (at least in theory), but due to actual computational limitations, workarounds must always be adopted to check only a portion of the design at a time. Sequential equivalence checking is an effective approach, but it can only be applied between circuit descriptions where a one-to-one correspondence for states, as well as for memory elements, is expected. This paper presents a formal methodology to verify RTL descriptions through direct comparison with high-level reference models. By doing so, there is no need to specify or analyze formal properties, as the complete behavior is already contained in the reference model. We also consider the natural discrepancies between system level and RTL code, including non-matching interface and memory elements, and state mapping. In this manner, we are able to prove the functional coherence for the overall sequential behavior of the design under verification.
['Carlos Ivan Castro Marquez', 'Marius Strum', 'Wang Jiang Chau']
Functional verification of complete sequential behaviors: A formal treatment of discrepancies between system-level and RTL descriptions
916,850
An algorithm for performing online clustering on the GPU is proposed which makes heavy use of the atomic operations available on the GPU. The algorithm can cluster multiple documents in parallel in a way that saturates all the parallel threads on the GPU. The algorithm results in up to 3X speedup on a real-time news document data set as well as on randomly generated data, compared to a baseline GPU algorithm that clusters only one document at a time.
['Benjamin E. Teitler', 'Jagan Sankaranarayanan', 'Hanan Samet', 'Marco D. Adelfio']
Online Document Clustering Using GPUs
587,817
On-line algorithms, real time, the virtue of laziness, and the power of clairvoyance
['Giorgio Ausiello', 'Luca Allulli', 'Vincenzo Bonifaci', 'Luigi Laura']
On-line algorithms, real time, the virtue of laziness, and the power of clairvoyance
823,184
Special synchronizers exist for special clock relations such as mesochronous, multi-synchronous and ratiochronous clocks, while variants of N-flip-flop synchronizers are employed when the communicating clocks are asynchronous. N-flip-flop synchronizers are also used in all special cases, at the cost of longer latency than when using specialized synchronizers. The reliability of N-flip-flop synchronizers is expressed by the standard MTBF formula. This paper describes cases of coherent clocks that suffer from a higher failure rate than predicted by the MTBF formula: that formula assumes a uniform distribution of data edges across the sampling clock cycle, but coherent clocking leads to drastically different situations. Coherent clocks are defined as clocks derived from a common source, and their phase distributions are discussed. The effect of jitter is analyzed, and a new MTBF expression is developed. An optimal condition for maximizing MTBF and a circuit that can adaptively achieve that optimum are described. We show a case study of metastability failure in a real 40nm circuit and describe guidelines used to increase its MTBF based on the rules derived in the paper.
['Salomon Beer', 'Ran Ginosar', 'Rostislav (Reuven) Dobkin', 'Yoav Weizman']
MTBF Estimation in Coherent Clock Domains
487,641
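The "standard MTBF formula" the abstract refers to is the classic synchronizer expression MTBF = exp(S/τ) / (T_W · f_C · f_D), whose uniform-edge assumption is exactly what the paper revisits for coherent clocks. A sketch with illustrative (not from the paper) parameter values:

```python
import math

def mtbf(resolve_time, tau, t_w, f_clk, f_data):
    """Standard synchronizer MTBF (seconds):
    MTBF = exp(S / tau) / (T_W * f_C * f_D)
    where S is the metastability resolution time, tau the regeneration
    time constant, T_W the metastability window, f_C the clock frequency
    and f_D the data edge rate. Assumes data edges uniformly distributed
    over the clock cycle -- the assumption that fails for coherent clocks."""
    return math.exp(resolve_time / tau) / (t_w * f_clk * f_data)

# Hypothetical 40 nm-class numbers: each extra half-cycle of resolution
# time multiplies MTBF exponentially.
m1 = mtbf(resolve_time=0.5e-9, tau=10e-12, t_w=20e-12, f_clk=1e9, f_data=100e6)
m2 = mtbf(resolve_time=1.0e-9, tau=10e-12, t_w=20e-12, f_clk=1e9, f_data=100e6)
```

The exponential dependence on S/τ is why adding a flip-flop stage (one more cycle of resolution time) dramatically increases MTBF.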
The FlexRay bus is a communication standard used in the automotive industry. It offers a deterministic message transmission in the static segment following a time-triggered schedule. Even if its bandwidth is ten times higher than the bandwidth of controller area network (CAN), its throughput limits are going to be reached in high-class car models soon. A solution that could postpone this problem is to use an efficient scheduling algorithm that exploits both channels of the FlexRay. The significant and often neglected feature that can theoretically double the bandwidth is the possibility to use two independent communication channels that can intercommunicate through the gateway. In this paper, we propose a heuristic algorithm that decomposes the scheduling problem to the electronic control unit (ECU)-to-channel assignment subproblem, which decides which channel the ECUs should be connected to and the channel scheduling subproblem that creates static segment communication schedules for both channels. The algorithm is able to create a schedule for cases where channels are configured in the independent mode, as well as in the fault-tolerant mode or in cases where just part of the signals are fault tolerant. Finally, the algorithm is evaluated on real data and synthesized data, and the relation between the portion of fault-tolerant signals and the number of allocated slots is presented.
['Jan Dvorak', 'Zdenek Hanzalek']
Using Two Independent Channels With Gateway for FlexRay Static Segment Scheduling
799,269
This task tries to establish the relative quality of available semantic resources (derived by manual or automatic means). The quality of each large-scale knowledge resource is indirectly evaluated on a Word Sense Disambiguation task. In particular, we use the Senseval-3 and SemEval-2007 English Lexical Sample tasks as evaluation benchmarks to evaluate the relative quality of each resource. Furthermore, trying to be as neutral as possible with respect to the knowledge bases studied, we systematically apply the same disambiguation method to all the resources. A completely different behaviour is observed on the two lexical data sets (Senseval-3 and SemEval-2007).
['Montse Cuadros', 'German Rigau']
SemEval-2007 Task 16: Evaluation of Wide Coverage Knowledge Resources
173,604
A Low Power Trainable Neuromorphic Integrated Circuit That Is Tolerant to Device Mismatch
['Chetan Singh Thakur', 'Runchun Wang', 'Tara Julia Hamilton', 'Jonathan Tapson', 'André van Schaik']
A Low Power Trainable Neuromorphic Integrated Circuit That Is Tolerant to Device Mismatch
700,292
The image sequence is represented as a set of moving regions which make up moving objects. Motion, position and gray level (or color) information is used for segmenting the moving objects. A criterion is proposed for modeling the 3-D motion and segmentation. After identifying the occluding regions, the moving objects are tracked over the next frames. Prediction is employed for estimating the future moving object position and its optical flow.
['Adrian G. Bors', 'Ioannis Pitas']
Motion and segmentation prediction in image sequences based on moving object tracking
154,073
A number of studies have explored the effect of embodied pedagogical agents in multimedia learning environments, and have regarded them as practical and powerful tools for instruction. However, it is hard to find a development kit that supports the instructor or teacher in creating interactive learning applications together with animated pedagogical agents. Therefore, in order to facilitate the instructor or teacher in generating digital learning content, we propose an IDML-based embodied pedagogical agent system (IDML-based EPAS). The system provides the opportunity for teachers to integrate their teaching contents into multimedia presentations and to create embodied pedagogical agents for attracting learners' attention. Moreover, learners can interact with digital learning materials to facilitate knowledge acquisition. Finally, we used the instruction of traffic safety as the interactive learning material in the IDML-based EPAS to assist learners in gaining more positive attitudes and better achievement.
['Kai-Yi Chin', 'Jim-Min Lin', 'Zeng-Wei Hong', 'Kun-Ta Lin', 'Wei-Tsong Lee']
Developing an IDML-Based Embodied Pedagogical Agent System for Multimedia Learning
14,668
We analyze the benefits of infrastructure support in improving the throughput scaling in networks of n randomly located wireless nodes. The infrastructure uses multi-antenna base stations (BSs), in which the number of BSs and the number of antennas at each BS can scale at arbitrary rates relative to n. We introduce two multi-antenna BS-based routing protocols and analyze their throughput scaling laws. Two conventional schemes not using BSs are also shown for comparison. In dense networks, we show that the BS-based routing schemes do not improve the throughput scaling. In contrast, in extended networks, we show that our BS-based routing schemes can, under certain network conditions, improve the throughput scaling significantly.
['Won-Yong Shin', 'Sang-Woon Jeon', 'Natasha Devroye', 'Mai Vu', 'Sae-Young Chung', 'Yong Hoon Lee', 'Vahid Tarokh']
Improved throughput scaling in wireless ad hoc networks with infrastructure
418,251
Most traditional recommender systems lack accuracy in the case where data used in the recommendation process is sparse. This study addresses the sparsity problem and aims to get rid of it by means of a content-boosted collaborative filtering approach applied to a web-based movie recommendation system. The main motivation is to investigate whether further success can be obtained by combining ‘local and global user similarity’ and ‘effective missing data prediction’ approaches, which were previously introduced and proved to be successful separately. The present work improves these approaches by taking the content information of the movies into account during the item similarity calculations. The comparison of the proposed approach with the original methods was carried out using mean absolute error, and more accurate predictions were achieved.
['Gözde Özbal', 'Hilal Karaman', 'Ferda Nur Alpaslan']
A Content-Boosted Collaborative Filtering Approach for Movie Recommendation Based on Local and Global Similarity and Missing Data Prediction
665,371
The contribution of this paper concerns the well-known problem of fuzzy system parameter tuning. To this aim, a software tool based on a fuzzy linguistic approach, applied to an attribute fusion system devoted to three-dimensional (3-D) seismic image analysis, is proposed. The fusion is based on interpreters' knowledge, and a graphical user interface has been developed in order to obtain cooperative behavior between the experts and the system. It provides an original way to adjust, on a two-dimensional part of the block, some of the fusion parameters, which are understandable and close to the interpreters' language. Then, in order to control the detection propagation to the whole 3-D seismic block, an automatic parameter adjustment is realized based on a quantitative performance evaluation of the detection. The results obtained for the detection, as well as for the handling of the system by interpreters, show the interest of the proposed method.
['Lionel Valet', 'Gilles Mauris', 'Philippe Bolon', 'Naamen Keskes']
A fuzzy linguistic-based software tool for seismic image interpretation
446,044
We present a general-purpose optimization algorithm inspired by “run-and-tumble”, the biased random walk chemotactic swimming strategy used by the bacterium Escherichia coli to locate regions of high nutrient concentration. The method uses particles (corresponding to bacteria) that swim through the variable space (corresponding to the attractant concentration profile). By constantly performing temporal comparisons, the particles drift towards the minimum or maximum of the function of interest. We illustrate the use of our method with four examples. We also present a discrete version of the algorithm. The new algorithm is expected to be useful in combinatorial optimization problems involving many variables, where the functional landscape is apparently stochastic and has local minima, but preserves some derivative structure at intermediate scales.
['Dan V. Nicolau', 'Kevin Burrage', 'Philip K. Maini']
'Extremotaxis': Computing with a bacterial-inspired algorithm
453,932
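The run-and-tumble strategy can be sketched in one dimension: a particle keeps moving in its current direction while the objective improves (a "run") and picks a random new direction when it worsens (a "tumble"). This is our own minimal, seeded sketch of the idea, not the authors' multi-particle algorithm:

```python
import random

def run_and_tumble(f, x0, step=0.1, iters=2000, seed=42):
    """Minimize f by a biased random walk: temporal comparison of f
    decides whether to keep 'running' or to 'tumble' to a new direction."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    direction = rng.choice([-1.0, 1.0])
    for _ in range(iters):
        xn = x + direction * step * rng.random()
        fn = f(xn)
        if fn < fx:              # improving: keep running this way
            x, fx = xn, fn
        else:                    # worsening: tumble to a random direction
            direction = rng.choice([-1.0, 1.0])
    return x, fx

# Drift towards the minimum of a simple quadratic at x = 3:
best_x, best_f = run_and_tumble(lambda x: (x - 3.0) ** 2, x0=-5.0)
```

Like the bacterium, the walker needs no gradient, only before/after comparisons of the objective, which is what makes the scheme attractive for noisy landscapes.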
Living Modeling of IT Architectures: Challenges and Solutions
['Thomas Trojer', 'Matthias Farwick', 'Martin Häusler', 'Ruth Breu']
Living Modeling of IT Architectures: Challenges and Solutions
619,278
A multiuser detection (MUD) technique for direct sequence-code division multiple access (DS-CDMA) systems over generalized-K (GK) fading channels using the teaching learning based optimization (TLBO) algorithm with two-stage initialization (TSI) is proposed. In DS-CDMA systems, MUD techniques are applied to combat multiple access interference (MAI) and ambient noise. Empirical results have shown that the ambient noise is non-Gaussian and impulsive in nature, which degrades the performance of the system substantially. The DS-CDMA signals are transmitted over channels that introduce impulsive noise, shadowing and fading. In this paper, we develop least-squares (LS), Huber and Hampel M-estimation based MUD techniques for joint detection of DS-CDMA signals in the presence of MAI, impulsive noise, modeled by a Laplace distribution, and channel fading, modeled by a GK distribution. The TLBO with TSI (TLBO-TSI) algorithm is used to minimize a penalty function that is a less rapidly increasing function of the residuals. The average bit error rate (BER) is computed to assess the performance of the TLBO-TSI based detector. The obtained results demonstrate that the proposed robust technique offers significant performance gains with increasing signal-to-noise ratio (SNR) and diversity order in the presence of heavy-tailed impulsive noise.
['Lakshmi Manasa Gondela', 'Vinay Kumar Pamula', 'Anil Kumar Tipparti']
Multiuser detection over generalized-K fading channels using two-stage initialized teaching learning based optimization
931,216
Homomorphic encryption allows arithmetic operations to be performed on ciphertext and gives the same result as if the same arithmetic operation is done on the plaintext. Homomorphic encryption has been touted as one of the promising methods to be employed in Smart Grid (SG) to provide data privacy which is one of the main security concerns in SG. In addition to data privacy, real-time data flow is crucial in SG to provide on-time detection and recovery of possible failures. In this paper, we investigate the overhead of using homomorphic encryption in SG in terms of bandwidth and end-to-end data delay when providing data privacy. Specifically, we compare the latency and data size of end-to-end (ETE) and hop-by-hop (HBH) homomorphic encryption within a network of Smart Meters (SMs). In HBH encryption, at each intermediate node, the received encrypted data from downstream nodes are decrypted first before the aggregation, and then the result is encrypted again for transmission to upstream nodes. On the other hand, the intermediate node in ETE encryption only performs aggregation on ciphertexts for transmission to upstream nodes. We implemented secure data aggregation using Paillier cryptosystem and tested it under various conditions. The experiment results have shown that even though HBH homomorphic encryption has additional computational overhead at intermediate nodes, surprisingly it provides comparable latency and fixed data size passing through the network compared to ETE homomorphic encryption.
['Nico Saputro', 'Kemal Akkaya']
Performance evaluation of Smart Grid data aggregation via homomorphic encryption
462,157
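The additive homomorphism that makes ETE aggregation possible — multiplying Paillier ciphertexts yields a ciphertext of the sum — can be demonstrated with a toy-sized Paillier instance. The primes below are deliberately tiny for illustration; real smart-meter deployments would use moduli of 2048 bits or more:

```python
import math
import random

# Toy Paillier cryptosystem. Additive homomorphism:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b mod n.
p, q = 293, 433
n = p * q
n2 = n * n
phi = (p - 1) * (q - 1)   # valid decryption exponent here (gcd(phi, n) = 1)
g = n + 1                 # standard choice of generator

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, phi, n2)), -1, n)   # modular inverse (Python 3.8+)

_rng = random.Random(7)

def enc(m):
    r = _rng.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be coprime to n
        r = _rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, phi, n2)) * mu) % n

# An intermediate smart meter aggregates ciphertexts only -- it never
# sees the plaintext readings a and b:
a, b = 1234, 5678
c_sum = (enc(a) * enc(b)) % n2
```

This is exactly why ETE aggregation needs no decryption at intermediate nodes, whereas HBH must decrypt, add, and re-encrypt at every hop.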
Microblog Users' Life Time Activity Prediction.
['Ruibin Geng', 'Xi Chen', 'Shun Cai']
Microblog Users' Life Time Activity Prediction.
772,365
In the smart grid, one of the most important research areas is load forecasting; it spans from traditional time series analyses to recent machine learning approaches and mostly focuses on forecasting aggregated electricity consumption. However, the importance of demand side energy management, including individual load forecasting, is becoming critical. In this paper, we propose deep neural network (DNN)-based load forecasting models and apply them to a demand side empirical load database. DNNs are trained in two different ways: a pre-training restricted Boltzmann machine and using the rectified linear unit without pre-training. DNN forecasting models are trained by individual customer’s electricity consumption data and regional meteorological elements. To verify the performance of DNNs, forecasting results are compared with a shallow neural network (SNN), a double seasonal Holt–Winters (DSHW) model and the autoregressive integrated moving average (ARIMA). The mean absolute percentage error (MAPE) and relative root mean square error (RRMSE) are used for verification. Our results show that DNNs exhibit accurate and robust predictions compared to other forecasting models, e.g., MAPE and RRMSE are reduced by up to 17% and 22% compared to SNN and 9% and 29% compared to DSHW.
['Seunghyoung Ryu', 'Jaekoo Noh', 'Hongseok Kim']
Deep Neural Network Based Demand Side Short Term Load Forecasting
959,850
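The two verification metrics can be sketched directly. Note that RRMSE definitions vary in the literature; normalizing the RMSE by the mean actual load, as below, is one common convention and an assumption on our part:

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def rrmse(actual, forecast):
    """Relative root mean square error (%): RMSE divided by mean load
    (one common normalization; conventions differ)."""
    rmse = math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))
    return 100.0 * rmse / (sum(actual) / len(actual))

# Hypothetical hourly loads (kW) and forecasts:
actual = [100.0, 120.0, 80.0, 110.0]
forecast = [95.0, 125.0, 90.0, 105.0]
m, r = mape(actual, forecast), rrmse(actual, forecast)
```

Lower values of both metrics indicate a more accurate forecaster, which is how the DNN models are compared against SNN, DSHW and ARIMA baselines.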
In view of the uncertainty in quality evaluation for manufacturing processes and multivariate process capability, a comprehensive evaluation approach is proposed by extending the classic process capability index to consider variations of key quality characteristics across the whole manufacturing process, which includes the procurement, machining, assembling and delivery inspection processes. Meanwhile, to avoid the influence of uncertainties, test, feedback and correction processes for the evaluation approach are added based on the Structural Equation Model (SEM). Then, after the evaluation result is calculated according to the evaluation approach, a method to monitor the deviations of the evaluation results is introduced based on the confidence interval. The quality level of the total manufacturing process is then evaluated by combining the evaluation results and their confidence interval. Finally, a case study is carried out to verify the proposed method.
['Kongjun Gao', 'Yihai He', 'Linbo Wang']
Confidence based quality evaluation for total manufacturing process using comprehensive process capability
603,705
Sustainable Planning: A Methodological Toolkit
['Giuseppe B. Las Casas', 'Francesco Scorza']
Sustainable Planning: A Methodological Toolkit
859,140
In the present paper, an image-based visual servoing scheme is presented for a road following task with an autonomous airship. A new set of visual signals is introduced, namely with the vanishing point coordinates and the vanishing line parameters, which have the advantage of decoupling the rotation DOF and respecting the natural characteristics of the vehicle dynamics. An optimal control design is used for a first implementation of the approach with the simulation platform of the AURORA airship. Simulation results are presented and discussed demonstrating a fair performance even in realistic wind conditions.
['Patrick Rives', 'José Raul Azinheira']
Linear structures following by an airship using vanishing point and horizon line in a visual servoing scheme
451,408
Insiders represent a major threat to the security of an organization’s information resources. Previous research has explored the role of dispositional and situational factors in promoting compliant behavior, but these factors have not been studied together. In this study, we use a scenario-based factorial survey approach to identify key dispositional and situational factors that lead to information security policy violation intentions. We obtained 317 observations from a diverse sample of insiders. The results of a general linear mixed model indicate that dispositional factors (particularly two personality meta-traits, Stability and Plasticity) serve as moderators of the relationships between perceptions derived from situational factors and intentions to violate information security policy. This study represents the first information security study to identify the existence of these two meta-traits and their influence on information security policy violation intentions. More importantly, this study provides new knowledge of how insiders translate perceptions into intentions based on their unique personality trait mix.
['Allen C. Johnston', 'Merrill Warkentin', 'Maranda McBride', 'Lemuria Carter']
Dispositional and situational factors: influences on information security policy violations
636,937
This paper presents a new method to improve the classification performance for remote-sensing applications based on swarm intelligence. Traditional statistical classifiers have limitations in solving complex classification problems because of their strict assumptions. For example, data correlation between bands of remote-sensing imagery has caused problems in generating satisfactory classification using statistical methods. In this paper, ant colony optimization (ACO), based upon swarm intelligence, is used to improve the classification performance. Due to the positive feedback mechanism, ACO takes into account the correlation between attribute variables, thus avoiding issues related to band correlation. A discretization technique is incorporated in this ACO method so that classification rules can be induced from large data sets of remote-sensing images. Experiments of this ACO algorithm in the Guangzhou area reveal that it yields simpler rule sets and better accuracy than the See 5.0 decision tree method.
['Xiaoping Liu', 'Li Xy', 'Lin Liu', 'Jinqiang He', 'Bin Ai']
An Innovative Method to Classify Remote-Sensing Images Using Ant Colony Optimization
245,209
Efficiency frontier analysis has been an important approach for evaluating firms' performance in the private and public sectors. Many efficiency frontier analysis methods have been reported in the literature; however, the assumptions made by each of these methods are restrictive, and each methodology has strengths as well as major limitations. This study proposes a non-parametric efficiency frontier analysis method based on an adaptive neural network technique for measuring efficiency, as a complementary tool to the common techniques used in previous efficiency studies. The proposed computational method is able to find a stochastic frontier based on a set of input-output observational data and does not require explicit assumptions about the functional structure of the stochastic frontier. In this algorithm, an approach similar to econometric methods is used for calculating the efficiency scores. Moreover, the effect of the returns to scale of a decision making unit (DMU) on its efficiency is included, and the unit used for the correction is selected by considering its scale (under the constant returns to scale assumption). Also, to increase the homogeneity of DMUs, the Fuzzy C-means method is used to cluster them. An example using real data is presented for illustrative purposes. In the application to the power generation sector of Iran, we find that the neural network provides more robust results and identifies more efficient units than the conventional methods, since better performance patterns are explored. Moreover, Principal Component Analysis (PCA) is used to verify the findings of the proposed algorithm.
['Ali Azadeh', 'S.F. Ghaderi', 'Mona Anvari', 'Morteza Saberi', 'H. Izadbakhsh']
An integrated artificial neural network and fuzzy clustering algorithm for performance assessment of decision making units
478,904
Developing an efficient parallel application is not an easy task, and achieving a good performance requires a thorough understanding of the program's behavior. Careful performance analysis and optimization are crucial. To help developers or users of these applications to analyze the program's behavior, it is necessary to provide them with an abstraction of the application performance. In this paper, we propose a dynamic performance abstraction technique, which enables the automated discovery of causal execution paths, composed of communication and computational activities, in MPI parallel programs. This approach enables autonomous and low-overhead execution monitoring that generates performance knowledge about application behavior for the purpose of online performance diagnosis. Our performance abstraction technique reflects an application behavior and is made up of elements correlated with high-level program structures, such as loops and communication operations. Moreover, it characterizes all elements with statistical execution profiles. We have evaluated our approach on a variety of scientific parallel applications. In all scenarios, our online performance abstraction technique proved effective for low-overhead capturing of the program's behavior and facilitated performance understanding.
['Anna Sikora', 'Tomàs Margalef', 'Josep Jorba']
Automated and dynamic abstraction of MPI application performance
869,191
Name ambiguity arises from the polysemy of names and causes uncertainty about the true identity of entities referenced in unstructured text. This is a major problem in areas like information retrieval or knowledge management, for example when searching for a specific entity or updating an existing knowledge base. We approach this problem of named entity disambiguation (NED) using thematic information derived from Latent Dirichlet Allocation (LDA) to compare the entity mention's context with candidate entities in Wikipedia represented by their respective articles. We evaluate various distances over topic distributions in a supervised classification setting to find the best suited candidate entity, which is either covered in Wikipedia or unknown. We compare our approach to a state of the art method and show that it achieves significantly better results in predictive performance, regarding both entities covered in Wikipedia as well as uncovered entities. We show that our approach is in general language independent as we obtain equally good results for named entity disambiguation using the English, the German and the French Wikipedia.
['Anja Pilz', 'Gerhard Paaß']
From names to entities using thematic context distance
353,310
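Comparing an entity mention's context with a candidate's Wikipedia article reduces to a distance between two topic distributions. One symmetric choice commonly used for such comparisons is the Jensen-Shannon divergence; the three-topic vectors below are hypothetical:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence (bits); terms with p_i = 0 contribute 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Symmetric, bounded divergence between two LDA topic distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

mention_topics   = [0.70, 0.20, 0.10]  # topic mix of the mention's context
candidate_topics = [0.65, 0.25, 0.10]  # topic mix of a matching candidate article
other_topics     = [0.05, 0.15, 0.80]  # topic mix of an unrelated candidate
```

A disambiguator would rank candidates by ascending divergence from the mention's context, with a threshold deciding when the entity is unknown (not covered in Wikipedia).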
Clock compensation for process variations and manufacturing defects is a key strategy to achieve high performance of processors and high end ASIC. However, with the increase in process variations and defect densities, clock compensation is becoming increasingly challenging. A clock distribution system also consumes over 30% of the overall chip level power, so every little bit counts, including compensation schemes. In this paper we propose a new scheme for the compensation of undesirable skews and duty-cycle variations of local clocks of high performance microprocessors and high end ASICs. Our scheme performs compensation continuously, during the microprocessor operation, thus allowing also compensation to clock jitters due to environmental influences during operation. Compared to alternate solutions for local clock compensation, our scheme features lower power consumption, smaller compensation error, and a lower or comparable area overhead, while allowing compensation to be accomplished within the same clock cycle of skew or duty-cycle variation.
['Cecilia Metra', 'Martin Omana', 'T. M. Mak', 'S. Tarn']
Novel compensation scheme for local clocks of high performance microprocessors
397,293
In this paper Schapire and Singer's AdaBoost.MH boosting algorithm is applied to the Word Sense Disambiguation (WSD) problem. Initial experiments on a set of 15 selected polysemous words show that the boosting approach surpasses Naive Bayes and Exemplar-based approaches, which represent state-of-the-art accuracy on supervised WSD. In order to make boosting practical for a real learning domain of thousands of words, several ways of accelerating the algorithm by reducing the feature space are studied. The best variant, which we call LazyBoosting, is tested on the largest sense-tagged corpus available containing 192,800 examples of the 191 most frequent and ambiguous English words. Again, boosting compares favourably to the other benchmark algorithms.
['Gerard Escudero', 'Lluís Màrquez', 'German Rigau']
Boosting Applied to Word Sense Disambiguation
354,024
This project explores the representation of uncertainty in visualizations for archaeological research and provides insights obtained from user feedback. Our 3D models brought together information from standing architecture and excavated remains, surveyed plans, and ground penetrating radar (GPR) data from the Carthusian monastery of Bourgfontaine in northern France. We also included information from comparative Carthusian sites and a bird's-eye representation of the site in an early modern painting. Each source was assigned a certainty value which was then mapped to a color or texture for the model. Certainty values between one and zero were assigned by one subject matter expert and should be considered qualitative. Students and faculty from the fields of architectural history and archaeology at two institutions interacted with the models and answered a short survey with four questions about each. We discovered equal preference for color and transparency and a strong dislike for the texture model. Discoveries during model building also led to changes of the excavation plans for summer 2015.
['Scott Houde', 'Sheila Bonde', 'David H. Laidlaw']
An evaluation of three methods for visualizing uncertainty in architecture and archaeology
662,749
As new software components become available for an existing system, we can evolve not only the system itself but also its requirements based on the new components. We propose a method to support requirements evolution by replacing a component with another component, and by changing the current requirements so as to adapt to the new component. To explore the possibilities of such a replacement, we use the technique of specification matching. To change the current requirements, we modify the structure by following the concept of Design by Contract.
['Haruhiko Kaiya', 'Kenji Kaijiri']
Conducting requirements evolution by replacing components in the current system
249,586
Variable annuities are very appealing to investors. For example, in the United States, sales volume of variable annuities grew to a record $184 billion in calendar year 2006. However, due to their complicated payoff structure, their valuation and risk management pose challenges to insurers. In this paper, we study a variable annuity contract with cliquet options in Asia markets. The contract has a quanto feature. We propose an efficient Monte Carlo method to value the contract. Numerical examples suggest our approach is quite efficient.
['Ming-Hua Hsieh']
Valuation of variable annuity contracts with cliquet options in Asia markets
447,037
The performance degradation of filter-bank-based multicarrier transmission due to timing errors is investigated. The receiver is made of a fractionally spaced linear or decision-feedback equalizer designed for some sampling phase. The actual sampling phase is different and the impact of the difference on the performance is investigated. Sampling phase offset and jitter are considered. Besides, assuming the sampling phase error can be estimated, the efficiency of various types of interpolation is investigated.
['Jérôme Louveaux', 'Luc Vandendorpe', 'Laurent Cuvelier', 'Thierry Pollet']
Bit-rate sensitivity of filter-bank-based VDSL transmission to timing errors
332,932
A genetic recurrent fuzzy system which automates the design of recurrent fuzzy networks by a coevolutionary genetic algorithm with divide-and-conquer technique (CGA-DC) is proposed in this paper. To solve temporal problems, the recurrent fuzzy network constructed from a series of recurrent fuzzy if-then rules is adopted. In the CGA-DC, based on the structure of a recurrent fuzzy network, the design problem is divided into the design of individual subrules, including spatial and temporal, and that of the whole network. Then, three populations are created, among which two are created for spatial and temporal subrule searches, and the other for the whole network search. Evolution of the three populations is performed independently and concurrently to achieve a good design performance. To demonstrate the performance of CGA-DC, temporal problems on dynamic plant control and chaotic system processing are simulated. In this way, the efficacy and efficiency of CGA-DC can be evaluated as compared with other genetic-algorithm-based design approaches.
['Chia-Feng Juang']
Genetic recurrent fuzzy system by coevolutionary computation with divide-and-conquer technique
694,107
In this paper, a novel method of pattern discovery is proposed. It is based on the theoretical formulation of a contingency table of events. Using residual analysis and recursive partitioning, statistically significant events are identified in a data set. These events constitute the important information contained in the data set and are easily interpretable as simple rules, contour plots, or parallel axes plots. In addition, an informative probabilistic description of the data is automatically furnished by the discovery process. Following a theoretical formulation, experiments with real and simulated data will demonstrate the ability to discover subtle patterns amid noise, the invariance to changes of scale, cluster detection, and discovery of multidimensional patterns. It is shown that the pattern discovery method offers the advantages of easy interpretation, rapid training, and tolerance to noncentralized noise.
['Tom Chau', 'Andrew K. C. Wong']
Pattern discovery by residual analysis and recursive partitioning
312,509
In this work estimating the position coordinates of Wireless Sensor Network nodes using the concept of rigid graphs is carried out in detail. The range based localization approaches use the distance information measured by the RSSI, which is prone to noise, due to effects of path loss, shadowing, and so forth. In this work, both the distance and the bearing information are used for localization using the trilateration technique. Rigid graph theory is employed to analyze the localizability, that is, whether the nodes of the WSN are uniquely localized. The WSN graph is divided into rigid patches by varying appropriately the communication power range of the WSN nodes and then localizing the patches by trilateration. The main advantage of localizing the network using rigid graph approach is that it overcomes the effect of noisy perturbed distance. Our approach gives a better performance compared to robust quads in terms of percentage of localizable nodes and computational complexity.
['Shamantha Rai B', 'Shirshu Varma']
An Algorithmic Approach to Wireless Sensor Networks Localization Using Rigid Graphs
832,826
Successive-Approximation-Register (SAR) Analog-to-Digital Converters (ADC) have been shown to be suitable for low-power applications at aggressively scaled CMOS technology nodes. This is desirable for many mobile and portable applications. Unfortunately, SAR ADCs tend to incur significant area cost and reference loading due to the large capacitor array used in its Digital-to-Analog Converter (DAC). This has traditionally made it difficult to implement large numbers of SAR ADCs in parallel. This paper describes a compact 8b SAR ADC measuring only 348 μm×7 μm. It uses a new pilot-DAC (pDAC) technique to reduce the power consumption in its capacitor array; moreover, the accuracy of the pDAC scheme is protected by a novel mixed-signal Forward Error Correction (FEC) algorithm with minimal circuit overhead. Any DAC error made during pDAC operation can be recovered later by an additional switching phase. Prototype measurements in 0.18 μm technology show that the DAC's figure-of-merit (FoM) is reduced from 61.3 fJ/step to 39.8 fJ/step by adopting pDAC switching with no apparent deterioration in Fixed-Pattern Noise (FPN) and thermal noise.
['Denis Guangyin Chen', 'Fang Tang', 'Amine Bermak']
A Low-Power Pilot-DAC Based Column Parallel 8b SAR ADC With Forward Error Correction for CMOS Image Sensors
262,828
The proposed system provides a solution to analyze the traffic flow under challenging nighttime conditions when the surveillance camera is raindrop tampered. To deal with the challenging scenes, we extract effective features via salient region detection and block segmentation. We use the extracted features in the region of interest to construct a regression model to get an estimated vehicle number for each frame. The vehicle numbers in consecutive frames form a vehicle number sequence. A mapping model utilizing state transition likelihoods is proposed to acquire the desired per minute traffic flow from the vehicle number sequence. The experiments on highly challenging datasets have demonstrated that the proposed system can effectively estimate the traffic flow for rain-drop tampered highway surveillance cameras at night.
['Hsu-Yung Cheng', 'Chih-Chang Yu']
Nighttime Traffic Flow Analysis for Rain-Drop Tampered Cameras
123,585
A federated database consists of several loosely integrated databases, where each database may contain hundreds of tables and thousands of columns, interrelated by complex foreign key relationships. In general, there exist many semistructured data elements outside the database, represented by documents (files) created and updated by multiple users and programs. Documents have references to multiple databases and subsets of their tables and columns. Manually tracking which specific tables and columns are referred to by a document, accessed by a specific program or user, is a daunting task. With such a goal in mind, we present a system that builds metadata models for a federated database using a relational database as a central object type and metadata repository. Metadata includes tables and columns coming from logical data models corresponding to each database, as well as documents representing external semistructured data sources. SQL statements assemble metadata and data types to create objects and relationships as relational tables for easy querying. We discuss potential applications in federated scientific databases.
['Carlos Ordonez', 'Zhibo Chen', 'Javier García-García']
Metadata management for federated databases
90,623
In responding to disasters, Twitter is extensively used, both for information exchange and mapping the crisis, among citizens and in relation to national and international humanitarian responders. This paper reports a Twitter analysis aimed at identifying the most pressing issues that arose in the short-term recovery phase starting about a week after the Nepal earthquake, including the heretofore neglected topic of mismatch between international relief and local cultures. Based on Twitter data collected between April 30th and May 6th, 2015, 1,074,864 raw messages apparently related to the Nepal earthquake were retrieved, filtered and analyzed. This exploratory study adapts established frameworks for use on the dataset. The results show that our framework can identify several unique problems, including that disaster relief efforts can trigger negative sentiments when they are conducted without understanding of the local cultures.
['Jaziar Radianti', 'Starr Roxanne Hiltz', 'Leire Labaka']
An Overview of Public Concerns During the Recovery Period after a Major Earthquake: Nepal Twitter Analysis
662,561
Interactive Spatial AR for Classroom Teaching
['Yanxiang Zhang', 'ZiQiang Zhu']
Interactive Spatial AR for Classroom Teaching
846,607
The spinal cord is a vital organ that serves as the only communication link between the brain and the various parts of the body. It is vulnerable to traumatic spinal cord injury and various diseases such as tumors, infections, inflammatory diseases and degenerative diseases. The exact segmentation and localization of the spinal cord are essential to effective clinical management of such conditions. In recent years, due to the advances in imaging technology, the structure of internal organs and tissues can be captured accurately, and various abnormalities are diagnosed based on scanned images. In this paper, we present an unsupervised segmentation method that automatically extracts the spinal canal in the sagittal plane of magnetic resonance (MR) images. This segmentation method based on a novel saliency-driven attention model and a standard active contour model requires no human intervention and no training. Experiments based on 60 patients' data show that this procedure performs segmentation robustly, achieving the Dice's similarity index of 0.71 between the segmentation by our model and reference segmentation, as compared to the Dice's similarity index of 0.90 between two observers.
['Jaehan Koh', 'Peter D. Scott', 'Vipin Chaudhary', 'Gurmeet Dhillon']
An automatic segmentation method of the spinal canal from clinical MR images based on an attention model and an active contour model
396,959
DietTalk: Diet and Health Assistant Based on Spoken Dialog System
['Sohyeon Jung', 'Seonghan Ryu', 'Sangdo Han', 'Gary Geunbae Lee']
DietTalk: Diet and Health Assistant Based on Spoken Dialog System
707,098
State-of-the-art Complex Event Processing technology (CEP), while effective for pattern matching on event streams, is limited in its capability of reacting in real-time to opportunities and risks detected when monitoring the physical or virtual world. We propose to tackle this problem by embedding active rule support within the CEP engine, henceforth called Active Complex Event Processing technology, or short, ACEP. We design an ACEP infrastructure that integrates the active rule component into the CEP kernel. This not only allows fine-grained but also more efficient rule processing. Based on the infrastructure we develop optimization techniques to improve the responsiveness of our system. We demonstrate the power of ACEP technology by applying it to the development of our real-time healthcare system being deployed in the Univ. of Massachusetts Medical School hospital. Through performance experiments using real-world workloads collected from the hospital, we show that our ACEP solution is effective and efficient at supporting business processes in event-based systems compared to possible alternatives.
['Di Wang', 'Elke A. Rundensteiner', 'Richard T. Ellison', 'Han Wang']
Active Complex Event Processing infrastructure: Monitoring and reacting to event streams
391,372
In many applications of current interest, the observations are represented as a signal defined over a graph. The analysis of such signals requires the extension of standard signal processing tools. Building on the recently introduced Graph Fourier Transform, the first contribution of this paper is to provide an uncertainty principle for signals on graphs. As a by-product of this theory, we show how to build a dictionary of maximally concentrated signals on vertex/frequency domains. Then, we establish a direct relation between the uncertainty principle and sampling, which forms the basis for a sampling theorem for signals defined on graphs. Based on this theory, we show that, besides sampling rate, the samples' location plays a key role in the performance of signal recovery algorithms. Hence, we suggest a few alternative sampling strategies and compare them with recently proposed methods.
['Mikhail Tsitsvero', 'Sergio Barbarossa', 'Paolo Di Lorenzo']
Uncertainty principle and sampling of signals defined on graphs
563,601
Parallel least-squares solution of general and Toeplitz systems
['Victor Y. Pan']
Parallel least-squares solution of general and Toeplitz systems
250,254
The enumeration of points contained in a polyhedron is one of the key algorithmic problems in the transformation of scientific programs. However, current algorithms can only operate on convex and "regularly non-convex" polyhedra. If the iteration sets to be enumerated do not fit in either category, the final code must scan a superset of the union of iteration domains and determine at run-time the domains (if any) each point belongs to. We present an algorithm which generates loop structures that exactly scan iteration sets representable as arbitrary unions of dense convex polyhedra. Our algorithm is based on an incremental construction of a nested loop sequence containing no conditional bound expressions and no guarding predicates, thus dramatically reducing the overhead of loop execution in the final code.
['Zbigniew Chamski']
Enumeration of dense non-convex iteration sets
373,273
Sonmez [2013] and Sonmez and Switzer [2013] used matching theory with unilaterally substitutable preferences [Hatfield and Kojima, 2010] to propose mechanisms to match cadets to military branches. Unilaterally substitutable priorities in general exhibit complementarities. In this paper, I construct the same mechanisms in matching markets with substitutable priorities. My results show that cadet-branch matching does not require weakened substitutes conditions. The cadet-branch matching market with substitutable branch priorities constructed in this paper can be often regarded as a labor market in the model of Kelso and Crawford [1982]. The length of a contract is interpreted as the inverse of a salary, and the branches' priorities can be taken to be quasi-linear. Therefore, the substitutable branch priorities are appealing from a theoretical perspective. My results complement the original constructions of the proposed cadet-branch matching mechanisms. Sonmez and Switzer [2013] showed that the currently-implemented USMA priority structure is compatible with fairness, strategy-proofness, and respect for improvements. My approach to the construction of the proposed mechanisms relies on deviation of the branch priorities from the exact currently-implemented priorities, but this modification does not affect the proposed matching mechanisms. Together, my results and those of Sonmez and Switzer [2013] show that cadet-branch matching can be performed in Kelso-Crawford economies so that the deferred acceptance mechanisms, but not the branch priorities in the economies, are consistent with the currently-implemented USMA priorities.
['Ravi Jagadeesan']
Cadet-Branch Matching in a Quasi-Linear Labor Market
848,042
Swarm is a scalable, modular storage system that uses agents to customize low-level storage functions to meet the needs of high-level services. Agents influence low-level storage functions such as data layout, metadata management, and crash recovery. An agent is a program that is attached to data in the storage system and invoked when events occur during the data's lifetime. For example, before Swarm writes data to disk, agents attached to the data are invoked to determine a layout policy. Agents are typically persistent, remaining attached to the data they manage until the data are deleted; this allows agents to continue to affect how the data are handled long after the application or storage service that created the data has terminated. In this paper, we present Swarm's agent architecture, describe the types of agents that Swarm supports and the infrastructure used to support them, and discuss their performance overhead and security implications. We describe how several storage services and applications use agents, and the benefits they derive from doing so. Copyright © 2005 John Wiley & Sons, Ltd.
['John H. Hartman', 'Scott M. Baker', 'Ian Murdock']
Customizing the swarm storage system using agents
422,066
As the volume of biomedical literature increases, it can be challenging for clinicians to stay up-to-date. Graphical summarization systems help by condensing knowledge into networks of entities and relations. However, existing systems present relations out of context, ignoring key details such as study population. To better support precision medicine, summarization systems should include such information to contextualize and tailor results to individual patients. This paper introduces "contextualized semantic maps" for patient-tailored graphical summarization of published literature. These efforts are demonstrated in the domain of driver mutations in non-small cell lung cancer (NSCLC). A representation for relations and study population context in NSCLC was developed. An annotated gold standard for this representation was created from a set of 135 abstracts; F1-score annotator agreement was 0.78 for context and 0.68 for relations. Visualizing the contextualized relations demonstrated that context facilitates the discovery of key findings that are relevant to patient-oriented queries.
['Jean I. Garcia-Gathright', 'Nicholas J. Matiasz', 'Edward B. Garon', 'Denise R. Aberle', 'Ricky K. Taira', 'Alex A. T. Bui']
Toward patient-tailored summarization of lung cancer literature
720,456
This paper shows that a predictive digital control combined with the principle of direct torque control (DTC) leads to an excellent dynamic behavior of the synchronous machine with surface-mounted permanent magnets and is a real alternative to the classical field-orientated control. The advantages are a DTC control scheme with constant switching frequency and a predictable torque ripple. The settling times of the torque are reduced compared to the classical field-orientated control. The application in servo drives in which the rotor position is always measured can easily be achieved by using a commercial digital signal processor. Numerous simulations and measurements confirm the theoretical work.
['Mario Pacas', 'Jürgen Weber']
Predictive direct torque control for the PM synchronous machine
555,731
In this paper we present a novel shape descriptor based on shape context, which in combination with hierarchical distance-based hashing is used for word and graphical pattern based document image indexing and retrieval. The shape descriptor represents the relative arrangement of points sampled on the boundary of the shape of an object. We also demonstrate the applicability of the novel shape descriptor for classification of characters and symbols. For indexing, we provide a new formulation for distance-based hierarchical locality sensitive hashing. Experiments have yielded promising results.
['Ehtesham Hassan', 'Santanu Chaudhury', 'Madan Gopal']
Shape Descriptor Based Document Image Indexing and Symbol Recognition
141,566
Intrusion detection is one of the major issues that worry organizations in wireless sensor networks (WSNs). Many researchers have dealt with this problem and have proposed many methods for detecting different kinds of intrusions such as selective forwarding, which is a serious attack that may obstruct communications in WSNs. However, as the applications of mobile computing, vehicular networks, and the internet of things (IoT) are spreading immensely, selective forwarding detection in Mobile Wireless Sensor Networks (MWSNs) has become a key demand. This paper introduces the problem of selective forwarding in MWSNs, and discusses how available techniques for mitigating this problem in WSNs are not applicable in handling the problem in MWSNs due to sensor mobility. Therefore, the paper proposes a model that provides a global monitoring capability for tracing moving sensors and detecting malicious ones. The model leverages the infrastructure of Fog Computing to achieve this purpose. Furthermore, the paper provides a complete algorithm, a comprehensive discussion and experiments that show the correctness and importance of the proposed approach.
['Qussai Yaseen', 'Firas Albalas', 'Yaser Jararweh', 'Mahmoud Al-Ayyoub']
A Fog Computing Based System for Selective Forwarding Detection in Mobile Wireless Sensor Networks
964,331
We consider single machine scheduling problems with deteriorating jobs and SLK/DIF due window assignment, where the deteriorating rates of jobs are assumed to be job-dependent. We consider two different objectives under SLK and DIF due window assignment, respectively. The first objective is to minimise total costs of earliness, tardiness, due window location and due window size, while the second objective is to minimise a cost function that includes number of early jobs, number of tardy jobs and the costs for due window location and due window size. We study the optimality properties for all problems and develop algorithms for solving these problems in polynomial time.
['Qing Yue', 'Guohua Wan']
Single Machine Slk/Dif Due Window Assignment Problem with Job-Dependent Linear Deterioration Effects
782,194
This paper proposes a robust hybrid-MAC protocol for direct communication among M2M devices with gateway coordination. The proposed protocol combines the benefits of both contention-based and reservation-based MAC schemes. The authors assume that the contention and reservation portion of M2M devices is a frame structure, which is comprised of two sections: a Contention Interval (CI) and a Transmission Interval (TI). The CI is based on a p-persistent CSMA mechanism, which allows M2M devices to compete for the transmission slots with equal priorities. After contention, only those devices which have won time-slots are allowed to transmit data packets during the TI. In the authors' proposed MAC scheme, the TI is basically a TDMA frame and each M2M device is 802.11 enabled. Each M2M transmitter device and its corresponding one-hop distant receiver communicate using the IEEE 802.11 DCF protocol within each TDMA slot to overcome the limitations of the TDMA mechanism. The simulation results demonstrate the effectiveness of the proposed hybrid-MAC protocol.
['Pawan Kumar Verma', 'Rajesh Verma', 'Arun Prakash', 'Rajeev Tripathi']
Throughput-Delay Evaluation of a Hybrid-MAC Protocol for M2M Communications
695,675
Requirements Gathering and Domain Understanding for Assistive Technology to Support Low Vision and Sighted Students
['Stephanie Ludi']
Requirements Gathering and Domain Understanding for Assistive Technology to Support Low Vision and Sighted Students
880,203
The constant modulus (CM) array is a blind adaptive beamformer that can separate cochannel signals. A follow-on adaptive signal canceler may be used to perform direction finding of the source captured by the array. In this paper, we analyze the convergence and tracking properties of the CM array using a least-mean-square approximation. Expressions are derived for the misadjustment of the adaptive algorithms, and a tracking model is developed that accurately predicts the behavior of the system during fades. It is demonstrated that the adaptive canceler contributes more to the overall misadjustment than does the adaptive CM beamformer. Computer simulations are presented to illustrate the transient properties of the system and to verify the analytical results.
['Arvind V. Keerthi', 'Amit Mathur', 'John J. Shynk']
Misadjustment and tracking analysis of the constant modulus array
358,604
We develop a social group utility maximization (SGUM) framework for cooperative wireless networking that takes into account both social relationships and physical coupling among users. Specifically, instead of maximizing its individual utility or the overall network utility, each user aims to maximize its social group utility that hinges heavily on its social tie structure with other users. We show that this framework provides rich modeling flexibility and spans the continuum between non-cooperative game and network utility maximization (NUM)—two traditionally disjoint paradigms for network optimization. Based on this framework, we study three important applications of SGUM, in database assisted spectrum access, power control, and random access control, respectively. For the case of database assisted spectrum access, we show that the SGUM game is a potential game and always admits a socially-aware Nash equilibrium (SNE). We also develop a distributed spectrum access algorithm that can converge to the SNE and also quantify the trade-off between the performance and convergence time of the algorithm. For the cases of power control and random access control, we show that there exists a unique SNE and the network performance improves as the strength of social ties increases. Numerical results corroborate that the SGUM solutions can achieve superior performance using a real social data trace. Furthermore, we show that the SGUM framework can be generalized to take into account both positive and negative social ties among users, which can be a useful tool for studying network security problems.
['Xu Chen', 'Xiaowen Gong', 'Lei Yang', 'Junshan Zhang']
Exploiting Social Tie Structure for Cooperative Wireless Networking: A Social Group Utility Maximization Framework
699,441
This paper presents an important result addressing a fundamental question in synthesizing binary reversible logic circuits for quantum computation. We show that any even-reversible-circuit of n (n > 3) qubits can be realized using NOT gates and Toffoli gates (2-Controlled-NOT gates), where the numbers of Toffoli and NOT gates required in the realization are bounded by (n + ⌊n/3⌋)(3·2^(2n−3) − 2^(n+2)) and 4n(n + ⌊n/3⌋)², respectively. A provable constructive synthesis algorithm is derived. The time complexity of the algorithm is (10/3)·n²·2^n. Our algorithm is exponentially lower than breadth-first search based synthesis algorithms with respect to space and time complexities.
['Guowu Yang', 'Xiaoyu Song', 'William N. N. Hung', 'Fei Xie', 'Marek A. Perkowski']
Group theory based synthesis of binary reversible circuits
827,592
This study identifies the role of corporate knowledge management process and how it deals with business innovation and organisational change. A relationship model is created to illustrate the interrelationships among these three components. The implementation processes of Knowledge management, Innovation and Organisational change (K-I-O) and the conclusion and future development are also discussed.
['David C. Chou', 'Amy Y. Chou']
Knowledge management for organisational innovation and change: a cross-functional analysis
417,889
The convergence of primal and dual central paths associated with entropy and exponential functions, respectively, for the semidefinite programming problem is studied in this paper. It is proved that the primal path converges to the analytic center of the primal optimal set with respect to the entropy function, the dual path converges to a point in the dual optimal set, and the primal-dual path associated with these paths converges to a point in the primal-dual optimal set. As an application, the generalized proximal point method with the Kullback-Leibler distance applied to semidefinite programming problems is considered. The convergence of the primal proximal sequence to the analytic center of the primal optimal set with respect to the entropy function is established, and the convergence of a particular weighted dual proximal sequence to a point in the dual optimal set is obtained.
['O. P. Ferreira', 'P. R. Oliveira', 'R. C. M. Silva']
On the convergence of the entropy-exponential penalty trajectories and generalized proximal point methods in semidefinite optimization
541,855
This paper proposes an efficient way to handle faults in controller area network (CAN)-based networked control systems (NCS). A fault in a bus line of CAN will induce a data error which will result in data dropout or time delay, and subsequently may lead to performance degradation or system instability. A strategy to handle fault occurrence in the CAN bus is proposed to properly analyze the effect of the fault on CAN-based NCS performance. The fault occurrences are modeled based on fault interarrival time, fault bursts’ duration, and Poisson law. Using fault and messages’ attributes, response time analysis (RTA) is performed and the probability of a control message missing its deadline is calculated. Utilizing the new error handling algorithm to replace the native error handling of CAN, the probability of a control message missing its deadline can be translated into the probability of data dropout for the control message. This methodology is evaluated using the steer-by-wire system of a vehicle to analyze the effect of fault occurrences in CAN. It is found that the proposed error handling mechanism has resulted in better NCS performance, and the range of data dropout probability for the control message also could be obtained, which serves as crucial input for NCS controller design.
['Mohd Badril Nor Shah', 'Abdul Rashid Husain', 'Hüseyin Aysan', 'Sasikumar Punnekkat', 'Radu Dobrin', 'Fernando Augusto Bender']
Error Handling Algorithm and Probabilistic Analysis Under Fault for CAN-Based Steer-by-Wire System
699,506