Columns:
abstract: string (lengths 8 to 10.1k)
authors: string (lengths 9 to 1.96k)
title: string (lengths 6 to 367)
__index_level_0__: int64 (13 to 1,000k)
In this paper we propose an original sparse vector model for the symbol retrieval task. More specifically, we apply the K-SVD algorithm to learn a visual dictionary based on symbol descriptors computed locally around interest points. Results on benchmark datasets show that the obtained sparse representation is competitive with state-of-the-art methods. Moreover, our sparse representation is invariant to rotation and scale transforms and is also robust to degraded images and distorted symbols. Thereby, the learned visual dictionary is able to represent instances of unseen classes of symbols. Highlights: We study how to use sparse representations for symbol description in retrieval tasks. We propose an original extension of the tf-idf model to sparse representations. We use the K-SVD algorithm with the shape context descriptor applied on keypoints. This is the first attempt at using this kind of representation in symbol retrieval tasks.
['Thanh Ha Do', 'Salvatore Tabbone', 'Oriol Ramos Terrades']
Sparse representation over learned dictionary for symbol recognition
600,901
Instances and classes in software engineering
['Christopher A. Welty', 'David A. Ferrucci']
Instances and classes in software engineering
41,643
Behavior Predictability Despite Non-Determinism in the SAPERE Ecosystem Preliminary Ideas.
['Gabriella Castelli', 'Marco Mamei', 'Alberto Rosi', 'Franco Zambonelli']
Behavior Predictability Despite Non-Determinism in the SAPERE Ecosystem Preliminary Ideas.
803,928
Verbal Satiation of Chinese Bisyllabic Words: A Semantic Locus and its Time Course.
['Bruno Galmar', 'Jenn-Yeu Chen']
Verbal Satiation of Chinese Bisyllabic Words: A Semantic Locus and its Time Course.
767,946
The use of an ensemble of feature spaces trained with distance metric learning methods has been empirically shown to be useful for the task of automatically designing local image descriptors. In this paper, we present a quantitative analysis which shows that in general, nonlinear distance metric learning methods provide better results than linear methods for automatically designing local image descriptors. In addition, we show that the learned feature spaces produce better results than state-of-the-art hand-designed features in benchmark quantitative comparisons. We discuss the results and suggest relevant problems for further investigation.
['Gustavo Carneiro']
A Comparative Study on the Use of an Ensemble of Feature Extractors for the Automatic Design of Local Image Descriptors
250,278
We propose a cognitive algorithm based on the Fuzzy C-Means (FCM) technique for the learning and decision-making functionalities of software-defined optical networks (SDONs). SDON is a new optical network paradigm where the control plane is decoupled from the data plane, thus providing a degree of software programmability to the network. Our proposal is to add the FCM algorithm to the SDON control plane in order to achieve better network performance, when compared with a non-cognitive control plane. In this context, we illustrate the use of the FCM algorithm for determining, in real time and autonomously, the modulation format of high-speed flexible-rate transponders in accordance with the quality of transmission of optical channels. The performance of this FCM algorithm is evaluated via computational simulations for a long-haul network and compared to the case-based reasoning (CBR) algorithm, which is commonly used in cognitive optical networks. We demonstrate that FCM outperforms CBR in both speed and error avoidance, achieving 100% successful classifications while being two orders of magnitude faster. Additionally, we propose a definition of cognitive optical networking and an architecture for the SDON control plane including the FCM engine.
['Tania Regina Tronco', 'Miquel Garrich', 'Amilcar C. Cesar', 'Monica de Lacerda Rocha']
Cognitive algorithm using fuzzy reasoning for software-defined optical network
709,310
Surveillance in wide-area spatial environments is characterised by complex spatial layouts, large state space, and the use of multiple cameras/sensors. To solve this problem, there is a need for representing the dynamic and noisy data in the tracking tasks, and dealing with them at different levels of detail. This requirement is particularly suited to the layered dynamic probabilistic network (LDPN), a special type of dynamic probabilistic network. In this paper, we propose the use of LDPN as the integrated framework for tracking in wide-area environments. We illustrate, with the help of a synthetic tracking scenario, how the parameters of the LDPN can be estimated from training data, and then used to draw predictions and answer queries about unseen tracks at various levels of detail.
['Hung Hai Bui', 'Svetha Venkatesh', 'Geoff A. W. West']
A probabilistic framework for tracking in wide-area environments
117,507
Database triggers allow database users to specify integrity constraints and business logic by describing the reactions to events. Traditional database triggers can handle mutating events such as insert, update, and delete. This paper describes our approach to incorporating timer-triggers to handle temporal events that are generated at a given time or at certain time intervals. We propose a trigger language, named FZ-Trigger, to allow fuzziness in database triggers. FZ-Triggers allow fuzzy expressions in the condition part of a trigger with either a mutating event or a temporal event. This paper describes the generation of temporal events, the language of FZ-Triggers, and the system implementation. We also present a motivating example that illustrates the use of FZ-Trigger in the case of reacting to temporal events.
['Ying Jin', 'Tejaswitha Bhavsar']
Incorporating fuzziness into timer-triggers for temporal event handling
466,660
This study explores the relationship between involvement and brand loyalty among flight passengers in an airline service context. We adopted three dimensions from a consumer involvement profile (CIP) and two dimensions of brand loyalty (attitude loyalty and purchase loyalty) as a conceptual framework. A total of 271 valid responses were obtained at Taiwan Taoyuan International Airport for SEM analysis. Results revealed significant relationships between attitudinal loyalty and the two involvement dimensions of pleasure and sign value. In addition, attitudinal loyalty was a significant explanatory variable in the prediction of behavioural loyalty. Implications for airline practice and further research are also discussed.
['Lily Shui Lien Chen', 'Michael Chih Hung Wang', 'Julian Ming Sung Cheng', 'Hadi Kuntjara']
Consumer involvement and brand loyalty in services: evidence from the commercial airline industry in Taiwan
186,525
Highlights: An index, SSI, for measuring the degree of variation of a surface is proposed. A method for identifying potential feature points that are subordinate to different feature lines is presented. Existing methods for thinning the potential feature points have been improved. An index measuring the variation on a surface, called the smooth shrink index (SSI), which is robust to noise and non-uniform sampling, is developed in this work. Afterwards, a new algorithm for extracting feature lines is proposed. Firstly, the points with an absolute value of SSI greater than a given threshold are selected as potential feature points. Then, the SSI is applied as the growth condition to conduct region segmentation of the potential feature points. Finally, a bilateral filter algorithm is employed to obtain the final feature points by thinning the potential feature points iteratively. While thinning the potential feature points, the tendency of the feature lines is acquired using principal component analysis (PCA) to restrict the drift direction of the potential feature points, so as to prevent shrinkage at the endpoints of the feature lines and breaking of the feature lines induced by non-uniform sampling.
['Jianhui Nie']
Extracting feature lines from point clouds based on smooth shrink and iterative thinning
710,932
Reasoning about Heap Manipulating Programs using Automata Techniques.
['Supratik Chakraborty']
Reasoning about Heap Manipulating Programs using Automata Techniques.
738,151
We describe the design and testing of an inductive coupling system used to power an implantable minipump for applications in ambulating rats. A 2 MHz class-E oscillator driver powered a coil transmitter wound around a 33-cm-diameter rat cage. A receiver coil, a filtered rectifier, and a voltage-sensitive switch powered the implant. The implant DC voltage at the center of the primary coil (5.1 V) exceeded the level required to activate the solenoid valve in the pump. The variations of the implant current in the volume of the primary coil reflected the variations of the estimated coupling coefficient between the two coils. The pump could be activated in vivo, while accommodating the vertical and horizontal movements of the animal. Advantages of this design include a weight reduction for the implant, operation independent of a finite power source, and remote activation/deactivation.
['William H. Moore', 'Daniel P. Holschneider', 'Tina K. Givrad', 'Jean-Michel I. Maarek']
Transcutaneous RF-Powered Implantable Minipump Driven by a Class-E Transmitter
281,342
In the last decade, considerable concern has arisen over electricity saving due to the issue of reducing greenhouse gases. Previous studies on usage pattern utilization mainly focus on power disaggregation and appliance recognition. Little attention has been paid to utilizing pattern mining for the goal of energy saving. In this paper, we develop an intelligent system which analyzes appliance usage to extract users' behavior patterns in a smart home environment. With the proposed system, users can easily acquire the electricity consumption of each appliance for energy saving. Moreover, if the electricity cost is high, users can observe the abnormal usage of appliances through the proposed system. Furthermore, we also apply our system to a real-world dataset to show the practicability of mining usage patterns in a smart home environment.
['Yi-Cheng Chen', 'Yu-Lun Ko', 'Wen-Chih Peng']
An Intelligent System for Mining Usage Patterns from Appliance Data in Smart Home Environment
909,027
In this paper we address the application of a denoising algorithm based on wavelet packet decomposition and quantile noise estimation to noise suppression for automatic speech recognition. The denoising algorithm is adapted to suit the different requirements of machine recognition, as compared to human perception, and is tested in combination with state-of-the-art speech recognition systems. The results show that, if the proposed algorithm is integrated with the recognition system, including the training process, a performance comparable to recent high-quality noise suppression methods is achieved.
['Erhard Rank', 'Tuan Van Pham', 'Gernot Kubin']
Noise Suppression Based on Wavelet Packet Decomposition and Quantile Noise Estimation for Robust Automatic Speech Recognition
296,006
Community-based recommender systems have attracted much research attention. Forming communities allows us to reduce data sparsity and focus on discovering the latent characteristics of communities instead of individuals. Previous work focused on how to detect the community using various algorithms. However, it failed to consider users' social attributes, such as social activeness and dynamic interest, which have strong correlations to users' preferences and choices. Intuitively, people have different social activeness in a social network. Ratings from users with high activeness are more likely to be trustworthy. The temporal dynamics of interest are also significant to users' preferences. In this paper, we propose a novel community-based framework. We first employ a PLSA-based model incorporating social activeness and dynamic interest to discover communities. Then the state-of-the-art matrix factorization method is applied to each of the communities. The experiment results on two real-world datasets validate the effectiveness of our method for improving recommendation performance.
['Bin Yin', 'Yujiu Yang', 'Wenhuang Liu']
Exploring social activeness and dynamic interest in community-based recommender system
681,150
A new generation of optical components and the advance of the Generalized Multi-Protocol Label Switching (GMPLS) control plane supporting dynamic provisioning and restoration of optical connections (i.e., lightpaths), brings the vision of the dynamic all-optical network closer to reality. An emerging technology is the conversion between wavelengths, which removes the wavelength continuity constraint, thus allowing an easier and more flexible connection allocation. A limitation in the number of wavelength converters impairs their benefits especially during the restoration phase, when many simultaneous recovery attempts must share residual resources. This paper investigates the restoration performance of GMPLS-controlled all-optical networks with limited wavelength converter deployment. We investigate how different restoration methods, namely span restoration, segment restoration, and end-to-end restoration are affected by the availability of a limited number of wavelength converters at each node. For this purpose an enhanced wavelength assignment scheme compliant with GMPLS signaling is exploited, aiming at saving converters by assigning a higher preference to wavelengths not requiring conversion. An extensive simulation study has been conducted comparing the performance of this scheme to the most advanced scheme based on standard GMPLS signaling for the three restoration methods. Simulation results show that the enhanced wavelength assignment scheme significantly reduces the number of wavelength converters (WCs) necessary to achieve good recovery performance. The enhanced scheme especially improves span restoration performance, where the matching between the stubs' and recovery segment wavelength may require a WC. End-to-end restoration is the least affected, due to a higher degree of freedom in the route choice, while segment restoration performance lies in between.
['Sarah Ruepp', 'Nicola Andriolli', 'Jakob Buron', 'Lars Dittmann', 'Lars Ellegaard']
Restoration in all-optical GMPLS networks with limited wavelength conversion
480,359
To accommodate emergencies involving the solitary aged, we have developed a collapse-sensing phone with a GPS receiving chipset and a CDMA sending chipset that reports the location of the individual to a local control center. A GIS has been developed to display the position of the caller on the map of a control system that enables administrative officers to rescue the aged people.
['Duk-Sung Jang', 'Seungchan Choi', 'Taesoon Park']
Development of Collapse-Sensing Phone for Emergency Positioning System
254,177
Randomized algorithms are widely used either for finding efficiently approximated solutions to complex problems, for instance primality testing, or for obtaining good average behavior, for instance in distributed computing. Proving properties of such algorithms requires subtle reasoning both on algorithmic and probabilistic aspects of the programs. Providing tools for the mechanization of this reasoning is consequently an important issue. Our paper presents a new method for proving properties of randomized algorithms in a proof assistant based on higher-order logic. It is based on the monadic interpretation of randomized programs as probabilistic distributions [1]. It does not require the definition of an operational semantics for the language nor the development of a complex formalization of measure theory, but only uses functionals and algebraic properties of the unit interval. Using this model, we show the validity of general rules for estimating the probability that a randomized algorithm satisfies certain properties, in particular in the case of general recursive functions. We apply this theory to formally prove a program implementing a Bernoulli distribution from a coin flip, and the termination of a random walk. All the theories and results presented in this paper have been fully formalized and proved in the Coq proof assistant [2].
['Philippe Audebaud', 'Christine Paulin-Mohring']
Proofs of randomized algorithms in Coq
848,040
Modeling, scheduling and optimal control problems for a class of hybrid manufacturing systems are investigated. In this framework, the discrete entities have a state characterized by a temporal component whose evolution is described by event-driven dynamics and a physical component whose evolution is described by continuous time-driven dynamics; thus it is a typical hybrid system. Not only the optimal control of manufacturing processes, as discussed in many references, but also the optimal machining sequence is considered. The whole problem is solved by a two-level optimization method: at the inner level, for any given machining sequence of the jobs, the optimal control of the manufacturing process is considered, which is a nonsmooth, nonconvex optimization problem in the general case; while at the outer level, we use an improved genetic algorithm to decide the optimal machining sequence of a batch of jobs to be processed. Finally, some examples are given to illustrate the validity of our algorithm.
['Jihui Zhang', 'Likuan Zhao', 'Wook Hyun Kwon']
Scheduling and optimization for a class of single-stage hybrid manufacturing systems
839,782
Although teachers and authors of textbooks make extensive use of examples, little has been published on assessing and classifying pedagogic examples in engineering and science. This study reviews various characteristics of examples intended for a course on probability for electrical engineers. Twelve examples are constructed to illustrate some characteristics of the correlation coefficient. A survey incorporating these examples was administered to professors and students at Rensselaer who have taught or taken a course in probability. Statistical tests are applied to determine which examples professors and students prefer and to what extent they agree in their preferences. New bipolar criteria are proposed to classify objectively a broader set of examples that appear in textbooks. Even though preferences depend on educational background and maturity, textbooks on probability are sharply differentiated by the proposed classification criteria.
['George Nagy', 'Biplab Sikdar']
Classification and Evaluation of Examples for Teaching Probability to Electrical Engineering Students
403,903
This paper proposes an adaptive framework for optimising energy efficiency in large scale antenna systems (LSAS) by utilising knowledge about the users’ requirements and properties in a novel discontinuous transmission (DTx) scheme. The users’ requirements include bit rate and latency, while the users’ properties include the pathloss between the macro base station (MBS) and users, and information about the average interference received by each user. The proposed DTx divides the transmission into L scheduled transmissions (STs). In each ST, one or more users will be released from the overall scheduled transmission, hence leaving more spectrum for the remaining users that require longer transmission. It will be shown that the proposed scheme provides energy efficiency (EE) and Quality of Service (QoS) improvements for various numbers of users. This demonstrates that DTx-based LSAS is an effective candidate for future cellular networks in suburban and rural areas.
['Wahyu Pramudito', 'Emad Alsusa', 'Daniel K. C. So', 'Khairi Ashour Hamdi']
Adaptive LSAS transmission for energy optimisation in low density cellular networks
913,138
Scheduling of concurrent processors in a real-time image processing system on FPGA (field programmable gate array) hardware is not a trivial task. We propose a number of graphical representations for scheduling which were evaluated for use in a visual language for image processing on FPGAs. The proposed representations are illustrated, their strengths and weaknesses discussed, and the reasons for adoption of the state chart notation are given.
['Christopher T. Johnston', 'Paul J. Lyons', 'Donald G. Bailey']
A Visual Notation for Processor and Resource Scheduling
246,205
In this paper, we propose a new method to integrate multiview normal fields using level sets. In contrast with conventional normal integration algorithms used in shape from shading and photometric stereo that reconstruct a 2.5D surface using a single-view normal field, our algorithm can combine multiview normal fields simultaneously and recover the full 3D shape of a target object. We formulate this multiview normal integration problem by an energy minimization framework and find an optimal solution in a least square sense using a variational technique. A level set method is applied to solve the resultant geometric PDE that minimizes the proposed error functional. It is shown that the resultant flow is composed of the well known mean curvature and flux maximizing flows. In particular, we apply the proposed algorithm to the problem of 3D shape modelling in a multiview photometric stereo setting. Experimental results for various synthetic data show the validity of our approach.
['Ju Yong Chang', 'Kyoung Mu Lee', 'Sang Uk Lee']
Multiview normal field integration using level set methods
334,417
Dagstuhl Seminar 14452 "Algorithmic Cheminformatics" brought together leading researchers from both chemistry and computer science. The meeting succeeded in its aim of bridging the apparent gap between the two disciplines. The participants surveyed areas of overlapping interest and identified possible fields of joint future research.
['Wolfgang Banzhaf', 'Christoph Flamm', 'Daniel Merkle', 'Peter F. Stadler']
Algorithmic Cheminformatics (Dagstuhl Seminar 14452)
644,364
Mycobacterium tuberculosis has a distinctive ability to detoxify various microbicidal superoxides and hydroperoxides via a redox catalytic cycle involving thiol reductants of the peroxiredoxin (Prx) and thioredoxin (Trx) systems, which has conferred on it resistance against oxidative killing and survivability within the host. We have used a computational approach to disrupt the catalytic functions of the Prx-Trx complex, which can possibly render the pathogen vulnerable to oxidative killing in the host. Using a protein–protein docking method, we have successfully constructed the Prx-Trx complex. Statistics of the interface region revealed a contact area of each monomer of less than 1500 Å² and an enrichment in polar amino acids, indicating a transient interaction between Prx and Trx. We have identified ZINC40139449 as a potent interface-binding molecule through virtual screening of drug-like compounds from the ZINC database. Molecular dynamics (MD) simulation studies showed differences in the structural properties of the Prx-Trx complex in both the apo and ligand-bound states with regard to root mean square deviation (RMSD), radius of gyration (Rg), root mean square fluctuations (RMSF), solvent accessible surface area (SASA) and number of hydrogen bonds (NHBs). Interestingly, we found stability of two conserved catalytic residues, Cys61 and Cys174 of Prx, and the conserved catalytic motif WCXXC of Trx upon binding of ZINC40139449. The time-dependent displacement study reveals that the compound is quite stable in the interface binding region up to 30 ns of MD simulation. The structural properties were further validated by principal component analysis (PCA). We report ZINC40139449 as a promising lead which can be further evaluated by in vitro or in vivo enzyme inhibition assays.
['Arun Bahadur Gurung', 'Amit Kumar Das', 'Atanu Bhattacharjee']
Disruption of redox catalytic functions of peroxiredoxin-thioredoxin complex in Mycobacterium tuberculosis H37Rv using small interface binding molecules
961,441
We consider the problem of optimization of the training sequence length when a turbo-detector composed of a maximum a posteriori (MAP) equalizer and a MAP decoder is used. At each iteration of the receiver, the channel is estimated using the hard decisions on the transmitted symbols at the output of the decoder. The optimal length of the training sequence is found by maximizing an effective signal-to-noise ratio (SNR) taking into account the data throughput loss due to the use of pilot symbols.
['Imed Kacem', 'Noura Sellami', 'Inbar Fijalkow', 'Aline Roumy']
Training sequence length optimization for a turbo-detector using decision-directed channel estimation
463,820
Neuroscience databases linking genes, proteins, (patho)physiology, anatomy and behaviour across species will be valuable in a broad range of studies of the nervous system. G2Cdb is such a neuroscience database, aiming to present a global view of the role of synapse proteins in physiology and disease. G2Cdb warehouses sets of genes and proteins experimentally elucidated by proteomic mass spectroscopy of signalling complexes and proteins biochemically isolated from mammalian synapse preparations, giving an experimentally validated definition of the constituents of the mammalian synapse. Using automated text-mining and expert (human) curation we have systematically extracted information from published neurobiological studies in the fields of synaptic signalling, electrophysiology and behaviour in knockout and other transgenic mice. We have also surveyed the human genetics literature for associations to disease caused by mutations in synaptic genes. The synapse proteome datasets that G2Cdb provides offer a basis for future work in synapse biology and provide useful information on brain diseases. They have been integrated in such a way that investigators can rapidly query whether a gene or protein is found in brain-signalling complex(es), has a phenotype in rodent models, or whether its mutations are associated with a human disease. G2Cdb can be freely accessed at http://www.genes2cognition.org.
['Michael D. R. Croning', 'Michael C. Marshall', 'Peter McLaren', 'J. Douglas Armstrong', 'Seth G. N. Grant']
G2Cdb: the Genes to Cognition database
390,413
This paper uses Nonlinear Principal Component Analysis (NLPCA) and Principal Component Analysis (PCA) to determine Total Electron Content (TEC) anomalies in the ionosphere for the Nakri Typhoon on 29 May 2008 (UTC). NLPCA, PCA and image processing are applied to the global ionospheric map (GIM), with transforms conducted for the time period 12:00–14:00 UT on 29 May 2008 when the wind was most intense. Results show that at a height of approximately 150–200 km the TEC anomaly using NLPCA is more localized; however, its intensity increases with height and becomes more widespread. The TEC anomalies are not found by PCA. Potential causes of the results are discussed, with emphasis given to vertical acoustic gravity waves. The approximate position of the typhoon's eye can be detected if the GIM is divided into fine enough maps with adequate spatial resolution at GPS-TEC receivers. This implies that the trace of the typhoon in the regional GIM is caught using NLPCA.
['Jyh-Woei Lin']
Ionospheric total electron content anomalies due to Typhoon Nakri on 29 May 2008: A nonlinear principal component analysis
266,030
In signal processing systems, aliasing is normally treated as a disturbing signal. That motivates the need for effective analog, optical and digital anti-aliasing filters. However, aliasing also conveys valuable information on the signal above the Nyquist frequency. Hence, an effective processing of the samples, based on a model of the input signal, would virtually allow the sampling frequency to be increased using slower and cheaper converters. We present such an algorithm for bandlimited signals that are sampled below twice the maximum signal frequency. Using a subspace method in the frequency domain, we show that these signals can be reconstructed from multiple sets of samples. The offset between the sets is unknown and can have arbitrary values. This approach can be applied to the creation of super-resolution images from sets of low resolution images. In this application, registration parameters have to be computed from aliased images. We show that parameters and high resolution images can be computed precisely, even when high levels of aliasing are present on the low resolution images.
['Patrick Vandewalle', 'Luciano Sbaiz', 'Joos Vandewalle', 'Martin Vetterli']
How to take advantage of aliasing in bandlimited signals
148,439
In this paper, the design of a robust attitude controller for flexible spacecraft with quality characteristic parameter uncertainty is studied. The satellite attitude control problem is transformed into a standard H∞ control problem through an approximate linearization method; the dynamic output feedback controller is then designed using a variable substitution method, which transforms nonlinear matrix inequalities into linear ones that are easily solved by the LMI toolbox. The simulation results show that the controller has strong robustness, can adapt to large quality characteristic parameter uncertainty, and overcomes both internal and external disturbances well. This paper has some referential value for achieving very accurate and stable satellite attitude control.
['Yanru Zhou', 'Xiaoqi Shen', 'Jianping Zeng', 'Hongfei Sun']
Robust attitude control of flexible spacecraft with quality characteristic parameter uncertainty
70,634
Gap junctions are intercellular pores allowing direct passage of ions and small molecules between adjacent cells. They are physiologically significant, playing important roles in cellular growth and tissue function. Most models of gap junctions focus on single channel currents. Here we present a model exhibiting macroscopic currents in homotypic gap junctions, and thereby allowing evaluation of cellular features. The technique is novel in its modular approach, whereby modules of hemi-channels are developed, followed by their serial combination to obtain the complete gap junction. It is an improvement over earlier models as it is able to demonstrate the phenomena of contingent gating. We present certain biophysical properties observed in a 3-D syncytium under different gap junction subtypes. Finally, we test the efficacy of this approach towards modeling of heterotypic gap junctions.
['Shailesh Appukuttan', 'Rohan Sathe', 'Rohit Manchanda']
Modular approach to modeling homotypic and heterotypic gap junctions
564,765
Most real deployed peer-to-peer streaming systems adopt a pull-based streaming protocol. In this paper, we demonstrate that, besides simplicity and robustness, with proper parameter settings, when the server bandwidth is above several times the raw streaming rate, which is reasonable for a practical live streaming system, a simple pull-based P2P streaming protocol is nearly optimal in terms of peer upload capacity utilization and system throughput, even without intelligent scheduling and bandwidth measurement. We also indicate that whether this near-optimality can be achieved depends on the parameters of the pull-based protocol, the server bandwidth and the group size. Then we present our mathematical analysis to gain deeper insight into this characteristic of pull-based streaming protocols. On the other hand, the optimality of the pull-based protocol comes from a cost tradeoff between control overhead and delay; that is, the protocol has either large control overhead or large delay. To break the tradeoff, we propose a pull-push hybrid protocol. The basic idea is to consider the pull-based protocol as a highly efficient bandwidth-aware multicast routing protocol and push down packets along the trees formed by the pull-based protocol. Both simulation and real-world experiments show that this protocol is not only even more effective in throughput than the pull-based protocol but also has far lower delay and much smaller overhead. And to achieve near-optimality in peer capacity utilization without churn, the server bandwidth needed can be further relaxed. Furthermore, the proposed protocol is fully implemented in our deployed GridMedia system and holds the record of supporting over 220,000 simultaneously online users.
['Meng Zhang', 'Qian Zhang', 'Lifeng Sun', 'Shiqiang Yang']
Understanding the Power of Pull-Based Streaming Protocol: Can We Do Better?
162,605
In this paper, we propose a new discriminant analysis based on a datawise formulation of scatter matrices to deal with data of non-normal distribution. Starting from the original LDA, the datawise formulation of scatter matrices is derived and its meaning is clarified. Based on this formulation, a new feature extraction algorithm is presented. In this formulation, no assumption on the distribution of the data is necessary, so an appropriate feature space can be found from data whose distribution is non-normal, as well as multimodally normal. The limitation on the feature dimension can also be removed, and by replacing the inverse of the within-class scatter matrix with specially assigned weights, the computational problems originating from inverting the within-class scatter matrix can be fundamentally avoided. As a result, a good feature space for the classification task can be found without the problems of LDA. The performance of this algorithm has been evaluated on features from real classification tasks.
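For reference, the standard LDA scatter matrices that this datawise formulation starts from can be accumulated sample by sample; the sketch below (plain LDA in Python, not the paper's weighted variant) illustrates the datawise view:

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter, accumulated
    sample by sample. This is the standard LDA formulation; the
    paper's datawise weighting scheme generalizes it."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        for x in Xc:                      # datawise accumulation
            Sw += np.outer(x - mc, x - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    return Sw, Sb
```

The usual sanity check is that within- and between-class scatter sum to the total scatter around the global mean.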
['Myoung Soo Park', 'Jin Young Choi']
A new discriminant analysis for non-normally distributed data based on datawise formulation of scatter matrices
37,449
This paper presents an all-digital fractional-N PLL with a low-power TDC operating on the retimed reference clock. Two retimed reference clocks are employed to implement the proposed TDC, which estimates the fractional phase error between the reference clock and the CKV clock. Applying the retimed reference clocks to the TDC not only reduces the dynamic power in the TDC delay inverter chain but also simplifies the e_r estimation, including a new T_V calculation algorithm. A phase-error compensation block is also presented to compensate for the large phase-error change due to timing skew in the output bits produced by the variable-phase counter. In addition, a loop settling scanning block is introduced to shift the DCO operation mode and further decrease the PLL channel switching time for frequency-hopping applications. The proposed all-digital PLL exhibits −36 dBc integrated phase noise (1 kHz – 20 MHz), 778 fs rms jitter, and 9.6 mW power consumption. The channel switching time of the ADPLL is measured as 630 ns.
['Ja-Yol Lee', 'Mi-Jeong Park', 'Byonghoon Mhin', 'Seong-Do Kim', 'Moon-Yang Park', 'Hyunku Yu']
A 4-GHz all digital fractional-N PLL with low-power TDC and big phase-error compensation
485,075
Accurate indirect jump prediction is critical for some applications. Previously proposed methods are not efficient in terms of chip area. Our proposal evaluates a mechanism called target encoding that provides a better ratio between prediction accuracy and the amount of bits devoted to the predictor. The idea is to encode full target addresses into shorter target identifiers, so that more entries can be stored with a fixed memory budget, and longer branch histories can be used to increase prediction accuracy. With a fixed area budget, the increase in accuracy for the proposed scheme ranges from 10% up to 90%. Alternatively, the new scheme provides the same accuracy while reducing predictor size by between 35% and 70%.
['Juan C. Moure', 'Domingo Benitez', 'Dolores Rexachs', 'Emilio Luque']
Target encoding for efficient indirect jump prediction
305,465
We investigate the problem of securing the delivery of scalable video streams so that receivers can ensure the authenticity (originality and integrity) of the video. Our focus is on recent scalable video coding techniques, e.g., H.264/SVC, that can provide three scalability types at the same time: temporal, spatial, and quality (or PSNR). This three-dimensional scalability offers a great flexibility that enables customizing video streams for a wide range of heterogeneous receivers and network conditions. This flexibility, however, is not supported by current stream authentication schemes in the literature. We propose an efficient authentication scheme that accounts for the full scalability of video streams: it enables verification of all possible substreams that can be extracted and decoded from the original stream. Our evaluation study shows that the proposed authentication scheme is robust against packet losses, adds low communication and computation overheads, and is suitable for live streaming systems as it has short delay.
['Kianoosh Mokhtarian', 'Mohamed Hefeeda']
End-to-end secure delivery of scalable video streams
519,445
The use of optimization techniques has recently been proposed to build models for software development effort estimation. In particular, some studies have been carried out using search-based techniques, such as genetic programming, and the reported results seem promising. To the best of our knowledge, nobody has analyzed the effectiveness of Tabu Search for development effort estimation. Tabu Search is a meta-heuristic approach successfully used to address several optimization problems. In this paper we report on an empirical analysis carried out by exploiting Tabu Search on a publicly available dataset, i.e., the Desharnais dataset. The achieved results show that Tabu Search provides estimates comparable with those achieved with some widely used estimation techniques. © Springer-Verlag Berlin Heidelberg 2009.
['Filomena Ferrucci', 'Carmine Gravino', 'Rocco Oliveto', 'Federica Sarro']
Using Tabu search to estimate software development effort
650,486
Information and communication technologies (ICT) were initially used mainly for supporting or automating firms' pre-existing business processes, in order to improve their efficiency. Subsequently it was realised that much more value can be generated from ICT by exploiting their great potential to drive innovations in firms' products/services and processes. However, limited empirical research has been conducted concerning the effects of the many different types of enterprise systems (ES) that firms use on their innovation performance. This paper contributes in this direction. It investigates empirically and compares the effects of six important and widely used types of ES (ERP, CRM, e-sales, telework and collaboration support systems) on firms' product/service and process innovation. Our study is based on a large dataset collected from 14,065 European firms through the e-Business Watch Survey of the European Commission, which has been used for estimating innovation models. We found that all examined types of ES have some positive effect on both product/service and process innovation; however, these effects differ in magnitude. Our results indicate that e-sales systems are the strongest drivers of product/service innovation, followed by CRM and external collaboration support systems; e-sales systems are the strongest drivers of process innovation as well, followed by telework systems.
['Niki Kyriakou', 'Euripides Loukis', 'Spyros Arvanitis']
Enterprise Systems and Innovation -- An Empirical Investigation
685,125
We present the first polynomial time construction procedure for generating graceful double-wheel graphs. A graph is graceful if its vertices can be labeled with distinct integer values from {0, ..., e}, where e is the number of edges, such that each edge has a unique value corresponding to the absolute difference of its endpoints' labels. Graceful graphs have a range of practical application domains, including radio astronomy, X-ray crystallography, cryptography, and experimental design. Various families of graphs have been proven to be graceful, while others have only been conjectured to be. In particular, it has been conjectured that so-called double-wheel graphs are graceful. A double-wheel graph consists of two cycles of N nodes connected to a common hub. We prove this conjecture by providing the first construction for graceful double-wheel graphs, for any N > 3, using a framework that combines streamlined constraint reasoning with insights from human computation. We also use this framework to provide a polynomial time construction for diagonally ordered magic squares.
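The gracefulness condition itself is easy to verify for a candidate labeling; the following sketch (our illustration, not the paper's construction procedure) checks it directly:

```python
def is_graceful(edges, labels):
    """Check whether `labels` (vertex -> integer) is a graceful
    labeling of the graph given by `edges`: labels must be distinct
    values in {0, ..., e}, and the induced edge values
    |label(u) - label(v)| must be exactly {1, ..., e}."""
    e = len(edges)
    vals = list(labels.values())
    if len(set(vals)) != len(vals) or not all(0 <= v <= e for v in vals):
        return False
    diffs = {abs(labels[u] - labels[v]) for u, v in edges}
    return diffs == set(range(1, e + 1))

# A classic graceful labeling of the path on 4 vertices (3 edges):
# edge values are |0-3|=3, |3-1|=2, |1-2|=1.
path = [(0, 1), (1, 2), (2, 3)]
print(is_graceful(path, {0: 0, 1: 3, 2: 1, 3: 2}))  # True
```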
['Ronan Le Bras', 'Carla P. Gomes', 'Bart Selman']
Double-wheel graphs are graceful
641,821
The first impression is what matters: a neuroaesthetic study of the cerebral perception and appreciation of paintings by Titian
['Francesca Babiloni', 'Dario Rossi', 'Patrizia Cherubino', 'Arianna Trettel', 'Daniela Picconi', 'Anton Giulio Maglione', 'Giovanni Vecchiato', 'Fabrizio De Vico Fallani', 'Mario Chavez', 'Fabio Babiloni']
The first impression is what matters: a neuroaesthetic study of the cerebral perception and appreciation of paintings by Titian
667,972
We design polar codes for empirical coordination and strong coordination in two-node networks. Our constructions hinge on the fact that polar codes enable explicit low complexity schemes for soft covering. We leverage this property to propose explicit and low-complexity coding schemes that achieve the capacity regions of both empirical coordination and strong coordination for sequences of actions taking value in an alphabet of prime cardinality. Our results improve previously known polar coding schemes, which (i) were restricted to uniform distributions and to actions obtained via binary symmetric channels for strong coordination, (ii) required a non-negligible amount of common randomness for empirical coordination, and (iii) assumed that the simulation of discrete memoryless channels could be perfectly implemented. As a by-product of our results, we obtain a polar coding scheme that achieves channel resolvability for an arbitrary discrete memoryless channel whose input alphabet has prime cardinality.
['Remi A. Chou', 'Matthieu R. Bloch', 'Joerg Kliewer']
Empirical and Strong Coordination via Soft Covering with Polar Codes
883,095
The PDE Framework Peano: An Environment for Efficient Flow Simulations.
['Tobias Neckel']
The PDE Framework Peano: An Environment for Efficient Flow Simulations.
755,579
This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological, and gradient operators to run a Dijkstra algorithm. Wrong detections, tagged by the user and caused by bifurcations, the presence of struts, thrombotic lesions, or dissections, can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between the detected struts and the artery wall initializes a complete distance map of the stent, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation, and use of the one-side cyclic property of the map further optimize accuracy. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of the automated results.
['Florian Dubuisson', 'Emilie Pery', 'Lemlih Ouchchane', 'Nicolas Combaret', 'Claude Kauffmann', 'Géraud Souteyrand', 'Pascal Motreff', 'Laurent Sarry']
Automated peroperative assessment of stents apposition from OCT pullbacks
504,247
In this paper, we present a simple algorithm for obstacle detection, road surface extraction and tracking using a Kalman filter and u-v-disparity images. The proposed approach is based on the use of an unsynchronized camera system and on sparse maps instead of dense ones, due to the unsynchronization constraint. First, a sparse disparity map is computed from two images, and then the u-v-disparity images are built from it. Road and obstacles are extracted using a modified Hough transform. Our experimental results on real images show the efficiency of our algorithm.
['Rawia Mhiri', 'Hichem Maiza', 'Stéphane Mousset', 'Khaled Taouil', 'Pascal Vasseur', 'Abdelaziz Bensrhair']
Obstacle detection using unsynchronized multi-camera network
589,597
Doubling the number of processing cores on a single processor chip with each technology generation has become conventional wisdom. While future manycore processors promise to offer much increased computational throughput under a given power envelope, sharing critical on-chip resources, such as caches and core-to-core interconnects, poses challenges to guaranteeing predictable performance to an application program. This paper focuses on the problem of sharing on-chip caching capacity among multiple programs scheduled together, especially at the L2 cache level. Specifically, two design aspects of a large shared L2 cache are considered: (1) non-uniform cache access latency and (2) cache contention. We observe that both the aspects have to do with where, among many cache slices, a cache block is mapped to, and present an OS-based approach to managing the on-chip L2 cache memory by carefully mapping data to a cache at the page granularity. We show that a reasonable extension to the OS memory management subsystem and simple architectural support enable enforcing high-level policies to achieve application performance isolation and improve program performance predictability thereof.
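The core idea, mapping data to cache slices at page granularity, can be illustrated with a toy mapping function; the names and the pinning policy below are ours, not the paper's interface:

```python
def page_to_slice(vpn, n_slices, policy=None):
    """Toy OS-level page-to-cache-slice mapping. By default, virtual
    page numbers interleave round-robin across slices; `policy` lets
    the OS pin chosen pages to a specific slice (e.g., one near the
    owning core) for isolation and predictable latency."""
    if policy and vpn in policy:
        return policy[vpn]
    return vpn % n_slices

# Interleave by default, but pin page 7 to slice 2 for locality.
policy = {7: 2}
print([page_to_slice(p, 4, policy) for p in range(9)])
# [0, 1, 2, 3, 0, 1, 2, 2, 0]
```

A real system would hold such a policy in the page table and have the hardware index slices by physical page bits; the sketch only shows the mapping decision itself.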
['Sangyeun Cho', 'Lei Jin', 'Ki-Yeon Lee']
Achieving Predictable Performance with On-Chip Shared L2 Caches for Manycore-Based Real-Time Systems
446,601
The Quality of Experience (QoE) of streaming services is often degraded by frequent playback interruptions. To mitigate the interruptions, the media player prefetches streaming content before starting playback, at the cost of delay. We study the QoE of streaming from the perspective of flow dynamics. First, a framework is developed for QoE when streaming users join the network randomly and leave after completing their downloads. We compute the distribution of prefetching delay using partial differential equations (PDEs), and the probability generating function of playout buffer starvations using ordinary differential equations (ODEs). Second, we extend our framework to characterize the throughput variation caused by opportunistic scheduling at the base station in the presence of fast fading. Our study reveals that flow dynamics are the fundamental cause of playback starvation. The QoE of the streaming service is dominated by the average throughput of opportunistic scheduling, while the variance of throughput has very limited impact on starvation behavior.
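A toy slotted simulation (our sketch, not the paper's PDE/ODE machinery) illustrates how prefetching trades start-up delay against starvation:

```python
import random

def count_starvations(p_arrival, prefetch, n_slots, seed=0):
    """Toy slotted playout model: each slot a packet arrives with
    probability p_arrival; playback starts once `prefetch` packets
    are buffered and then consumes one packet per slot. Returns the
    number of starvation events (buffer empty at a playback slot)."""
    rng = random.Random(seed)
    buf, playing, starvations = 0, False, 0
    for _ in range(n_slots):
        if rng.random() < p_arrival:
            buf += 1
        if not playing and buf >= prefetch:
            playing = True          # start-up delay ends here
        if playing:
            if buf == 0:
                starvations += 1    # playback interrupted this slot
            else:
                buf -= 1
    return starvations

# With a packet arriving every slot, the buffer never drains.
print(count_starvations(1.0, prefetch=4, n_slots=1000))  # 0
```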
['Yuedong Xu', 'Salah Eddine Elayoubi', 'Eitan Altman', 'Rachid El-Azouzi']
Impact of flow-level dynamics on QoE of video streaming in wireless networks
372,996
A Software Analysis Framework for Automotive Embedded Software.
['Jochen Quante']
A Software Analysis Framework for Automotive Embedded Software.
989,484
Summary: With the introduction of the Hi-C method, new and fundamental properties of the nuclear architecture are emerging. The ability to interpret data generated by this method, which aims to capture the physical proximity between and within chromosomes, is crucial for uncovering the three-dimensional structure of the nucleus. Providing researchers with tools for interactive visualization of Hi-C data can help in gaining new and important insights. Specifically, visual comparison can pinpoint changes in spatial organization between Hi-C datasets, originating from different cell lines or different species, or normalized by different methods. Here, we present CytoHiC, a Cytoscape plugin that allows users to view and compare spatial maps of genomic landmarks based on normalized Hi-C datasets. CytoHiC was developed to support intuitive visual comparison of Hi-C data and integration of additional genomic annotations. Availability: The CytoHiC plugin, source code, user manual, example files and documentation are available at: http://apps.cytoscape.org/apps/cytohicplugin
['Yoli Shavit', 'Pietro Liò']
CytoHiC: a cytoscape plugin for visual comparison of Hi-C networks
13,515
The 3D facial geometry contains ample information about human facial expressions. Such information is invariant to pose and lighting conditions, which have imposed serious hurdles on many 2D facial analysis problems. In this paper, we perform person and gender independent facial expression recognition based on properties of the line segments connecting certain 3D facial feature points. The normalized distances and slopes of these line segments comprise a set of 96 distinguishing features for recognizing six universal facial expressions, namely anger, disgust, fear, happiness, sadness, and surprise. Using a multi-class support vector machine (SVM) classifier, an 87.1% average recognition rate is achieved on the publicly available 3D facial expression database BU-3DFE. The highest average recognition rate obtained in our experiments is 99.2% for the recognition of surprise. Our result outperforms the result reported in the prior work, which uses elaborately extracted primitive facial surface features and an LDA classifier and which yields an average recognition rate of 83.6% on the same database.
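The line-segment features can be sketched as follows; the normalization pair and the use of the elevation angle as the "slope" are our simplifying choices, not necessarily the paper's exact 96-feature construction:

```python
import numpy as np
from itertools import combinations

def segment_features(points, norm_pair=(0, 1)):
    """Normalized lengths and elevation slopes of all line segments
    between 3D landmarks. Lengths are divided by a reference segment
    (norm_pair) to remove scale; 'slope' is taken as the segment's
    elevation angle, one simple rotation-sensitive choice."""
    pts = np.asarray(points, dtype=float)
    ref = np.linalg.norm(pts[norm_pair[0]] - pts[norm_pair[1]])
    feats = []
    for i, j in combinations(range(len(pts)), 2):
        v = pts[j] - pts[i]
        length = np.linalg.norm(v)
        slope = np.arctan2(v[2], np.hypot(v[0], v[1]))  # elevation
        feats.append((length / ref, slope))
    return np.array(feats)
```

The resulting vector would then feed a multi-class SVM, as the abstract describes.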
['Hao Tang', 'Thomas S. Huang']
3D facial expression recognition based on properties of line segments connecting facial feature points
911,835
Critical to accurate reconstruction of sparse signals from low-dimensional low-photon count observations is the solution of nonlinear optimization problems that promote sparse solutions. In this paper, we explore recovering high-resolution sparse signals from low-resolution measurements corrupted by Poisson noise using a gradient-based optimization approach with non-convex regularization. In particular, we analyze zero-finding methods for solving the p-norm regularized minimization subproblems arising from a sequential quadratic approach. Numerical results from fluorescence molecular tomography are presented.
['Aramayis Orkusyan', 'Lasith Adhikari', 'Joanna Valenzuela', 'Roummel F. Marcia']
Analysis of p-norm regularized subproblem minimization for sparse photon-limited image recovery
793,508
A large number of categories with a skewed category distribution over documents is still an open question for state-of-the-art technologies in automated text classification. In this paper we present a proof of concept for an automatic model of complaints screening using text mining. Through a complaints link on the Office of the Comptroller General (CGU) site, citizens have access to a form to file their complaints. These complaints must be screened and delivered to one of the 64 CGU coordinations by subject. Currently, the complaints screening is done manually. Considering the complaints already stored in the database and the arrival of new complaints, combined with the time spent on manual sorting, timely analysis of the reported occurrences becomes increasingly difficult. We compare two approaches: supervised learning with classification algorithms, and unsupervised learning with topic modeling and text search. The best results were achieved using ranking based on the Huffman Tree algorithm. This proof of concept demonstrated the feasibility of automatic sorting without loss of quality compared to manual sorting.
['Patricia Andrade', 'Marcelo Ladeira', 'Rommel N. Carvalho']
A Comparison Between Supervised and Unsupervised Models for Identify a Large Number of Categories
968,040
Given the heterogeneity of cancer tumors, and the propensity of tumor cells to become resistant to therapy even when they are initially responsive, the current thinking is to administer multi-drug therapy to cancer patients. However, given the complexity of cancer, predicting the efficacy of multi-drug therapy, and the emergence of resistance, are important challenges. In this paper, a mini-survey is presented of some of the currently popular methods, and challenges for the future are indicated.
['Niharika Challapalli', 'M. Eren Ahsen', 'M. Vidyasagar']
Modelling drug response and resistance in cancer: Opportunities and challenges
976,008
Security against Replay Chosen-Ciphertext Attack.
['Zvi Galil', 'Stuart Haber', 'Moti Yung']
Security against Replay Chosen-Ciphertext Attack.
767,936
A novel signal source estimation method using switching voltage divider technology was developed in our previous works. The aim of this method is to reduce the number of electrodes required for signal source estimation. Using this method, voltage and position information about the signal source inside the human body can be obtained simultaneously. The purpose of this paper is to estimate the changes in the signal source location according to ventricular activation. One healthy male (31 years old) participated in an ECG measurement experiment that utilized switching voltage divider technology. Nine signal electrodes and one ground electrode were attached to the participant's body surface, and the electrocardiogram was measured with the participant seated. The signal sources for early QRS, mid QRS, and late QRS were estimated. The results suggest that changes in the signal source location can be estimated during ventricular activation.
['Yusuke Sakaue', 'Masaaki Makikawa']
Signal source estimation inside the human heart during ventricular activation using switching voltage divider
907,081
Recently, we have shown that a translating bar on which blindfolded participants position their hand is perceived as also rotating. Here, we investigated whether such an illusory rotation would also be found if a sphere or a plane (i.e., a stimulus without a clear orientation) was used as the translating stimulus. We indeed found similar rotation biases: on average, a stimulus that translates over a distance of 60 cm has to rotate 25° to be perceived as non-rotating. An additional research question was whether the biases were caused by the same underlying biasing egocentric reference frame. To our surprise, the correlations between the sizes of the biases of the individual participants in the various conditions were not high and mostly not even significant. This was possibly due to day-to-day variations, but clearly, more research is needed to answer this second research question.
['Astrid M. L. Kappers', 'Wouter M. Bergmann Tiest']
Illusory Rotations in the Haptic Perception of Moving Spheres and Planes
137,229
This letter describes a method for estimating seismic quality ( Q ) factors from spectral correlation (SC). For a linear frequency attenuation model, the SC coefficient is examined between the amplitude spectrum of a reference pulse multiplied by an absorption filter and that of a target pulse, and then, the Q factor can be determined from the absorption filter which yields the maximum SC coefficient. In this way, Q -factor estimation is converted into an optimization problem which can be quickly implemented by the Newton iteration scheme. Synthetic tests with different source signatures show that the SC method is free of the type of source wavelet. Noisy tests indicate that the SC method has higher noise resistance than the logarithm spectral ratio and the centroid frequency shifting methods. Field test indicates that the depth range of lower Q values estimated by the SC method well corresponds to the distribution of gas reservoirs. With better Q -factor evaluation, the SC method may become a more practical tool for gas reservoir characterization than before.
['Senlin Yang', 'Jinghuai Gao']
Seismic Quality Factors Estimation From Spectral Correlation
421,155
The statistics of gray-level differences have been successfully used in a number of texture analysis studies. In this paper we propose to use signed gray-level differences and their multidimensional distributions for texture description. The present approach has important advantages compared to earlier related approaches based on gray-level cooccurrence matrices or histograms of absolute gray-level differences. Experiments with difficult texture classification and supervised texture segmentation problems show that our approach provides very good and robust performance in comparison with mainstream paradigms such as cooccurrence matrices, Gaussian Markov random fields, or Gabor filtering. © 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
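A one-displacement version of the signed-difference distribution can be sketched as follows (our illustration; the paper combines several displacements into multidimensional distributions):

```python
import numpy as np

def signed_diff_histogram(img, dx, dy, levels=256):
    """Empirical distribution of signed gray-level differences
    g(x+dx, y+dy) - g(x, y) for a single displacement (dx, dy >= 0).
    Unlike absolute-difference histograms, the sign of the
    difference is preserved."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    diffs = img[dy:, dx:] - img[:h - dy, :w - dx]
    # One bin per possible signed difference, -(levels-1)..(levels-1).
    hist, _ = np.histogram(diffs, bins=np.arange(-levels + 0.5, levels, 1.0))
    return hist / hist.sum()
```

A multidimensional distribution would be built by binning the tuple of differences over several (dx, dy) displacements jointly rather than one at a time.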
['Timo Ojala', 'Kimmo Valkealahti', 'Erkki Oja', 'Matti Pietikäinen']
Texture discrimination with multidimensional distributions of signed gray-level differences
141,333
Partitioning operating systems (POSs) have been widely applied in safety-critical domains from aerospace to automotive. In order to improve the safety and the certification process of POSs, the ARINC 653 standard has been developed and complied with by the mainstream POSs. Rigorous formalization of ARINC 653 can reveal hidden errors in this standard and provide a necessary foundation for formal verification of POSs and ARINC 653 applications. For the purpose of reusability and efficiency, a novel methodology by integrating ontology and refinement is proposed to formally specify and analyze POSs in this paper. An ontology of POSs is developed as an intermediate model between informal descriptions of ARINC 653 and the formal specification in Event-B. A semiautomatic translation from the ontology and ARINC 653 into Event-B is implemented, which leads to a complete Event-B specification for ARINC 653 compliant POSs. During the formal analysis, six hidden errors in ARINC 653 have been discovered and fixed in the Event-B specification. We also validate the existence of these errors in two open-source POSs, i.e., XtratuM and POK. By introducing the ontology, the degree of automatic verification of the Event-B specification reaches a higher level.
['Yongwang Zhao', 'David Sanán', 'Fuyuan Zhang', 'Yang Liu']
Formal Specification and Analysis of Partitioning Operating Systems by Integrating Ontology and Refinement
744,142
In quantum computing, the synthesis of reversible circuits is an important topic. Reversible circuit synthesis is particularly challenging because the complexity grows as the number of bits increases. To date, many reversible circuit synthesis algorithms have been proposed, but most are unable to find the optimum within an acceptable time. Because traditional methods consider only partial interests, the resulting circuits cost more gates. This paper proposes a novel method, called the Bound Oriented Algorithm, which is able to find the optimal solution with a high hit rate, greater than 75% on average. Moreover, by predicting the optimum through bounds, it can avoid excess calculation and further improve efficiency. In addition, a special library containing only Toffoli gates is used, which simplifies the algorithm design and is more easily converted to a common library. The experimental results show that the proposed method performs better than other methods in terms of solution quality and time cost.
['Yu-Shan Yang', 'H.J.H. Chen', 'Shu-Yu Kuo', 'Guo-Jyun Zeng', 'Yao-Hsin Chou']
A Novel Efficient Optimal Reversible Circuit Synthesis Algorithm
604,622
We describe DysToPic, a theorem prover for the preferential Description Logic ALC + T_min. This is a nonmonotonic extension of standard ALC based on a typicality operator T, which enjoys a preferential semantics. DysToPic is a multi-engine Prolog implementation of a labelled, two-phase tableaux calculus for ALC + T_min, whose basic idea is that of performing these two phases on different machines. The performance of DysToPic is promising, and significantly better than that of its recently introduced predecessor, PreDeLo 1.0.
['Laura Giordano', 'Valentina Gliozzi', 'Nicola Olivetti', 'Gian Luca Pozzato', 'Luca Violanti']
A Multi-engine Theorem Prover for a Description Logic of Typicality
630,386
Hand jitters result in unintentional fluctuation of image sequences taken by hand-held video cameras. Stabilization of the foreground object of interest in pictures is essential for good visual quality. In this paper, foreground stabilization algorithms of image sequences are proposed and evaluated. After performing the block-based motion estimation, grouping techniques are used to identify the foreground motion. The tested grouping techniques include the iterative centroid of foreground, the k-means clustering, and the LMedS (least median of squares) algorithms. An adaptive IIR filtering technique is subsequently applied for motion correction. Compared to conventional stabilization techniques without foreground separation, the proposed methods substantially improve the visual quality of image sequences both subjectively and objectively.
['Shih-Hsuan Yang', 'Pei-Cheng Huang']
Foreground stabilization of image sequences
283,953
A major challenge for dealing with multi-perspective specifications, and more concretely, with merging of several descriptions or views is toleration of incompleteness and inconsistency: views may be inconclusive, and may have conflicts over the concepts being modeled. The desire of being able to tolerate both phenomena introduces the need to evaluate and quantify the significance of a detected inconsistency as well as to measure the degree of conflict and uncertainty of the merged view as the specification process evolves. We show in this paper to what extent disagreement and incompleteness are closely interrelated and play a central role to obtain a measure of the level of inconsistency and to define a merging operator whose aim is getting the model which best reflects the combined knowledge of all stakeholders. We will also propose two kinds of interesting and useful orderings among perspectives which are based on differences of behavior and inconsistency, respectively.
['Ana Belén Barragáns Martínez', 'José J. Pazos Arias', 'Ana Fernández Vilas', 'Jorge García Duque', 'Martín López Nores', 'Rebeca P. Díaz Redondo', 'Yolanda Blanco Fernández']
On the interplay between inconsistency and incompleteness in multi-perspective requirements specifications
118,615
The capabilities of embedded devices such as smartphones are steadily increasing and provide the great flexibility of data access and collaboration while being mobile. From the distributed computing point of view, fundamental issues in mobile computing include heterogeneity in terms of varying device capabilities (i.e., operating systems and various hardware platforms), performance characteristics and real‐time behavior, and the ability to discover and interact with peers seamlessly. Web services are a family of XML based protocols to achieve interoperability among loosely coupled networked applications. We propose the use of Web services on embedded devices in order to solve interoperability issues in distributed mobile systems. We discuss various toolkits available for embedded devices and investigate performance characteristics of embedded Web services on smartphones. Our goal is to guide the design of Web services based applications on mobile devices, and provide estimates of performance that can be e...
['Daniel Schall', 'Marco Aiello', 'Schahram Dustdar']
Web services on embedded devices
408,431
This paper gives a tail routing algorithm that meets a flight schedule with a minimum number of aircraft. The flight segments are identical from period to period and form a partially ordered set. The algorithm takes advantage of the periodic nature of the schedule to reduce the problem size. For a domestic airline whose schedule is identical each week, one may solve two problems, each with a seven-and-one-half-day time horizon, instead of one larger problem over the entire time horizon, which may be several months.
['Richard D. Wollmer']
An airline tail routing algorithm for periodic schedules
242,087
Decision-Making Policies for Heterogeneous Autonomous Multi-Agent Systems with Safety Constraints.
['Ruohan Zhang', 'Yue Yu', 'Mahmoud El Chamie', 'Behcet Acikmese', 'Dana H. Ballard']
Decision-Making Policies for Heterogeneous Autonomous Multi-Agent Systems with Safety Constraints.
992,007
In this paper, we present new features based on Spatial-Gradient-Features (SGF) at block level for identifying six video scripts, namely Arabic, Chinese, English, Japanese, Korean and Tamil. This work helps enhance the capability of current OCR for video text recognition by choosing an appropriate OCR engine when a video contains multi-script frames. The input for script identification is the text blocks obtained by our text frame classification method. For each text block, we obtain horizontal and vertical gradient information to enhance the contrast of the text pixels. We divide the horizontal gradient block into two equal parts, upper and lower, at the centroid in the horizontal direction. A histogram on the horizontal gradient values of the upper and the lower part is computed to select dominant text pixels. In the same way, the method selects dominant pixels from the right and the left parts obtained by dividing the vertical gradient block vertically. The method combines the horizontal and the vertical dominant pixels to obtain text components. A skeleton concept is used to reduce pixel width to a single pixel to extract spatial features. We extract four features based on proximity between end points, junction points, intersection points and pixels. The method is evaluated on 770 frames of six scripts in terms of classification rate and is compared with an existing method. We have achieved an 82.1% average classification rate.
['Danni Zhao', 'Palaiahnakote Shivakumara', 'Shijian Lu', 'Chew Lim Tan']
New Spatial-Gradient-Features for Video Script Identification
182,858
OStrich: Fair Scheduling for Multiple Submissions
['Joseph Emeras', 'Vinicius Pinheiro', 'Krzysztof Rządca', 'Denis Trystram']
OStrich: Fair Scheduling for Multiple Submissions
991,662
In recent years, a wide range of Business Intelligence (BI) technologies have been applied to different areas in order to support the decision-making process. BI enables the extraction of knowledge from the data stored. The healthcare industry is no exception, and so BI applications have been under investigation across multiple units of different institutions. Thus, in this article, we intend to analyze some open-source/free BI tools on the market and their applicability in the clinical sphere, taking into consideration the general characteristics of the clinical environment. For this purpose, six BI tools were selected, analyzed, and tested in a practical environment. Then, a comparison metric and a ranking were defined for the tested applications in order to choose the one that best applies to the extraction of useful knowledge and clinical data in a healthcare environment. Finally, a pervasive BI platform was developed using a real case in order to prove the tool viability.
['Andreia Brandão', 'Eliana Pereira', 'Marisa Esteves', 'Filipe Portela', 'Manuel Filipe Santos', 'António Abelha', 'José Carlos Machado']
A Benchmarking Analysis of Open-Source Business Intelligence Tools in Healthcare Environments
904,353
High levels of stress are detrimental for optimal function. Increasingly, it has been investigated whether robots can help with stress reduction. Previous studies using the Paro robot have shown that it can improve mood, reduce stress, and encourage social interaction between robot and human, and human and human. The objective of the proposed study is to investigate the circumstances under which interacting with the robot can contribute to a reduction in negative affect and stress; and to establish the features of robot responsible for any such effects. The focus is on investigating the effects of interaction between humans and the Paro robot on psychophysiological stress responses.
['Raihah Aminuddin', 'Amanda J. C. Sharkey', 'Liat Levita']
Interaction With the Paro Robot May Reduce Psychophysiological Stress Responses
720,167
Other efficient methods are used to solve a convex variational model. An improved method for a variational model is proposed. A new variational model which is efficient for multiplicative noise removal is proposed. In this paper, a convex variational model for multiplicative noise removal is studied. An accelerating primal-dual method and a proximal linearized alternating direction method are also discussed. An improved primal-dual method is proposed. The algorithms above produce more desirable results than the primal-dual algorithm when solving the convex variational model. Inspired by the statistical property of Gamma multiplicative noise and the I-divergence, a modified convex variational model is proposed, for which the uniqueness of the solution is also provided. Moreover, the property of the solution is presented. Without inner iterations, the primal-dual method is efficient for the modified model, and running time can be reduced dramatically while still obtaining good restoration. When we set the parameter α to 0, the convex variational model we propose turns into the model in Steidl and Teuber (2010). By altering α, our model can be used for different noise levels.
['Min Liu', 'Qibin Fan']
A modified convex variational model for multiplicative noise removal
621,995
We consider the problem of downlink channel estimation for millimeter wave (mmWave) MIMO-OFDM systems, where both the base station (BS) and the mobile station (MS) employ large antenna arrays for directional precoding/beamforming. Hybrid analog and digital beamforming structures are employed in order to offer a compromise between hardware complexity and system performance. Different from most existing studies that are concerned with narrowband channels, we consider estimation of wideband mmWave channels with frequency selectivity, which is more appropriate for mmWave MIMO-OFDM systems. By exploiting the sparse scattering nature of mmWave channels, we propose a CANDECOMP/PARAFAC (CP) decomposition-based method for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients). In our proposed method, the received signal at the BS is expressed as a third-order tensor. We show that the tensor has the form of a low-rank CP decomposition, and the channel parameters can be estimated from the associated factor matrices. Our analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small. Hence the proposed method has the potential to achieve substantial training overhead reduction. We also develop Cramer-Rao bound (CRB) results for channel parameters, and compare our proposed method with a compressed sensing-based method. Simulation results show that the proposed method attains mean square errors that are very close to their associated CRBs, and presents a clear advantage over the compressed sensing-based method in terms of both estimation accuracy and computational complexity.
['Zhou Zhou', 'Jun Fang', 'Linxiao Yang', 'Hongbin Li', 'Zhi Chen', 'Rick S. Blum']
Low-Rank Tensor Decomposition-Aided Channel Estimation for Millimeter Wave MIMO-OFDM Systems
929,156
A system is considered in which V users are competing for the transmission capacity of a link. The users generate messages in a Poisson manner. The message length distribution of each user is arbitrary and may differ for different users. The objective is to investigate nonpreemptive service-time independent scheduling as a means of selectively controlling the average waiting time of the users. The average waiting time assignments that can be realized are characterized. They can be used to establish, in O(V log V) computations, whether a given average waiting time assignment is feasible. The proof of the result relies on a universal scheduling strategy which is simple, is time-invariant, and can be used to realize any feasible average waiting time assignment. A waiting time cost function is associated with each user in order to investigate the problem of finding a nonpreemptive scheduling strategy that minimizes the overall waiting time cost. A set of optimality conditions is given for this problem, and an algorithm is constructed solving it in O(V log V) steps. With a simple modification, the algorithm also solves the problem of finding a nonpreemptive scheduling strategy that minimizes the lexicographic ordering of the waiting time costs. Results are extended to the preemptive case.
['J. Regnier', 'Pierre A. Humblet']
Average waiting time assignment. I. The single link case
109,091
ELiRF: A SVM Approach for SA tasks in Twitter at SemEval-2015
['Mayte Giménez', 'Ferran Pla', 'Lluís-F. Hurtado']
ELiRF: A SVM Approach for SA tasks in Twitter at SemEval-2015
613,138
Business process modeling is a well established methodology for analyzing and optimizing complex processes. To address critical challenges in ubiquitous black-box approaches, we develop a two-stage business process optimization framework. The first stage is based on an analytical approach that exploits structural properties of the underlying stochastic network and renders a near-optimal solution. Starting from this candidate solution, the second stage employs advanced simulation optimization to locally search for optimal business process solutions. Numerical experiments demonstrate the efficacy of our approach.
['Soumyadip Ghosh', 'Aliza Heching', 'Mark S. Squillante']
A two-phase approach for stochastic optimization of complex business processes
493,427
Biomimetics is a rapidly developing discipline and has been suggested as applicable to machine vision and image processing because the human vision system has evolved to be nearly perfect. The previously proposed BA+DRF method is a biomimetic image processing method which improves image quality effectively on the basis of brightness adaptation and the disinhibitory properties of the concentric receptive field (DRF). However, BA+DRF is neither automatic nor dynamic, which limits its practicability. This paper proposes an improved biomimetic image processing method, the parameterized LDRF method, to make the BA+DRF method more adaptive and dynamic. The parameterized LDRF method constructs a parameterized logarithmic model to automatically enhance the image's global quality and a model to dynamically adjust the gain factor used in improving the image's local quality. The experimental results have proved its ability to enhance image quality while preserving details. The improved biomimetic image processing method is applicable and automatic.
['Chen Chen', 'Weijun Li', 'Liang Chen']
An improved biomimetic image processing method
637,744
Modelling longitudinal changes in organs is fundamental for the understanding of biological and pathological processes. Most of the previous works on spatio-temporal modelling of image time series rely on the assumption of stationarity of the local spatial correlation, and on the separability between spatial and temporal processes. These assumptions are often made in order to lead to computationally tractable approaches to longitudinal modelling, but inevitably lead to an oversimplification of the complex spatial and temporal dynamics underlying the biological processes. In this work we propose a novel spatio-temporal generative model of time series of images based on kernel convolutions of a white noise Gaussian process. The proposed model is parameterised by a sparse set of control points independently identified by specific spatial and temporal parameters. This formulation is highly flexible and can naturally account for spatially and temporally varying dynamics of changes. We demonstrate a preliminary application of our non-parametric method on the modelling of within-subject structural changes in the context of longitudinal analysis in Alzheimer's disease. In particular we show that our method provides an accurate description of the pathological evolution of the brain, while showing high flexibility in modelling and predicting region-specific non-linearity due to accelerated structural decline in dementia.
['Marco Lorenzi', 'Gabriel Ziegler', 'Daniel C. Alexander', 'Sebastien Ourselin']
Modelling Non-stationary and Non-separable Spatio-Temporal Changes in Neurodegeneration via Gaussian Process Convolution
676,533
Research and research funding organizations are becoming more and more aware of the need to conduct research that proves some form of utility to society and has some form of practical impact. There are several different ways of doing research that has practical relevance and that can contribute to changing and improving society. This talk aims at discussing ways to plan and conduct research with the aim of improving society, and also shows how we should use our research knowledge and positions to influence politics and public policy making.
['Jan Gulliksen']
Human computer interaction and societal impact: can HCI influence public policy making IT politics?
574,238
In this paper, we present the concept of the (α, β)-intuitionistic fuzzy subring (ideal). We show that, among the 16 kinds of (α, β)-intuitionistic fuzzy subrings (ideals), the significant ones are the (∈, ∈)-intuitionistic fuzzy subring (ideal), the (∈, ∈∨q)-intuitionistic fuzzy subring (ideal) and the (∈∧q, ∈)-intuitionistic fuzzy subring (ideal). We also show that A is a (∈, ∈)-intuitionistic fuzzy subring (ideal) of R if and only if, for any a ∈ (0, 1], the cut set A_a of A is a 3-valued fuzzy subring (ideal) of R, and that A is a (∈, ∈∨q)-intuitionistic fuzzy subring (ideal) (or a (∈∧q, ∈)-intuitionistic fuzzy subring (ideal)) of R if and only if, for any a ∈ (0, 0.5] (or for any a ∈ (0.5, 1]), the cut set A_a of A is a 3-valued fuzzy subring (ideal) of R. At last, we generalize the (∈, ∈)-, the (∈, ∈∨q)- and the (∈∧q, ∈)-intuitionistic fuzzy subring (ideal) to the intuitionistic fuzzy subring (ideal) with thresholds, i.e., the (s, t]-intuitionistic fuzzy subring (ideal). We show that A is a (s, t]-intuitionistic fuzzy subring (ideal) of R if and only if, for any a ∈ (s, t], the cut set A_a of A is a 3-valued fuzzy subring (ideal) of R. We also characterize the (s, t]-intuitionistic fuzzy subring (ideal) by the neighborhood relations between a fuzzy point x_a and an intuitionistic fuzzy set A.
['Bin Yu', 'Xue-hai Yuan']
The intuitionistic fuzzy subrings and fuzzy ideals
296,566
In this paper, we describe idea-map, a mash-up application built on top of the Linked Data Cloud. It reads in a user's keywords (about research ideas) and executes a SPARQL query against the DBLP endpoint. Spatial and temporal information is extracted and parsed from the query results and is further transformed to SIMILE/EXHIBIT to show a spatiotemporal map for the research ideas. Idea-map shows the feasibility of combining various techniques such as YQL, SIMILE/Exhibit and SPARQL query answering to provide an insightful interface to better understand the research ideas of interest.
['He Hu', 'Xiaoyong Du']
Idea-map: A spatiotemporal view of research ideas
285,938
Most existing on-demand mobile ad hoc network routing protocols continue using a route until a link breaks. During the route reconstruction, packets can be dropped, which may cause significant throughput degradation. In this paper, we add a link breakage prediction algorithm to the Dynamic Source Routing (DSR) protocol. The mobile node uses signal power strength from the received packets to predict the link breakage time, and sends a warning to the source node of the packet if the link is soon-to-be-broken. The source node can perform a pro-active route rebuild to avoid disconnection. Experiments demonstrate that adding link breakage prediction to DSR can significantly reduce the total number of dropped data packets (by at least 20%). The tradeoff is an increase in the number of control messages by at most 33.5%. We also found that the pro-active route maintenance does not cause significant changes in average packet latency and average route length.
['Liang Qin', 'Thomas Kunz']
Pro-active route maintenance in DSR
279,777
We present a case study concerned with the animation of behavioral specifications through code generation for a payment system; namely, electronic funds transfer system (EFT). The exchange of messages between a central bank and two client banks during daily operations is modeled as a communications model of Live Sequence Charts (LSCs). Using an LSC to Java/AspectJ code generator, the communications model is converted to a base code and then the animation code is woven into this base code. Execution of the resulting code animates the messages exchanged among the central bank’s EFT server, central bank’s branch and two client banks’ EFT servers for sample money transfer operations as a sequence of events respecting the partial order specified by the LSC. The woven aspect code also addresses two additional issues: One is domain specific processing such as queue operations and settlement operations at the central banks’ EFT server, and the other is scenario processing for money transfers.
['Ozan Deniz', 'Mehmet Adak', 'Halit Oğuztüzün']
Animation of Behavioral Specifications through Code Generation for a Payment System
361,114
The subject of occlusion culling of large 3D environments has received substantial attention. However, most research in the area has focused on occlusion culling of static scenes using spatial partitioning. The primary aim of all these schemes is to minimize the load on the GPU by reducing the number of primitives to be rendered. We present an efficient algorithm for visibility culling that supports static and dynamic scenes with equal ease, with significant performance improvements over existing schemes. For a given camera position, the status of the object nodes in an object hierarchy can be seen as a visibility cut, the nodes of which are either outside the view frustum, hidden, or visible. We propose an efficient update scheme for this visibility cut while processing each frame, taking full advantage of the object hierarchy with spatial and temporal coherency. The whole scene walkthrough is modelled as a discrete event simulation where every change generates an event scheduled for that particular frame. For occlusion culling, we employ occlusion queries, which help the system to be output sensitive. The system supports transparency of entities without a major performance hit. We propose a scheme to select the level of detail of an object based on the results of occlusion queries.
['Soumyajit Deb', 'Ankit Gupta']
Visibility Cuts: A System for Rendering Dynamic Virtual Environments
914,034
A first CMOS chip, containing line interface/speech, power extraction, loudhearing, handsfree, DC/DC converter and ringer amplifier in a 28-pin package powered only from the line, is presented. The chip provides an enhanced anti-Larsen circuit to prevent acoustic howling in loudhearing mode and a novel "handsfree mode" voice control system which is virtually independent of any background noise and works in a dynamic half duplex as close to full duplex as the acoustic loop gain allows. This paper describes the architectural features of the system and the design of the main building blocks of the integrated circuit.
['K. Hayat-Dawoodi', 'Oluf Alminde', 'V. Kunc', 'Manfred Pauritsch']
A universal telephone audio circuit with loudhearing and handsfree operation in CMOS technology
198,639
Abstract In (Abdel-Ghaffar et al., 1986) it is shown that for each integer b ⩾ 1 infinitely many optimum cyclic b-burst-correcting codes exist. In the first part of this correspondence the parameters of all optimum cyclic four- and five-burst-correcting codes are given explicitly. Tables are included. In (van Tilborg, 1993) a very brief indication of the decoding of optimum cyclic burst-correcting codes is given. In the second part of this correspondence the decoding algorithm is analyzed in further detail. It turns out that the bulk of the decoding steps can be performed completely in parallel.
['Petra Heijnen', 'Henk van Tilborg']
Two observations concerning optimum cyclic burst-correcting codes
356,923
Reinforcement learning addresses the question of programming an autonomous agent to execute tasks that are described as reinforcement functions. The agent is then responsible for discovering the best actions to fulfil such a task. Most of the work on reinforcement learning considers that reinforcements are given by the environment, not addressing the problem of how to describe tasks as reinforcement functions. Preference elicitation addresses the problem of describing a human preference through utility functions, of which reinforcement functions are special cases. This paper proposes an approach where preference elicitation and reinforcement learning are handled in an integrated manner, providing an autonomous method of programming an agent. The agent is programmed through pairwise evaluations over observed behaviours of the agent, where the evaluations are summarised in the reinforcement function. In this paper we present an approach to solve such a problem based on evaluations over observed behaviours. We propose a new algorithm, PEOB-RS, that can be shown to converge towards an optimal policy, provided the number of trials for each behaviour tends to infinity. Experimental results from learning in a stochastic grid environment are used to obtain a reinforcement function, illustrating the effectiveness of PEOB-RS, even though it requires many evaluations. Such a reinforcement function is then transferred to a more realistic environment simulating a Pioneer robot, showing the abstraction property of utility functions.
['V.F. da Silva', 'P. Lima', 'A.H.R. Costa']
Eliciting preferences over observed behaviours based on relative evaluations
233,501
Genetic Programming parity with only XOR is not elementary. GP parity can be represented as the sum of k/2+1 elementary landscapes. Statistics, including fitness distance correlation (FDC), of Parity's fitness landscape are calculated. Using Walsh analysis the eigenvalues and eigenvectors of the Laplacian of the two-bit, three-bit, n-bit and mutation-only Genetic Algorithm fitness landscapes are given. Indeed all elementary bit string landscapes are related to the discrete Fourier functions. However most are rough (λ/d ≈ 1). Also in many cases fitness autocorrelation falls rapidly with distance. GA runs support eigenvalue/graph degree (λ/d) as a measure of the ruggedness of elementary landscapes for predicting problem difficulty. The elementary needle-in-a-haystack (NIH) landscape is described.
['William B. Langdon']
Elementary bit string mutation landscapes
425,819
The design of a source-controlled turbo coding scheme for the transmission of strongly nonuniform binary memoryless sources over additive white Gaussian noise (AWGN) channels is considered. The use of nonbinary signaling allows performance improvements over previous existing schemes that use binary modulation. The basic idea is to allocate more energy to the transmitted symbols associated with the information bits that occur less likely and to exploit the source nonuniformity in the decoding process. If the source and channel parameters are unknown, they can be estimated jointly with the encoding/decoding process. No performance degradation is observed in this case.
['Felipe Cabarcas', 'Richard Demo Souza', 'Javier Garcia-Frias']
Turbo coding of strongly nonuniform memoryless sources with unequal energy allocation and PAM signaling
110,441
In this work we present an algorithm for extracting region level annotations from flickr images using a small set of manually labelled regions to guide the selection process. More specifically, we construct a set of flickr images that focuses on a certain concept and apply a novel graph based clustering algorithm on their regions. Then, we select the cluster or clusters that correspond to the examined concept guided by the manually labelled data. Experimental results show that although the obtained regions are of lower quality compared to the manually labelled regions, the gain in effort compensates for the loss in performance.
['Elisavet Chatzilari', 'Spiros Nikolopoulos', 'Symeon Papadopoulos', 'Christos Zigkolis', 'Yiannis Kompatsiaris']
Semi-supervised object recognition using flickr images
164,922
Design and Development of Information Display Systems for Monitoring Overboard.
['Tadasuke Furuya', 'Atsushi Suzuki', 'Atsushi Shimamura', 'Takeshi Sakurada', 'Yoichi Hagiwara', 'Takafumi Saito']
Design and Development of Information Display Systems for Monitoring Overboard.
764,175
Shape inspection options in most of the current online shape repositories provide limited information on the shape of a desired model. In addition, stored models can be downloaded only at the original level of detail (LOD). In this paper, we present our application that combines remote interactive inspection of a digital shape with real-time simplification. Simplification is parameterised, is performed in real-time and the results are again available for inspection. We have embedded the application in a shape repository whereby, having found a suitable simplification, users can download the model at that LOD.
['Emanuele Danovaro', 'Laura Papaleo', 'Davide Sobrero', 'Marco Attene', 'Waqar Saleem']
Advanced remote inspection and download of 3D shapes
272,122
Covert Communications When the Warden Does Not Know the Background Noise Power
['Dennis Goeckel', 'Boulat A. Bash', 'Saikat Guha', 'Donald F. Towsley']
Covert Communications When the Warden Does Not Know the Background Noise Power
677,355
In this paper, we introduce cooperative autonomous driving algorithms for vehicular networks with nonlinear mobile robot dynamics in urban environments that take human safety into account and are capable of performing vehicle-to-vehicle (V2V) and vehicle-to-pedestrian (V2P) collision avoidance. We argue that "flocks" are multi-agent models of vehicular traffic on roads and propose novel autonomous driving architectures and algorithms for cyber-physical vehicles capable of performing autonomous driving tasks such as lane-driving, lane-changing, braking, passing, and making turns. Our proposed autonomous driving algorithms are inspired by Olfati-Saber's flocking theory. However, there are notable differences between autonomous driving on urban roads and flocking behavior: flocks have a single desired destination, whereas most drivers on the road do not share the same destination. We refer to this collective behavior (driving) as "multi-objective flocking." The self-driving vehicles in our framework turn out to be hybrid systems with a finite number of discrete states that are related to the driving modes of vehicles. Complex driving maneuvers can be performed using a sequence of mode switchings. We use near-identity nonlinear transformations to extend the application of particle-based autonomous driving algorithms to multi-robot networks with nonlinear dynamics. The derivation of the mode switching conditions that preserve safety is non-trivial and an important part of the design of autonomous driving algorithms. We present several examples of driving tasks that can be effectively performed using our proposed driving algorithms.
['Lamia Iftekhar', 'Reza Olfati-Saber']
Autonomous driving for vehicular networks with nonlinear dynamics
314,078
In massive multiple-input multiple-output (MIMO) systems, the effect of inter-user interference and noise can be suppressed through simple signal processing techniques. However, pilot contamination, which results from the use of a correlated pilot, causes performance bottleneck for massive MIMO networks. In this letter, a pilot design algorithm based on alternating minimization is developed to alleviate the pilot contamination in multi-cell massive MIMO systems. Contrary to the existing pilot reuse protocols that restrict the pilot length to be an integer multiple of the number of users, the proposed method designs a pilot with an arbitrary length. Hence, pilot sequence can be designed more flexibly to maximize spectral efficiency.
['Yonghee Han', 'Jungwoo Lee']
Uplink Pilot Design for Multi-Cell Massive MIMO Networks
801,989
In this paper we present the requirements, design and pre-deployment testing of a transportation bus as a Mobile Enterprise Sensor Bus (M-ESB) service in China that supports two main requirements: to monitor the urban physical environment, and to monitor road conditions. Although several such projects have been proposed previously, integrating both environment and road condition monitoring, and using a data exchange interface to feed a cloud computing system, is a novel approach. We present the architecture for M-ESB and in addition propose a new management model for the bus company to act as a Virtual Mobile Service Operator. Pre-deployment testing was undertaken to validate our system.
['Lin Kang', 'Stefan Poslad', 'Weidong Wang', 'Xiuhua Li', 'Yinghai Zhang', 'Chaowei Wang']
A Public Transport Bus as a Flexible Mobile Smart Environment Sensing Platform for IoT
925,137
Power and performance optimization through MPI supported dynamic voltage and frequency scaling.
['Florian Thoma', 'Michael Hübner', 'Diana Göhringer', 'Hasam Ümitcan Yilmaz', 'Jürgen Becker']
Power and performance optimization through MPI supported dynamic voltage and frequency scaling.
781,278
Optimization of Continuous Queries in Federated Database and Stream Processing Systems.
['Yuanzhen Ji', 'Zbigniew Jerzak', 'Anisoara Nica', 'Gregor Hackenbroich', 'Christof Fetzer']
Optimization of Continuous Queries in Federated Database and Stream Processing Systems.
757,403
This paper presents an extension of Gaussian process implicit surfaces (GPIS) by the introduction of geometric object priors. The proposed method enhances the probabilistic reconstruction of objects from three-dimensional (3-D) pointcloud data, providing a rigorous way of incorporating prior knowledge about objects expected in a scene. The key ideas, including the systematic use of surface normal information, are illustrated with one-dimensional and two-dimensional examples, and then applied to simulated and real pointcloud data for 3-D objects. The performance of our method is demonstrated in two different application scenarios, using complete and partial surface observations. Qualitative and quantitative analysis of the results reveals the superiority of the proposed approach over existing GPIS configurations that do not exploit prior knowledge.
['Wolfram Martens', 'Yannick Poffet', 'Pablo Ramon Soria', 'Robert Fitch', 'Salah Sukkarieh']
Geometric Priors for Gaussian Process Implicit Surfaces
937,666
TCP proxies are basic building blocks for many advanced middleboxes. In this paper we present Miniproxy, a TCP proxy built on top of a specialized minimalistic cloud operating system. Miniproxy's connection handling performance is comparable to that of full-fledged GNU/Linux TCP proxy implementations, but its minimalistic footprint enables new use cases. Specifically, Miniproxy requires as little as 6 MB to run and boots in tens of milliseconds, enabling massive consolidation, on-the-fly instantiation and edge cloud computing scenarios. We demonstrate the benefits of Miniproxy by implementing and evaluating a TCP acceleration use case.
['Giuseppe Siracusano', 'Roberto Bifulco', 'Simon Kuenzer', 'Stefano Salsano', 'Nicola Blefari Melazzi', 'Felipe Huici']
On the Fly TCP Acceleration with Miniproxy
779,286
Service Self-customization in a Network Context: Requirements on the Functionality of a System for Service Self-customization
['Doreen Mammitzsch', 'Bogdan Franczyk']
Service Self-customization in a Network Context: Requirements on the Functionality of a System for Service Self-customization
846,241
The Disaster Response Network Enabled Platform (DRNEP) is a system that integrates a set of independently developed infrastructure and disaster simulators. This paper describes some of the architectural choices that we made for DRNEP. The overall system uses a master-slave pattern, with one master simulator orchestrating all of the others, based on a central system clock. As the various simulators are developed by different organizations, they each have their own data models, with data elements not matching one for one, or with different representations, or not useful for collaboration. To integrate them in DRNEP, and to avoid developing n^2 distinct translators, we devised a single common data model, akin to the mediator pattern, so that we need only one data translator per simulator. Developing this common data model poses many challenges: on one hand it must contain the right abstractions to communicate with a variety of existing and future simulators, in particular the topology of their underlying models, yet it must reduce the overall complexity of the system and also minimize the likelihood of too many drastic changes when the system evolves. We used principles from system theory to develop this common data model, which will be used with two simulators connected to UBC's Infrastructure Interdependency Simulator (I2Sim) serving as master.
['M. Gonzalez', 'Jose R. Marti', 'Philippe Kruchten']
A canonical data model for simulator interoperation in a collaborative system for disaster response simulation
391,394