Dataset columns: abstract (string, 8 to 10.1k chars) · authors (string, 9 to 1.96k chars) · title (string, 6 to 367 chars) · __index_level_0__ (int64, 13 to 1,000k).
Real-Time Automated Aerial Refueling Using Stereo Vision.
['Christopher M. Parsons', 'Scott Nykl']
Real-Time Automated Aerial Refueling Using Stereo Vision.
955,038
Leveraging the Customer Base: Creating Competitive Advantage Through Knowledge Management
['Elie Ofek', 'Miklos Sarvary']
Leveraging the Customer Base: Creating Competitive Advantage Through Knowledge Management
856,137
Engaging CS Alumni from Afar
['Christine Shannon', 'James D. Kiper', 'Samuel A. Rebelsky', 'Janet Davis']
Engaging CS Alumni from Afar
647,713
The land surface temperature (LST) product of the Advanced Along-Track Scanning Radiometer (AATSR) was validated with ground measurements at the following two thermally homogeneous sites: Lake Tahoe, CA/NV, USA, and a large rice field close to Valencia, Spain. The AATSR LST product is based on the split-window technique using the 11- and 12-μm channels. The algorithm coefficients are provided for 13 different land-cover classes plus one lake class (index i). Coefficients are weighted by the vegetation-cover fraction (f). In the operational implementation of the algorithm, i and f are assigned from a global classification and monthly fractional vegetation-cover maps with spatial resolutions of 0.5° × 0.5°. Since the validation sites are smaller than this, they are misclassified in the LST product and treated incorrectly, despite the fact that the higher resolution AATSR data easily resolve the sites. Due to this problem, the coefficients for the correct cover types were manually applied to the AATSR standard brightness-temperature-at-sensor product to obtain the LST for the sites as if they had been correctly classified. The comparison between the ground-measured and the AATSR-derived LSTs showed excellent agreement for both sites, with nearly zero average biases and standard deviations ≤ 0.5 °C. In order to produce accurate and precise estimates of LST, it is necessary that the land-cover classification be revised and provided at the same resolution as the AATSR data, i.e., 1 km rather than the 0.5° resolution auxiliary data currently used in the LST product.
['César Coll', 'Simon J. Hook', 'Joan M. Galve']
Land Surface Temperature From the Advanced Along-Track Scanning Radiometer: Validation Over Inland Waters and Vegetated Surfaces
377,727
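The split-window estimate with cover-class coefficients weighted by the vegetation-cover fraction, as described in the abstract above, can be sketched as follows. The functional form and all coefficient values here are illustrative placeholders, not the operational AATSR algorithm or its published coefficients.

```python
def split_window_lst(t11, t12, f, coeffs_veg, coeffs_bare):
    """Toy split-window LST estimate (units follow the inputs).

    t11, t12    : brightness temperatures in the 11- and 12-um channels
    f           : fractional vegetation cover in [0, 1]
    coeffs_*    : (a0, a1, a2) split-window coefficients per cover type
                  (hypothetical values, for illustration only)
    """
    def sw(t11, t12, a0, a1, a2):
        # generic split-window form: offset + T11 term + channel-difference term
        return a0 + a1 * t11 + a2 * (t11 - t12)

    # weight the two cover-type coefficient sets by vegetation fraction,
    # mirroring the f-weighting described in the abstract
    a0, a1, a2 = (f * cv + (1.0 - f) * cb
                  for cv, cb in zip(coeffs_veg, coeffs_bare))
    return sw(t11, t12, a0, a1, a2)
```

With f = 1 the estimate reduces to the fully vegetated coefficient set, which is how the manual reassignment of coefficients for a misclassified site would act.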
In this letter, we propose a gray code order of antenna index permutations for differential spatial modulation (DSM). To facilitate the implementation, the well-known Trotter–Johnson ranking and unranking algorithms are adopted, which result in similar computational complexity to the existing DSM that uses the lexicographic order. The signal-to-noise ratio gain achieved by the proposed gray code order over the lexicographic order is also analyzed and verified via simulations. Based on the gray coding framework, we further propose a diversity-enhancing scheme named intersected gray (I-gray) code order, where the permutations of active antenna indices are selected directly from the odd (or even) positions of the full permutations in the gray code order. From analysis and simulations, it is shown that the I-gray code order can harvest an additional transmit diversity order with respect to the gray code order.
['Jun Li', 'Miaowen Wen', 'Xiang Cheng', 'Yier Yan', 'Sang Seob Song', 'Moon Ho Lee']
Differential Spatial Modulation With Gray Coded Antenna Activation Order
571,841
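The gray code property the letter exploits, namely that consecutive antenna-index permutations differ by a single adjacent transposition, can be illustrated with a plain Steinhaus-Johnson-Trotter generator. This toy sketch does not implement the Trotter–Johnson ranking/unranking algorithms the letter adopts; it only demonstrates the ordering.

```python
def sjt_permutations(n):
    """Yield permutations of range(n) in Steinhaus-Johnson-Trotter order:
    each permutation differs from the previous one by swapping one
    adjacent pair of elements."""
    if n == 1:
        yield [0]
        return
    for i, perm in enumerate(sjt_permutations(n - 1)):
        # sweep the largest element right-to-left, then left-to-right,
        # alternating per base permutation
        positions = range(n - 1, -1, -1) if i % 2 == 0 else range(n)
        for pos in positions:
            yield perm[:pos] + [n - 1] + perm[pos:]
```

For n = 3 this yields [0,1,2], [0,2,1], [2,0,1], [2,1,0], [1,2,0], [1,0,2]; each step swaps two neighbours, which is the property the SNR-gain analysis in the letter builds on.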
Compound document images contain graphic or textual content along with pictures. They are a very common form of documents, found in magazines, brochures, Web-sites, etc. We focus our attention on the mixed raster content (MRC) multi-layer approach for compound image compression. We study block thresholding as a means to segment an image for MRC. An attempt is made to optimize the block-threshold in a rate-distortion sense. Rate-distortion curves are presented to demonstrate the performance of the proposed algorithm.
['R.L. de Queiroz', 'Zhigang Fan', 'Trac D. Tran']
Optimizing block-threshold segmentation for MRC compression
328,040
Network performance isolation is the key to virtualization-based cloud services. For latency-sensitive cloud applications like media streaming, both predictable network bandwidth and low-jittered network latency are desirable. The current resource sharing methods for virtual machines (VMs) mainly focus on resource proportional sharing such as CPU amount, memory size and I/O bandwidth, whereas they ignore the fact that I/O latency in VM-hosted platforms is mostly related to the resource provisioning rate. Even if the VM is allocated with adequate resources, network jitter can still be very serious if the resources are not provided in a timely manner. This paper systematically analyzes the causes of unpredictable network latency and proposes a combined solution to guarantee network performance isolation: (1) in the hypervisor, we design a proportional share CPU scheduling with soft real-time support to reduce scheduling delay for network packets; (2) in network traffic shaper, we introduce the concept of smooth window with feedback control to smooth the packet delay. The experimental results with both real-life applications and low-level benchmarks show that our solutions can significantly reduce network jitter, and meanwhile effectively maintain resource proportionality.
['Luwei Cheng', 'Cho-Li Wang']
Network performance isolation for latency-sensitive cloud applications
92,136
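As a rough illustration of the traffic-shaping side, here is a classic token-bucket admission check; the paper's smooth window with feedback control is a different mechanism and is not reproduced here, and the class and parameter names are my own.

```python
class TokenBucket:
    """Toy token-bucket shaper: admit a packet only if enough tokens
    have accumulated; tokens refill at `rate` per second up to `burst`."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0  # start full

    def allow(self, now, size=1.0):
        # refill based on elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

A shaper like this bounds average bandwidth but, as the paper argues, does nothing for jitter if the tokens (or the CPU that drains the queue) are not provided in a timely manner.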
We present a new technique for robust secret reconstruction with $$\mathcal{O}(n)$$ communication complexity. By applying this technique, we achieve $$\mathcal{O}(n)$$ communication complexity per multiplication for a wide class of robust practical Multi-Party Computation (MPC) protocols. In particular, our technique applies to robust threshold computationally secure protocols in the case of $$t<n/2$$ in the pre-processing model. Previously in the pre-processing model, $$\mathcal{O}(n)$$ communication complexity per multiplication was only known in the case of computationally secure non-robust protocols in the dishonest majority setting (i.e., with $$t<n$$) and in the case of perfectly-secure robust protocols with $$t<n/3$$. A similar protocol was sketched by Damgård and Nielsen, but no details were given to enable an estimate of the communication complexity. Surprisingly, our robust reconstruction protocol applies to both the synchronous and asynchronous settings.
['Ashish Choudhury', 'Emmanuela Orsini', 'Arpita Patra', 'Nigel P. Smart']
Linear Overhead Optimally-Resilient Robust MPC Using Preprocessing
841,613
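For orientation, here is a minimal (non-robust) Shamir sharing and reconstruction via Lagrange interpolation over a prime field; the paper's contribution is a robust O(n) reconstruction, which this toy does not attempt to reproduce. The prime and the function names are illustrative choices.

```python
import random

P = 2**61 - 1  # a Mersenne prime, chosen here purely for convenience

def share(secret, t, n):
    """Split `secret` into n shares using a random degree-t polynomial,
    so any t+1 shares suffice to reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # numerator of L_i(0)
                den = den * (xi - xj) % P      # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Robust reconstruction must additionally cope with up to t incorrect shares, which is where the communication-complexity question the abstract addresses arises.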
Feature selection in large multi-dimensional data sets is becoming increasingly important for several real-world applications. One such application, used by network administrators, is network intrusion detection. The major problem with anomaly-based intrusion detection systems is the high number of false positives. Motivated by this requirement, we propose sv(M)kmeans: a two-step hybrid feature selection technique. After an initial round of clustering, the proposed technique applies classification to false positives and true positives, and to false positives and true negatives. Specifically, SVM-RFE is applied to the results obtained from MK-Means. sv(M)kmeans is evaluated for its real-world applicability using the benchmark NSL-KDD data set. We show that the selected feature subset significantly reduces the false positive rate and increases the accuracy of network anomaly detection.
['Shubham Saini', 'Shraey Bhatia', 'I. Sumaiya Thaseen']
sv(M)kmeans: a hybrid feature selection technique for reducing false positives in network anomaly detection
649,275
Background: High content screening techniques are increasingly used to understand the regulation and progression of cell motility. The demand of new platforms, coupled with availability of terabytes of data has challenged the traditional technique of identifying cell populations by manual methods and resulted in development of high-dimensional analytical methods. Results: In this paper, we present sub-populations analysis of cells at the tissue level by using dynamic features of the cells. We used active contour without edges for segmentation of cells, which preserves the cell morphology, and autoregressive modeling to model cell trajectories. The sub-populations were obtained by clustering static, dynamic and a combination of both features. We were able to identify three unique sub-populations in combined clustering. Conclusion: We report a novel method to identify sub-populations using kinetic features and demonstrate that these features improve sub-population analysis at the tissue level. These advances will facilitate the application of high content screening data analysis to new and complex biological problems.
['Merlin Veronika', 'James G. Evans', 'Paul Matsudaira', 'Roy E. Welsch', 'Jagath C. Rajapakse']
Sub-population analysis based on temporal features of high content images.
195,069
Multicircular synthetic aperture radar (SAR) (MCSAR) is an extension of circular SAR (CSAR) characterized by the formation of a synthetic aperture in elevation with several circular flights. This imaging mode allows an improved resolution in the plane perpendicular to the line of sight (LOS⊥), thus suppressing the 3-D cone-shaped sidelobes that are formed when focusing with CSAR. This letter presents the first polarimetric MCSAR airborne experiment acquired at P-band by the German Aerospace Center (DLR)'s F-SAR system over a forested area in Vordemwald, Switzerland. This letter also includes a phase calibration method based on the singular value decomposition (SVD) using ground signatures to estimate constant phase offsets within a stack of 2-D images. Focusing methods, such as fast-factorized back projection (FFBP), beamforming (BF), and compressive sensing (CS), described in previous publications are used to solve the complex reflectivity in the (x, y, z) space.
['Octavio Ponce', 'Pau Prats-Iraola', 'Rolf Scheiber', 'Andreas Reigber', 'Alberto Moreira', 'Esteban Aguilera']
Polarimetric 3-D Reconstruction From Multicircular SAR at P-Band
145,817
This paper is about the aspects of ability, selfhood, and normalcy embodied in people's relationships with prostheses. Drawing on interviews with 14 individuals with upper-limb loss and diverse experiences with prostheses, we find people not only choose to use and not use prosthesis throughout their lives but also form close and complex relationships with them. The design of "assistive" technology often focuses on enhancing function; however, we found that prostheses played important roles in people's development of identity and sense of normalcy. Even when a prosthesis failed functionally, such as was the case with 3D-printed prostheses created by an on-line open-source maker community (e-NABLE), we found people still praised the design and initiative because of the positive impacts on popular culture, identity, and community building. This work surfaces crucial questions about the role of design interventions in identity production, the promise of maker communities for accelerating innovation, and a broader definition of "assistive" technology.
['Cynthia L. Bennett', 'Keting Cen', 'Katherine M. Steele', 'Daniela K. Rosner']
An Intimate Laboratory?: Prostheses as a Tool for Experimenting with Identity and Normalcy
792,479
The high arithmetic rates of media processing applications require architectures with tens to hundreds of functional units, multiple register files, and explicit interconnect between functional units and register files. Communication scheduling enables scheduling to these emerging architectures, including those that use shared buses and register file ports. Scheduling to these shared interconnect architectures is difficult because it requires simultaneously allocating functional units to operations and buses and register file ports to the communications between operations. Prior VLIW scheduling algorithms are limited to clustered register file architectures with no shared buses or register file ports. Communication scheduling extends the range of target architectures by making each communication explicit and decomposing it into three components: a write stub, zero or more copy operations, and a read stub. Communication scheduling allows media processing kernels to achieve 98% of the performance of a central register file architecture on a distributed register file architecture with only 9% of the area, 6% of the power consumption, and 37% of the access delay, and 120% of the performance of a clustered register file architecture on a distributed register file architecture with 56% of the area and 50% of the power consumption.
['Peter R. Mattson', 'William J. Dally', 'Scott Rixner', 'Ujval J. Kapasi', 'John D. Owens']
Communication scheduling
686,660
In this paper, we present a novel approach to learning semantic localized patterns with binary projections in a supervised manner. The pursuit of these binary projections is reformulated into a problem of feature clustering, which optimizes the separability of different classes by taking the members within each cluster as the nonzero entries of a projection vector. An efficient greedy procedure is proposed to incrementally combine the sub-clusters by ensuring the cardinality constraints of the projections and the increase of the objective function. Compared with other algorithms for sparse representations, our proposed algorithm, referred to as Discriminant Localized Binary Projections (dlb), has the following characteristics: 1) dlb is supervised, hence is much more effective than other unsupervised sparse algorithms like Non-negative Matrix Factorization (NMF) in terms of classification power; 2) similar to NMF, dlb can derive spatially localized sparse bases; furthermore, the sparsity of dlb is controllable, and an interesting result is that the bases have explicit semantics in human perception, like eyes and mouth; and 3) classification with dlb is extremely efficient, and only addition operations are required for dimensionality reduction. Extensive experimental results show significant improvements of dlb in sparsity and face recognition accuracy in comparison to the state-of-the-art algorithms for dimensionality reduction and sparse representations.
['Shuicheng Yan', 'Tianqiang Yuan', 'Xiaoou Tang']
Learning Semantic Patterns with Discriminant Localized Binary Projections
351,600
The paper presents a system for detection of some important internal log defects via analysis of axial CT images. Two major procedures are used. The first is the segmentation of a single computer tomography (CT) image slice which extracts defect-like regions from the image slice, the second is correlation analysis of the defect-like regions across CT image slices. The segmentation algorithm for a single CT image is basically a complex form of multiple thresholding that exploits both the prior knowledge of wood structure and gray value characteristics of the image. The defect-like region extraction algorithm first locates the pith, groups the pixels in the segmented image on the basis of their connectivity and classifies each region as either a defect-like region or a defect-free region using shape, orientation and morphological features. Each defect-like region is classified as a defect or non-defect via correlation analysis across corresponding defect-like regions in neighboring CT image slices.
['Suchendra M. Bhandarkar', 'Timothy D. Faust', 'Mengjin Tang']
A system for detection of internal log defects by computer analysis of axial CT images
509,975
This paper discusses a multi-agent system whose global goal is the minimization of the entropy of an environment, based on a novel tree in-motion mapping method.
['Rami S. Abielmona', 'Emil M. Petriu', 'Thomas E. Whalen']
Multi-agent system environment mapping by entropy reduction
920,820
A new ionic liquid loaded silica gel amine (SG-APTMS-N,N-EPANTf2) was developed, as an adsorptive material, for selective adsorption and determination of zirconium, Zr(IV), without the need for a chelating intermediate. Based on a selectivity study, the SG-APTMS-N,N-EPANTf2 phase showed a perfect selectivity towards Zr(IV) at pH 4 as compared to other metallic ions, including gold [Au(III)], copper [Cu(II)], cobalt [Co(II)], chromium [Cr(III)], lead [Pb(II)], selenium [Se(IV)] and mercury [Hg(II)] ions. The influence of pH, Zr(IV) concentration, contact time and interfering ions on SG-APTMS-N,N-EPANTf2 uptake for Zr(IV) was evaluated. The presence of incorporated donor atoms in newly synthesized SG-APTMS-N,N-EPANTf2 phase played a significant role in enhancing its uptake capacity of Zr(IV) by 78.64% in contrast to silica gel (activated). The equilibrium and kinetic information of Zr(IV) adsorption onto SG-APTMS-N,N-EPANTf2 were best expressed by Langmuir and pseudo second-order kinetic models, respectively. General co-existing cations did not interfere with the extraction and detection of Zr(IV). Finally, the analytical efficiency of the newly developed method was also confirmed by implementing it for the determination of Zr(IV) in several water samples.
['Hadi M. Marwani', 'Amjad E Alsafrani', 'Abdullah M. Asiri', 'Mohammed M. Rahman']
Silica-gel Particles Loaded with an Ionic Liquid for Separation of Zr(IV) Prior to Its Determination by ICP-OES
836,163
When maintaining a feature in preprocessor-based Software Product Lines (SPLs), developers are susceptible to introducing problems into other features. This is possible because features may share elements (like variables and methods) with the maintained one. This scenario might be even worse when hiding features by using techniques like Virtual Separation of Concerns (VSoC), since developers cannot see the feature dependencies and, consequently, become unaware of them. Emergent Interfaces were proposed to minimize this problem by capturing feature dependencies and then providing information about other features that can be impacted during a maintenance task. In this paper, we present Emergo, a tool capable of computing emergent interfaces between the feature we are maintaining and the others. Emergo relies on feature-sensitive dataflow analyses in the sense that it takes features and the SPL feature model into consideration when computing the interfaces.
['Márcio Ribeiro', 'Társis Tolêdo', 'Johnni Winther', 'Claus Brabrand', 'Paulo Borba']
Emergo: a tool for improving maintainability of preprocessor-based product lines
9,789
In this paper we propose a method that analyzes attack patterns and extracts watermark after restoring the watermarked image from the geometric attacks. The proposed algorithm consists of a spatial-domain key insertion part for attack analysis and a frequency-domain watermark insertion part using discrete wavelet transform. With the spatial-domain key extracted from the damaged image, the proposed algorithm analyzes distortion and finds the attack pattern. After restoring the damaged image, the algorithm extracts the embedded watermark. By using both spatial domain key and frequency domain watermark, the proposed algorithm can achieve robust watermark extraction against geometrical attacks and image compressions such as JPEG.
['Dongeun Lee', 'Taekyung Kim', 'Seongwon Lee', 'Joonki Paik']
A robust watermarking algorithm using attack pattern analysis
968,340
Background: Transcription of genes coding for xylanolytic and cellulolytic enzymes in Aspergillus niger is controlled by the transactivator XlnR. In this work we analyse and model the transcription dynamics in the XlnR regulon from time-course data of the messenger RNA levels for some XlnR target genes, obtained by reverse transcription quantitative PCR (RT-qPCR). Induction of transcription was achieved using low (1 mM) and high (50 mM) concentrations of D-xylose (Xyl). We investigated the wild type strain (Wt) and a mutant strain with partial loss-of-function of the carbon catabolite repressor CreA (Mt).
['Jimmy Omony', 'Astrid R. Mach-Aigner', 'Gerrit van Straten', 'Anton J. B. van Boxtel']
Quantitative modeling and analytic assessment of the transcription dynamics of the XlnR regulon in Aspergillus niger
620,220
This volume illustrates the continuous arms race between attackers and defenders of the Web ecosystem by discussing a wide variety of attacks. In the first part of the book, the foundation of the Web ecosystem is briefly recapped and discussed. Based on this model, the assets of the Web ecosystem are identified, and the set of capabilities an attacker may have are enumerated. In the second part, an overview of the web security vulnerability landscape is constructed. Included are selections of the most representative attack techniques reported in great detail. In addition to descriptions of the most common mitigation techniques, this primer also surveys the research and standardization activities related to each of the attack techniques, and gives insights into the prevalence of those very attacks. Moreover, the book provides practitioners a set of best practices to gradually improve the security of their web-enabled services. Primer on Client-Side Web Security expresses insights into the future of web application security. It points out the challenges of securing the Web platform, opportunities for future research, and trends toward improving Web security.
['Philippe De Ryck', 'Lieven Desmet', 'Frank Piessens', 'Martin Johns']
Primer on Client-Side Web Security
746,309
A rectangular patch antenna array for MIMO communications was simulated on a magnetic permeability enhanced metamaterial. The performance of this antenna array was studied relative to a similar array constructed on a regular substrate. The analysis was performed with respect to performance metrics such as degree of mutual coupling for different element spacing, achievable channel capacity, bandwidth and efficiency. The array built on the metamaterial substrate showed significant size reduction, less mutual coupling and significant channel capacity improvement compared to similar arrays on conventional substrates.
['Prathaban Mookiah', 'Kapil R. Dandekar']
Performance Analysis of Metamaterial Substrate Based MIMO Antenna Arrays
374,254
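The achievable channel capacity mentioned among the performance metrics can be computed with the standard equal-power MIMO capacity formula log2 det(I + (SNR/Nt)·H·H^H); this sketch assumes that textbook formula and is not specific to the paper's simulation setup.

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of a MIMO channel H (nr x nt) with equal
    power allocation across the nt transmit antennas."""
    nr, nt = H.shape
    # log2 det(I + snr/nt * H H^H); real() guards numerical imaginaries
    return float(np.real(np.log2(np.linalg.det(
        np.eye(nr) + (snr / nt) * H @ H.conj().T))))
```

Lower mutual coupling between array elements tends to keep H better conditioned, which is how the substrate change translates into the capacity improvement the abstract reports.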
Serious Computer Games Design for Active Learning in Teacher Education
['Jože Rugelj']
Serious Computer Games Design for Active Learning in Teacher Education
843,906
For compressed sensing with jointly sparse signals, we present a new signal model and two new joint iterative-greedy-pursuit recovery algorithms. The signal model is based on the assumption of a jointly shared support-set, and the joint recovery algorithms have knowledge of the size of the shared support-set. Through experimental evaluation, we show that the new joint algorithms provide significant performance improvements compared to regular algorithms which do not exploit joint sparsity.
['Dennis Sundman', 'Saikat Chatterjee', 'Mikael Skoglund']
Greedy pursuits for compressed sensing of jointly sparse signals
331,743
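As a baseline for the single-signal case, here is a minimal Orthogonal Matching Pursuit sketch; the paper's joint algorithms, which exploit the shared support-set across signals, are not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A that
    best explain y, refitting by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit restricted to the selected support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

A joint variant would fuse the correlation step across all measurement vectors before committing to a support index, which is where the performance gain reported in the abstract comes from.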
This paper presents SCOOP: a tool that symbolically optimises process-algebraic specifications of probabilistic processes. It takes specifications in the prCRL language (combining data and probabilities), which are linearised first to an intermediate format: the LPPE. On this format, optimisations such as dead-variable reduction and confluence reduction are applied automatically by SCOOP. That way, drastic state space reductions are achieved while never having to generate the complete state space, as data variables are unfolded only locally. The optimised state spaces are ready to be analysed by for instance CADP or PRISM.
['Mark Timmer']
SCOOP: A Tool for SymboliC Optimisations of Probabilistic Processes
315,022
Correction to “ T -convexity and tame extensions II”
['Lou van den Dries']
Correction to “ T -convexity and tame extensions II”
45,519
The magnetic particle imaging (MPI) imaging process is a new method of medical imaging with great promise. In this paper we derive the 1-D MPI signal, resolution, bandwidth requirements, signal-to-noise ratio (SNR), specific absorption rate, and slew rate limitations. We conclude with experimental data measuring the point spread function for commercially available SPIO nanoparticles and a demonstration of the principles behind 1-D imaging using a static offset field. Despite arising from the nonlinear temporal response of a magnetic nanoparticle to a changing magnetic field, the imaging process is linear in the magnetization distribution and can be described as a convolution. Reconstruction in one dimension is exact and has a well-behaved quasi-Lorentzian point spread function. The spatial resolution improves cubically with increasing diameter of the SPIO domain, inverse to absolute temperature, linearly with saturation magnetization, and inversely with gradient. The bandwidth requirements approach a megahertz for reasonable imaging parameters and millimeter scale resolutions, and the SNR increases with the scanning rate. The limit to SNR as we scale MPI to human sizes will be patient heating. SAR and magnetostimulation limits give us surprising relations between optimal scanning speeds and scanning frequency for different types of scanners.
['Patrick W. Goodwill', 'Steven M. Conolly']
The X-Space Formulation of the Magnetic Particle Imaging Process: 1-D Signal, Resolution, Bandwidth, SNR, SAR, and Magnetostimulation
106,265
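In x-space MPI theory the 1-D point spread function is proportional to the derivative of the Langevin function L(x) = coth(x) − 1/x; the sketch below uses a placeholder scale factor k lumping particle moment, temperature, and field gradient, so it illustrates the shape only.

```python
import math

def langevin_deriv(x):
    """Derivative of the Langevin function: L'(x) = 1/x^2 - csch(x)^2."""
    if abs(x) < 1e-6:
        return 1.0 / 3.0  # limit of L'(x) as x -> 0
    return 1.0 / x**2 - 1.0 / math.sinh(x)**2

def psf(offset, k=1.0):
    """Toy 1-D MPI point spread function at a given field offset;
    k is a hypothetical scale lumping moment, temperature and gradient."""
    return langevin_deriv(k * offset)
```

The peak at zero offset and the symmetric fall-off give the quasi-Lorentzian shape the abstract describes; sharper particles (larger effective k) narrow the peak, consistent with resolution improving with particle diameter.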
In this paper, we build a social search engine named Glaucus for location-based queries. Such queries compose a significant portion of mobile searches and are becoming more popular with the prevalence of mobile devices. However, most existing social search engines are not designed for location-based queries and thus often produce poor-quality results for them. Glaucus is inherently designed to support location-based queries. It collects the check-in information, which pinpoints the places each user visited, from location-based social networking services such as Foursquare. Then, it calculates the expertise of each user for a query by using our new probabilistic model called the location aspect model. We conducted two types of evaluation to prove the effectiveness of our engine. The results showed that Glaucus selected users supported by stronger evidence for the required expertise than existing social search engines. In addition, the answers from the experts selected by Glaucus were highly rated by our human judges in terms of answer satisfaction.
['Minsoo Choy', 'Jae-Gil Lee', 'Gahgene Gweon', 'Daehoon Kim']
Glaucus: Exploiting the Wisdom of Crowds for Location-Based Queries in Mobile Environments
167,791
The new generation of radio synthesis arrays, such as Low Frequency Array and Square Kilometre Array, have been designed to surpass existing arrays in terms of sensitivity, angular resolution and frequency coverage. This evolution has led to the development of advanced calibration techniques that ensure the delivery of accurate results at the lowest possible computational cost. However, the performance of such calibration techniques is still limited by the compact, bright sources in the sky, used as calibrators. It is important to have a bright enough source that is well distinguished from the background noise level in order to achieve satisfactory results in calibration. This paper presents `clustered calibration' as a modification to traditional radio interferometric calibration, in order to accommodate faint sources that are almost below the background noise level into the calibration process. The main idea is to employ the information of the bright sources' measured signals as an aid to calibrate fainter sources that are nearby the bright sources. In the case where we do not have bright enough sources, a source cluster could act as a bright source that can be distinguished from background noise. For this purpose, we construct a number of source clusters assuming that the signals of the sources belonging to a single cluster are corrupted by almost the same errors. Under this assumption, each cluster is calibrated as a single source, using the combined coherencies of its sources simultaneously. This upgrades the power of an individual faint source by the effective power of its cluster. The solutions thus obtained for every cluster are assigned to each individual source in the cluster. We give performance analysis of clustered calibration to show the superiority of this approach compared to the traditional unclustered calibration. We also provide analytical criteria to choose the optimum number of clusters for a given observation in an efficient manner.
['S. Kazemi', 'S. Yatawatta', 'Saleem Zaroubi']
Clustered Calibration: An Improvement to Radio Interferometric Direction Dependent Self-Calibration
496,524
Evaluation and usability as a practice area has diversified its approaches, broadened the spectrum of UX issues it addresses, and extended its contribution into deeper levels of product-development decision making. This forum addresses conceptual, methodological, and professional issues that arise in the field's continuing effort to contribute robust information about users to product planning and design. David Siegel and Susan Dray, Editors
['Jonathan Seth Arnowitz']
Taking the fast RIDE: designing while being agile
528,064
Abstract Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.
['J. Alison Noble']
Reflections on ultrasound image analysis
833,402
Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture for repository services for patients' health records. Such systems also require query interfaces at multiple levels, including interfaces suited to semi-skilled users. In this regard, a high-level user interface for querying the new form of standardized Electronic Health Records system has been examined in this study. It proposes a step-by-step graphical query interface that allows semi-skilled users to write queries. Its aim is to decrease user effort and communication ambiguities, and to increase user friendliness.
['Shelly Sachdeva', 'Daigo Yaginuma', 'Wanming Chu', 'Subhash Bhalla']
AQBE – QBE Style Queries for Archetyped Data
397,684
This paper presents a test generation technique for detecting stuck-at faults (SAF) and transition delay faults (TDF) at the gate level in finite-field systolic multipliers over GF(2^m) based on polynomial basis. The proposed technique derives test vectors from the cell expressions of the systolic multipliers without requiring an Automatic Test Pattern Generation (ATPG) tool. The complete systolic architecture is C-testable for SAF and TDF with only six constant tests. The test vectors are independent of the multiplier size. The test set provides 100% single SAF and TDF coverage.
['Hafizur Rahaman', 'Jimson Mathew', 'Dhiraj K. Pradhan']
Test Generation in Systolic Architecture for Multiplication Over $GF(2 ^{m})$
35,704
In this paper, we present a novel version of discriminative training for N-gram language models. Language models impose language specific constraints on the acoustic hypothesis and are crucial in discriminating between competing acoustic hypotheses. As reported in the literature, discriminative training of acoustic models has yielded significant improvements in the performance of a speech recognition system, however, discriminative training for N-gram language models (LMs) has not yielded the same impact. In this paper, we present three techniques to improve the discriminative training of LMs, namely updating the back-off probability of unseen events, normalization of the N-gram updates to ensure a probability distribution and a relative-entropy based global constraint on the N-gram probability updates. We also present a framework for discriminative adaptation of LMs to a new domain and compare it to existing linear interpolation methods. Results are reported on the Broadcast News and the MIT lecture corpora. A modest improvement of 0.2% absolute (on Broadcast News) and 0.3% absolute (on MIT lectures) was observed with discriminatively trained LMs over state-of-the-art systems.
['Ariya Rastrow', 'Abhinav Sethy', 'Bhuvana Ramabhadran']
Constrained discriminative training of N-gram language models
542,725
Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form.
['Tong Gao', 'Mira Dontcheva', 'Eytan Adar', 'Zhicheng Liu', 'Karrie Karahalios']
DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization
634,820
The H.264 standard achieves much higher coding efficiency than previous video coding standards, due to its improved inter and intra prediction modes which come with a cost of higher computational complexity. When transcoding to H.264 from MPEG-2, motion information from MPEG-2 can be used to speed up the motion search. Fast mode decision algorithms are proposed for B and P frames, and a fast intra prediction algorithm is developed for intra coding. In addition, fast motion estimation is developed by reusing the motion information from MPEG-2. Simulation results demonstrate that we can achieve significant complexity reduction while maintaining the coding efficiency.
['Xiaoan Lu', 'Alexis M. Tourapis', 'Peng Yin', 'Jill Macdonald Boyce']
Fast mode decision and motion estimation for H.264 with a focus on MPEG-2/H.264 transcoding
254,737
This report documents our efforts to develop a Generation Challenges 2011 surface realization system by converting the shared task deep inputs to ones compatible with OpenCCG. Although difficulties in conversion led us to employ machine learning for relation mapping and to introduce several robustness measures into OpenCCG's grammar-based chart realizer, the percentage of grammatically complete realizations still remained well below results using native OpenCCG inputs on the development set, with a corresponding drop in output quality. We discuss known conversion issues and possible ways to improve performance on shared task inputs.
['Rajakrishnan Rajkumar', 'Dominic Espinosa', 'Michael White']
The OSU System for Surface Realization at Generation Challenges 2011
15,543
Reinforcement learning allows agents to use trial and error to learn intelligent behaviors, much as human beings do. However, when learning tasks become difficult, how to define the reward function becomes an imperative issue. Inverse reinforcement learning is therefore proposed to form the reward function by imitating the process of interaction between the expert and the environment. In this paper, an Adaboost-like inverse reinforcement learning method is proposed. This method uses an Adaboost classifier and upper confidence bounds to generate the reward function for a complex task. In the imitating process, the agent continuously compares the difference between itself and the expert, and this difference determines a specific weight for each state through the Adaboost classifier. The weight is combined with state confidence by upper confidence bounds to form an approximate reward function. Finally, a simulated maze environment is used to demonstrate that the proposed method can decrease the computation time.
['Kao-Shing Hwang', 'Hsuan-yi Chiang', 'Wei-Cheng Jiang']
Adaboost-like method for inverse reinforcement learning
942,331
In this paper, we present a new model for deformations of shapes. A pseudolikelihood is based on the statistical distribution of the gradient vector field of the gray level. The prior distribution is based on the probabilistic principal component analysis (PPCA). We also propose a new model based on mixtures of PPCA that is useful in the case of greater variability in the shape. A criterion of global or local object specificity based on a preliminary color segmentation of the image is included into the model. The localization of a shape in an image is then viewed as minimizing the corresponding Gibbs field. We use the exploration/selection (E/S) stochastic algorithm in order to find the optimal deformation. This yields a new unsupervised statistical method for localization of shapes. In order to estimate the statistical parameters for the gradient vector field of the gray level, we use an iterative conditional estimation (ICE) procedure. The color segmentation of the image can be computed with an exploration/selection/estimation (ESE) procedure.
['François Destrempes', 'Max Mignotte', 'Jean-François Angers']
Localization of Shapes Using Statistical Models and Stochastic Optimization
19,110
Healthy participants (n = 79), ages 9-23, completed a delay discounting task assessing the extent to which the value of a monetary reward declines as the delay to its receipt increases. Diffusion tensor imaging (DTI) was used to evaluate how individual differences in delay discounting relate to variation in fractional anisotropy (FA) and mean diffusivity (MD) within whole-brain white matter using voxel-based regressions. Given that rapid prefrontal lobe development is occurring during this age range and that functional imaging studies have implicated the prefrontal cortex in discounting behavior, we hypothesized that differences in FA and MD would be associated with alterations in the discounting rate. The analyses revealed a number of clusters where less impulsive performance on the delay discounting task was associated with higher FA and lower MD. The clusters were located primarily in bilateral frontal and temporal lobes and were localized within white matter tracts, including portions of the inferior and superior longitudinal fasciculi, anterior thalamic radiation, uncinate fasciculus, inferior fronto-occipital fasciculus, corticospinal tract, and splenium of the corpus callosum. FA increased and MD decreased with age in the majority of these regions. Some, but not all, of the discounting/DTI associations remained significant after controlling for age. Findings are discussed in terms of both developmental and age-independent effects of white matter organization on discounting behavior.
['Elizabeth A. Olson', 'Paul F Collins', 'Catalina J. Hooper', 'Ryan L. Muetzel', 'Kelvin O. Lim', 'Monica Luciana']
White matter integrity predicts delay discounting behavior in 9-to 23-year-olds: A diffusion tensor imaging study
168,951
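For context on the delay discounting task in the record above: the standard summary of such data is Mazur's hyperbolic model with a single discount rate k per participant. The abstract does not specify the exact model fitted, so the sketch below is illustrative only; the function name is ours.

```python
def discounted_value(amount: float, delay: float, k: float) -> float:
    """Mazur's hyperbolic discounting model: V = A / (1 + k * D).

    A larger discount rate k means the reward loses subjective value
    faster with delay, i.e. more impulsive choices on the task.
    """
    return amount / (1.0 + k * delay)

# A steep discounter values $100 in 30 days far less than a shallow one.
steep = discounted_value(100, 30, k=0.10)
shallow = discounted_value(100, 30, k=0.01)
```

Fitting k per participant and correlating it with FA/MD values is how "less impulsive performance" becomes a scalar that voxel-based regressions can use.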
Recently, pattern analysis of mass spectra of blood samples has attracted attention as a promising approach to early detection of cancer. However, many questions have been raised about the reliability of the reported results due to the "black box" methods employed. The main objective of this paper is to introduce a simple rule-building procedure that yields a limited number of significant linguistic rules, with which the clinician can explore the knowledge hidden in raw mass spectra of blood samples and consequently evaluate the cancer status of a new sample, without depending on complex "black box" processing. To achieve this goal, we utilized two major branches of computational intelligence: fuzzy systems and evolutionary computing. We applied fuzzy decision trees as a powerful tool for building efficient fuzzy rules and, in parallel, utilized a genetic algorithm to optimize the number of rules. Finally, we compared the performance of the proposed method with two well-known classification methods, KNN and LDA, and the results show the excellence of our algorithm.
['Amin Assareh', 'Mohammad Hassan Moradi']
Knowledge acquisition from mass spectra of blood samples using fuzzy decision tree and genetic algorithm
444,738
The paper describes a high-level pseudodeterministic ATPG that explores the DUT state space by exploiting an easy-to-traverse extended FSM model. Testing of hard-to-detect faults is thus improved. Generated test sequences are very effective in detecting both high-level faults and gate-level stuck-at faults. Thus, the reuse of test sequences generated by the proposed ATPG allows to improve the stuck-at fault coverage and to reduce the execution time of commercial gate-level ATPGs.
['G. Di Guglielmo', 'Franco Fummi', 'Cristina Marconcini', 'Graziano Pravadelli']
Improving gate-level ATPG by traversing concurrent EFSMs
475,636
Design of hybrid nonlinear spline adaptive filters for active noise control
['Vinal Patel', 'Danilo Comminiello', 'Michele Scarpiniti', 'Nithin V. George', 'Aurelio Uncini']
Design of hybrid nonlinear spline adaptive filters for active noise control
944,300
We investigate the computational complexity of several special cases of the three-dimensional matching problem where the costs are decomposable and determined by a so-called Kalmanson matrix. For the minimization version we develop an efficient polynomial time algorithm that is based on dynamic programming. For the maximization version, we show that there is a universally optimal matching (whose structure is independent of the particular Kalmanson matrix).
['Sergey Polyakovskiy', 'Frits C. R. Spieksma', 'Gerhard J. Woeginger']
The three-dimensional matching problem in Kalmanson matrices
32,999
This paper proposes neural networks for integrating compositional and non-compositional sentiment in the process of sentiment composition, a type of semantic composition that optimizes a sentiment objective. We enable individual composition operations in a recursive process to possess the capability of choosing and merging information from these two types of sources. We propose our models in neural network frameworks with structures, in which the merging parameters can be learned in a principled way to optimize a well-defined objective. We conduct experiments on the Stanford Sentiment Treebank and show that the proposed models achieve better results over the model that lacks this ability.
['Xiaodan Zhu', 'Hongyu Guo', 'Parinaz Sobhani']
Neural Networks for Integrating Compositional and Non-compositional Sentiment in Sentiment Composition
528,853
International Journal of Communication Systems, Early View (Online Version of Record published before inclusion in an issue)
['Laurent Yamen Njilla', 'Niki Pissinou', 'Kia Makki']
Game theoretic modeling of security and trust relationship in cyberspace
643,449
A theory of monoids in the category of bicomodules of a coalgebra C or C-rings is developed. This can be viewed as a dual version of the coring theory. The notion of a matrix ring context consisting of two bicomodules and two maps is introduced and the corresponding example of a C-ring (termed a matrix C-ring) is constructed. It is shown that a matrix ring context can be associated to any bicomodule which is a one-sided quasi-finite injector. Based on this, the notion of a Galois module is introduced and the structure theorem, generalising Schneider’s Theorem II [Schneider, Isr. J. Math., 72:167–195, 1990], is proven. This is then applied to the C-ring associated to a weak entwining structure and a structure theorem for a weak A-Galois coextension is derived. The theory of matrix ring contexts for a firm coalgebra (or infinite matrix ring contexts) is outlined. A Galois connection associated to a matrix C-ring is constructed.
['Tomasz Brzezinski', 'Ryan B. Turner']
The Galois Theory of Matrix C-rings
127,603
We introduce a framework to study speech production using a biomechanical model of the human vocal tract, ArtiSynth. Electromagnetic articulography data was used as input to an inverse tracking sim ...
['Saeed Dabbaghchian', 'Marc Arnela', 'Olov Engwall', 'Oriol Guasch', 'Ian Stavness', 'Pierre Badin']
Using a Biomechanical Model and Articulatory Data for the Numerical Production of Vowels
882,471
Bayesian reinforcement learning (BRL) provides a formal framework for optimal exploration-exploitation tradeoff in reinforcement learning. Unfortunately, it is generally intractable to find the Bayes-optimal behavior except for restricted cases. As a consequence, many BRL algorithms, model-based approaches in particular, rely on approximated models or real-time search methods. In this paper, we present potential-based shaping for improving the learning performance in model-based BRL. We propose a number of potential functions that are particularly well suited for BRL, and are domain-independent in the sense that they do not require any prior knowledge about the actual environment. By incorporating the potential function into real-time heuristic search, we show that we can significantly improve the learning performance in standard benchmark domains.
['Hyeoneun Kim', 'Woosang Lim', 'Kanghoon Lee', 'Yung-Kyun Noh', 'Kee-Eung Kim']
Reward shaping for model-based bayesian reinforcement learning
572,216
Compressible Reparametrization of Time-Variant Linear Dynamical Systems
['Nico Piatkowski', 'François Schnitzler']
Compressible Reparametrization of Time-Variant Linear Dynamical Systems
844,700
In mobile client/server computing environments, mobile clients access their server to get data of interest and are then disconnected because of the high cost of wireless communication. Mobile clients usually keep their own local copies in order to reduce the overhead of communicating with the server. Updates to the server database can invalidate the cached map in mobile clients. However, it is not efficient to resend the entire copied map from the server to mobile clients to resolve the invalidation. This paper proposes a log-based update propagation method that propagates the server's updates to the corresponding mobile clients by sending only update logs. The log-based update propagation scheme raises new issues as follows. First, the continuous growth of update logs degrades the speed of searching for the relevant log data for a specific client. Second, there is considerable overhead in transmitting the update logs to mobile clients over wireless communication. To solve these problems, we define unnecessary logs and then suggest methods to remove them.
['Kyounghwan An', 'Bong-Gi Jun', 'Jietae Cha', 'Bonghee Hong']
A log-based cache consistency control of spatial databases in mobile computing environments
865,615
Comprehensive 3D visual simulation for radiation therapy planning.
['Felix G. Hamza-Lup', 'Ivan Sopin', 'O Zeidan']
Comprehensive 3D visual simulation for radiation therapy planning.
605,647
With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods.
['Toan Minh Hoang', 'Hyung Gil Hong', 'Husan Vokhidov', 'Kang Ryoung Park']
Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor
875,459
In this paper, a direct solution method that is based on ranking methods of fuzzy numbers and tabu search is proposed to solve fuzzy multi-objective aggregate production planning problem. The parameters of the problem are defined as triangular fuzzy numbers. During problem solution four different fuzzy ranking methods are employed/tested. One of the primary objectives of this study is to show that how a multi-objective aggregate production planning problem which is stated as a fuzzy mathematical programming model can also be solved directly (without needing a transformation process) by employing fuzzy ranking methods and a metaheuristic algorithm. The results show that this can be easily achieved.
['Adil Baykasoğlu', 'Tolunay Göçken']
Multi-objective aggregate production planning with fuzzy parameters
254,015
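The aggregate-planning abstract above does not name the four fuzzy ranking methods it tests, so as a hypothetical illustration only, here is one widely used ranking for triangular fuzzy numbers, centroid defuzzification; the class and function names below are ours, not the paper's.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TriangularFuzzyNumber:
    a: float  # pessimistic (lower) bound
    b: float  # most likely value (peak of the membership function)
    c: float  # optimistic (upper) bound

    def centroid(self) -> float:
        # Center-of-gravity defuzzification of a triangular membership
        # function collapses the fuzzy value to a single crisp number.
        return (self.a + self.b + self.c) / 3.0


def rank_costs(costs):
    # For fuzzy costs, a smaller centroid ranks better (earlier).
    return sorted(costs, key=lambda t: t.centroid())


plans = [TriangularFuzzyNumber(8, 10, 15), TriangularFuzzyNumber(6, 9, 12)]
best = rank_costs(plans)[0]
```

A ranking like this is what lets a metaheuristic such as tabu search compare candidate plans directly on fuzzy objective values, without first transforming the fuzzy program into a crisp one.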
Comparative Case Analysis (CCA) is an important tool for criminal investigation and crime theory extraction. It analyzes the commonalities and differences between a collection of crime reports in order to understand crime patterns and identify abnormal cases. A big challenge of CCA is the data processing and exploration. The traditional manual approach can no longer cope with the increasing volume and complexity of the data. In this paper we introduce a novel visual analytics system, Spherical Similarity Explorer (SSE), that automates the data processing and provides interactive visualizations to support data exploration. We illustrate the use of the system with use cases that involve real-world application data and evaluate the system with criminal intelligence analysts.
['Leishi Zhang', 'Chris Rooney', 'Lev Nachmanson', 'B. L. William Wong', 'Bum Chul Kwon', 'Florian Stoffel', 'Michael Hund', 'Nadeem Qazi', 'Uchit Singh', 'Daniel A. Keim']
Spherical Similarity Explorer for Comparative Case Analysis
708,437
Tuning Chess Evaluation Function Parameters using Differential Evolution Algorithm
['Borko Boskovic', 'Janez Brest']
Tuning Chess Evaluation Function Parameters using Differential Evolution Algorithm
617,617
Though already intrinsically demanding, the development of real-time embedded on-board software is often made harsher by the constraining nature of the execution environment and the general lack of suitable support. One of the key needs in the design of these systems is to get guidance towards the definition of a system that is truly analysable against timing requirements; specialised methods and tools are needed to accommodate this particular demand. This paper reports on the use of a novel design method especially tailored towards the construction of hard real-time systems.
['Tullio Vardanega']
Experience with the development of hard real-time embedded Ada software
46,402
Optimal Control of Multi-phase Movements with Learned Dynamics
['Andreea Radulescu', 'Jun Nakanishi', 'Sethu Vijayakumar']
Optimal Control of Multi-phase Movements with Learned Dynamics
598,663
With the increasing amount of available data, distributed data processing systems like Apache Flink and Apache Spark have emerged that allow large-scale datasets to be analyzed. However, such engines introduce significant computational overhead compared to non-distributed implementations. Therefore, the question arises when using a distributed processing approach is actually beneficial. This paper helps to answer this question with an evaluation of the performance of the distributed data processing framework Apache Flink. In particular, we compare Apache Flink executed on up to 50 cluster nodes to single-threaded implementations executed on a typical laptop for three different benchmarks: TPC-H Query 10, Connected Components, and Gradient Descent. The evaluation shows that the performance of Apache Flink is highly problem dependent, ranging from early outperformance in the case of TPC-H Query 10 to slower runtimes in the case of Connected Components. The reported results give hints as to the problems, input sizes, and cluster resources for which using a distributed data processing system like Apache Flink or Apache Spark is sensible.
['Ilya Verbitskiy', 'Lauritz Thamsen', 'Odej Kao']
When to Use a Distributed Dataflow Engine: Evaluating the Performance of Apache Flink
990,318
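To give a concrete sense of the single-threaded baselines used in comparisons like the one above (a sketch, not the authors' code): Connected Components on one machine reduces to union-find over the edge list, which is what makes the laptop implementation hard to beat on modest inputs.

```python
def connected_components(edges):
    """Label each vertex with a component representative via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        # Path halving keeps the trees flat, giving near-linear total time.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two components

    return {x: find(x) for x in parent}


labels = connected_components([(1, 2), (2, 3), (4, 5)])
```

A distributed engine must instead iterate label propagation over partitioned edge data, paying scheduling and network costs on every superstep.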
Nowadays, the Internet plays a major role in our day-to-day activities, e.g., online transactions, online shopping, and other network-related applications. The Internet suffers from slow convergence of routing protocols after a network failure, which has become a growing problem. Multiple Routing Configurations [MRC] recovers the network from single node/link failures, but does not protect the network from multiple node/link failures. In this paper, we propose Enhanced MRC [EMRC] to support recovery from multiple node/link failures during data transmission in IP networks without frequent global re-convergence. By recovering from these failures, data transmission in the network becomes fast.
['T. Anji Kumar', 'M. H. M. Krishna Prasad']
Enhanced Multiple Routing Configurations For Fast IP Network Recovery From Multiple Failures
322,168
This paper presents a modified Hopfield neural network HNN for solving the system-level fault diagnosis problem which aims at identifying the set of faulty nodes. This problem has been extensively studied in the last three decades. Nevertheless, identifying the set of all faulty nodes using only partial syndromes, i.e. when some of the testing or comparison outcomes are missing prior to initiating the diagnosis phase, remains an outstanding research issue. The new HNN-based diagnosis algorithm does not require any prior learning or knowledge about the system, nor about any faulty situation, hence providing a better generalisation performance. Results from a thorough simulation study demonstrate the effectiveness of the HNN-based fault diagnosis algorithm in terms of diagnosis correctness, diagnosis latency and diagnosis scalability, for randomly generated diagnosable systems of different sizes and under various fault scenarios. We have also conducted extensive simulations using partial syndromes. Simulations showed that the HNN-based diagnosis performed efficiently, i.e. diagnosis correctness was around 99% when at most half of the test or comparison outcomes are missing, making it a viable alternative to existing diagnosis algorithms.
['Mourad Elhadef', 'Lotfi Ben Romdhane']
Fault diagnosis using partial syndromes: a modified Hopfield neural network approach
587,590
Location-Sharing Systems With Enhanced Privacy in Mobile Online Social Networks
['Jin Li', 'Hongyang Yan', 'Zheli Liu', 'Xiaofeng Chen', 'Xinyi Huang', 'Duncan S. Wong']
Location-Sharing Systems With Enhanced Privacy in Mobile Online Social Networks
697,722
Using the CREP system we show that matrix representations of representation-finite algebras can be transformed into normal forms consisting of (0, 1)-matrices.
['Peter Dräxler']
Normal Forms for Representations of Representation-finite Algebras
243,627
Even more practical secure logging: Tree-based Seekable Sequential Key Generators.
['Giorgia Azzurra Marson', 'Bertram Poettering']
Even more practical secure logging: Tree-based Seekable Sequential Key Generators.
760,142
This paper is devoted to develop a new matrix scheme for solving two-dimensional time-dependent diffusion equations with Dirichlet boundary conditions. We first transform these equations into equivalent integro partial differential equations (PDEs). Such these integro-PDEs contain both of the initial and boundary conditions and can be solved numerically in a more appropriate manner. Subsequently, all the existing known and unknown functions in the latter equations are approximated by Bernoulli polynomials and operational matrices of differentiation and integration together with the completeness of these polynomials can be used to reduce the integro-PDEs into the associated algebraic generalized Sylvester equations. For solving these algebraic equations, an efficient Krylov subspace iterative method (i.e., BICGSTAB) is implemented. Two numerical examples are given to demonstrate the efficiency, accuracy, and versatility of the proposed method.
['Bashar Zogheib', 'Emran Tohidi']
A new matrix method for solving two-dimensional time-dependent diffusion equations with Dirichlet boundary conditions
838,241
Multicast Protocols: Combining Real-Time and Reliability
['D Alstein']
Multicast Protocols: Combining Real-Time and Reliability
814,932
Abstract This paper reviews past work comparing modern speech recognition systems and humans to determine how far recent dramatic advances in technology have progressed towards the goal of human-like performance. Comparisons use six modern speech corpora with vocabularies ranging from 10 to more than 65,000 words and content ranging from read isolated words to spontaneous conversations. Error rates of machines are often more than an order of magnitude greater than those of humans for quiet, wideband, read speech. Machine performance degrades further below that of humans in noise, with channel variability, and for spontaneous speech. Humans can also recognize quiet, clearly spoken nonsense syllables and nonsense sentences with little high-level grammatical information. These comparisons suggest that the human-machine performance gap can be reduced by basic research on improving low-level acoustic-phonetic modeling, on improving robustness with noise and channel variability, and on more accurately modeling spontaneous speech.
['Richard P. Lippmann']
Speech recognition by machines and humans
459,205
Quantitative isoperimetric inequalities are shown for anisotropic surface energies where the isoperimetric deficit controls both the Fraenkel asymmetry and a measure of the oscillation of the boundary with respect to the boundary of the corresponding Wulff shape.
['Robin Neumayer']
A strong form of the quantitative Wulff inequality
691,492
Presents a simple yet comprehensive approach that enables the stiffness of a tripod-based parallel kinematic machine to be quickly estimated. The approach arises from the basic idea for the determination of the equivalent stiffness of a group of serially connected linear springs and can be implemented in two steps. In the first step, the machine structure is decomposed into two substructures associated with the machine frame and parallel mechanism. The stiffness models of these two substructures are formulated by means of the virtual work principle. This is followed by the second step that enables the stiffness model of the machine structure as a whole to be achieved via linear superposition. The three-dimensional representations of the machine stiffness within the usable workspace are depicted and the contributions of different component rigidities to the machine stiffness are discussed. The results are compared with those obtained through experiments.
['T. Huang', 'M.P. Mei', 'Xingyu Zhao', 'L.H. Zhou', 'Dong Zhang', 'Z.P. Zeng', 'D.J. Whitehouse']
Stiffness estimation of a tripod-based parallel kinematic machine
123,301
Accurate continuous geographic assignment from low- to high-density SNP data
['Gilles Guillot', 'Hákon Jónsson', 'Antoine Hinge', 'Nabil Manchih', 'Ludovic Orlando']
Accurate continuous geographic assignment from low- to high-density SNP data
548,764
A number of interesting problems that I have addressed over the years which yielded surprisingly simple results will be presented. Many of these had intuitively pleasing interpretations or especially simple proofs and/or insights.
['Leonard Kleinrock']
Some of my simple results
487,846
It is easy for adversaries to mount node replication attacks due to the unattended nature of wireless sensor networks. In several replica node detection schemes, witness nodes fail to work before replicas are detected due to the lack of effective random verification. This paper presents a novel distributed detection protocol to counteract node replication attacks. Our scheme distributes node location information to multiple randomly selected cells and then linear-multicasts the information for verification from the localized cells. Simulation results show that the proposed protocol improves detection efficiency compared with various existing protocols and prolongs the lifetime of the overall network.
['Yuping Zhou', 'Zhenjie Huang', 'Juan Wang', 'Rufeng Huang', 'Dongmei Yu']
An energy-efficient random verification protocol for the detection of node clone attacks in wireless sensor networks
313,128
The identification and construction of patterns play a fundamental role in learning, but to date design patterns have been used for communication between professionals, rather than for learning purposes. We adapt the design pattern approach and develop software for mobile and other touch-sensitive devices in order to support design students to learn with patterns. We describe the multi-platform system and its gesture-based interaction for formal and informal environments, and present some application scenarios.
['Henning Breuer', 'Gustavo Zurita', 'Nelson Baloian', 'Mitsuji Matsumoto']
Mobile Learning with Patterns
260,164
So that multicast routing characteristics are reflected in wireless mesh networks, a multicast routing metric is required for quantifying the multicast tree cost under wireless environments. This paper proposes a new multicast routing metric that considers each receiver's different link quality on a wireless multicast channel as well as the wireless multicast advantage. The proposed multicast-tree transmission ratio quantifying the multicast tree cost is the product of the multicast transmission ratios of all nodes in the constructed multicast tree. In this paper, we also propose a wireless multicast routing which constructs the multicast tree by maximizing the multicast-tree transmission ratio in wireless mesh networks with multiple gateways, and design the multicast routing protocol based on the AODV protocol. Since the multicast tree produced by our proposed routing algorithm contains the nodes having the maximum multicast-node transmission ratio, our proposed multicast routing shows a higher delivery ratio and a lower average delay than original multicast AODV and the multicast routing that minimizes the forwarding nodes in its multicast tree. In comparison with other multicast routings, simulation results show that the proposed multicast heuristics maximizing the multicast-tree transmission ratio construct a cost-effective multicast tree in terms of delivery ratio, average delay, and required network resources.
['Jaehyung Park', 'Younho Jung', 'Yongmin Kim']
Cost-effective multicast routings in wireless mesh networks with multiple gateways
868,395
No mechanical harvesters for the fresh market apple industry are commercially available. The absence of automated harvesting technology is a critical problem because of rising production costs and increasing uncertainty about future labor availability. This paper presents the preliminary design of a robotic apple harvester. The approach adopted was to develop a low-cost, ‘undersensed’ system for modern orchard systems with fruiting wall architectures. A machine vision system fuses the Circular Hough Transform and blob analysis to detect clustered and occluded fruit. The design includes a custom, six degree of freedom manipulator with an underactuated, passively compliant end-effector. After fruit localization, the system makes a linear approach to the apple and replicates the human picking process. Integrated testing of the robotic harvesting system has been completed in a laboratory environment with a replica apple tree for proof-of-concept demonstration. Experimental results show that the system picked 95 of the 100 fruit attempted with average localization and picking times of 1.2 and 6.8 seconds, respectively, per fruit. Additional work planned in preparation for field evaluation in a commercial orchard is also described.
['Joseph R. Davidson', 'Abhisesh Silwal', 'Cameron J. Hohimer', 'Manoj Karkee', 'Changki Mo', 'Qin Zhang']
Proof-of-concept of a robotic apple harvester
962,421
A Method for Re-using Existing ITIL Processes for Creating an ISO 27001 ISMS Process Applied to a High Availability Video Conferencing Cloud Scenario
['Kristian Beckers', 'Stefan Hofbauer', 'Gerald Quirchmayr', 'Christopher Wills']
A Method for Re-using Existing ITIL Processes for Creating an ISO 27001 ISMS Process Applied to a High Availability Video Conferencing Cloud Scenario
563,223
The present article describes an architecture proposed to identify the probable actions taken by an actor involved in a software development process. Such a process is a collaborative activity that can be associated with a context comprising tasks and interactions for information exchange, targeting the manipulation of artifacts being developed. In this work, context is described using ontologies with concepts related to activities, events, and devices. The events initiated by the actor are detected through sensors, e.g., data about the execution platform, IDE data collected with the Hackystat [1] tool, web navigation data, etc. Finally, a scenario is introduced to illustrate the application of the proposed architecture.
['Josivan Pereira de Souza', 'Gustavo Alberto Gimenez Lugo', 'Cesar Augusto Tacla']
Inferring activities of an actor by means of context ontologies
910,281
Urban segregation has received increasing attention in the literature due to the negative impacts that it has on urban populations. Indices of urban segregation are useful instruments for understanding the problem as well as for setting up public policies. The usefulness of spatial segregation indices depends on their ability to account for the spatial arrangement of population and to show how segregation varies across the city. This paper proposes global spatial indices of segregation that capture interaction among population groups at different scales. We also decompose the global indices to obtain local spatial indices of segregation, which enable visualization and exploration of segregation patterns. We propose the use of statistical tests to determine the significance of the indices. The proposed indices are illustrated using an artificial dataset and a case study of socio-economic segregation in Sao Jose dos Campos (SP, Brazil).
['Flávia da Fonseca Feitosa', 'Gilberto Câmara', 'Antônio Miguel Vieira Monteiro', 'Thomas Koschitzki', 'Marcelino Pereira dos Santos Silva']
Global and local spatial indices of urban segregation
454,626
This study proposes a variation immunological system (VIS) algorithm with radial basis function neural network (RBFN) learning for function approximation and the exercise of industrial computer (IC) sales forecasting. The proposed VIS algorithm was applied to the RBFN to execute the learning process for adjusting the network parameters involved. To compare the performance of relevant algorithms, three benchmark problems were used to justify the results of the experiment. With better accuracy in forecasting, the trained RBFN can be practically utilized in the IC sales forecasting exercise to make predictions and could enhance business profit.
['Zhen-Yao Chen', 'R. J. Kuo']
Immunological Algorithm-based Neural Network Learning for Sales Forecasting
577,164
Recent trends toward increased flexibility and configurability in emerging applications present demanding challenges for implementing systems that incorporate such capabilities. The resulting application configuration space is generally much larger than any one hardware implementation can support. We present an overview of a new data-adaptive approach to rapid design and implementation of such highly configurable applications. In support of this data-adaptable approach, we demonstrate an efficient and flexible hardware/software communication middleware to support the seamless communication between hardware and software tasks at runtime. We highlight the flexibility of this interface and present an initial case study with results demonstrating the performance capabilities and area requirements.
['Sachidanand Mahadevan', 'Vijay Shankar Gopinath', 'Roman L. Lysecky', 'Jonathan Sprinkle', 'Jerzy W. Rozenblit', 'Michael W. Marcellin']
Hardware/Software Communication Middleware for Data Adaptable Embedded Systems
415,026
For four (or more) transmitters, a new design of differential space-time block code allowing symbol-wise decoding is presented in this letter. The new design not only has the minimum (symbol-wise) decoding complexity as that by Yuen, Guan and Tjhung (YGT) but also yields a lower error rate. While the YGT code uses a specially designed symbol constellation, the new code uses a conventional QAM with a rotation. At a high data rate such as 3 bps/Hz, the new design with symbol-wise decoding complexity can even yield a lower error rate than the code by Zhu and Jafarkhani that has the pair-wise decoding complexity.
['Yu Chang', 'Yingbo Hua', 'Brian M. Sadler']
A New Design of Differential Space-Time Block Code Allowing Symbol-Wise Decoding
272,338
We present an implementation in the functional programming language Haskell of the PLE decomposition of matrices over division rings. We discover in our benchmarks that in a relevant number of cases it is significantly faster than the C-based implementation provided in FLINT. Describing the guiding principles of our work, we introduce the reader to basic ideas from high performance functional programming.
['Alexandru Ghitza', 'Martin Westerholt-Raum']
HLinear: Exact Dense Linear Algebra in Haskell
727,913
Many studies of human postural control use data from video-captured discrete marker locations to analyze via complex inverse kinematic reconstruction the postural responses to a perturbation. We propose here that Principal Component Analysis of this marker data provides a simpler way to get an overview of postural perturbation responses. Using short (1, 4, and 16 mm) anterior platform step translations that are on the order of a young adult's normal sway path length, we find that the low order eigenmodes (which we call eigenposes) of the time-series marker data correspond dominantly to a simple anterior-posterior pendular motion about the ankle, and secondarily (and with less energy) to hip flexion and extension. A third much weaker mode is occasionally seen that is represented by knee flexion.
['Joseph D. Skufca', 'Erik M. Bollt', 'Rakesh Pilkar', 'Charles J. Robinson']
Eigenposes: Using principal components to describe body configuration for analysis of postural control dynamics
187,159
A Wireless Sensor Network (WSN) is typically deployed in places where no electric source is provided, meaning that battery consumption is a crucial concern. Due to their deeply embedded, pervasive nature, applications running on WSNs need to adapt to changes in the physical environment or user preferences. In addition, developers of these applications must pay attention to a set of additional concerns such as limited hardware resources and the management of a set of sensor nodes. To appropriately address these concerns, mobile agent-based middlewares such as Agilla have been proposed. Applications for these middlewares are executed through communications among agents, so a common task is to look up the agents. In this paper, we propose an efficient lookup approach for mobile agent-based middlewares in WSNs. We evaluate the advantages of our proposal through comparison with traditional lookup approaches.
['Hiroaki Fukuda']
An efficient agent lookup approach in middlewares for mobile agents
664,296
In computational grids, a virtual organization (VO) is a dynamic coupling of multiple Linux/Unix nodes for resource sharing under specific policies. Currently, VO support functionalities are generally implemented as grid middleware. However, the usability of grids is often impaired by the complexity of configuring and maintaining a new layer of security infrastructure as well as adapting to new interfaces of security-enabled services. In this paper, we present an OS-level approach to provide native VO support functionalities, which is a part of the XtreemOS project [18]. Our approach adopts pluggable frameworks existing in current OSes as extension points to implement VO support, avoiding modification of kernel code and easily turning traditional OSes into grid-aware ones. The performance evaluation of the NAS parallel benchmarks (NPB) shows that our current implementation incurs trivial overhead on original systems.
['An Qin', 'Haiyan Yu', 'Chengchun Shu', 'Xiaoqian Yu', 'Yvon Jégou', 'Christine Morin']
Operating System-Level Virtual Organization Support in XtreemOS
65,600
Similarity search structures for metric data typically bound object partitions by ball regions. Since regions can overlap, a relevant issue is to estimate the proximity of regions in order to predict the number of objects in the regions' intersection. This paper analyzes the problem using a probabilistic approach and provides a solution that effectively computes the proximity through realistic heuristics that only require small amounts of auxiliary data. An extensive simulation to validate the technique is provided. An application is developed to demonstrate how the proximity measure can be successfully applied to the approximate similarity search. Search speedup is achieved by ignoring data regions whose proximity to the query region is smaller than a user-defined threshold. This idea is implemented in a metric tree environment for the similarity range and "nearest neighbors" queries. Several measures of efficiency and effectiveness are applied to evaluate proposed approximate search algorithms on real-life data sets. An analytical model is developed to relate proximity parameters and the quality of search. Improvements of two orders of magnitude are achieved for moderately approximated search results. We demonstrate that the precision of proximity measures can significantly influence the quality of approximated algorithms.
['Giuseppe Amato', 'Fausto Rabitti', 'Pasquale Savino', 'Pavel Zezula']
Region proximity in metric spaces and its use for approximate similarity search
399,566
A literature review uncovered six distinctive indicators of failed information epidemics in the scientific journal literature: (1) presence of seminal paper(s), (2) rapid growth/decline in author frequency, (3) multi-disciplinary research, (4) epidemic growth/decline in journal publication frequency, (5) predominance of rapid communication journal publications, and (6) increased multi-authorship. These indicators were applied to journal publication data from two known failed information epidemics, Polywater and Cold Nuclear Fusion. Indicators 1-4 were distinctive of the failed epidemics, Indicator 6 was not, and Indicator 5 might be. Further bibliometric study of these five indicators in the context of other epidemic literatures is needed.
['Eric Ackermann']
Indicators of failed information epidemics in the scientific journal literature: A publication analysis of Polywater and Cold Nuclear Fusion
121,998
OBM2OWL Patterns: Spotlight on OWL Modeling Versatility.
['Marek Dudás', 'Tomás Hanzal', 'Vojtech Svátek', 'Ondrej Zamazal']
OBM2OWL Patterns: Spotlight on OWL Modeling Versatility.
741,321
Transactions on Emerging Telecommunications Technologies: Early View (Online Version of Record, published before inclusion in an issue)
['Hongxiang Shao', 'Youming Sun', 'Hangsheng Zhao', 'Wei Zhong', 'Yuhua Xu']
Locally cooperative traffic-offloading in multi-mode small cell networks via potential games
692,768
Superpixel matching-based depth propagation for 2D-to-3D conversion with joint bilateral filtering
['Cheolkon Jung', 'Jiji Cai']
Superpixel matching-based depth propagation for 2D-to-3D conversion with joint bilateral filtering
681,359
In partial response systems with maximum likelihood sequence estimation, a short list of error events dominates. In this paper we introduce a graph-based construction of high-rate codes capable of correcting errors from a given list. We define a directed graph describing a universe of error-event-detecting codes, and construct a code by tracing a path through the graph that gives the best probability of error. We demonstrate a substantial SNR gain when these codes are used in a scheme that combines error event detection and list soft decoding.
['Bane Vasic']
A graph based construction of high-rate soft decodable codes for partial response channels
324,834
In this paper, a block-based architecture of digital pixel sensor (DPS) array integrated with an on-line compression algorithm is proposed. The proposed technique is based on a block divided storage and compression scheme of the original image. Image capture, storage, and reordering are completed simultaneously and performed on-line while storing pixel value into the on-chip memory array. More than 60% of memory saving is achieved using the proposed block-based design. Furthermore, block-based design greatly reduces the accumulation error inherent in DPCM type of processing. Simulation results show that the PSNR result can reach around 30 dB with a compression ratio of less than 3 BPP.
['Milin Zhang', 'Amine Bermak']
Architecture of a Low Storage Digital Pixel Sensor Array with an On-Line Block-Based Compression
239,518
The memory polynomial model is widely used for the behavioural modelling of radio-frequency non-linear power amplifiers having memory effects. One challenging task related to this model is the selection of its dimension which is defined by the non-linearity order and the memory depth. This study presents an approach suitable for the selection of the model dimension in memory polynomial-based power amplifiers’ behavioural models. The proposed approach uses a hybrid criterion that takes into account the model accuracy and its complexity. The proposed technique is tested on two memory polynomial-based behavioural models. Experimental validation carried out using experimental data of two Doherty power amplifiers, built using different transistor technologies and tested with two different signals, illustrates consistent advantages of the proposed technique as it significantly reduces the model dimension by more than 60% without compromising its accuracy.
['Oualid Hammi', 'Abderezak Miftah']
Complexity-aware-normalised mean squared error ‘CAN’ metric for dimension estimation of memory polynomial-based power amplifiers behavioural models
636,630
This paper shows the effectiveness of using optimized MPI calls for MPI based applications on different architectures. Using optimized MPI calls can result in reasonable performance gain for most of MPI based applications running on most of high-performance distributed systems. Since relative performance of different MPI function calls and system architectures can be uncorrelated, tuning system-dependent MPI applications by exploring the alternatives of using different MPI calls is the simplest but most effective optimization method. The paper first shows that for a particular system, there are noticeable performance differences between using various MPI calls that result in the same communication pattern. These performance differences are in fact not similar across different systems. The paper then shows that good performance optimization for an MPI application on different systems can be obtained by using different MPI calls for different systems. The communication patterns that were experimented in this paper include the point-to-point and collective communications. The MPI based application used for this study is the general-purpose transient dynamic finite element application and the benchmark problems are the public domain 3D car crash problems. The experiment results show that for the same communication purpose, using alternative MPI calls can result in quite different communication performance on the Fujitsu HPC2500 system and the 8-node AMD Athlon cluster, but very much the same performance on the other systems such as the Intel Itanium2 and the AMD Opteron clusters.
['Thuy T. Le']
Tuning system-dependent applications with alternative MPI calls: a case study
197,975
A new method of model registration is proposed using graphical templates. A decomposable graph of landmarks is chosen in the template image. All possible candidates for these landmarks are found in the data image using robust relational local operators. A dynamic programming algorithm on the template graph finds the optimal match to a subset of the candidate points in polynomial time. This combination-local operators to describe points of interest/landmarks and a graph to describe their geometric arrangement in the plane-yields fast and precise matches of the model to the data with no initialization required. In addition, it provides a generic tool box for modeling shape in a variety of applications. This methodology is applied in the context of T2-weighted magnetic resonance (MR) axial and sagittal images of the brain to identify specific anatomies.
['Yali Amit']
Graphical shape templates for automatic anatomy detection with applications to MRI brain scans
396,898
This paper investigates geometric and algorithmic properties of the Voronoi diagram for a transportation network on the Euclidean plane. In the presence of a transportation network, the distance is measured as the length of the shortest (time) path. In doing so, we introduce a needle, a generalized Voronoi site. We present an O(nm^2 + m^3 + nm log n) algorithm to compute the Voronoi diagram for a transportation network on the Euclidean plane, where n is the number of given sites and m is the complexity of the given transportation network. Moreover, in the case that the roads in a transportation network have only a constant number of directions and speeds, we propose two algorithms; one needs O(nm + m^2 + n log n) time with O(m(n + m)) space and the other O(nm log n + m^2 log m) time with O(n + m) space.
['Sang Won Bae', 'Kyung-Yong Chwa']
VORONOI DIAGRAMS FOR A TRANSPORTATION NETWORK ON THE EUCLIDEAN PLANE
5,910
Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials.
['Satoko Hisanaga', 'Kaoru Sekiyama', 'Tomohiko Igasaki', 'Nobuki Murayama']
Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials.
745,153
We investigate the asymptotic performance of a multi-input multi-output decision-feedback (DF) equalizer used to detect a multicarrier (MC) signal based on a filter-bank and transmitted over a linear dispersive channel. We derive the optimum DF structure for a minimum-mean square-error criterion. We basically show that with infinite length forward and feedback filters and at high signal-to-noise ratio, the geometrical mean of prediction errors does not depend on the paraunitary filter-bank used in the transmitter. Hence, it appears that uncoded filter-bank based MC transmission and single carrier transmission, both used over the same channel and with optimum DF, lead to the same achievable bit rate.
['Luc Vandendorpe', 'Jérôme Louveaux', 'B. Maison', 'Antoine Chevreuil']
About the asymptotic performance of MMSE MIMO DFE for filter-bank based multicarrier transmission
185,328
With the fast development of robotics and intelligent vehicles, there has been much research work on modeling and motion control of autonomous vehicles. However, due to model complexity, and unknown disturbances from dynamic environment, the motion control of autonomous vehicles is still a difficult problem. In this paper, a novel self-learning path-tracking control method is proposed for a car-like robotic vehicle, where kernel-based approximate dynamic programming (ADP) is used to optimize the controller performance with little prior knowledge on vehicle dynamics. The kernel-based ADP method is a recently developed reinforcement learning algorithm called kernel least-squares policy iteration (KLSPI), which uses kernel methods with automatic feature selection in policy evaluation to get better generalization performance and learning efficiency. By using KLSPI, the lateral control performance of the robotic vehicle can be optimized in a self-learning and data-driven style. Compared with previous learning control methods, the proposed method has advantages in learning efficiency and automatic feature selection. Simulation results show that the proposed method can obtain an optimized path-tracking control policy only in a few iterations, which will be very practical for real applications.
['Xin Xu', 'Hongyu Zhang', 'Bin Dai', 'Hangen He']
Self-learning path-tracking control of autonomous vehicles using kernel-based approximate dynamic programming
108,100
Ontology is widely used to solve data heterogeneity problems on the semantic web, but the available ontologies could themselves introduce heterogeneity. In order to reconcile these ontologies to implement semantic interoperability, we need to find the relationships among the entities in various ontologies, and the process of identifying them is called ontology alignment. In all the existing matching systems that use evolutionary approaches to optimize their parameters, a reference alignment between the two ontologies to be aligned should be given in advance, which can be very expensive to obtain, especially when the scale of the ontologies is considerably large. To address this issue, in this paper we propose a novel approach that utilizes NSGA-II to optimize the ontology alignments without using the reference alignment. In our approach, an adaptive aggregation strategy is presented to improve the efficiency of the optimizing process, and two approximate evaluation measures, namely match coverage and match ratio, are introduced to replace the classic recall and precision on the reference alignment to evaluate the quality of the alignments. Experimental results show that our approach is effective and can find solutions that are very close to those obtained by approaches using the reference alignment, and the quality of the alignments is in general better than that of state-of-the-art ontology matching systems such as GOAL and SAMBO.
['Xingsi Xue', 'Yuping Wang', 'Weichen Hao', 'Juan Hou']
OPTIMIZING ONTOLOGY ALIGNMENTS THROUGH NSGA-II WITHOUT USING REFERENCE ALIGNMENT
665,366