We study the pricing problem for a firm with two servers where heterogeneous customers can choose between deterministic service and probabilistic service. We find that different queueing priority policies do not affect the firm’s revenue but do affect its optimal pricing strategies. Specifically, when the flexible customers (who choose probabilistic service) have high priority, the optimal price of the deterministic service can be lower than that of the probabilistic service in a small or moderate market.
['Xiaoya Xu', 'Zhaotong Lian', 'Xin Li', 'Pengfei Guo']
A Hotelling queue model with probabilistic service
831,325
Differential privacy has emerged as an important standard for privacy preserving computation over databases containing sensitive information about individuals. Research on differential privacy spanning a number of research areas, including theory, security, database, networks, machine learning, and statistics, over the last decade has resulted in a variety of privacy preserving algorithms for a number of analysis tasks. Despite maturing research efforts, the adoption of differential privacy by practitioners in industry, academia, or government agencies has so far been rare. Hence, in this tutorial, we will first describe the foundations of differentially private algorithm design that cover the state of the art in private computation on tabular data. In the second half of the tutorial we will highlight real world applications on complex data types, and identify research challenges in applying differential privacy to real world applications.
['Ashwin Machanavajjhala', 'Xi He', 'Michael Hay']
Differential privacy in the wild: a tutorial on current practices & open challenges
933,232
Preferential attachment models for random graphs are successful in capturing many characteristics of real networks such as power law behavior. At the same time they lack flexibility to take vertex to vertex affinities into account, a feature that is commonly used in many link recommendation algorithms. We propose a random graph model based on both node attributes and preferential attachment. This approach overcomes the limitation of existing models on expressing vertex affinity and on reflecting properties of different subgraphs. We analytically prove that our model preserves the power law behavior in the degree distribution as expressed by natural graphs and we show that it satisfies the small world property. Experiments show that our model provides an excellent fit of many natural graph statistics and we provide an algorithm to infer the associated affinity function efficiently.
['Jay Lee', 'Manzil Zaheer', 'Stephan Günnemann', 'Alexander J. Smola']
Preferential Attachment in Graphs with Affinities
94,976
Wave-based control of under-actuated, flexible systems has many advantages over other methods. It considers actuator motion as launching a mechanical wave into the flexible system which it absorbs on its return to the actuator. The launching and absorbing proceed simultaneously. This simple, intuitive idea leads to robust, generic, highly efficient, precise, adaptable controllers, allowing rapid and almost vibrationless re-positioning of the system, using only sensors collocated at the actuator–system interface. It has been very successfully applied to simple systems such as mass–spring strings, systems of Euler–Bernoulli beams, planar mass–spring arrays, and flexible three-dimensional space structures undergoing slewing motion. In common with most other approaches, this work also assumed that, during a change of position, the forces from the environment were negligible in comparison with internal forces and torques. This assumption is not always valid. Strong external forces considerably complicate the f...
["William J. O'Connor", 'Hossein Habibi']
Wave-based control of under-actuated flexible structures with strong external disturbing forces
244,814
This paper presents an adaptive sensor activation scheme for target tracking in wireless sensor networks that dynamically adjusts the range of selective sensor activation instead of using a fixed one. A closed-loop control algorithm for the adaptive sensor activation range is designed according to online feedback of the tracking quality. The failed-tracking case can also be handled by the proposed algorithm. Extensive simulation results show that the adaptive sensor activation achieves higher performance in terms of tracking quality and energy efficiency.
['Jiming Chen', 'Kejie Cao', 'Youxian Sun', 'Xuemin Shen']
Adaptive Sensor Activation for Target Tracking in Wireless Sensor Networks
409,091
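The closed-loop adjustment of the activation range described in the abstract above can be sketched with simple proportional feedback. The quality score in [0, 1], the gain, and the radius bounds below are illustrative assumptions, not the paper's actual control law:

    def update_activation_radius(radius, quality, target=0.9,
                                 gain=5.0, r_min=5.0, r_max=50.0):
        # Widen the activation range when tracking quality drops below the
        # target; shrink it to save energy when quality exceeds the target.
        radius += gain * (target - quality)
        return min(max(radius, r_min), r_max)

    def recover_radius(radius, r_max=50.0):
        # Failed-tracking case: widen the range aggressively to re-acquire.
        return min(2.0 * radius, r_max)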
Quality of service for high-bandwidth or delay-sensitive applications in the Internet, such as streaming media and online games, can be significantly improved by replicating server content. We present a decentralized algorithm that allocates server resources to replicated servers in large-scale client-server networks to reduce network distance between each client and the nearby replicated server hosting the resources of interest to that client. Preliminary simulation results show that our algorithm converges quickly to an allocation that reduces the expected client-server distance by almost half compared to the distance when the assignment of replicated servers is done at random.
['Bongjun Ko', 'Dan Rubenstein']
Distributed server replication in large scale networks
1,355
Optimizing the AES S-Box using SAT.
['Carsten Fuhs', 'Peter Schneider-Kamp']
Optimizing the AES S-Box using SAT.
658,635
The modern process of discovering candidate molecules in the early drug discovery phase includes a wide range of approaches to extract vital information from the intersection of biology and chemistry. A typical strategy in compound selection involves compound clustering based on chemical similarity to obtain representative, chemically diverse compounds (not incorporating potency information). In this paper, we propose an integrative clustering approach that makes use of both biological (compound efficacy) and chemical (structural features) data sources for the purpose of discovering a subset of compounds with aligned structural and biological properties. The datasets are integrated at the similarity level by assigning complementary weights to produce a weighted similarity matrix, serving as a generic input to any clustering algorithm. This new analysis workflow is a semi-supervised method since, after the determination of clusters, a secondary analysis is performed to find differentially expressed genes associated with the derived integrated cluster(s) to further explain the compound-induced biological effects inside the cell. In this paper, datasets from two drug development oncology projects are used to illustrate the usefulness of the weighted similarity-based clustering approach to integrate multi-source high-dimensional information to aid drug discovery. Compounds that are structurally and biologically similar to the reference compounds are discovered using this proposed integrative approach.
['Nolen Perualila-Tan', 'Ziv Shkedy', 'Willem Talloen', 'Hinrich W. H. Göhlmann', 'Marijke Van Moerbeke', 'Adetayo Kasim']
Weighted similarity-based clustering of chemical structures and bioactivity data in early drug discovery.
720,969
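A minimal sketch of the similarity-level integration described above, assuming similarity matrices scaled to [0, 1] with unit diagonals; the function names and the choice of hierarchical clustering are illustrative, not the authors' implementation:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def weighted_similarity(S_chem, S_bio, w=0.5):
        # Complementary weights combine the chemical and biological views
        # into a single weighted similarity matrix.
        return w * np.asarray(S_chem) + (1.0 - w) * np.asarray(S_bio)

    def cluster_compounds(S, n_clusters=5):
        # The weighted matrix is a generic input to any clustering
        # algorithm; here, average-linkage hierarchical clustering.
        D = squareform(1.0 - S, checks=False)  # condensed distances
        return fcluster(linkage(D, method="average"),
                        t=n_clusters, criterion="maxclust")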
As a typical form of Cloud service, Software as a Service (SaaS) has been leading the adoption of the Cloud to enable e-Business in industry. A growing number of SaaS services provide capabilities that enable businesses to leverage Cloud platforms to run and manage business processes, such as customer relationship management, supply chain management, and finance and accounting. However, the design aspects of building effective SaaS services and the corresponding platform to enable e-Business have not been studied before from both business and technical perspectives to guide research and development activities. This paper studies the reference models, architecture, and related technical implications of the SaaS value chain and operation models. Sample applications and related technical solutions are summarized to demonstrate how these patterns can be instantiated in a real business environment.
['Wei Sun', 'Changjie Guo', 'Zhong Bo Jiang', 'Xin Zhang', 'Ning Duan', 'Ying Huang', 'Yue Da Xiong']
Design Aspects of Software as a Service to Enable E-Business through Cloud Platform
422,549
In this paper, the production decisions of multiple energy suppliers in a smart grid powering cellular networks are investigated. The suppliers are characterized by different offered prices and pollutant emission levels. The challenge is to decide the amount of energy provided by each supplier to each of the operators such that their profitability is maximized while respecting the maximum tolerated level of CO2 emissions. The cellular operators are characterized by the quality of service (QoS) offered to their subscribers and by the number of users, which determines their energy requirements. Stochastic geometry is used to determine the average power needed to achieve the target probability of coverage for each operator. The total average power requirements of all networks are fed to an optimization framework to find the optimal amount of energy to be provided by each supplier to the operators. The generalized alpha-fair utility function is used to avoid production bias among the suppliers based on profitability of generation. Results illustrate the production behavior of the energy suppliers versus QoS level, cost of energy, capacity of generation, and level of fairness.
['Muhammad Junaid Farooq', 'Hakim Ghazzai', 'Abdullah Kadri']
A stochastic geometry-based demand response management framework for cellular networks powered by smart grid
887,934
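The generalized alpha-fair utility mentioned above has a standard closed form; treating it as the usual Mo-Walrand family is an assumption on my part, since the abstract does not spell it out:

    import math

    def alpha_fair_utility(x, alpha):
        # alpha = 0 recovers the utilitarian sum, alpha -> 1 gives
        # proportional fairness (log), and large alpha approaches
        # max-min fairness.
        if alpha == 1.0:
            return math.log(x)
        return x ** (1.0 - alpha) / (1.0 - alpha)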
Atlas-to-image non-rigid registration by minimization of conditional local entropy
["Emiliano D'Agostino", 'Frederik Maes', 'Dirk Vandermeulen', 'Paul Suetens']
Atlas-to-image non-rigid registration by minimization of conditional local entropy
904,137
This paper presents a method for detecting and classifying a target from its foveal (graded resolution) imagery using a multiresolution neural network. Target identification decisions are based on minimizing an energy function. This energy function is evaluated by comparing a candidate blob with a library of target models at several levels of resolution simultaneously available in the current foveal image. For this purpose, a concurrent (top-down-and-bottom-up) matching procedure is implemented via a novel multilayer Hopfield (1985) neural network. The associated energy function supports not only interactions between cells at the same resolution level, but also between sets of nodes at distinct resolution levels. This permits features at different resolution levels to corroborate or refute one another contributing to an efficient evaluation of potential matches. Gaze control, refoveation to more salient regions of the available image space, is implemented as a search for high resolution features which will disambiguate the candidate blob. Tests using real two-dimensional (2-D) objects and their simulated foveal imagery are provided.
['Susan S. Young', 'Peter D. Scott', 'Cesar Bandera']
Foveal automatic target recognition using a multiresolution neural network
36,108
A key reason for using asynchronous computer conferencing in instruction is its potential for supporting collaborative learning. However, few studies have examined collaboration in computer confere ...
['Huahui Zhao', 'Kirk P. H. Sullivan', 'Ingmarie Mellenius']
Participation, interaction and social presence: An exploratory study of collaboration in online peer review groups
301,732
We contend that repeatability of execution times is crucial to the validity of testing of real-time systems. However, computer architecture designs fail to deliver repeatable timing, a consequence of aggressive techniques that improve average-case performance. This paper introduces the Precision-Timed ARM (PTARM), a precision-timed (PRET) microarchitecture implementation that exhibits repeatable execution times without sacrificing performance. The PTARM employs a repeatable thread-interleaved pipeline with an exposed memory hierarchy, including a repeatable DRAM controller. Our benchmarks show an improved throughput compared to a single-threaded in-order five-stage pipeline, given sufficient parallelism in the software.
['Isaac Liu', 'Jan Reineke', 'David Broman', 'Michael Zimmer', 'Edward A. Lee']
A PRET microarchitecture implementation with repeatable timing and competitive performance
148,999
In 1993, we started to develop a tool to help people manage structured personal and business information. Initially named ORCCA (Online Resources for Corporate Citizens Action), it was fully implemented as a Semantic Networks Editor and Manager. Initially designed as a personal desktop tool, it evolved towards a collaborative server allowing small groups to share Semantic Networks. It has been marketed under the name of IDELIANCE, and has been used in significant business applications by a variety of users. During all these pioneering phases of design, implementation and usage of Ideliance, we learnt many lessons that could be of interest for the success of the emerging Semantic Desktop domain. Not only did we implement practical tools for semantic networks management, but we also closely observed hundreds of people using Semantic Networks as a new way of writing and reading information. In this position paper, we propose a survey of the key points we encountered. For each of these points, we describe our experience and propose some guidelines for future Semantic Desktop design and usage.
['Jean Rohmer']
Lessons for the future of semantic desktops learnt from 10 years of experience with the IDELIANCE semantic networks manager
790,713
This paper studies the impact of channel error on the achievable rate of symmetrical K-user multiple-input multiple-output linear interference alignment (IA) networks. The upper and lower bounds of the achievable sum rate are derived analytically under the assumption of orthonormal transmit precoders and receive filters designed from imperfect channel state information (CSI), over both uncorrelated and correlated channels. For uncorrelated channels, quite tight lower and upper bounds are obtained. The impact of channel error on the degrees of freedom (DoF) and the DoF persistence conditions are also investigated. Results show that the DoF of IA networks persists only if the channel error decreases at an order higher than the signal-to-noise ratio. For correlated channels, the lower and upper bounds for one realization of IA are derived. The derived upper bound can be used to characterize the achievable rate approximately. Simulation results indicate that the achievable rate of IA networks is influenced significantly by CSI uncertainty. The obtained analytical bounds provide an intuitive way to show the impact of channel error on the achievable rate and can thus help practical system design.
['Anming Dong', 'Haixia Zhang', 'Xiaotian Zhou', 'Dongfeng Yuan']
On Analytical Achievable Rate for MIMO Linear Interference Alignment with Imperfect CSI
910,341
The issue of separating linear mixtures of independent linearly modulated signals stemming from unknown digital communication systems is addressed. The baud-rates of the various transmitted signals are in particular unknown and possibly different. Therefore, sampled versions of the received signal are cyclostationary sequences. Despite the non-stationary environment of the data, Godard's algorithm is shown to achieve separation in rather general contexts.
['Pierre Jallon', 'Antoine Chevreuil', 'Philippe Loubaton', 'Pascal Chevalier']
Separation of convolutive mixtures of linear modulated signals using constant modulus algorithm
385,331
The paper describes a hands-free speech recognition technique based on acoustic model adaptation to reverberant speech. In hands-free speech recognition, the recognition accuracy is degraded by reverberation, since each segment of speech is affected by the reflection energy of the preceding segment. To compensate for the reflection signal, we introduce a frame-by-frame adaptation method, adding the reflection signal to the means of the acoustic model. The reflection signal is approximated by a first-order linear prediction from the preceding frame, and the linear prediction coefficient is estimated by a maximum likelihood method by using the EM algorithm, which maximizes the likelihood of the adaptation data. Its effectiveness is confirmed by word recognition experiments on reverberant speech.
['Tetsuya Takiguchi', 'Masafumi Nishimura']
Acoustic model adaptation using first order prediction for reverberant speech
462,080
Due to resource scarcity, a paramount concern in ad hoc networks is to utilize the limited resources efficiently. The self-organized nature of ad hoc networks makes the social welfare based approach an efficient way to allocate the limited resources. However, the effect of instability of wireless links has not been adequately addressed in the literature. To efficiently address the routing problem in ad hoc networks, we introduce a new metric, maximum expected social welfare, and integrate the cost and stability of nodes in a unified model to evaluate the optimality of routes. The expected social welfare is defined in terms of expected benefit (of the routing source) minus the expected costs incurred by forwarding nodes. Based on our new metric, we design an optimal and efficient algorithm, and implement the algorithm in both centralized (optimal) and distributed (near-optimal) manners. We also extend our work to incorporate retransmission and study the effect of local and global retransmission restrictions on the selection of routes.
['Mingming Lu', 'Jie Wu']
Social Welfare Based Routing in Ad hoc Networks
152,407
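One hedged reading of the expected-social-welfare metric above: with per-link success probabilities modeling stability and per-node forwarding costs, a node only incurs its cost if the packet actually reaches it. The formulation below is an illustrative interpretation, not the authors' exact model:

    def expected_social_welfare(benefit, links):
        # links: list of (success_prob, forwarding_cost) along a route.
        reach = 1.0          # probability the packet reaches the current hop
        expected_cost = 0.0
        for p, c in links:
            expected_cost += reach * c   # cost paid if this hop forwards
            reach *= p                   # link succeeds with probability p
        return benefit * reach - expected_cost

    # Example: pick the route maximizing expected welfare.
    routes = [[(0.9, 1.0), (0.8, 1.0)], [(0.99, 2.5)]]
    best = max(routes, key=lambda r: expected_social_welfare(10.0, r))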
A Scheme for Group Target Tracking in WSN Based on Boundary Detecting.
['Quanlong Li', 'Zhijia Zhao', 'Xiaofei Xu', 'Tingting Zhou']
A Scheme for Group Target Tracking in WSN Based on Boundary Detecting.
763,499
Model order reduction has become a rather common approach to approximate complex first-principles electrochemical models (described by systems of linear or nonlinear partial differential equations) into low-order dynamic system models, for control or estimation design.
['Guodong Fan', 'Ke Pan', 'Marcello Canova']
A comparison of model order reduction techniques for electrochemical characterization of Lithium-ion batteries
656,776
In this paper, we introduce a novel differential signaling approach for ultra-wideband (UWB) communications using multiple digital carriers. Unlike the transmitted reference (TR), differential and noncoherent UWB schemes that also bypass explicit channel estimation, our scheme avoids the analog delay element whose on-chip implementation is challenging. Compared with the frequency-shifted reference (FSR) UWB, our multi-carrier differential signaling captures the signal energy more effectively and can achieve the full diversity gain, even in the presence of inter-frame interference. In addition, our approach relies on digital carriers that do not incur any spectrum expansion and can be realized with standard discrete-cosine transform (DCT) or fast Fourier transform (FFT) circuits operating at the frame rate. Simulations are also carried out to corroborate our theoretical analysis.
['Huilin Xu', 'Liuqing Yang', 'Dennis Goeckel']
Digital Multi-Carrier Differential Signaling for UWB Radios
403,956
User generated content in general, and textual reviews in particular, constitute a vast source of information for the decision making of tourists and management and are therefore a key component of e-tourism. This paper provides a description of the topic model method with a particular focus on applications in the tourism domain. It contributes different application scenarios where the topic model method processes textual reviews in order to provide decision support and recommendations to online tourists, as well as to build a basis for further analytics. In the latter case, the delivery of additional semantics helps in digging into the enormous amounts of data that are continuously collected. The contribution therefore consists of new models based on the topic model method and results from experimenting with user generated review data on restaurants and hotels.
['Marco Rossetti', 'Fabio Stella', 'Markus Zanker']
Analyzing user reviews in tourism with topic models
129,971
In this paper, we derive a monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence computed tomography (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown. In XFCT, a sample is irradiated with pencil beams of monochromatic synchrotron radiation that stimulate the emission of fluorescence X-rays from atoms of elements whose K- or L-edges lie below the energy of the stimulating beam. Scanning and rotating the object through the beam allows for acquisition of a tomographic dataset that can be used to reconstruct images of the distribution of the elements in question. XFCT is a stimulated emission tomography modality, and it is thus necessary to correct for attenuation of the incident and fluorescence photons. The attenuation map is, however, generally known only at the stimulating beam energy and not at the energies of the various fluorescence X-rays of interest. We have developed a penalized-likelihood image reconstruction strategy for this problem. The approach alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays. The approach is guaranteed to increase the penalized likelihood at each iteration. Because the joint objective function is not necessarily concave, the approach may drive the solution to a local maximum. To encourage the algorithm to seek out a reasonable local maximum, we include in the objective function a prior that encourages a relationship, based on physical considerations, between the fluorescence attenuation map and the distribution of the element being reconstructed.
['P.J. La Riviere', 'Phillip Vargas']
Monotonic penalized-likelihood image reconstruction for X-ray fluorescence computed tomography
81,265
Automatic code synthesis from dataflow program graphs is a promising high-level design methodology for rapid prototyping of multimedia embedded systems. Memory-efficient code synthesis from dataflow models has been an active research subject aimed at reducing the gap in memory requirements between synthesized code and hand-optimized code. However, existing dataflow models have inherent difficulty in handling data structures efficiently. In this paper, we propose a new dataflow extension called fractional rate dataflow (FRDF), in which a fractional number of samples can be produced and consumed. In the proposed FRDF model, a constituent data type is considered a fraction of the composite data type. Existing integer rate dataflow models can be easily extended to incorporate fractional rates without losing analytical properties. In this paper, the SDF model is extended to include FRDF, which can reduce the buffer memory requirements significantly, by up to 70%, for some multimedia applications. The extended SDF model with fractional rates has been implemented in our system design environment called PeaCE (Ptolemy extension as Codesign Environment).
['Hyunok Oh', 'Soonhoi Ha']
Fractional Rate Dataflow Model for Efficient Code Synthesis
455,083
In order to improve throughput and perform load balancing, many routing algorithms in WMNs (Wireless Mesh Networks) have been designed to take full advantage of the multi-radio, multi-channel and multi-path nature of WMNs. Although many proposals such as LQSR and MR-LQSR have shown their merits, up to now few schemes achieve all of these objectives at once. In this paper, we present MR-OLSR, an optimized link state routing algorithm for multi-radio/multi-channel WMNs derived from the OLSR (Optimized Link State Routing) protocol in MANETs. It can distribute data traffic among diverse multiple paths to avoid congestion and substantially improve channel throughput. It uses a novel metric named IWCETT (Improved Weighted Cumulative Expected Transmission Time) to evaluate path quality. Besides, the proposed channel allocation strategy and path scheduling algorithm give it load-balancing capability. MR-OLSR is evaluated in the OPNET simulation environment, and the results show that our proposal not only retains the robustness and scalability of the OLSR scheme, but also enhances stability and reliability under link failures and clearly increases network throughput.
['Guangwu Hu', 'Chaoqin Zhang']
MR-OLSR: A link state routing algorithm in multi-radio/multi-channel Wireless Mesh Networks
286,712
This paper discusses an alternative solution for curve fitting based on particle swarm optimization (PSO). The method works by randomly generating the weights and control points of the NURBS curve. The generated weights and control points are used to compute the NURBS points, which are compared with the sample data points to minimize the error. The implementation results show that the proposed method yields better solutions, with smaller fitting error, than the conventional methods.
['Delint Ira Setyo Adi', 'Siti Mariyam Shamsuddin', 'Aida Ali']
Particle Swarm Optimization for NURBS Curve Fitting
97,636
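A minimal particle swarm optimizer in the spirit of the approach above; the objective would flatten the NURBS weights and control points into one parameter vector and return the fitting error against the sample points. Parameter values are conventional defaults, not the paper's settings:

    import numpy as np

    def pso_fit(objective, dim, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
        lo, hi = bounds
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, dim))   # positions
        v = np.zeros_like(x)                          # velocities
        pbest = x.copy()
        pbest_f = np.array([objective(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()          # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()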
In this paper a new electrical model is proposed for fault-size-based fault simulation of crosstalk-aggravated resistive short defects. The electrical behavior of the defect is first described and analyzed in detail. Then an electrical model is proposed that allows efficient computation of the critical resistance determining the range of detectable short resistances. The model is validated by comparison with SPICE simulations.
['Nicolas Houarche', 'Mariane Comte', 'Michel Renovell', 'Alejandro Czutro', 'Piet Engelke', 'Ilia Polian', 'Bernd Becker']
An Electrical Model for the Fault Simulation of Small Delay Faults Caused by Crosstalk Aggravated Resistive Short Defects
541,230
Anomalies classification approach for network-based intrusion detection system
['Qais Qassim', 'Abdullah Mohd Zin', 'Mohd Juzaiddin Ab Aziz']
Anomalies classification approach for network-based intrusion detection system
995,021
Summary form only given. Most task allocation models and algorithms in distributed computing systems (DCS) require a priori knowledge of a task's execution time on the processing nodes. Since the task assignment is not known in advance, this time is quite difficult to estimate. We propose a cluster-based dynamic allocation scheme for a distributed computing system that eliminates this requirement. Further, as opposed to the single-task allocation generally proposed in most models, we consider multiple tasks. A fuzzy function is used for both module clustering and processor clustering. Dynamic invocation of clustering and assignment is considered. Experimental results show the efficacy of the proposed model.
['Deo Prakash Vidyarthi', 'Anil Kumar Tripathi', 'Biplab Kumer Sarker', 'Abhishek Dhawan']
Cluster-based multiple task allocation in distributed computing system
204,183
Companies operate in an increasingly competitive environment that demands flexibility and reactivity. The use of models from Distributed Artificial Intelligence (DAI), and in particular multi-agent systems (MAS), in enterprise management tools proves effective for simulating and reproducing the collaborative and adaptive behaviors currently observed in companies. This article models the coordination of the various collaborating parties, both internal and external, of a supply chain using the coalition-formation coordination models proposed in MAS. In the first part, we survey existing work on this problem. In the second part, we propose an agent model, a coalition formation algorithm, and an interaction protocol between agents that is essential to implement this distributed coordination. Finally, we illustrate our approach with an example from the avionics industry.
['Dhouha Anane', 'Samir Aknine', 'Suzanne Pinson']
Coordination d'Activités dans les Chaînes Logistiques : une Approche Multi-Agents par Formation de Coalitions
554,840
The emerging field of silicon photonics [1–3] targets monolithic integration of optical components in the CMOS process, potentially enabling high bandwidth, high density interconnects with dramatically reduced cost and power dissipation. A broadband photonic switch is a key component of reconfigurable networks which retain data in the optical domain, thus bypassing the latency, bandwidth and power overheads of opto-electronic conversion. Additionally, with WDM channels, multiple data streams can be routed simultaneously using a single optical device. Although many types of discrete silicon photonic switches have been reported [4–6], very few of them have been shown to operate with CMOS drivers. Earlier, we have reported two different 2×2 optical switches wirebond packaged with 90nm CMOS drivers [7,8]. The 2×2 switch reported in [7] is based on a Mach-Zehnder interferometer (MZI), while the one reported in [8] is based on a two-ring resonator.
['Alexander V. Rylyakov', 'Clint L. Schow', 'Benjamin G. Lee', 'William M. J. Green', 'Joris Van Campenhout', 'Min Yang', 'Fuad E. Doany', 'Solomon Assefa', 'Christopher V. Jahnes', 'Jeffrey A. Kash', 'Yurii A. Vlasov']
A 3.9ns 8.9mW 4×4 silicon photonic switch hybrid integrated with CMOS driver
167,167
Smart and Sustainable Library: Information Literacy Hub of a New City
['Aleksandar Jerkov', 'Adam Sofronijevic', 'Dejana Kavaja Stanisic']
Smart and Sustainable Library: Information Literacy Hub of a New City
624,400
It is a kind of privacy infraction in personalized web services if the user profile submitted to one web site is transferred to another site without user permission. The second web site can then easily re-identify to whom these personal data belong, no matter whether the transfer is under control or by hacking. This paper presents a portable solution for users to bind their sensitive web data to an appointed domain. Such data, including query logs, user accounts, click streams, etc., could be used to identify sensitive information about a particular user. With our domain-stretching de-identification method, if personal data leak from domain A to B, the web user still cannot be identified even if he logs in to sites under domain B using the same name and password. In an experiment implemented in JavaScript, we show the flexibility and efficiency of our de-identification approach.
['Jiaqian Zheng', 'Jing Yao', 'Junyu Niu']
Web user de-identification in personalization
360,383
A Hybrid Combination of Multiple SVM Classifiers for Automatic Recognition of the Damages and Symptoms on Plant Leaves
['Ismail El Massi', 'Youssef Es-saady', 'Mostafa El Yassa', 'Driss Mammass', 'Abdeslam Benazoun']
A Hybrid Combination of Multiple SVM Classifiers for Automatic Recognition of the Damages and Symptoms on Plant Leaves
830,515
The paper is an overview of recent developments concerning attribute implications in a fuzzy setting. Attribute implications are formulas of the form A ⇒ B, where A and B are collections of attributes, which describe dependencies between attributes. Attribute implications are studied in several areas of computer science and mathematics. We focus on two of them, namely, formal concept analysis and databases.
['Radim Belohlavek', 'Vilem Vychodil']
Attribute implications in a fuzzy setting
830,762
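In the classical (crisp) case, checking whether an attribute implication A ⇒ B holds in a formal context reduces to a containment test over the objects; the fuzzy generalization surveyed above replaces these set inclusions with graded ones. A crisp sketch:

    def implication_holds(data, A, B):
        # data: iterable of attribute sets, one per object.
        # A => B holds iff every object with all attributes of A
        # also has all attributes of B.
        A, B = set(A), set(B)
        return all(B <= row for row in map(set, data) if A <= row)

    # Example: every object containing {'a', 'b'} also contains 'c'.
    rows = [{"a", "b", "c"}, {"a", "c"}, {"b"}]
    assert implication_holds(rows, {"a", "b"}, {"c"})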
We consider the problem of selecting the best among several heavy-tailed systems from a large deviations perspective. In contrast to the light-tailed setting studied by Glynn and Juneja (2004), in the heavy-tailed setting the probability of false selection is characterized by a rate function that does not require as detailed information about the probability distributions of the systems' performance. This motivates the study of static policies that could be conveniently implemented in heavy-tailed settings. We concentrate on sharp large deviations estimates for the probability of false selection, which suggest precise optimal allocation policies when the systems have comparable heavy tails. Additional optimality insights are given for systems with non-comparable tails.
['Jose H. Blanchet', 'Jingchen Liu', 'Bert Zwart']
Large deviations perspective on ordinal optimization of heavy-tailed systems
535,111
The advancement of sequencing technologies has made it feasible for researchers to consider many high-throughput biological applications. A core step of these applications is to align an enormous number of short reads to a reference genome. For example, to resequence a human genome, billions of reads of 35 bp are produced in 1-2 weeks, putting a lot of pressure on developing faster alignment software. Based on existing indexing and pattern matching technologies, several short read alignment tools have been developed recently. Yet there is still a strong need to further improve the speed. In this paper, we present a new indexing data structure called the bi-directional BWT, which allows us to build the fastest software for aligning short reads. When compared with existing software (of which Bowtie is the best), our software is at least 3 times faster for finding unique best alignments, and 25 times faster for finding all possible alignments. We believe that the bi-directional BWT is an interesting data structure in its own right and could be applied to other pattern matching problems.
['Tak Wah Lam', 'Ruiqiang Li', 'Alan Tam', 'Sk Wong', 'Edward Wu', 'Siu-Ming Yiu']
High Throughput Short Read Alignment via Bi-directional BWT
13,092
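For reference, below is the classic one-directional BWT backward search that the bi-directional structure above generalizes (a bi-directional index additionally maintains the matching range in the BWT of the reversed text so the pattern can be extended in both directions). This naive sketch uses O(n) scans where a real aligner would use rank structures:

    from collections import Counter

    def bwt_index(text):
        text += "$"
        sa = sorted(range(len(text)), key=lambda i: text[i:])
        bwt = "".join(text[i - 1] for i in sa)  # i = 0 wraps to '$'
        counts, C, total = Counter(text), {}, 0
        for ch in sorted(counts):               # C[ch]: # chars < ch
            C[ch], total = total, total + counts[ch]
        return bwt, C

    def backward_search(bwt, C, pattern):
        lo, hi = 0, len(bwt)
        for ch in reversed(pattern):
            if ch not in C:
                return 0
            lo = C[ch] + bwt.count(ch, 0, lo)
            hi = C[ch] + bwt.count(ch, 0, hi)
            if lo >= hi:
                return 0
        return hi - lo                          # number of occurrences

    bwt, C = bwt_index("mississippi")
    assert backward_search(bwt, C, "issi") == 2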
Special People in Routine Health Information Systems Implementation in South Africa.
['Lyn A. Hanmer', 'Edward Nicol']
Special People in Routine Health Information Systems Implementation in South Africa.
799,989
The objective of this paper is to develop a simple and efficient image enhancement algorithm for compressing image dynamics and enhancing image contrast in the discrete cosine transform (DCT) domain. The basic idea of this approach is to separate the illumination and reflectance components of an image, so that by decreasing the contribution of the illumination the proposed algorithm effectively controls the dynamic range of the image using a contrast measure. The main advantage of the proposed algorithm is that it enhances details in the dark and the bright areas with low computational cost and without affecting the compressibility of the original image, since it operates on images in the compressed domain. In order to evaluate the proposed scheme, several baseline approaches are described and compared.
['Sagkeun K. Lee', 'Surapong Lertrattanapanich']
A simple and efficient image enhancement in the compressed domain
81,160
Many listed companies in Shenzhen and Shanghai have only recently moved away from traditional business models and begun to construct modern accounting systems. Against this background, the transparency of financial statements has quietly attracted attention. This study identifies that the world's second largest stock market has contrarian profit. Momentum profits for Shenzhen, Shanghai and Hong Kong are all negative and differ markedly from those for American or European markets in the past literature. We also point out that the share categories in the Shenzhen-Shanghai market lead to opposite or lower momentum profits. Another inference is that corporations whose transparency indices cannot be computed exhibit no trend and insignificant profit.
['Hung-Wen Lin', 'Mao-Wei Hung']
China momentum and transparency
573,818
This paper presents an advanced architecture for residue number system (RNS)-based code-division multiple-access (CDMA) system for high-rate data transmission by combining RNS representation, phase shift keying/quadrature amplitude modulation (PSK/QAM) and orthogonal modulation. The residues obtained from a fixed number of bits are again divided into spread code index and data symbol for modulation. The modulated data symbol is spread using the indexed orthogonal codes and transmitted through a communication channel. The proposed system uses a lower number of orthogonal codes than conventional RNS-based CDMA and the performance is comparable. The computational complexity of the proposed system is compared against alternative schemes such as M-ary CDMA and conventional RNS-based CDMA. The modified system is simulated extensively for different channel conditions and the results are discussed.
['A. S. Madhukumar', 'Francois P. S. Chin']
Enhanced architecture for residue number system-based CDMA for high-rate data transmission
322,489
Detailed, frequent, and accurate land surface temperature (LST) estimates from satellites may support various applications related to the urban climate. When satellite-retrieved LST is used in modeling, the level of uncertainty is important to account for. In this letter, an uncertainty estimation scheme based on Monte Carlo simulations is proposed for local-scale LST products derived from image fusion. The downscaling algorithm combines frequent low-resolution thermal measurements with surface cover information from high spatial resolution imagery. The uncertainty is estimated for all the intermediate products, allowing the analysis of individual uncertainties and their contribution to the final LST product. Uncertainties of less than 2 K were found for most of the test area. The uncertainty estimation method, although demanding in terms of computations, can be useful for the uncertainty analysis of other satellite products.
['Zina Mitraka', 'Georgia Doxani', 'Fabio Del Frate', 'Nektarios Chrysoulakis']
Uncertainty Estimation of Local-Scale Land Surface Temperature Products Over Urban Areas Using Monte Carlo Simulations
724,694
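A generic Monte Carlo uncertainty propagation sketch matching the scheme described above: perturb each input with noise of its stated standard deviation, rerun the processing chain, and take the per-pixel spread of the outputs. The function names and the Gaussian noise model are assumptions:

    import numpy as np

    def mc_uncertainty(model, inputs, sigmas, n=500, seed=0):
        rng = np.random.default_rng(seed)
        outs = []
        for _ in range(n):
            perturbed = [x + rng.normal(0.0, s, np.shape(x))
                         for x, s in zip(inputs, sigmas)]
            outs.append(model(*perturbed))
        return np.std(np.stack(outs), axis=0)  # per-pixel uncertainty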
Diabetic retinopathy is a major cause of blindness in the world. Regular screening and timely intervention can halt or reverse the progression of this disease. Digital retinal imaging technologies have become an integral part of eye screening programs worldwide due to their greater accuracy and repeatability in staging diabetic retinopathy. These screening programs produce an enormous number of retinal images since diabetic patients typically have both their eyes examined at least once a year. Automated detection of retinal lesions can reduce the workload and increase the efficiency of doctors and other eye-care personnel reading the retinal images and facilitate the follow-up management of diabetic patients. Existing techniques to detect retinal lesions are neither adaptable nor sufficiently sensitive and specific for real-life screening application. In this paper, we demonstrate the role of domain knowledge in improving the accuracy and robustness of detection of hard exudates in retinal images. Experiments on 543 consecutive retinal images of diabetic patients indicate that we are able to achieve 100% sensitivity and 74% specificity in the detection of hard exudates.
['Wynne Hsu', 'P. M. D. S. Pallawala', 'Mong Li Lee', 'Kah-Guan Au Eong']
The role of domain knowledge in the detection of retinal hard exudates
486,822
Here, we discuss the influence of higher-order nonlinear effects (HOE) like third-order dispersion (TOD), intra-pulse Raman scattering, and self-steepening on the shift or displacement of a 1-ps soliton pulse from its initial position. The temporal shifts of the soliton due to these higher-order effects were studied numerically by the method of moments to determine the contribution of each effect to the shift. Further, we note the influence of positive and negative TOD on the shift produced by the combined HOE. The soliton shift is then analyzed in a 160-Gbps telecommunication system implemented with conventional single-mode fiber (C-SMF) of length 10 and 20 km. The disturbance between adjacent soliton pulses is noted with different 16-bit data sequences, and the deterioration of the system is characterized in terms of quality factor. It can be seen that for an unchirped soliton of pulse width $T_{\mathrm{o}} \sim 1\,\mathrm{ps}$, the shift is strongly influenced by intra-pulse Raman scattering, while the shift due to third-order dispersion can be treated as negligibly small. Moreover, although negative TOD was expected to inhibit the soliton temporal shift and thereby reduce collisions with adjacent pulses, it results in more resonant radiation and hence pulse decay. Although negative TOD helps in good reception of pulses over 10 km, it fails in the 20-km C-SMF system, where the dispersive components break up more and more while traveling along the fiber.
['Bhupeshwaran Mani', 'A. Jawahar', 'S. Radha', 'K. Chitra', 'A. Sivasubramanian']
Combined influence of third-order dispersion, intra-pulse Raman scattering, and self-steepening effect on soliton temporal shifts in telecommunications
595,974
The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containing clear attribute dependences suggest that the answer to this question may be positive. This article shows that, although the Bayesian classifier's probability estimates are only optimal under quadratic loss if the independence assumption holds, the classifier itself can be optimal under zero-one loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater range of applicability than previously thought. For example, in this article it is shown to be optimal for learning conjunctions and disjunctions, even though they violate the independence assumption. Further, studies in artificial domains show that it will often outperform more powerful classifiers for common training set sizes and numbers of attributes, even if its bias is a priori much less appropriate to the domain. This article's results also imply that detecting attribute dependence is not necessarily the best way to extend the Bayesian classifier, and this is also verified empirically.
['Pedro M. Domingos', 'Michael J. Pazzani']
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
229,448
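The conjunction result can be checked directly: with maximum-likelihood estimates (no smoothing), naive Bayes classifies x1 AND x2 AND x3 perfectly under a uniform instance distribution, even though the attributes are strongly dependent given the class. A small verification sketch:

    from itertools import product

    X = list(product([0, 1], repeat=3))
    y = [int(all(x)) for x in X]        # label = conjunction of attributes

    def nb_predict(x, X, y):
        scores = []
        for c in (0, 1):
            Xc = [xi for xi, yi in zip(X, y) if yi == c]
            p = len(Xc) / len(X)                  # ML class prior
            for j, v in enumerate(x):             # ML attribute estimates
                p *= sum(xi[j] == v for xi in Xc) / len(Xc)
            scores.append(p)
        return scores.index(max(scores))

    # Zero-one optimal on the conjunction despite violated independence.
    assert all(nb_predict(x, X, y) == yi for x, yi in zip(X, y))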
Understanding user mobility is critical for simulations of mobile devices in a wireless network, but current mobility models often do not reflect real user movements. In this paper, we provide a foundation for such work by exploring mobility characteristics in traces of mobile users. We present a method to estimate the physical location of users from a large trace of mobile devices associating with access points in a wireless network. Using this method, we extracted tracks of always-on Wi-Fi devices from a 13-month trace. We discovered that the speed and pause time each follow a log-normal distribution and that the direction of movements closely reflects the direction of roads and walkways. Based on the extracted mobility characteristics, we developed a mobility model, focusing on movements among popular regions. Our validation shows that synthetic tracks match real tracks with a median relative error of 17%.
['Minkyong Kim', 'David Kotz', 'Songkuk Kim']
Extracting a Mobility Model from Real User Traces
198,309
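A toy generator for the extracted characteristics: speeds and pause times drawn from log-normal distributions, as the trace analysis above found. The (mu, sigma) parameters here are placeholders, not the values fitted from the authors' trace:

    import numpy as np

    def synthetic_steps(n, mu_v=0.0, sig_v=0.5, mu_p=2.0, sig_p=1.0, seed=0):
        rng = np.random.default_rng(seed)
        speeds = rng.lognormal(mean=mu_v, sigma=sig_v, size=n)  # m/s
        pauses = rng.lognormal(mean=mu_p, sigma=sig_p, size=n)  # seconds
        return speeds, pauses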
Web service (WS) technology has established itself as a key component of enterprise computing for a number of business processes. The mobile devices are expected to become primary internet devices in the near future. The popularity of mobile devices and establishment of WS technology has increased the demand of accessing and hosting web services on mobile devices. Hosting web services on such devices is always challenging because of their limited resources. In this paper, we propose a run time WS partitioning technique with the objective of improving the overall system performance. The proposed WS partitioning technique is devised to offload different sizes of partition on a remote computing node based on the system load. Performance of the proposed WS partitioning technique is analyzed by performing experiments on a simulator.
['Muhammad Asif', 'Shikharesh Majumdar']
A Runtime Partitioning Technique for Mobile Web Services
277,834
This paper explores the nature of argumentation and its potential impact within the setting of the doctor-patient interaction. More specifically, we propose a twofold investigation. Firstly, we intend to clarify the ontological conditions supporting the appropriateness of using argumentation in the medical setting, and show its general advantages. Within this framework, relying on a set of medical consultations recorded with the help of Tessin physicians, we outline a typology of action types where argumentation occurs in the medical setting. Secondly, we offer some key concepts to address the evaluation of arguments in the field. Here, considerations on the quality of the statements that make up arguments and on the argument schemes adopted allow us to shed light on the demarcation between sound and derailed arguments, as a way to foster an optimization of medical argumentative practice at an empirical level.
['Peter J. Schulz', 'Sara Rubinelli']
Healthy Arguments for Literacy in Health.
563,349
The paper considers the problem of determining an optimal observation schedule for discrimination between competing models of a dynamic process. To this end, an approach originating in optimum experimental design is applied. Its use necessitates solving some maximin problem. Unfortunately, a high computational cost is the main reason for limited practical applications, especially regarding distributed parameter systems. The paper constitutes an attempt to overcome such an impediment via a parallel implementation performed on a Linux cluster. The resulting numerical scheme is validated on a simulation example motivated by problems arising in chemical kinetics.
['Bartosz Kuczewski', 'Przemysław Baranowski', 'Dariusz Ucinski']
Parallel processing in discrimination between models of dynamic systems
353,725
When developing commercial applications, developers seldom start from scratch. Generally, they use software platforms and extend them, joining an ever-growing software ecosystem surrounding the platform. In this paper, the relationships between architecture and platform adoption are explored by analyzing the results of interviews and document study in five case studies of platform extenders. It is found that platform architecture plays a minor role in platform adoption by platform extenders, but that quality attributes strongly influence an architect's design choices when extending a platform. The findings of this work can be used by platform developers to improve platform extendibility and usability.
['Slinger Jansen']
How quality attributes of software platform architectures influence software ecosystems
534,115
Face detection has been well studied in terms of accuracy and speed. However, reducing the required memory size is still poorly studied, and it is becoming a critical issue as platforms for face detection become tiny. In this paper, we propose a novel compact weak classifier using an Adaptive Look-Up-Table (ALUT) for face detection on resource-constrained devices such as wearable sensor nodes. ALUT gives a good approximation of log-likelihood [3] with less data, thus enabling a drastic reduction of the classifier data size while keeping high accuracy and low computation cost. To generate an optimal ALUT, a new cost function called Weighted Sum of Absolute Difference (WSAD) is also proposed for further improvement. In our experiments, the classifier data size is reduced by 43% and the computation cost by 15% at the same accuracy, compared to a conventional fixed-LUT classifier.
['Yuya Hanai', 'Tadahiro Kuroda']
Face detection through compact classifier using Adaptive Look-Up-Table
30,298
This article describes new construction and postoptimization heuristics for the Undirected Rural Postman Problem. Extensive computational tests indicate that some combinations of these heuristics consistently produce optimal or high-quality solutions.
['Alain Hertz', 'Gilbert Laporte', 'Pierrette Nanchen Hugo']
Improvement Procedures for the Undirected Rural Postman Problem
354,073
Object modeling plays a key role in modern information systems, in which objects that have similar features are grouped into classes. As objects may change during their lifetime, history data modeling of objects should be studied. However, former object modeling approaches are still not sufficient for knowledge management. In this paper, a method to record and reuse the evolution history of objects is put forward. This method provides support for knowledge management tasks, such as data analysis and business process modeling. The requirements of object evolution modeling are analyzed first. Then the types of object changes, the representation schema of the changes, the basic functions on the object history and applications of this method are discussed.
['Yinglin Wang']
Object Lifecycle Evolution History Modeling and Reuse for Knowledge Management
120,428
This paper presents an intra field de-interlacing algorithm for spatial edge preserving using the detection of accurate edge direction. Conventional intra field de-interlacing algorithms determine the edge direction at the pixel or half pixel level, so that they can be highly sensitive to noise and lead to image degradation. In this paper, the proposed algorithm first considers the edge tendency in the edge region, and then candidate direction vectors (CDVs) are selected using the modified Sobel operation through the edge tendency. Finally, the CDVs are adaptively applied to interpolate the lost pixel in the edge region. Experimental results show that the proposed algorithm performs well with a variety of still and moving images compared with conventional intra field algorithms in the literature.
['Soonjong Jin', 'Wonki Kim', 'Jechang Jeong']
Fine directional de-interlacing algorithm using modified Sobel operation
287,718
A path-method is used as a mechanism in object-oriented databases (OODBs) to retrieve or to update information relevant to one class that is not stored with that class but with some other class. A path-method is a method which traverses from one class through a chain of connections between classes and accesses information at another class. However, it is a difficult task for a casual user or even an application programmer to write path-methods to facilitate queries. This is because it might require comprehensive knowledge of many classes of the conceptual schema that are not directly involved in the query, and therefore may not even be included in a user's (incomplete) view about the contents of the database. We have developed a system, called path-method generator (PMG), which generates path-methods automatically according to a user's database-manipulating requests. The PMG offers the user one of the possible path-methods and the user verifies from his knowledge of the intended purpose of the request whether that path-method is the desired one. If the path method is rejected, then the user can utilize his now increased knowledge about the database to request (with additional parameters given) another offer from the PMG. The PMG is based on access weights attached to the connections between classes and precomputed access relevance between every pair of classes of the OODB. Specific rules for access weight assignment and algorithms for computing access relevance appeared in our previous papers [MGPF92, MGPF93, MGPF96]. In this paper, we present a variety of traversal algorithms based on access weights and precomputed access relevance. Experiments identify some of these algorithms as very successful in generating most desired path-methods. The PMG system utilizes these successful algorithms and is thus an efficient tool for aiding the user with the difficult task of querying and updating a large OODB.
['Ashish Mehta', 'James Geller', 'Yehoshua Perl', 'Erich J. Neuhold']
The OODB path-method generator (PMG) using access weights and precomputed access relevance
242,974
Centralized self-optimization of interference management in LTE-A HetNets.
['Yasir Khan', 'Berna Sayrac', 'Eric Moulines']
Centralized self-optimization of interference management in LTE-A HetNets.
857,595
This paper presents a novel automatic approach to partially integrate FrameNet and WordNet. In that way we expect to extend FrameNet coverage, to enrich WordNet with frame semantic information and possibly to extend FrameNet to languages other than English. The method uses a knowledge-based Word Sense Disambiguation algorithm for linking FrameNet lexical units to WordNet synsets. Specifically, we exploit a graph-based Word Sense Disambiguation algorithm that uses a large-scale knowledge-base derived from WordNet. We have developed and tested four additional versions of this algorithm showing a substantial improvement over previous results.
['Egoitz Laparra', 'German Rigau']
Integrating WordNet and FrameNet using a Knowledge-based Word Sense Disambiguation Algorithm
155,625
We propose a video model to generate VBR MPEG video traffic based on scene description. Long sessions of non-homogeneous video clips are decomposed into homogeneous video shots. The shots are then classified into different classes in terms of their texture and motion complexity. Each shot class can be uniquely described and modelled as a homogeneous video. The model may be used to generate traffic for any type of video scene, ranging from low-complexity video conferencing to highly active sports programs.
['Ali M. Dawood', 'Mohammed Ghanbari']
MPEG video modelling based on scene description
239,609
Agent-Oriented Modeling in Simulation.
['Mathias Röhl', 'Adelinde M. Uhrmacher']
Agent-Oriented Modeling in Simulation.
796,726
Unsteady Couette flow of a viscous incompressible fluid between two horizontal porous flat plates is considered. The stationary plate is subjected to a periodic suction and the plate in uniform motion is subjected to uniform injection. Approximate solutions have been obtained for the velocity and the temperature fields, skin friction by using perturbation technique. The heat transfer characteristic has also been studied on taking viscous dissipation into account. It is found that the main flow velocity decreases with increase in frequency parameter. On the other hand, the magnitude of the cross-flow velocity increases with increase in frequency parameter. It is seen that the amplitude of the shear stress due to main flow decreases while that due to cross-flow increases with increase in frequency parameter. It is also seen that the tangent of phase shifts both due to the main and cross-flows decrease with increase in frequency parameter. It is observed that the temperature increases with increase in frequency parameter.
['M. Guria', 'R. N. Jana']
THREE-DIMENSIONAL FLUCTUATING COUETTE FLOW THROUGH THE POROUS PLATES WITH HEAT TRANSFER
170,717
Ensuring dependability of software requires the use of formal methods. However, formal methods are still not widely accepted in engineering practice. One of the reasons for this is the difficulty of deriving formal specifications from large and complex requirements given in natural language. In this paper, we propose an approach to deriving formal specifications of reactive systems starting from their requirements. We base our approach on a new requirements language and show how to transform the informal requirements of a reactive system into requirements written in this language. The derived requirements allow us to better structure the informal requirements. We show how these requirements are then systematically translated into a formal specification in the B Method, which is our formal modelling framework. To validate the proposed approach, we conduct a case study and show how to obtain a formal specification of a reactive routing protocol for ad-hoc networks - the AODV (Ad hoc On-Demand Distance Vector) routing protocol.
['Duvravka Ilic']
Deriving Formal Specifications from Informal Requirements
102,936
In this paper, we propose a two-timescale delay-optimal base station discontinuous transmission (BS-DTX) control and user scheduling for downlink coordinated MIMO systems with energy harvesting capability. To reduce the complexity and signaling overhead in practical systems, the BS-DTX control is adaptive to both the energy state information (ESI) and the data queue state information (QSI) over a longer timescale. The user scheduling is adaptive to the ESI, the QSI and the channel state information (CSI) over a shorter timescale. We show that the two-timescale delay-optimal control problem can be modeled as an infinite horizon average cost partially observed Markov decision problem (POMDP), which is well known to be a difficult problem in general. By using sample-path analysis and exploiting specific problem structure, we first obtain some structural results on the optimal control policy and derive an equivalent Bellman equation with reduced state space. To reduce the complexity and facilitate distributed implementation, we obtain a delay-aware distributed solution with the BS-DTX control at the BS controller (BSC) and the user scheduling at each cluster manager (CM) using approximate dynamic programming and distributed stochastic learning. We show that the proposed distributed two-timescale algorithm converges almost surely. Furthermore, using queueing theory, stochastic geometry, and optimization techniques, we derive sufficient conditions for the data queues to be stable in the coordinated MIMO network and discuss various design insights.
['Ying Cui', 'Vincent Kin Nang Lau', 'Yueping Wu']
Delay-Aware BS Discontinuous Transmission Control and User Scheduling for Energy Harvesting Downlink Coordinated MIMO Systems
245,283
Studies parallel algorithms for two static dictionary compression strategies. One is optimal dictionary compression with dictionaries that have the prefix property, for which our algorithm requires O(L + log n) time and O(n) processors, where L is the maximum allowable length of the dictionary entries, while previous results run in O(L + log n) time using O(n²) processors, or in O(L + log² n) time using O(n) processors. The other is the longest-fragment-first (LFF) dictionary compression, for which our algorithm requires O(L + log n) time and O(nL) processors, while the previous result has O(L log n) time performance on O(n/log n) processors. We also show that the sequential LFF dictionary compression can be computed online with a lookahead of length O(L²).
['Hideo Nagumo', 'Mi Lu', 'Karan Watson']
Parallel algorithms for the static dictionary compression
33,605
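The longest-fragment-first (LFF) strategy mentioned above admits a short sequential sketch. The snippet below is a simplified illustration, not the paper's parallel algorithm: it greedily covers the input with the longest matching dictionary entries and emits literals for the rest; the dictionary and input string are invented.

```python
# Sequential sketch of longest-fragment-first (LFF) static dictionary
# compression: repeatedly pick the longest dictionary entry that matches an
# uncovered stretch of the input, then fill leftovers with literals.
def lff_parse(text, dictionary):
    covered = [False] * len(text)
    tokens = {}  # start position -> matched dictionary word
    for word in sorted(dictionary, key=len, reverse=True):  # longest first
        start = text.find(word)
        while start != -1:
            if not any(covered[start:start + len(word)]):   # fully uncovered
                for i in range(start, start + len(word)):
                    covered[i] = True
                tokens[start] = word
            start = text.find(word, start + 1)
    # Emit dictionary tokens and literal characters in input order.
    out, i = [], 0
    while i < len(text):
        if i in tokens:
            out.append(("DICT", tokens[i])); i += len(tokens[i])
        else:
            out.append(("LIT", text[i])); i += 1
    return out

print(lff_parse("abracadabra", ["abra", "cad", "ra"]))
```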
Estimation of current source density (CSD) from the low-frequency part of extracellular electric potential recordings is an unstable linear inverse problem. To make the estimation possible in an experimental setting where recordings are contaminated with noise, it is necessary to stabilize the inversion. Here we present a unified framework for zero- and higher-order singular-value-decomposition (SVD)-based spectral regularization of 1D linear CSD estimation from local field potentials. The framework is based on two general approaches commonly employed for solving inverse problems: quadrature and basis function expansion. We first show that both inverse CSD (iCSD) and kernel CSD (kCSD) fall into the category of basis function expansion methods. We then use these general categories to introduce two new estimation methods, quadrature CSD (qCSD), based on discretizing the CSD integral equation with a chosen quadrature rule, and representer CSD (rCSD), an even-determined basis function expansion method that uses the problem's data kernels (representers) as basis functions. To determine the best candidate methods to use in the analysis of experimental data, we compared the different methods on simulations under three regularization schemes (Tikhonov, tSVD, and dSVD), three regularization parameter selection methods (NCP, L-curve, and GCV), and seven different a priori spatial smoothness constraints on the CSD distribution. This resulted in a comparison of 531 estimation schemes. We evaluated the estimation schemes according to their source reconstruction accuracy by testing them using different simulated noise levels, lateral source diameters, and CSD depth profiles. We found that ranking schemes according to the average error over all tested conditions results in a reproducible ranking, where the top schemes are found to perform well in the majority of tested conditions. However, there is no single best estimation scheme that outperforms all others under all tested conditions. The unified framework we propose expands the set of available estimation methods, provides increased flexibility for 1D CSD estimation in noisy experimental conditions, and allows for a meaningful comparison between estimation schemes.
['Pascal Kropf', 'Amir Shmuel']
1D Current Source Density (CSD) Estimation in Inverse Theory: A Unified Framework for Higher-Order Spectral Regularization of Quadrature and Expansion-Type CSD Methods
729,975
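The spectral-regularization idea above reduces to choosing filter factors for the singular components of the forward operator. Below is a minimal numpy sketch, under invented matrix sizes and noise levels, showing how Tikhonov and truncated-SVD (tSVD) solutions differ only in those factors; it is not the paper's CSD forward model.

```python
# SVD-based spectral regularization for a linear inverse problem
# b = A x + noise: filter the spectral coefficients u_i^T b / s_i.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30)) @ np.diag(1.0 / np.arange(1, 31))  # ill-conditioned
x_true = np.sin(np.linspace(0, np.pi, 30))
b = A @ x_true + 1e-3 * rng.standard_normal(40)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
coeffs = U.T @ b / s            # unregularized spectral coefficients

lam = 1e-2                      # Tikhonov parameter (chosen by GCV/L-curve/NCP
f_tikhonov = s**2 / (s**2 + lam**2)  # in practice; fixed here for brevity)
k = 15                          # tSVD truncation level
f_tsvd = (np.arange(len(s)) < k).astype(float)

x_tik = Vt.T @ (f_tikhonov * coeffs)
x_tsvd = Vt.T @ (f_tsvd * coeffs)
print(np.linalg.norm(x_tik - x_true), np.linalg.norm(x_tsvd - x_true))
```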
Orthogonal frequency division multiplexing (OFDM) and multiple access (OFDMA) signal receivers often employ pilot-aided channel estimation. A frequently considered technique for this purpose is frequency-domain polynomial interpolation, owing to its simplicity. However, the performance of polynomial interpolators suffers in channels with large delay spreads due to modeling error. The problem can be remedied by including a linear phase in the interpolator. In this paper, we derive a method to estimate the optimal phase shift that minimizes the mean-square channel estimation error. We further consider adaptive selection of the interpolation order for best performance. As a practical application, we adapt the proposed channel estimation technique to mobile WiMAX downlink transmission and examine the resulting performance.
['Kun-Chien Hung', 'David W. Lin']
Pilot-Aided Multicarrier Wireless Channel Estimation via MMSE Polynomial Interpolation
22,042
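The role of the linear phase in the interpolator above can be seen in a small simulation. The sketch below assumes a known phase-shift parameter tau rather than the MMSE-optimal estimate the paper derives; the channel taps, pilot spacing, and polynomial order are all illustrative.

```python
# Pilot-aided channel estimation by frequency-domain polynomial
# interpolation with a linear phase: a delay causes a steep phase ramp
# across subcarriers that a low-order polynomial fits poorly, so the ramp
# is removed before fitting and restored after.
import numpy as np

rng = np.random.default_rng(0)
N = 64
pilots = np.arange(0, N, 8)                       # pilot subcarrier indices
taps = np.zeros(6); taps[[0, 3, 5]] = [0.8, 0.5, 0.3]  # multipath taps
H = np.fft.fft(taps, N)                           # true channel response
H_p = H[pilots] + 0.01 * (rng.standard_normal(len(pilots))
                          + 1j * rng.standard_normal(len(pilots)))

tau = 2.0          # assumed phase parameter (effective delay in samples)
flat = H_p * np.exp(2j * np.pi * pilots * tau / N)   # de-rotate the ramp
k, order = np.arange(N), 3
H_hat = (np.polyval(np.polyfit(pilots, flat.real, order), k)
         + 1j * np.polyval(np.polyfit(pilots, flat.imag, order), k)) \
        * np.exp(-2j * np.pi * k * tau / N)          # restore the ramp
print("interpolation MSE:", np.mean(np.abs(H_hat - H) ** 2))
```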
Data acquisition and delivery in Vehicular Ad hoc Networks (VANETs) is an important topic that has received very little attention. In [1], we proposed a system in which roadside units (RSUs) were exploited to satisfy the various requests of VANET users. Our approach uses RSUs as delegates to acquire services from service providers without the users connecting to them. Users' interests include email messages, news, web downloads, business transactions, multimedia sharing, and traffic or weather information. Depending on RSUs to obtain users' data puts a huge load on the RSU network and might lead to a scalability problem, especially with the large number of users in VANETs. In this paper, we build on the approach in [1] and propose an RSU scheduling mechanism in which an RSU builds a schedule that is divided into time slots (TSs). In each TS, all users that are expected to connect to the VANET are specified. Hence, an RSU prepares users' data and caches them during a free TS before the users connect. The users' connection times are derived from observing their actual connections. Our approach was tested using ns2 to assess its efficiency.
['Khaleel W. Mershad', 'Hassan Artail']
SCORE: Data Scheduling at roadside units in vehicle ad hoc networks
290,749
In this paper, we present a novel framework to customize multimedia messages for mobile users. The goal is to generate a video message from a series of pictures. The framework includes visual attention view detection, image grouping, image ranking, and slideshow generation. Considering the limitations of mobile devices, we use a simple color-feature-based attention model to detect interesting regions of the images. We group the images and rank them based on the attention view similarities. Finally, a human-perception-based slideshow is designed to keep the mobile user's eyes on the attention regions efficiently. In addition, a short piece of music is selected to match the video message. Extensive experiments and user studies show the promising performance of the proposed system.
['Cunxun Zang', 'Qingshan Liu', 'Hanqing Lu', 'Kongqiao Wang']
A New Multimedia Message Customizing Framework for mobile Devices
25,042
Several algorithms exist for optimally solving 2-way number partitioning. When the cardinality of the multiset to partition is high enough, solving algorithms have to rely on search techniques with low memory complexity. Currently, CKK is the reference among such algorithms. Here we propose a contribution to speed up CKK. We detail how to treat nodes with 4, 5, 6 or more numbers left as terminal, whereas the original CKK considers a node terminal when it contains 4 or fewer numbers. Using this idea, we propose new CKK implementations, which provide savings of up to 70% of execution time with respect to the original CKK algorithm. We provide experimental evidence of the benefits of this approach on random number partitioning instances with numbers of 35 bits.
['Jesús Cerquides', 'Pedro Meseguer']
Speeding up 2-way number partitioning
747,693
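For reference, the baseline CKK search that the paper speeds up can be written compactly. The sketch below is a plain recursive Complete Karmarkar-Karp for 2-way partitioning; it uses only the simple dominance base case and omits the extended terminal-node tests (4, 5, 6 or more numbers) that constitute the paper's contribution.

```python
# Complete Karmarkar-Karp (CKK) for 2-way number partitioning: at each node
# the two largest numbers are either placed in different subsets (replace
# them by their difference) or in the same subset (replace by their sum).
import bisect

def ckk(nums):
    """Minimal |sum(A) - sum(B)| over all 2-way partitions of nums."""
    best = [sum(nums)]

    def search(xs):                      # xs is kept sorted ascending
        rest = sum(xs[:-1])
        if xs[-1] >= rest:               # largest dominates: difference fixed
            best[0] = min(best[0], xs[-1] - rest)
            return
        a, b = xs[-1], xs[-2]            # two largest numbers
        for repl in (a - b, a + b):      # differencing branch first (KK order)
            ys = xs[:-2]
            bisect.insort(ys, repl)
            search(ys)
            if best[0] == 0:             # perfect partition found: prune all
                return

    search(sorted(nums))
    return best[0]

print(ckk([8, 7, 6, 5, 4]))   # -> 0, e.g. {8, 7} vs {6, 5, 4}
```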
Fault-tolerance and its associated overheads are of great concern for current and future extreme-scale systems. The dominant mechanism used today, coordinated checkpoint/restart, places great demands on the I/O system, and the method requires frequent synchronization. Uncoordinated checkpointing with message logging addresses many of these limitations at the cost of increasing the storage needed to hold message logs. These storage requirements are critical to the scalability of extreme-scale systems. In this paper, we investigate the viability of using standard compression algorithms to reduce message log sizes for a number of key high-performance computing workloads. Using these workloads we show that, while not a universal solution for all applications, compression has the potential to significantly reduce message log sizes for a great number of important workloads.
['Kurt B. Ferreira', 'Rolf Riesen', 'Dorian C. Arnold', 'Dewan Ibtesham', 'Ron Brightwell']
The viability of using compression to decrease message log sizes
342,977
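The measurement at the heart of the study above is easy to reproduce in miniature: compress a message log with a standard algorithm and report the ratio. The sketch below uses zlib on a synthetic log of packed records; real MPI message logs, as used in the paper, would of course behave differently.

```python
# Compress a captured message log with a standard algorithm (zlib here)
# and report the achieved compression ratio. The synthetic "log" mimics
# repetitive numeric payloads.
import struct
import zlib

# Synthetic message log: many small records of (rank, tag, payload).
records = [struct.pack("<iid", rank, 42, rank * 0.001)
           for rank in range(10000)]
log = b"".join(records)

compressed = zlib.compress(log, level=6)
print(f"raw: {len(log)} B, compressed: {len(compressed)} B, "
      f"ratio: {len(log) / len(compressed):.2f}x")
```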
A New Covariance-assignment State Estimator in the Presence of Intermittent Observation Losses
['Sangho Ko', 'Seungeun Kang', 'Jihyoung Cha']
A New Covariance-assignment State Estimator in the Presence of Intermittent Observation Losses
747,538
The focus of this paper is on the development of a human-inspired autonomous control scheme for a planar bipedal robot in a hybrid dynamical framework to realize human-like walking projected onto the sagittal plane. In addition, a unified modelling scheme is presented for the biped dynamics incorporating the effects of various locomotion constraints due to varying feet-ground contact states, unilateral ground contact force, contact friction cone, passive dynamics associated with the floating base, etc., along with a practical impact velocity map on the heel strike event. The autonomous control synthesis is formulated as a two-level hierarchical control algorithm with a hybrid-state based supervisory control in the outer level and an integrated set of constrained motion control primitives, called task level control, in the inner level. The supervisory level control is designed based on a human-inspired heuristic approach, whereas the task level control is formulated as a quadratic optimization problem with linear constraints. The explicit analytic solution obtained in terms of joint acceleration and ground contact force is used in turn to generate the joint torque command based on the inverse dynamics model of the biped. The proposed controller framework is named Hybrid-state Driven Autonomous Control (HyDAC). Unlike many other bipedal control schemes, HyDAC does not require a preplanned trajectory or orbit in terms of joint variables for locomotion control. Moreover, it is built upon a set of basic motion control primitives similar to those in human walk, which provides a transparent and easily adaptable structure for the controller. These features make the HyDAC framework suitable for bipedal walk on terrain with step and slope discontinuities without a priori gait optimization. The stability and agility of the proposed control scheme are demonstrated through dynamic model simulation of a 12-link planar biped having size and mass properties similar to those of an adult-sized human being restricted to the sagittal plane. Simulation results show that the planar biped is able to walk for a speed range of 0.1-2 m/s on level terrain and for a ground slope range of +/- 20 deg at 1 m/s speed. Highlights: formulation of a realistic foot-ground impact model; explicit analytic solution for the control law without a priori walking gait optimization; transparent control design strategy based on dynamically coordinated motion control primitives; novel velocity control algorithm by selective activation of ground contact points; walking performance demonstrated for a wide range of velocities and ground slopes.
['Sam K. Zachariah', 'Thomas Kurian']
Hybrid-state driven autonomous control for planar bipedal locomotion
821,072
Technology, in order to be human, needs to be informed by a reflection on what it is to be a tool in ways appropriate to humans. This involves both an instrumental, appropriating aspect (‘I use this tool’) and a limiting, appropriated one (‘The tool uses me’).
['Jacob L. Mey']
Cognitive Technology—technological cognition
432,343
Modern safety-critical systems, such as avionics, tend to be mixed-critical, because integration of different tasks with different assurance requirements can effectively reduce their costs in terms of hardware, at the risk, however, of increasing the costs for certification, in particular in the context of proving their schedulability. To simplify certification, such systems use the Time-Triggered (TT) scheduling paradigm, and a generalization of it, Single Time Table per Mode (STTM), is a promising scheduling approach compared to priority-based algorithms. In the present paper we present a state-of-the-art STTM algorithm which works optimally on a single core and shows good preliminary results for multi-cores.
['Dario Socci', 'Peter Poplavko', 'Saddek Bensalem', 'Marius Bozga']
Time-Triggered Mixed-Critical Scheduler on Single and Multi-processor Platforms
547,662
Most switch architectures for parallel systems are designed to eliminate only the worst kinds of unfairness, such as starvation scenarios in which packets belonging to one traffic flow may not make forward progress for an indefinite period of time. However, stricter fairness can lead to more predictable and better performance, in addition to improving isolation between traffic belonging to different users. This paper presents a new, easily implementable scheduling discipline, called Elastic Round Robin (ERR), for the unique requirements of wormhole switching, popular in interconnection networks for parallel systems. Despite the constraints wormhole switching imposes on the design, our scheduling discipline is at least as efficient as other scheduling disciplines, and fairer than scheduling disciplines of comparable efficiency proposed for any other kind of network, including the Internet. We prove that the work complexity of ERR is O(1) with respect to the number of flows. We analytically prove the fairness properties of ERR, and show that its relative fairness measure has an upper bound of 3m, where m is the size of the largest packet that actually arrives during an execution of ERR. Finally, we present simulation results comparing the fairness and performance characteristics of ERR with other scheduling disciplines of comparable efficiency.
['Salil S. Kanhere', 'Alpa B. Parekh', 'Harish Sethu']
Fair and efficient packet scheduling in wormhole networks
418,663
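A toy rendition of the ERR mechanism may help: because wormhole switching cannot preempt a packet in flight, a flow that overshoots its allowance records a surplus, and its next-round allowance shrinks by that surplus relative to one plus the largest surplus of the previous round. The flows and packet sizes below are invented, and the sketch omits arrivals and the active-flow list of the full discipline.

```python
# Simplified sketch of Elastic Round Robin (ERR) scheduling:
# allowance_i = (1 + maxSC) - SC_i, where SC_i is flow i's surplus from the
# previous round and maxSC is the largest surplus in that round.
from collections import deque

flows = {                       # flow id -> queue of packet sizes (flits)
    "A": deque([5, 1, 9, 2]),
    "B": deque([3, 3, 3, 3]),
    "C": deque([10, 1]),
}
surplus = {f: 0 for f in flows}
max_sc = 0

for rnd in range(1, 4):
    next_max_sc = 0
    for f, q in flows.items():
        allowance = (1 + max_sc) - surplus[f]
        sent = 0
        while q and sent < allowance:   # must finish each packet once begun
            sent += q.popleft()
        surplus[f] = max(0, sent - allowance)
        next_max_sc = max(next_max_sc, surplus[f])
        print(f"round {rnd} flow {f}: sent {sent} flits, surplus {surplus[f]}")
    max_sc = next_max_sc
```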
A transfer function is a mathematical function relating the output or response of a system to the input or stimulus. It is a concise mathematical model representing the input/output behavior of a system and is widely used in many areas of engineering including system theory and signal analysis. Binary Decision Diagrams (BDDs) are a canonical representation of Boolean functions. We implement a framework to build transfer function models of digital switching functions using BDDs and demonstrate their application on simulation and implication.
['David Kebo Houngninou', 'Mitchell A. Thornton']
Implementation of switching circuit models as transfer functions
883,863
Resilient systems are expected to continuously provide trustworthy services despite changes in the environment or in the requirements they must comply with. In this paper, we focus on a methodology to provide adaptation mechanisms meant to ensure dependability while coping with various modifications of applications and system context. To this aim, we propose a representation of dependability-related attributes that may evolve during the system's lifecycle, and show why this representation is useful to provide adaptation of dependability mechanisms at runtime.
['Miruna Stoicescu', 'Jean-Charles Fabre', 'Matthieu Roy']
Architecting resilient computing systems: overall approach and open issues
545,318
Secure Cluster Based Routing Scheme (SCBRS) for Wireless Sensor Networks
['Sohini Roy']
Secure Cluster Based Routing Scheme (SCBRS) for Wireless Sensor Networks
632,017
Efficient computer simulation of complex physical phenomena has long been challenging due to their multi-physics and multi-scale nature. In contrast to traditional time-stepped execution methods, we describe an approach using optimistic parallel discrete event simulation (PDES) and reverse computation techniques. We show that reverse computation-based optimistic parallel execution can significantly reduce the execution time of a plasma simulation without requiring a significant amount of additional memory compared to conservative execution techniques. We describe an application-level reverse computation technique that is efficient and suitable for complex scientific simulations involving floating point operations.
['Yarong Tang', 'Kalyan S. Perumalla', 'Richard M. Fujimoto', 'Homa Karimabadi', 'Jonathan Driscoll', 'Yuri A. Omelchenko']
Optimistic Parallel Discrete Event Simulations of Physical Systems Using Reverse Computation
29,096
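The reverse-computation idea above can be illustrated with a trivial event type: rather than checkpointing state, each forward handler has an inverse that undoes it, so rollback replays events backwards. The "physics" below is a deliberate stand-in, not the plasma model from the paper.

```python
# Reverse computation for optimistic PDES in miniature: forward events are
# constructive updates, and rollback applies the algebraic inverse instead
# of restoring a state snapshot.
events = []

def deposit_energy(cell, amount, state):
    state[cell] += amount          # forward event: constructive update
    events.append((cell, amount))  # remember only what reversal needs

def rollback(state, n):
    for _ in range(n):             # undo the last n events exactly
        cell, amount = events.pop()
        state[cell] -= amount      # reverse handler: no checkpoint needed

state = [0.0, 0.0, 0.0]
deposit_energy(1, 2.5, state)
deposit_energy(2, 1.0, state)
print(state)      # [0.0, 2.5, 1.0]
rollback(state, 2)
print(state)      # restored to [0.0, 0.0, 0.0]
```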
Presents an efficient and accurate high level software energy estimation methodology using the concept of characterization-based macromodeling. In characterization-based macromodeling, a function or subroutine is characterized using an accurate lower level energy model of the target processor to construct a macromodel that relates the energy consumed in the function under consideration to various parameters that can be easily observed or calculated from a high-level programming language description. The constructed macromodels eliminate the need for significantly slower instruction-level interpretation or hardware simulation that is required in conventional approaches to software energy estimation. Two different approaches to macromodeling for embedded software offer distinct efficiency-accuracy characteristics: 1) complexity-based macromodeling, where the variables that determine the algorithmic complexity of the function under consideration are used as macromodeling parameters and 2) profiling-based macromodeling, where internal profiling statistics for the functions are used as the parameters in the energy macromodels.
['Tat Kee Tan', 'Anand Raghunathan', 'Ganesh Lakshminarayana', 'Niraj K. Jha']
High-level energy macromodeling of embedded software
222,250
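A minimal version of characterization-based macromodeling is an ordinary least-squares fit from observable parameters to measured energy. The sketch below uses invented complexity/profiling variables and synthetic energy numbers; the paper's macromodels are instead characterized against an accurate low-level energy model of the target processor.

```python
# Fit a linear macromodel relating a function's energy to parameters that
# are cheap to observe at a high level, then use it for fast estimation
# in place of instruction-level simulation.
import numpy as np

# Characterization data: rows = runs of the function under a low-level
# energy model; columns = [n_elements, n_loop_iterations, n_mem_accesses].
X = np.array([[64, 64, 130], [128, 128, 260],
              [256, 256, 530], [512, 512, 1100]], float)
energy = np.array([10.2, 20.1, 41.0, 85.5])      # measured energy (uJ)

# Least-squares macromodel: energy ~= X @ w + c
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, energy, rcond=None)

def estimate_energy(params):
    """Fast high-level energy estimate from macromodel parameters."""
    return np.append(np.asarray(params, float), 1.0) @ w

print(estimate_energy([300, 300, 620]))  # estimate without re-simulation
```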
Human users can obtain information about the physical properties of an object through direct manipulation with one or two hands. Manipulation of virtual objects using force feedback haptic interfaces is very challenging due to current technological constraints that often affect the information obtained by the user. Here, we describe the Master Finger 2 (MF2), a force feedback device which allows manipulation of one or more objects with one or two hands. We use experimental data to evaluate the performance of the MF2 based on its capability to simulate effectively the weight of virtual objects. The results and implications for system design are discussed.
['Christos Giachritsis', 'Pablo Garcia-Robledo', 'Jorge Barrio', 'Alan M. Wing', 'Manuel Ferre']
Unimanual, bimanual and bilateral weight perception of virtual objects in the Master Finger 2 environment
78,348
Automatic mixing of bio-samples using a micro-channel and centrifugation is considered. Existing methods for mixing bio-samples for life science applications use micro-wells and micro-stirrers, which are ineffective when used with highly viscous materials at the microliter or nanoliter level. Our method mixes viscous bio-samples in micro-capsules using a micro-channel and centrifugation, which minimizes contact with mixing tools. To introduce the method, firstly the design of the micro-capsule along with the micro-channel is presented. Secondly, a hydrodynamic model describing the flow of viscous materials in the micro-channel is presented, and the average sample velocity and the traveling time through the micro-channel are analyzed. Thirdly, the relationship between centrifugation speed and time is given to achieve effective and efficient control of the flow. Finally, experimental and theoretical results are compared.
['Liang Yuan', 'Yuan F. Zheng', 'Weidong Chen', 'Martin Caffrey']
Automatic Mixing of Bio-Samples Using Micro-Channel and Centrifugation
349,931
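As a hedged worked example of the kind of hydrodynamic estimate described above, one can combine a centrifugal pressure head with the Hagen-Poiseuille relation to get an average velocity and transit time; this is a simple stand-in for the paper's model, and every number below is illustrative.

```python
# Average velocity of a viscous sample driven through a channel by
# centrifugal pressure, via the Hagen-Poiseuille relation. All parameter
# values are invented for illustration.
import math

rho = 1100.0                       # sample density (kg/m^3)
omega = 2 * math.pi * 3000 / 60    # spin speed: 3000 rpm -> rad/s
r, dr = 0.05, 0.01                 # mean radius and radial extent (m)
mu = 0.5                           # dynamic viscosity (Pa*s), very viscous
d, L = 200e-6, 0.01                # channel diameter and length (m)

dP = rho * omega**2 * r * dr       # centrifugal pressure head (Pa)
v_avg = dP * d**2 / (32 * mu * L)  # Hagen-Poiseuille mean velocity (m/s)
print(f"dP = {dP:.0f} Pa, v_avg = {v_avg * 1000:.2f} mm/s, "
      f"transit = {L / v_avg:.1f} s")
```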
Two medieval manuscripts are recorded, investigated and analyzed by philologists in collaboration with computer scientists. Due to mold, air humidity, and water, the parchment is partially damaged and consequently hard to read. In order to enhance the readability of the text, the manuscript pages are imaged in different spectral bands ranging from 360 to 1000 nm. A registration process is necessary for further image processing methods which combine the information gained from the different spectral bands. Therefore, the images are coarsely aligned using rotationally invariant features and an affine transformation. Afterwards, the similarity of the different images is computed by means of the normalized cross correlation. Finally, the images are accurately mapped to each other by the local weighted mean transformation. The algorithms used for the registration and the results of enhancing the texts using Multivariate Spatial Correlation are presented in this paper.
['Martin Lettner', 'Markus Diem', 'Robert Sablatnig', 'Heinz Miklas']
Registration and enhancing of multispectral manuscript images
20,872
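The similarity measure named above, normalized cross correlation, is simple enough to spell out. The sketch below computes NCC between two synthetic "spectral bands"; real multispectral manuscript images would replace the random arrays.

```python
# Normalized cross correlation (NCC) between two spectral-band images
# after coarse alignment: 1.0 for identical bands, near 0 for unrelated ones.
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(1)
band_uv = rng.random((128, 128))
band_ir = 0.7 * band_uv + 0.3 * rng.random((128, 128))  # correlated band
print(ncc(band_uv, band_ir))   # close to 1 for well-registered bands
```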
Linear speedup of two-dimensional neighborhood functions requires that the underlying processor perform iterative operations and solve the window border problem simultaneously. The authors illustrate a solution to the window border problem using an array processor that provides conflict-free access and alignment of two types of square block vectors of two-dimensional arrays. They give a parallel algorithm embodying this solution, which can speed up a class of neighborhood functions by a factor directly proportional to the number of processing elements in the array processor.
['De-Lei Lee', 'Wayne A. Davis']
On linear speedup of a class of neighborhood functions in an array processor
406,279
Using an age of information (AoI) metric, we examine the transmission of coded updates through a binary erasure channel to a monitor/receiver. We start by deriving the average status update age of an infinite incremental redundancy (IIR) system in which the transmission of a $k$-symbol update continues until $k$ symbols are received. This system is then compared to a fixed redundancy (FR) system in which each update is transmitted as an $n$-symbol packet and the packet is successfully received if and only if at least $k$ symbols are received. If fewer than $k$ symbols are received, the update is discarded. Unlike the IIR system, the FR system requires no feedback from the receiver. For a single monitor system, we show that tuning the redundancy to the symbol erasure rate enables the FR system to perform as well as the IIR system. As the number of monitors is increased, the FR system outperforms the IIR system that guarantees delivery of all updates to all monitors.
['Roy D. Yates', 'Elie Najm', 'Emina Soljanin', 'Jing Zhong']
Timely Updates over an Erasure Channel
996,803
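The FR system described above is straightforward to simulate. The Monte Carlo sketch below tracks the receiver's age over unit-time symbol slots for an (n, k) fixed-redundancy code on a symbol-erasure channel; the parameters are illustrative, and the IIR baseline and multi-monitor case are omitted.

```python
# Monte Carlo average age of information for the fixed-redundancy (FR)
# scheme: each update is n coded symbols, decoded iff >= k symbols survive
# erasure with probability eps; failed updates are discarded.
import random

def average_age_fr(k=10, n=14, eps=0.2, updates=50_000, seed=0):
    rng = random.Random(seed)
    total_area, elapsed, age = 0.0, 0, 0.0
    for _ in range(updates):
        survived = sum(rng.random() > eps for _ in range(n))
        total_area += n * age + n * (n + 1) / 2   # age grows 1 per slot
        elapsed += n
        age += n
        if survived >= k:     # decodable: freshest update is n slots old
            age = n
    return total_area / elapsed

for n in (10, 12, 14, 18):    # tune the redundancy n for k = 10, eps = 0.2
    print(n, round(average_age_fr(n=n), 2))
```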
We present a fine-grained parallel processing scheme for speeding up an industrial VLSI synthesis tool on a network of workstations without sacrificing the quality of results. The synthesis tool is Ambit BuildGates, a high-capacity ASIC logic synthesis software from Cadence Design Systems. We examine some necessary operating conditions for a practical parallel implementation of such software, and propose a parallel approach which accommodates the highly irregular computation requirements in synthesis and the high-latency, low-bandwidth conditions of the target environment. For pragmatic as well as performance concerns, we designed a parallel algorithm which produces results (synthesized logic) that are identical to those of the original uniprocessor algorithm. We employ heuristic load assessment and adaptive cyclic distribution in order to actively balance the unpredictable load throughout execution, which enables a considerable reduction in runtime (e.g., from 51.3 hours down to 23.4 hours) on actual customer design benchmarks.
['Victor Kim', 'Prithviraj Banerjee', 'K. De']
Fine-grained parallel VLSI synthesis for commercial CAD on a network of workstations
340,712
Correction to "Hop-Timing Estimation for FH Signals Using a Coarsely Channelized Receiver"
['Levent Aydin', 'Andreas Polydoros']
Correction to "Hop-Timing Estimation for FH Signals Using a Coarsely Channelized Receiver"
173,927
For mobile video codecs, the huge energy dissipation from external memory traffic is a critical challenge under the battery power constraint. Lossy embedded compression (EC), as a solution to this challenge, is considered in this paper. While previous studies of EC mostly focused on compression algorithms at the block level, this work, to the best of our knowledge, is the first that addresses the allocation of video quality and memory traffic at the frame level. For lossy EC, a main difficulty of its application lies in the error propagation from quality degradation of reference frames. Intuitively, it is preferable to perform more lossy EC in non-reference frames to minimize the quality loss. The analysis and experiments in this paper, however, show that lossy EC should actually be distributed over more frames. Correspondingly, for hierarchical-B GOPs, we developed an efficient allocation that outperforms the non-reference-only allocation by up to 4.5 dB in PSNR. In comparison, the proposed allocation also delivers more consistent quality between frames by having lower PSNR fluctuation.
['Li Guo', 'Dajiang Zhou', 'Shinji Kimura', 'Satoshi Goto']
Frame-level quality and memory traffic allocation for lossy embedded compression in video codec systems
727,572
Whispers of the still city
['Parjad Sharifi']
Whispers of the still city
775,729
SEAM4US: Intelligent Energy Management for Public Underground Spaces through Cyber-Physical Systems.
['Jonathan Simon', 'Marc Jentsch', 'Markus Eisenhauer']
SEAM4US: Intelligent Energy Management for Public Underground Spaces through Cyber-Physical Systems.
796,171
The development of an approach for obtaining statistical inferences about nonobservable processes that influence a process $y(\cdot)$, which can be observed directly and which is assumed to be a mixture of continuous and discontinuous components, is continued. The approach is based on probability-measure transformations and consists of finding the conditional probability of a nonobservable event in terms of the prior probability of that event and a functional of the observations $y(\cdot)$. The topics studied include optimal filtering, smoothing, and prediction estimates of the nonobservable process; $M$-ary hypothesis testing; performance lower bounds; and stochastic control.
['Marco V. Vaca', 'Donald L. Snyder']
Estimation and decision for observations derived from martingales: Part II
332,549
The maximum utilization of Multi-Channel Multi-Radio Wireless Mesh Networks (WMNs) can be achieved only by intelligent Channel Assignment (CA) and Link Scheduling (LS). A common CA and LS may not be optimal, in terms of utilization of the underlying network resources, for every traffic demand in the network. Using the best CA and LS for every traffic demand results in channel reassignments, which in turn lead to traffic disruption in the network. This makes WMNs very unreliable. In this paper, we present a simple, general, and efficient framework to quantitatively evaluate a reconfiguration policy based on the two conflicting objectives, namely maximizing network utilization and minimizing traffic disruption. Then we propose a reconfiguration algorithm called the Clustered Channel Assignment Scheme (CCAS), based on clustering of similar traffic matrices. We demonstrate through extensive simulation studies the effectiveness of CCAS, which mainly depends on the correlation between successive traffic matrices.
['Arun A. Kanagasabapathy', 'Antony Franklin', 'C.S.R. Murthy']
An Adaptive Channel Reconfiguration Algorithm for Multi-Channel Multi-Radio Wireless Mesh Networks
232,848
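The clustering step behind CCAS can be sketched with plain k-means on flattened traffic matrices, so that one channel assignment serves each cluster. The data below is synthetic (two alternating daily patterns), and the bare-bones k-means here is a stand-in for whatever clustering the paper actually employs.

```python
# Group similar traffic matrices so one channel assignment can serve each
# cluster, limiting reconfigurations. Plain k-means on flattened matrices.
import numpy as np

rng = np.random.default_rng(2)
# 60 traffic matrices for a 5-node mesh, drawn around two daily patterns.
day, night = rng.random((5, 5)), rng.random((5, 5)) * 0.2
tms = np.array([(day if i % 2 else night) + 0.05 * rng.random((5, 5))
                for i in range(60)]).reshape(60, -1)

def kmeans(X, k=2, iters=20):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(tms)
print(np.bincount(labels))   # one channel assignment per cluster
```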
The design and analysis of control strategies for high-capacity, reconfigurable optical transmission systems require an understanding of optical system dynamics involving the time-dependent interaction of many components. This paper describes system simulation software that couples continuous physical-layer models of optical transmission components with discrete models for events such as channel add/drops. The simulator computes detailed time traces of signal and noise power propagation along a line system consisting of multiple controlled transmission elements and monitoring devices in response to a particular discrete event. Examples are given illustrating the rich variety of experimentation modes the software supports, including the evaluation of control algorithms, systematic exploration of design parameters, and investigation of cost reduction plans. Details of the development effort are presented, illustrating the contributions of the optical physicists, applied mathematicians, system engineers, and computer scientists who were involved in this collaborative project.
['Tin Kam Ho', 'Todd Salamon', 'Roland W. Freund', 'Christopher A. White', 'Bruce K. Hillyer', 'Lawrence C. Cowsar', 'Carl J. Nuzman', 'Daniel C. Kilper']
Simulation of power evolution and control dynamics in optical transport systems
43,477
We present an approach to text summarization that is entirely rooted in the formal description of a classification-based model of terminological knowledge representation and reasoning. Text summarization is considered an operator-based transformation process by which knowledge representation structures, as generated by the text understander, are mapped to condensed representation structures forming a text summary at the representational level. The framework we propose offers a variety of parameters on which scalable text summarization can be based.
['Udo Hahn', 'Ulrich Reimer']
Text Summarization Based on Terminological Logics
272,426
In this paper, the problem of semi-global bipartite consensus is examined for a group of homogeneous generic linear agents subject to input saturation under directed interaction topology. Distributed feedback controllers with a parameter adjusted by use of the low gain feedback technique are proposed to reach the semi-global bipartite consensus of multi-agent systems with input saturation, when each agent is asymptotically null controllable with bounded controls and the interaction network, described by a signed digraph, is structurally balanced and has a spanning tree. Numerical simulations are given to illustrate the effectiveness of the proposed distributed control scheme.
['Weiming Fu', 'Jiahu Qin', 'Wei Xing Zheng', 'Huijun Gao', 'Guodong Shi']
Semi-global bipartite consensus for linear multi-agent systems subject to actuator saturation
975,541
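A toy simulation shows the flavor of bipartite consensus on a structurally balanced signed graph. The sketch below simplifies the setting above to single-integrator agents with saturated, low-gain inputs (the paper treats generic linear agents); the signed weights and the gain eps are invented.

```python
# Bipartite consensus on a structurally balanced signed graph: agents in
# the same camp agree, opposite camps converge to values of opposite sign.
import numpy as np

A = np.array([[ 0,  1, -1,  0],    # signed adjacency: camps {1,2} vs {3,4}
              [ 1,  0,  0, -1],
              [-1,  0,  0,  1],
              [ 0, -1,  1,  0]], float)

def step(x, eps=0.05, u_max=1.0):
    u = np.zeros_like(x)
    for i in range(len(x)):
        u[i] = sum(abs(A[i, j]) * (np.sign(A[i, j]) * x[j] - x[i])
                   for j in range(len(x)) if A[i, j] != 0)
    return x + np.clip(eps * u, -u_max, u_max)   # saturated, low-gain input

x = np.array([4.0, -1.0, 2.0, 3.0])
for _ in range(400):
    x = step(x)
print(np.round(x, 3))   # camps {1,2} and {3,4} agree on opposite values
```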
We propose a new model for natural image statistics. Instead of minimizing dependency between components of natural images, we maximize a simple form of dependency in the form of tree-dependencies. By learning filters and tree structures which are best suited for natural images we observe that the resulting filters are edge filters, similar to the famous ICA on natural images results. Calculating the likelihood of an image patch using our model requires estimating the squared output of pairs of filters connected in the tree. We observe that after learning, these pairs of filters are predominantly of similar orientations but different phases, so their joint energy resembles models of complex cells.
['Daniel Zoran', 'Yair Weiss']
The 'tree-dependent components' of natural scenes are edge filters
334,090
A 3.5-5.3 GHz, low phase noise CMOS VCO with switched tuning for multi-standard radios is presented in this paper. Low phase noise and small amplitude variation across the operating frequency range are shown to be important design aspects of a wide-band VCO. An analytic expression for the output amplitude of the VCO is derived as a function of the switched-capacitor resonator Q. The linear time-variant model was used for prediction of the phase noise and for deciding a proper tank current to achieve the minimum phase noise and amplitude variation across the frequency range. The results are verified in a fully integrated 0.18 µm VCO with measured phase noise levels of less than -115 dBc/Hz at 1 MHz offset from the carrier while dissipating 6 mW of power.
['Ali Fard']
Phase noise and amplitude issues of a wide-band VCO utilizing a switched tuning resonator
123,940
For a variety of reasons, it is becoming an increasingly common requirement for ships to shut down ship generators and connect to high voltage shore power for as long as practicable while in port. Ship electrical equipment shall only be connected to shore supplies that are able to maintain harbor distribution system voltage quality. Voltage drop, which is produced by the possible loading conditions of a ship when connected to a shore supply, may result in unsatisfactory operation of, and damage to, electrical and electronic equipment on board and thus, must be considered in the harbor distribution system design in order to comply with the requirements of international standards. A simple and fast method able to estimate the expected magnitude of voltage drops and provide information about the effectiveness of the various mitigation methods is essential. This paper presents the methodology and results for evaluation of the voltage drops of a practical harbor electrical distribution system with high voltage shore connection (HVSC). The potential voltage drops of the implemented HVSC are quantified using standard analytical formulas. Sensitivity analysis is performed to determine the critical parameters that significantly affect voltage drops. The parameters that have great improvement benefits should have high priority for implementation in other distribution systems with HVSC. The results obtained in this paper can provide engineers with useful information regarding the actual magnitude of voltage drops, as well as on the effectiveness of mitigation options for voltage drops, which would greatly enhance HV shore supply quality.
['Chun-Lien Su', 'Yung-Chi Lee', 'Min-Hung Chou', 'Hai-Ming Chin', 'Giuseppe Parise', 'P. B. Chavdarian']
Evaluation of voltage drop in harbor electrical distribution systems with high voltage shore connection
930,913
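A hedged worked example of the standard analytical voltage-drop formula that such studies rely on: for a three-phase feeder, Vdrop is approximately sqrt(3) * I * L * (R*cos(phi) + X*sin(phi)). All cable and load figures below are illustrative, not the paper's harbor system data.

```python
# Worked three-phase feeder voltage-drop estimate with invented parameters.
import math

V_LL = 6600.0        # line-to-line supply voltage (V)
I = 350.0            # load current per phase (A)
L = 0.8              # cable length (km)
R, X = 0.12, 0.10    # cable resistance/reactance (ohm per km)
pf = 0.85            # load power factor
phi = math.acos(pf)

v_drop = math.sqrt(3) * I * L * (R * pf + X * math.sin(phi))
print(f"drop: {v_drop:.1f} V ({100 * v_drop / V_LL:.2f}% of {V_LL:.0f} V)")
```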
Humans have an impressive ability to reason about new concepts and experiences from just a single example. In particular, humans have an ability for one-shot generalization: an ability to encounter a new concept, understand its structure, and then be able to generate compelling alternative variations of the concept. We develop machine learning systems with this important capacity by developing new deep generative models, models that combine the representational power of deep learning with the inferential power of Bayesian reasoning. We develop a class of sequential generative models that are built on the principles of feedback and attention. These two characteristics lead to generative models that are among the state of the art in density estimation and image generation. We demonstrate the one-shot generalization ability of our models using three tasks: unconditional sampling, generating new exemplars of a given concept, and generating new exemplars of a family of concepts. In all cases our models are able to generate compelling and diverse samples -- having seen new examples just once -- providing an important class of general-purpose models for one-shot machine learning.
['Danilo Jimenez Rezende', 'Shakir Mohamed', 'Ivo Danihelka', 'Karol Gregor', 'Daan Wierstra']
One-shot generalization in deep generative models
689,853
A novel linear blind adaptive receiver based on joint iterative optimization (JIO) and the constrained-constant-modulus design criterion is proposed for interference suppression in direct-sequence ultrawideband systems. The proposed blind receiver consists of the following two parts: 1) a transformation matrix that performs dimensionality reduction and 2) a reduced-rank filter that produces the output. In the proposed receiver, the transformation matrix and the reduced-rank filter are jointly and iteratively updated to minimize the constant-modulus cost function subject to a constraint. Adaptive implementations for the JIO receiver are developed using the normalized stochastic gradient (NSG) and recursive least squares (RLS) algorithms. To obtain a low-complexity scheme, the columns of the transformation matrix with the RLS algorithm are individually updated. Blind channel estimation algorithms for both versions (i.e., NSG and RLS) are implemented. Assuming perfect timing, the JIO receiver only requires the spreading code of the desired user and the received data. Simulation results show that both versions of the proposed JIO receivers have excellent performance in terms of suppressing the intersymbol interference (ISI) and multiple-access interference (MAI) with low complexity.
['Sheng S. Li', 'R.C. de Lamare']
Blind Reduced-Rank Adaptive Receivers for DS-UWB Systems Based on Joint Iterative Optimization and the Constrained Constant Modulus Criterion
309,646