abstract | authors | title | __index_level_0__
---|---|---|---|
Lesley S.J. Farmer and Alan M. Safer. Library Improvement through Data Analysis. Chicago: ALA Neal-Schuman, 2016. 184p. Paper, $75.00 (ISBN 978-0-8389-1425-0). LCCN 2015046382. | ['Mark E. Shelton'] | Lesley S.J. Farmer and Alan M. Safer. Library Improvement through Data Analysis. Chicago: ALA Neal-Schuman, 2016. 184p. Paper, $75.00 (ISBN 978-0-8389-1425-0). LCCN 2015046382. | 963,229 |
To achieve intelligent design for products, we study the feature expression of a product by means of normal forms, the feature amalgamation hypothesis, and a coordinate transformation algorithm. Based on these, we build the feature expression model of the product. Then, we analyze the definition of the total function and function expression, and build the function model of the product. Furthermore, we decompose the functions and achieve function-feature mapping. Finally, based on research into feature gene coding, we express the product’s functions with gene coding. | ['Hao Yong-tao', 'Qin Qin'] | Feature-Function Expression Model and Gene Coding for Products | 518,266 |
Fuel-cell vehicles (FCVs) with energy storage (ES) device(s) could result in improved lifetime, performance, fuel economy, and reduced cost. This paper presents the utilization of an ES device consisting of a supercapacitor bank for future electric vehicles with a hydrogen fuel cell (FC) as the main power source. The study mainly focuses on the innovative control law based on the flatness properties for a FC/supercapacitor hybrid power source. Utilizing the flatness principle, we propose simple solutions to the hybrid energy-management and stabilization problems. A supercapacitor module, as a high dynamic and high-power density device, functions to supply energy to regulate the dc-bus energy. The FC, as a slower dynamic source in this system, functions by supplying energy to keep the supercapacitor module charged. To ensure energy-efficient operation of the FC stack, the output current ripple of the FC stack is minimized by parallel boost converters with an interleaving switching technique for the high-frequency ripple, and by the supercapacitor for the low-frequency ripple. To authenticate the proposed control laws, a test bench is realized in the laboratory. The control algorithm (energy and current control loops) is digitally implemented by a dSPACE DS1103 controller. Experimental results with small-scale devices (a proton exchange membrane FC (PEMFC) of 500 W, 50 A, and 10 V and a supercapacitor bank of 250 F, 32 V, and 500 A) substantiate the excellent performance during load cycles. | ['Phatiphat Thounthong', 'Serge Pierfederici', 'Jean-Philippe Martin', 'Melika Hinaje', 'Bernard Davat'] | Modeling and Control of Fuel Cell/Supercapacitor Hybrid Source Based on Differential Flatness Control | 518,713 |
Grey theory is a multidisciplinary and generic theory for coping with systems of poor or deficient information. In this paper, a new grey-based model MGM(1, n, m) is first proposed, based on the general MGM(1, n) forecasting model, to deal with the forecasting problems of input-output systems. Then the efficiency and accuracy of this model are tested by applying it to the dynamic forecasting problem of oilfield development during the middle-late stage. Experimental results demonstrate that the new method has a markedly higher prediction accuracy than the RBF (Radial Basis Function) neural network model, which is another well-known method. | ['Zhibin Liu', 'Yan Yang', 'Chao Min'] | A New Grey-Based System Forecasting Model and Its Application in the Dynamic Forecasting Problem of Oilfield Development during the Middle-Late Stage | 221,551 |
Comparison of Agent-Based and Population-Based Simulations of Displacement of Crime. | ['Tibor Bosse', 'Charlotte Gerritsen', 'Mark Hoogendoorn', 'S. Waqar Jaffry', 'Jan Treur'] | Comparison of Agent-Based and Population-Based Simulations of Displacement of Crime. | 844,702 |
This research analyzes the impact of mobile phone screen size on user comprehension of health information and application structure. Applying an experimental approach, we asked randomly selected users to read content and conduct tasks on a commonly used diabetes mobile application using three different mobile phone screen sizes. We timed and tracked a number of parameters, including correctness, effectiveness of completing tasks, content ease of reading, clarity of information organization, and comprehension. The impacts of screen size on user comprehension/retention, clarity of information organization, and reading time were mixed. One might assume at first glance that mobile screen size would affect all qualities of information reading and comprehension, including clarity of displayed information organization, reading time, and user comprehension/retention of displayed information; in this experimental research, however, screen size did not have a significant impact on user comprehension/retention of the content or on understanding the application structure. However, it did have a significant impact on clarity of information organization and reading time. Participants with the larger screen size took less time to read the content, with a significant difference in ease of reading. While there was no significant difference in the comprehension of information or the application structure, there was a higher task completion rate and a lower number of errors with the bigger screen size. Screen size does not directly affect user comprehension of health information. However, it does affect clarity of information organization, reading time, and the user's ability to recall information. | ['Ebtisam Ghamdi', 'Faisal Yunus', "Omar B. Da'ar", 'Ashraf El-Metwally', 'Mohamed Khalifa', 'Bakheet Aldossari', 'Mowafa S. Househ'] | The Effect of Screen Size on Mobile Phone User Comprehension of Health Information and Application Structure: An Experimental Approach | 571,759 |
Introducing the Prague Discourse Treebank 1.0 | ['Lucie Poláková', 'Jiří Mírovský', 'Anna Nedoluzhko', 'Pavlína Jínová', 'Šárka Zikánová', 'Eva Hajiċová'] | Introducing the Prague Discourse Treebank 1.0 | 615,237 |
In view of the well-acknowledged inequalities in health between the rich and the poor, populations of low socioeconomic status stand to benefit most from advances in technology designed to promote health-related behavioural change. In this paper we investigate attitudes towards diet and the perceived barriers to making positive changes from the perspective of the primary caregivers of seventeen families with low socioeconomic status. Participants were aware of the weaknesses of their family's dietary habits and were motivated to make changes, but lacked the financial, strategic, and social resources needed to do so. Based on our analysis, the current trend of raising awareness and motivation to change does not appear to address the needs of this population. We call for research to investigate systems that address existing gaps in health-related communication and empower people to take practical steps towards achieving realistic goals, matching any attempt to motivate change with an attempt to facilitate change. | ['Julie Maitland', 'Matthew Chalmers', 'Katie A. Siek'] | Persuasion not required Improving our understanding of the sociotechnical context of dietary behavioural change | 456,004 |
Objective: The aim of this study was to investigate the feasibility of using an inductive tongue control system (ITCS) for controlling robotic/prosthetic hands and arms. Methods: This study presents a novel dual modal control scheme for multigrasp robotic hands combining standard electromyogram (EMG) with the ITCS. The performance of the ITCS control scheme was evaluated in a comparative study. Ten healthy subjects used both the ITCS control scheme and a conventional EMG control scheme to complete grasping exercises with the IH1 Azzurra robotic hand implementing five grasps. Time to activate a desired function or grasp was used as the performance metric. Results: Statistically significant differences were found when comparing the performance of the two control schemes. On average, the ITCS control scheme was 1.15 s faster than the EMG control scheme, corresponding to a 35.4% reduction in the activation time. The largest difference was for grasp 5 with a mean AT reduction of 45.3% (2.38 s). Conclusion: The findings indicate that using the ITCS control scheme could allow for faster activation of specific grasps or functions compared with a conventional EMG control scheme. Significance: For transhumeral and especially bilateral amputees, the ITCS control scheme could have a significant impact on the prosthesis control. In addition, the ITCS would provide bilateral amputees with the additional advantage of environmental and computer control for which the ITCS was originally developed. | ['Daniel Johansen', 'Christian Cipriani', 'Dejan B. Popovic', 'Lotte N. S. Andreasen Struijk'] | Control of a Robotic Hand Using a Tongue Control System—A Prosthesis Application | 718,578 |
Perceiving visual prosody from point-light displays. | ['Erin Cvejic', 'Jeesun Kim', 'Chris Davis'] | Perceiving visual prosody from point-light displays. | 754,572 |
With the growing need of big-data processing in diverse application domains, MapReduce (e.g., Hadoop) has become one of the standard computing paradigms for large-scale computing on a cluster system. Despite its popularity, the current MapReduce framework suffers from inflexibility and inefficiency inherent to its programming model and system architecture. In order to address these problems, we propose Vispark, a novel extension of Spark for GPU-accelerated MapReduce processing on array-based scientific computing and image processing tasks. Vispark provides an easy-to-use, Python-like high-level language syntax and a novel data abstraction for MapReduce programming on a GPU cluster system. Vispark introduces a programming abstraction for accessing neighbor data in the mapper function, which greatly simplifies many image processing tasks using MapReduce by reducing memory footprints and bypassing the reduce stage. Vispark provides socket-based halo communication that synchronizes between data partitions tr... | ['Woohyuk Choi', 'Sumin Hong', 'Won-Ki Jeong'] | Vispark: GPU-Accelerated Distributed Visual Computing Using Spark | 923,207 |
In this paper we describe a set of algorithms that can be used to reduce the complexity of evaluating multiple queries of a transaction in a distributed environment. With the consideration of conjunct sharing, it compiles a set of queries into a network based on the concept of semi-joins. As some of the queries in a transaction may change the contents of a database, evaluation of the network corresponding to the transaction is synchronized into several phases so that the dependencies among the queries can be properly captured. | ['A.Y. Lu', 'Phillip C.-Y. Sheu'] | Processing of Multiple Queries in Distributed Databases | 284,391 |
One key challenge in talent search is how to translate the complex criteria of a hiring position into a search query. This typically requires deep knowledge of which skills are typically needed for the position, what their alternatives are, which companies are likely to have such candidates, etc. However, listing examples of suitable candidates for a given position is a relatively easy job. Therefore, in order to help searchers overcome this challenge, we design a next generation of talent search paradigm at LinkedIn: Search by Ideal Candidates. This new system only needs the searcher to input one or several examples of suitable candidates for the position. The system will generate a query based on the input candidates and then retrieve and rank results based on the query as well as the input candidates. The query is also shown to the searcher to make the system transparent and to allow the searcher to interact with it. As the searcher modifies the initial query and makes it deviate from the ideal candidates, the search ranking function dynamically adjusts and refreshes the ranking results, balancing the roles of the query and the ideal candidates. As of writing this paper, the new system is being launched to our customers. | ['Viet Ha-Thuc', 'Ye Xu', 'Satya Pradeep Kanduri', 'Xianren Wu', 'Vijay Dialani', 'Yan Yan', 'Abhishek Gupta', 'Shakti Dhirendraji Sinha'] | Search by Ideal Candidates: Next Generation of Talent Search at LinkedIn | 653,800 |
Symbol Interpretation in Neural Networks: an investigation on representations in communication. | ['Emerson Silva de Oliveira', 'Angelo Loula'] | Symbol Interpretation in Neural Networks: an investigation on representations in communication. | 771,860 |
Authentication as a Service (AaaS) provides on-demand delivery of multi-factor authentication (MFA). However, current AaaS has left out of consideration the trustworthiness of user inputs at client devices and the risk of privacy exposure at the AaaS providers. To address these concerns, we present TAaaS, Trustworthy Authentication as a Service, which offers a trusted path-based MFA service to the service provider in the cloud. TAaaS leverages the hypervisor-based trusted path to ensure the trustworthiness of user inputs, and addresses privacy concerns in the cloud by storing only irreversible user account information. We implement two end-to-end prototypes and evaluate our work to show its feasibility and security. | ['Jisoo Oh', 'Jaemin Park', 'Sungjin Park', 'Jong-Jin Won'] | TAaaS: Trustworthy Authentication as a Service Based on Trusted Path | 980,181 |
Infants form expectations about others' emotions based on context and perceptual access. | ['Amy E. Skerry', 'Mina Cikara', 'Susan Carey', 'Elizabeth S. Spelke', 'Rebecca Saxe'] | Infants form expectations about others' emotions based on context and perceptual access. | 745,084 |
The Complete Complementary (CC) code-based Spread-Time Code Division Multiple Access (ST-CDMA) system has been proposed. In this paper, we evaluate the Bit Error Rate (BER) performance of CC code-based ST-CDMA for the AWGN channel in the presence of Multiple Access Interference (MAI). The theoretical BER is derived conditioned on the number of users K and the employed CC code sequence length G. The theoretical performance is verified by simulations and compared with the performance of the conventional ST-CDMA system using Gold codes. | ['Liru Lu', 'V.K. Dubey'] | Performance of CC Code-based Spread Time CDMA System in AWGN Channel | 432,732 |
As an effective technique for combating multipath fading and for high-bit-rate transmission over wireless channels, orthogonal frequency-division multiplexing (OFDM) is extensively used in modern digital television terrestrial broadcasting (DTTB) systems to support high performance bandwidth-efficient multimedia services. In this paper, novel synchronization schemes, including timing, frequency offset estimation, and phase error tracking, are proposed for the time domain synchronous OFDM (TDS-OFDM) based DTTB systems. Simulations under different channel situations verify the efficiency of the proposed schemes. | ['Zi-Wei Zheng', 'Zhixing Yang', 'Changyong Pan', 'Yi-Sheng Zhu'] | Novel synchronization for TDS-OFDM-based digital television terrestrial broadcast systems | 386,843 |
The principle of duplication and comparison has proven very efficient for error detection in processor cores, since it can be applied as a generic solution for making virtually any type of core fail safe. A weakness of this approach, however, is the potential for common cause faults: Faults affecting both cores in the same way will escape detection. Shared resources and signals are especially prone to such effects. In practice the efforts for providing a redundant power source are often prohibitive, thus rendering the power supply such a shared resource. While a complete failure of the supply voltage can be relatively easily accommodated in a fail safe system, short pulses can have subtle consequences and are therefore much more dangerous. In this paper we will perform an experimental study of the potential of such power supply induced faults to create common cause effects. For this purpose we first study their effects on the operation of a processor core. In particular we will show that, when applied with the most adverse parameters, they tend to cause timing violations in the critical path. In two instances of the same core there is therefore a non-negligible risk of common cause effects. We will quantitatively assess this risk through fault injection experiments into an FPGA based dual core design. | ['Peter Tummeltshammer', 'Andreas Steininger'] | On the role of the power supply as an entry for common cause faults—An experimental analysis | 20,252 |
Large-scale storage systems often face node failures that lead to data loss. Cooperative regeneration has been extensively studied to minimize the repair traffic of simultaneously reconstructing the lost data of multiple failed nodes. However, existing cooperative regeneration schemes assume that nodes are homogeneous. They do not consider how to minimize the general regenerating cost when taking into account node heterogeneity. This paper presents the first systematic study on enhancing conventional cooperation regeneration (CCR) schemes in a heterogeneous environment. We formulate cooperative regeneration as a cost-based routing optimization model, and propose a new cost-based heterogeneity-aware cooperative regeneration (HCR) framework. The main novelty of HCR is to decompose CCR schemes into two stages (i.e., expansion and aggregation) that can be opportunistically carried out by different nodes depending on their costs. To efficiently select the nodes for expansion execution without exhaustive enumeration, we design two greedy algorithms based on the hill-climbing technique. We also formulate the routing problem in the aggregation stage as a Steiner Tree Problem. Finally, we conduct extensive trace-driven simulations and show that HCR can reduce up to 75.4% transmission time of CCR. Also, we demonstrate that HCR remains robust even when the heterogeneity information is not accurately measured. | ['Zhirong Shen', 'Patrick P. C. Lee', 'Jiwu Shu'] | Efficient routing for cooperative data regeneration in heterogeneous storage networks | 916,559 |
AbstractData warehousing projects still face challenges in the various phases of the development life cycle. In particular, the success of the design phase, the focus of this paper, is hindered by the cross-disciplinary competences it requires. This paper presents a natural language-based method for the design of data mart schemas. Compared to existing approaches, our method has three main advantages: first, it facilitates requirements specification through a template covering all concepts of the decision-making process while providing for the acquisition of analytical requirements written in a structured natural language format. Second, it supports requirement validation with respect to a data source used in the ETL process. Third, it provides for a semi-automatic generation of conceptual data mart schemas that are directly mapped onto the data source; this mapping assists the definition of ETL procedures. The performance of the proposed method is illustrated through a software prototype used in an empir... | ['Fahmi Bargui', 'Hanêne Ben-Abdallah', 'Jamel Feki'] | A natural language-based approach for a semi-automatic data mart design and ETL generation | 871,506 |
In this paper, the boundedness of solutions of the chaotic system for the incompressible flow between two concentric rotating cylinders, known as Couette–Taylor flow, has been studied. Based on Lagrange multiplier method and generalized positive definite and radially unbound Lyapunov functions with respect to the parameters of the system, we derive the ultimate bound and the globally exponentially attractive set for this system. Then, the result is applied to the finite-time stabilization. Moreover, numerical simulations are presented to show the effectiveness of the method. | ['Moosarreza Shamsyeh Zahedi', 'Hassan Saberi Nik'] | Bounds of the Chaotic System for Couette–Taylor Flow and Its Application in Finite-Time Control | 586,300 |
Systems that learn from examples often express the learned concept in the form of a disjunctive description. Disjuncts that correctly classify few training examples are known as small disjuncts and are interesting to machine learning researchers because they have a much higher error rate than large disjuncts. Previous research has investigated this phenomenon by performing ad hoc analyses of a small number of datasets. In this paper we present a quantitative measure for evaluating the effect of small disjuncts on learning and use it to analyze 30 benchmark datasets. We investigate the relationship between small disjuncts and pruning, training set size and noise, and come up with several interesting results. | ['Gary M. Weiss', 'Haym Hirsh'] | A Quantitative Study of Small Disjuncts | 189,847 |
Electronic Support Measures consist of passive receivers which can identify emitters coming from a small bearing angle, which, in turn, can be related to platforms that belong to 3 classes: either Friend, Neutral, or Hostile. Decision makers prefer results presented in STANAG 1241 allegiance form, which adds 2 new classes: Assumed Friend, and Suspect. Dezert-Smarandache (DSm) theory is particularly suited to this problem, since it allows for intersections between the original 3 classes. Results are presented showing that the theory can be successfully applied to the problem of associating ESM reports to established tracks, and its results identify when mis-associations have occurred and to what extent. Results are also compared to Dempster-Shafer theory, which can only reason on the original 3 classes. Thus decision makers are offered STANAG 1241 allegiance results in a timely manner, with quick allegiance change when appropriate and stability in allegiance declaration otherwise. | ['Pierre Valin', 'Pascal Djiknavorian', 'Dominic Grenier'] | DSm theory for fusing highly conflicting ESM reports | 366,856 |
Every node in a wireless ad hoc network is both end host (it generates its own data and routing traffic) and infrastructure (it forwards traffic for others), but rational nodes have no incentive to cooperatively forward traffic for others, since this kind of forwarding is not costless. In this paper, we use game theory to analyze cooperative mechanisms, and derive optimal criteria in forwarding. | ['Lu Yan'] | Cooperative Packet Relaying in Wireless Multi-hop Networks | 394,767 |
An enhanced version of the movement-based location update with selective paging strategy proposed by Akyildiz, Ho and Lin (see IEEE/ACM Trans. Networking, vol.4, p.629-38, 1996) is presented. Although the terminal paging cost is slightly increased, a significant reduction in the location update cost is achieved. The net effect is that, for low call-to-mobility ratios, a saving of around 10%-15% in the total cost (location+paging) per call arrival is achieved. Our proposal can be easily implemented in real cellular systems. | ['Vicente Casares-Giner', 'Jorge Mataix-Oltra'] | On movement-based mobility tracking strategy-an enhanced version | 281,449 |
A Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model with four global search operations and one local search operation. A Reinforcement Learning-based Memetic Particle Swarm Optimizer (RLMPSO) is proposed. Each particle is subject to five possible operations under control of the RL algorithm. They are exploitation, convergence, high-jump, low-jump, and local fine-tuning. The operation is executed according to the action generated by the RL algorithm. The empirical results indicate that RLMPSO outperforms many other PSO-based models. Developing an effective memetic algorithm that integrates the Particle Swarm Optimization (PSO) algorithm and a local search method is a difficult task. The challenging issues include when the local search method should be called, the frequency of calling the local search method, as well as which particle should undergo the local search operations. Motivated by this challenge, we introduce a new Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model. Each particle is subject to five operations under the control of the Reinforcement Learning (RL) algorithm, i.e. exploration, convergence, high-jump, low-jump, and fine-tuning. These operations are executed by the particle according to the action generated by the RL algorithm. The proposed RLMPSO model is evaluated using four uni-modal and multi-modal benchmark problems, six composite benchmark problems, five shifted and rotated benchmark problems, as well as two benchmark application problems. The experimental results show that RLMPSO is useful, and it outperforms a number of state-of-the-art PSO-based algorithms. | ['Hussein Samma', 'Chee Peng Lim', 'Junita Mohamad Saleh'] | A new Reinforcement Learning-based Memetic Particle Swarm Optimizer | 649,471 |
The emergence of nasal velar codas in brazilian Portuguese: An Rt-MRI study | ['Marissa S. Barlaz', 'Maojing Fu', 'Zhi-Pei Liang', 'Ryan Shosted', 'Bradley P. Sutton'] | The emergence of nasal velar codas in brazilian Portuguese: An Rt-MRI study | 774,014 |
In this paper, we describe a new application domain for intelligent autonomous systems--intelligent buildings (IB). In doing so we present a novel approach to the implementation of IB agents based on a hierarchical fuzzy genetic multi-embedded-agent architecture comprising a low-level behaviour based reactive layer whose outputs are co-ordinated in a fuzzy way according to deliberative plans. The fuzzy rules related to the room resident comfort are learnt and adapted online using our patented fuzzy-genetic techniques (British patent 99-10539.7). The learnt rule base is updated and adapted via an iterative machine-user dialogue. This learning starts from the best stored rule set in the agent memory (Experience Bank) thereby decreasing the learning time and creating an intelligent agent with memory. We discuss the role of learning in building control systems, and we explain the importance of acquiring information from sensors, rather than relying on pre-programmed models, to determine user needs. We describe how our architecture, consisting of distributed embedded agents, utilises sensory information to learn to perform tasks related to user comfort, energy conservation, and safety. We show how these agents, employing a behaviour-based approach derived from robotics research, are able to continuously learn and adapt to individuals within a building, whilst always providing a fast, safe response to any situation. In addition we show that our system learns similar rules to other offline supervised methods but that our system has the additional capability to rapidly learn and optimise the learnt rule base. Applications of this system include personal support (e.g. increasing independence and quality of life for older people), energy efficiency in commercial buildings or living-area control systems for space vehicles and planetary habitation modules. | ['Hani Hagras', 'Victor Callaghan', 'Martin Colley', 'Graham Clarke'] | A hierarchical fuzzy-genetic multi-agent architecture for intelligent buildings online learning, adaptation and control | 377,769 |
A main challenge with developing applications for wireless embedded systems is the lack of visibility and control during execution of an application. In this paper, we present a tool suite called Marionette that provides the ability to call functions and to read or write variables on pre-compiled, embedded programs at run-time, without requiring the programmer to add any special code to the application. This rich interface facilitates interactive development and debugging at minimal cost to the node. | ['Kamin Whitehouse', 'Gilman Tolle', 'Jay Taneja', 'Cory Sharp', 'Sukun Kim', 'Jaein Jeong', 'Jonathan W. Hui', 'Prabal Dutta', 'David E. Culler'] | Marionette: using RPC for interactive development and debugging of wireless embedded networks | 3,304 |
The increasing usage of smart embedded devices is blurring the line between the virtual and real worlds. This creates new opportunities for applications to better integrate the real-world, providing services that are more diverse, highly dynamic and efficient. Service Oriented Architecture is on the verge of extending its applicability from the standard, corporate IT domain to the real-world devices. In such infrastructures, composed of a large number of resource-limited devices, the discovery of services and on demand provisioning of missing functionality is a challenge. This work proposes a process, its architecture and an implementation that enables developers and process designers to dynamically discover, use, and create running instances of real-world services in composite applications. | ['Dominique Guinard', 'Vlad Trifa', 'Patrik Spiess', 'Bettina Dober', 'Stamatis Karnouskos'] | Discovery and On-demand Provisioning of Real-World Web Services | 456,078 |
The second heart sound (S2) is triggered by an aortic valve closure as a result of the ventricular-arterial interaction of the cardiovascular system. The objective of this paper is to investigate the timing of S2 in response to the changes in hemodynamic parameters and its relation to aortic blood pressure (BP). An improved model of the left ventricular-arterial interaction was proposed based on the combination of the newly established pressure source model of the ventricle and the nonlinear pressure-dependent compliance model of the arterial system. The time delay from the onset of left ventricular pressure rise to the onset of S2 (RS 2 ) was used to measure the timing of S2. The results revealed that RS 2 bears a strong negative correlation with both systolic blood pressure and diastolic blood pressure under the effect of changing peripheral resistance, heart rate, and contractility. The results were further validated by a series of measurements of 16 normal subjects submitted to dynamic exercise. This study helps understand the relationship between the timing of S2 and aortic BP under various physiological and pathological conditions. | ['Xinyu Zhang', 'Emma MacPherson', 'Yuan-Ting Zhang'] | Relations Between the Timing of the Second Heart Sound and Aortic Blood Pressure | 353,348 |
In this paper, we benchmark the Regularity Model-Based Multiobjective Estimation of Distribution Algorithm (RM-MEDA) of Zhang et al. on the bi-objective bbob-biobj test suite of the Comparing Continuous Optimizers (COCO) platform. It turns out that, starting from about 200 times dimension many function evaluations, RM-MEDA shows a linear increase in the solved hypervolume-based target values with time until a stagnation of the performance occurs rather quickly on all problems. The final percentage of solved hypervolume targets seems to decrease with the problem dimension. | ['Anne Auger', 'Dimo Brockhoff', 'Nikolaus Hansen', 'Dejan Tušar', 'Tea Tušar', 'Tobias Wagner'] | Benchmarking RM-MEDA on the Bi-objective BBOB-2016 Test Suite | 851,877 |
Efficient combination of Wang–Landau and transition matrix Monte Carlo methods for protein simulations | ['Ruben G. Ghulghazaryan', 'Shura Hayryan', 'Chin-Kun Hu'] | Efficient combination of Wang–Landau and transition matrix Monte Carlo methods for protein simulations | 214,328 |
Linux Operating System Support for the SCC Platform - An Analysis. | ['Jan-Arne Sobania', 'Peter Tröger', 'Andreas Polze'] | Linux Operating System Support for the SCC Platform - An Analysis. | 754,345 |
Nodes in Delay Tolerant Networks (DTN) rely on routing utilities (e.g., probabilities of meeting nodes) to decide the packet forwarder. As the utilities reflect user privacy, nodes may be reluctant to disclose such information directly. Therefore, we propose a distributed strategy to protect the aforementioned private information in utility-based DTN routing algorithms while still guaranteeing the correctness of packet forwarding, namely meeting Relationship Anonymity (ReHider). We also present an enhanced version that can better prevent certain malicious behaviors (probing attack and brute-force attack). Initial analysis shows the effectiveness of the proposed strategy. | ['Kang Chen', 'Haiying Shen'] | Distributed privacy-protecting DTN routing: Concealing the information indispensable in routing | 965,099 |
We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research. | ['Ben Medlock', 'Ted Briscoe'] | Weakly Supervised Learning for Hedge Classification in Scientific Literature | 51,656 |
We consider perfect secret key generation for a “pairwise independent network” model in which every pair of terminals share a random binary string, with the strings shared by distinct terminal pairs being mutually independent. The terminals are then allowed to communicate interactively over a public noiseless channel of unlimited capacity. All the terminals as well as an eavesdropper observe this communication. The objective is to generate a perfect secret key shared by a given set of terminals at the largest rate possible, and concealed from the eavesdropper. First, we show how the notion of perfect omniscience plays a central role in characterizing perfect secret key capacity. Second, a multigraph representation of the underlying secrecy model leads us to an efficient algorithm for perfect secret key generation based on maximal Steiner tree packing. This algorithm attains capacity when all the terminals seek to share a key, and, in general, attains at least half the capacity. Third, when a single “helper” terminal assists the remaining “user” terminals in generating a perfect secret key, we give necessary and sufficient conditions for the optimality of the algorithm; also, a “weak” helper is shown to be sufficient for optimality. | ['Sirin Nitinawarat', 'Prakash Narayan'] | Perfect omniscience, perfect secrecy and Steiner tree packing | 304,708 |
IEEE 802.11ah Based M2M Networks Employing Virtual Grouping and Power Saving Methods | ['Kohei Ogawa', 'Masahiro Morikura', 'Koji Yamamoto', 'Tomoyuki Sugihara'] | IEEE 802.11ah Based M2M Networks Employing Virtual Grouping and Power Saving Methods | 155,260 |
The calculus outlined in this paper provides a formal architectural framework for describing and reasoning about the properties of multi-user and mobile distributed interactive systems. It is based on the Workspace Model, which incorporates both distribution-independent and implementation-specific representations of multi-user and mobile applications. The calculus includes an evolution component, allowing the representation of system change at either level over time. It also includes a refinement component supporting the translation of changes at either level into corresponding changes at the other. The combined calculus has several important properties, including locality and termination of the refinement process and commutativity of evolution and refinement. The calculus may be used to reason about fault tolerance and to define the semantics of programming language constructs. | ['W. Greg Phillips', 'T. C. Graham', 'Christopher L. Wolfe'] | A calculus for the refinement and evolution of multi-user mobile applications | 893,291 |
In this paper, we present the formulas of the covariances of the second-, third-, and fourth-order sample cumulants of stationary processes. These expressions are then used to obtain the analytic performance of FIR system identification methods as a function of the coefficients and the statistics of the input sequence. The lower bound in the variance is also compared for different sets of sample statistics to provide insight about the information carried by each sample statistic. Finally, the effect that the presence of noise has on the accuracy of the estimates is studied analytically. The results are illustrated graphically with plots of the variance of the estimates as a function of the parameters or the signal-to-noise ratio. Monte Carlo simulations are also included to compare their results with the predicted analytic performance. | ['José A. R. Fonollosa'] | Sample cumulants of stationary processes: asymptotic results | 48,046
The IEEE 802.15.4 protocol has the ability to support time-sensitive wireless sensor network (WSN) applications due to the guaranteed time slot (GTS) medium access control mechanism. Recently, several analytical and simulation models of the IEEE 802.15.4 protocol have been proposed. Nevertheless, currently available simulation models for this protocol are both inaccurate and incomplete, and in particular they do not support the GTS mechanism. In this paper, we propose an accurate OPNET simulation model, with focus on the implementation of the GTS mechanism. The motivation that has driven this work is the validation of the network calculus based analytical model of the GTS mechanism that has been previously proposed and to compare the performance evaluation of the protocol as given by the two alternative approaches. Therefore, in this paper we contribute an accurate OPNET model for the IEEE 802.15.4 protocol. Additionally, and probably more importantly, based on the simulation model we propose a novel methodology to tune the protocol parameters such that a better performance of the protocol can be guaranteed, both concerning maximizing the throughput of the allocated GTS as well as concerning minimizing frame delay. | ['Petr Jurcik', 'Anis Koubaa', 'Mário Alves', 'Eduardo Tovar', 'Zdenek Hanzalek'] | A Simulation Model for the IEEE 802.15.4 protocol: Delay/Throughput Evaluation of the GTS Mechanism | 457,336 |
In this paper, we propose a new scheme to generate region-of-interest (ROI) enhanced depth maps combining one low-resolution depth camera with high-resolution stereoscopic cameras. Basically, the hybrid camera system produces four synchronized images at each frame: left and right images from the stereoscopic cameras, a color image and its associated depth map from the depth camera. In the hybrid camera system, after estimating initial depth information for the left image using a stereo matching algorithm, we project depths obtained from the depth camera onto ROI of the left image using three-dimensional (3-D) image warping. Then, the warped depths are linearly interpolated to fill depth holes occurred in ROI. Finally, we merge the ROI depths with background ones extracted from the initial depth information to generate the ROI enhanced depth map. Experimental results show that the proposed depth acquisition system provides more accurate depth information for ROI than previous stereo matching algorithms. Besides, the proposed scheme minimizes inherent problems of the current depth camera, such as limitation of its measuring distance and production of low-resolution depth maps. | ['Sung-Yeol Kim', 'Eun-Kyung Lee', 'Yo-Sung Ho'] | Generation of ROI Enhanced Depth Maps Using Stereoscopic Cameras and a Depth Camera | 485,994 |
The effect of a repair of a complex system can usually be approximated by the following two types: minimal repair for which the system is restored to its functioning state with minimum effort, or perfect repair for which the system is replaced or repaired to a good-as-new state. When both types of repair are possible, an important problem is to determine the repair policy; that is, the type of repair which should be carried out after a failure. In this paper, an optimal allocation problem is studied for a monotonic failure rate repairable system under some resource constraints. In the first model, the numbers of minimal & perfect repairs are fixed, and the optimal repair policy maximizing the expected system lifetime is studied. In the second model, the total amount of repair resource is fixed, and the costs of each minimal & perfect repair are assumed to be known. The optimal allocation algorithm is derived in this case. Two numerical examples are shown to illustrate the procedures. | ['Lirong Cui', 'Way Kuo', 'Han Tong Loh', 'Min Xie'] | Optimal allocation of minimal & perfect repairs under resource constraints | 121,584 |
WSCS : un service web de contexte spatial dédié aux téléphones intelligents dans le cadre de jeux éducatifs. | ['Elodie Edoh-Alove', 'Frédéric Hubert', 'Thierry Badard'] | WSCS : un service web de contexte spatial dédié aux téléphones intelligents dans le cadre de jeux éducatifs. | 806,200 |
The Principled Design of Computer System Safety Analyses | ['David. J. Pumfrey'] | The Principled Design of Computer System Safety Analyses | 168,877 |
We present a new priority-based approach to reasoning with specificity which subsumes inheritance reasoning. The new approach differs from other priority-based approaches in the literature in the way priority between defaults is handled. Here, it is conditional rather than unconditional as in other approaches. We show that any unconditional handling of priorities between defaults as advocated in the literature until now is not sufficient to capture general defeasible inheritance reasoning. We propose a simple and novel argumentation semantics for reasoning with specificity taking the conditionality of the priorities between defaults into account. Since the proposed argumentation semantics is a form of stable semantics of nonmonotonic reasoning, it inherits a common problem of the latter where it is not always defined for every default theory. We propose a class of stratified default theories for which the argumentation semantics is always defined. We also show that acyclic and consistent inheritance networks are stratified. We prove that the argumentation semantics satisfies the basic properties of a nonmonotonic consequence relation such as deduction, reduction, conditioning, and cumulativity for well-defined and stratified default theories. We give a modular and polynomial transformation of default theories with specificity into semantically equivalent Reiter default theories. | ['Phan Minh Dung', 'Tran Cao Son'] | An argument-based approach to reasoning with specificity | 148,222 |
The ability to exchange opinions and experiences online is known as online word of mouth (WOM). Due to the high acceptance of consumers and their apparent reliance on online WOM, it is important for organizations to understand how it works and what kind of impact it has on product sales. Using the well-established notions of volume and valence to describe online WOM, we empirically evaluate the hypothesized relationship between online WOM in a retail e-commerce site and actual product sales. Our analysis of the data shows that there is a significant change in the number of products sold following the addition of online WOM to a retail e-commerce site’s product pages. Additionally, only the volume dimension of online WOM, measured by the number of customer review comments, is shown to have an influence on product sales. | ['Alanah J. Davis', 'Deepak Khazanchi'] | The Influence of Online Word of Mouth on Product Sales in Retail E-commerce: An Empirical Investigation | 348,133
In this contribution the authors present an innovative conception of and first experiences with a lecture "Didactics of Informatics I (DoI1)", which is a mandatory lecture for Informatics student teachers at the Universities of Erlangen-Nuremberg and the Technical University of Munich (Germany). | ['Torsten Brinda', 'Peter Hubwieser'] | A lecture about teaching informatics in secondary education: lecture design and first experiences | 199,837 |
The current LTE network architecture experiences a high level of signalling traffic between control plane entities. As a result, many innovative architectures, including SDN, have been proposed to simplify the network. Cloud RAN, on the other hand, proposes a pioneering paradigm for sustaining the profitability margin under an unprecedented surge of mobile traffic while providing better performance to end users. Schemes have been proposed to compute and analyse the signalling load in the new SDN LTE architecture; however, to the best of our knowledge, none of them consider future CRAN architectures. Thus in this paper we propose a new SDN CRAN architecture, present an analysis of the signalling load, evaluate the performance and compare it against existing architectures in the literature. Evaluation results show significant improvement of the proposed schemes in terms of reduction in the signalling load across several network metrics. | ['Imad Al-Samman', 'Angela Doufexi', 'Mark A Beach'] | A C-RAN Architecture for LTE Control Signalling | 832,008
An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products. | ['Rosario Gennaro', 'Daniele Micciancio', 'Tal Rabin'] | An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products. | 792,871 |
This article introduces the LBATCH and ABATCH rules for applying the batch means method to analyze output of Monte Carlo and, in particular, discrete-event simulation experiments. Sufficient conditions are given for these rules to produce strongly consistent estimators of the variance of the sample mean and asymptotically valid confidence intervals for the mean. The article studies the performance of these rules and two others suggested in the literature, comparing confidence interval coverage rates and mean half-lengths. The article also gives detailed algorithms for implementing the rules in O(t) time with O(log2 t) space. FORTRAN, C, and SIMSCRIPT II.5 implementations of the procedures are available by anonymous file transfer protocol. | ['George S. Fishman', 'L. Stephen Yarberry'] | An Implementation of the Batch Means Method | 517,241 |
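The batch means computation that the LBATCH and ABATCH rules build on can be sketched in a few lines. This is a generic fixed-number-of-batches estimator, not the LBATCH/ABATCH rules themselves, and all names are illustrative:

```python
def batch_means(samples, num_batches):
    """Estimate the mean and the variance of the sample mean via
    nonoverlapping batch means (generic sketch, not LBATCH/ABATCH)."""
    batch_size = len(samples) // num_batches
    means = [
        sum(samples[b * batch_size:(b + 1) * batch_size]) / batch_size
        for b in range(num_batches)
    ]
    grand = sum(means) / num_batches
    # The sample variance of the batch means, divided by the number of
    # batches, estimates Var(sample mean); autocorrelation within a batch
    # is absorbed into that batch's average.
    var_of_mean = (
        sum((m - grand) ** 2 for m in means) / (num_batches - 1) / num_batches
    )
    return grand, var_of_mean
```

Deciding how the number of batches should grow with the run length t is exactly what the LBATCH and ABATCH rules contribute, in O(t) time and O(log2 t) space.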
Multi-objective evolutionary algorithms (MOEAs) are currently a dynamic field of research that has attracted considerable attention. Mutation operators have been utilized by MOEAs as variation mechanisms. In particular, polynomial mutation (PLM) is one of the most popular variation mechanisms and has been utilized by many well-known MOEAs. In this paper, we revisit the PLM operator and we propose a fitness-guided version of the PLM. Experimental results obtained by non-dominated sorting genetic algorithm II and strength Pareto evolutionary algorithm 2 show that the proposed fitness-guided mutation operator outperforms the classical PLM operator, based on different performance metrics that evaluate both the proximity of the solutions to the Pareto front and their dispersion on it. | ['Konstantinos Liagkouras', 'Kostas S. Metaxiotis'] | Enhancing the performance of MOEAs: an experimental presentation of a new fitness guided mutation operator | 711,372
In traditional human-centered games and virtual reality applications, a skeleton is commonly tracked using consumer-level cameras or professional motion capture devices to animate an avatar. In this paper, we propose a novel application that automatically reconstructs a real 3D moving human captured by multiple RGB-D cameras in the form of a polygonal mesh, and which may help users to actually enter a virtual world or even a collaborative immersive environment. Compared with 3D point clouds, a 3D polygonal mesh is commonly adopted to represent objects or characters in games and virtual reality applications. A vivid 3D human mesh can hugely promote the feeling of immersion when interacting with a computer. The proposed method includes three key steps for realizing dynamic 3D human reconstruction from RGB images and noisy depth data captured from a distance. First, we remove the static background to obtain a 3D partial view of the human from the depth data with the help of calibration parameters, and register two neighboring partial views. The whole 3D human is globally registered using all the partial views to obtain a relatively clean 3D human point cloud. A complete 3D mesh model is constructed from the point cloud using Delaunay triangulation and Poisson surface reconstruction. Finally, a series of experiments demonstrates the reconstruction quality of the 3D human meshes. Dynamic meshes with different poses are placed in a virtual environment, which can be used to provide personalized avatars for everyday users, and enhance the interactive experience in games and virtual reality environments. Highlights: (1) Scan a real 3D human body with low-cost depth cameras. (2) The whole 3D point cloud of the human is globally registered. (3) The reconstructed mesh quality is satisfactory. | ['Zhenbao Liu', 'Hongliang Qin', 'Shuhui Bu', 'Meng Yan', 'Jinxin Huang', 'Xiaojun Tang', 'Junwei Han'] | 3D real human reconstruction via multiple low-cost depth cameras | 495,627
Phase contrast, a noninvasive microscopy imaging technique, is widely used to capture time-lapse images to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle, phase contrast microscopy images contain artifacts such as the halo and shade-off that hinder image segmentation, a critical step in automated microscopy image analysis. Rather than treating phase contrast microscopy images as general natural images and applying generic image processing techniques on them, we propose to study the optical properties of the phase contrast microscope to model its image formation process. The phase contrast imaging system can be approximated by a linear imaging model. Based on this model and input image properties, we formulate a regularized quadratic cost function to restore artifact-free phase contrast images that directly correspond to the specimen’s optical path length. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on microscopy image sequences with thousands of cells captured over several days. We also demonstrate that accurate restoration lays the foundation for high performance in cell detection and tracking. | ['Zhaozheng Yin', 'Takeo Kanade', 'Mei Chen'] | Understanding the phase contrast optics to restore artifact-free microscopy images for segmentation | 316,433 |
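The restoration step above minimizes a regularized quadratic cost of the form ||H f - g||^2 + lam ||f||^2 over the unknown artifact-free image f. A minimal dense-matrix sketch via the normal equations follows; the paper's actual objective includes additional smoothness and sparseness terms, and `H`, `g`, `lam` are illustrative names:

```python
import numpy as np

def restore(H, g, lam):
    """Solve min_f ||H f - g||^2 + lam * ||f||^2 in closed form.

    H models the linear phase contrast imaging process, g is the observed
    image (flattened), and lam > 0 weights the regularizer. Toy sketch only:
    real image sizes require sparse or iterative solvers, not a dense solve.
    """
    n = H.shape[1]
    A = H.T @ H + lam * np.eye(n)   # normal-equations system matrix
    return np.linalg.solve(A, H.T @ g)
```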
This paper presents an automatic system for the detection of single- and multi-note ornaments in Irish traditional flute playing. This is a challenging problem because ornaments are notes of a very short duration. The presented ornament detection system is based on first detecting onsets and then exploiting the knowledge of musical ornamentation. We employed onset detection methods based on signal envelope and fundamental frequency and customised their parameters to the detection of soft onsets of possibly short duration. Single-note ornaments are detected based on the duration and pitch of segments, determined by adjacent onsets. Multi-note ornaments are detected based on analysing the sequence of segments. Experimental evaluations are performed on monophonic flute recordings from Grey Larsen’s CD, which was manually annotated by an experienced flute player. The onset and single- and multi-note ornament detection performance is presented in terms of the precision, recall and F-measure. | ['Münevver Köküer', 'Peter Jancovic', 'Islah Ali-MacLachlan', 'Cham Athwal'] | AUTOMATED DETECTION OF SINGLE- AND MULTI-NOTE ORNAMENTS IN IRISH TRADITIONAL FLUTE PLAYING | 675,842
The Multi-innovation Based RLS Method for Hammerstein Systems | ['Zhenwei Shi', 'Zhicheng Ji', 'Yan Wang'] | The Multi-innovation Based RLS Method for Hammerstein Systems | 948,798 |
This paper proposes a vision-based Multi-user Human Computer Interaction (HCI) method for creating augmented reality user interfaces. In the HCI session, one of the users' hands is selected as the active hand. The fingers of the active hand are employed as input devices to trigger functionalities of the application program. To share the token of interaction among the users, the HCI session is modeled as a Finite State Machine (FSM). The FSM is composed of the initial and steady states. In the initial state, the FSM identifies the active hand by tracking the hand with the maximum moving speed. Then the FSM enters the steady state to carry out the HCI session. At the end of each individual HCI cycle, the FSM polls requests from other hands for acquiring the role of active hand. If such requests are sensed, the FSM returns to the initial state to track a new active hand. Otherwise, the HCI session is continuously carried out by the current active hand. Test results show that the resultant user interface is efficient, flexible and practical for users with problems on using ordinary input devices. In a desk-top computer equipped with a 640 × 480 resolution web-camera, the HCI session can be successfully conducted when the operation distance ranges from 30 to 90 cm. | ['Shyh-Kuang Ueng', 'Guan-Zhi Chen'] | Vision based multi-user human computer interaction | 639,038 |
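The two-state session FSM described in the abstract (the initial state tracks the fastest-moving hand, the steady state keeps it until another hand requests the token) can be sketched as follows; class and field names are illustrative, not from the paper:

```python
class ActiveHandFSM:
    """Two-state FSM: INITIAL selects the fastest-moving hand as active,
    STEADY keeps it until some hand requests the interaction token."""

    def __init__(self):
        self.state = "INITIAL"
        self.active = None

    def cycle(self, speeds, requests):
        """One HCI cycle: speeds[i] is hand i's measured speed, and
        requests[i] is True if hand i asks for the token this cycle."""
        if self.state == "INITIAL":
            # Track the hand with the maximum moving speed.
            self.active = max(range(len(speeds)), key=speeds.__getitem__)
            self.state = "STEADY"
        elif any(requests):
            self.state = "INITIAL"   # re-track a new active hand next cycle
        return self.active
```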
To offset the significant power demands of hard disk drives in computer systems, drives are typically powered down during idle periods. This saves power, but accelerates duty cycle consumption, leading to earlier drive failure. Hybrid disks with a small amount of non-volatile flash memory (NVCache) are coming on the market. We present four I/O subsystem enhancements that exploit the characteristics of hybrid disks to improve system performance: 1) artificial idle periods, 2) a read-miss cache, 3) anticipatory spin-up, and 4) NVCache write-throttling. These enhancements reduce power consumption, duty cycling, NVCache block-erase impact, and the observed spinup latency of a hybrid disk, resulting in lower power consumption, greater reliability, and faster I/O. | ['Timothy Bisson', 'Scott A. Brandt', 'Darrell D. E. Long'] | A Hybrid Disk-Aware Spin-Down Algorithm with I/O Subsystem Support | 78,023 |
In the past few years, we have witnessed the increasing ubiquity of user-generated content on seller reputation and product condition in Internet-based used-good markets. Recent theoretical models of trading and sorting in used-good markets provide testable predictions to use to examine the presence of adverse selection and trade patterns in such dynamic markets. A key aspect of such empirical analyses is to distinguish between product-level uncertainty and seller-level uncertainty, an aspect the extant literature has largely ignored. Based on a unique, 5-month panel data set of user-generated content on used good quality and seller reputation feedback collected from Amazon, this paper examines trade patterns in online used-good markets across four product categories (PDAs, digital cameras, audio players, and laptops). Drawing on two different empirical tests and using content analysis to mine the textual feedback of seller reputations, the paper provides evidence that adverse selection continues to exist in online markets. First, it is shown that after controlling for price and other product, and for seller-related factors, higher quality goods take a longer time to sell compared to lower quality goods. Second, this result also holds when the relationship between sellers' reputation scores and time to sell is examined. Third, it is shown that price declines are larger for more unreliable products, and that products with higher levels of intrinsic unreliability exhibit a more negative relationship between price decline and volume of used good trade. Together, our findings suggest that despite the presence of signaling mechanisms such as reputation feedback and product condition disclosures, the information asymmetry problem between buyers and sellers persists in online markets due to both product-based and seller-based information uncertainty. 
No consistent evidence of substitution or complementarity effects between product-based and seller-level uncertainty is found. Implications for research and practice are discussed. | ['Anindya Ghose'] | Internet exchanges for used goods: an empirical analysis of trade patterns and adverse selection | 112,026
This paper presents a novel affine registration algorithm for diffusion tensor images. The proposed metric derived from the standpoint of diffusion profiles not only has concrete physical underpinning but also can be extended for comparing higher-order diffusion models. The non-translational part of the affine transformation is parametrized in the spirit of the Polar Decomposition Theorem. The registration objective function and its derivatives are derived analytically by combining this parametrization scheme with finite strain tensor reorientation. The affine algorithm is embedded in a multi-resolution piecewise affine framework for non-rigid registration. | ['Hui Zhang', 'Paul A. Yushkevich', 'James C. Gee'] | Registration of diffusion tensor images | 31,859
In Wireless Sensor Networks, the localization of sensor nodes is an important problem in many applications. In a typical localization problem, the unknown-position nodes determine their locations through information from three or more anchors. In the first part, popular heuristic optimization methods such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are compared with more recent optimization methods such as the Grey Wolf Optimizer (GWO), Firefly Algorithm (FA), and Brain Storm Optimization (BSO) in terms of the accuracy of the estimated sensor node locations. In the second part, an improvement to the localization algorithm is also proposed to increase the number of nodes that can be localized. The results of our proposed improvement are compared with the original algorithm in both the number of nodes that can be localized and the execution time under different network deployments. | ['Chin-Shiuh Shieh', 'Van-Oanh Sai', 'Yuh-Chung Lin', 'Tsair-Fwu Lee', 'Trong-The Nguyen', 'Quang-Duy Le'] | Improved Node Localization for WSN Using Heuristic Optimization Approaches | 887,170
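With noise-free ranges, the anchor-based setup above admits a closed-form baseline. The linear least-squares trilateration sketch below (names illustrative) shows the classical alternative that heuristic optimizers such as GA, PSO, GWO, FA, and BSO compete against when the ranges are noisy:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate an unknown node's position from >= 3 anchors and ranges.

    Subtracting the first range equation ||x - a_0||^2 = d_0^2 from the
    others cancels the quadratic term, leaving a linear system A x = b.
    """
    a0, d0 = anchors[0], dists[0]
    A = [2.0 * (a - a0) for a in anchors[1:]]
    b = [d0**2 - d**2 + a @ a - a0 @ a0
         for a, d in zip(anchors[1:], dists[1:])]
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```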
We investigate genre effects on the task of automatic sentence segmentation, focusing on two important domains - broadcast news (BN) and broadcast conversation (BC). We employ an HMM model based on textual and prosodic information and analyze differences in segmentation accuracy and feature usage between the two genres using both manual and automatic speech transcripts. Experiments are evaluated using Czech broadcast corpora annotated for sentence-like units (SUs). Prosodic features capture information about pause, duration, pitch, and energy patterns. Textual knowledge sources include words, part-of-speech, and automatically induced classes. We also analyze effects of using additional textual data that is not annotated for SUs. Feature analysis reveals significant differences in both textual and prosodic feature usage patterns between the two genres. The analysis is important for building automatic understanding systems when limited matched-genre data are available, or for designing eventual genre-independent systems. | ['Jachym Kolar', 'Yang Liu', 'Elizabeth Shriberg'] | Genre effects on automatic sentence segmentation of speech: A comparison of broadcast news and broadcast conversations | 359,790 |
Broadcast transmissions are the predominant form of network traffic in a VANET. However, since there is no MAC-layer recovery on broadcast frames within an 802.11-based VANET, the reception rates of broadcast messages can become very low, especially under saturated conditions. In this paper, we present an adaptive broadcast protocol that improves the reception rates of broadcast messages. We rely on the observation that a node in a VANET is able to detect network congestion by simply analyzing the sequence numbers of packets it has recently received. Based on the percentage of packets that are successfully received in the last few seconds, a node is able to dynamically adjust the contention window size and thus improve performance. | ['Nathan Balon', 'Jinhua Guo'] | Increasing broadcast reliability in vehicular ad hoc networks | 200,013
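The congestion signal and window adaptation sketched in the abstract — infer the reception rate from sequence-number gaps, then scale the contention window — might look like this; the thresholds and the doubling/halving policy are illustrative, not the paper's exact rule:

```python
def reception_rate(seq_nums):
    """Fraction of broadcast frames received, inferred from the gaps in a
    neighbor's packet sequence numbers over a recent window."""
    expected = seq_nums[-1] - seq_nums[0] + 1
    return len(seq_nums) / expected

def adapt_cw(cw, rate, cw_min=15, cw_max=1023, target=0.9):
    """Grow the contention window when the reception rate signals
    congestion; shrink it back toward cw_min when the channel is healthy."""
    if rate < target:
        return min(2 * cw + 1, cw_max)    # back off under congestion
    return max((cw - 1) // 2, cw_min)     # contend harder when uncongested
```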
The SKIM II processor is a microcoded hardware machine for the rapid evaluation of functional languages. This paper gives details of some of the more novel methods employed by SKIM II, and resulting performance measurements. The authors conclude that combinator reduction can still form the basis for the efficient implementation of a functional language. | ['W. R. Stoye', 'T. J. Clarke', 'A. C. Norman'] | Some practical methods for rapid combinator reduction | 38,123 |
We propose an approach to making the pedagogical knowledge of CS instructors explicitly available by coupling it to exercises. | ['Herman Koppelman'] | Exercises as a tool for sharing pedagogical knowledge | 437,377
In this paper, we extend the interconnection and damping assignment passivity-based control strategy to systems subjected to nonholonomic Pfaffian constraints. The results are applied to stabilize the pitch dynamics of an underactuated mobile inverted pendulum (MIP) robot subjected to nonholonomic constraints arising out of no-slip conditions. A novel feature of this paper is that we reduce the kinetic energy partial differential equations and potential energy partial differential equation to a set of ordinary differential equations, that are explicitly solved. | ['Vijay Muralidharan', 'M. Ravichandran', 'Arun D. Mahindrakar'] | Extending interconnection and damping assignment passivity-based control (IDA-PBC) to underactuated mechanical systems with nonholonomic Pfaffian constraints: The mobile inverted pendulum robot | 271,405 |
We propose a declarative approach for prototyping automated negotiations in multi-agent systems. Our approach is demonstrated by using Jason agent programming language to describe English and Dutch auctions. | ['Alex Muscar', 'Laura Surcel', 'Costin Badica'] | Using Jason to Develop Declarative Prototypes of Automated Negotiations | 667,706 |
Map Simplification with Topology Constraints: Exactly and in Practice. | ['Stefan Funke', 'T. Mendel', 'Alexander Miller', 'Sabine Storandt', 'Maria Wiebe'] | Map Simplification with Topology Constraints: Exactly and in Practice. | 976,358 |
We present a maximum margin framework that clusters data using latent variables. Using latent representations enables our framework to model unobserved information embedded in the data. We implement our idea by large margin learning, and develop an alternating descent algorithm to effectively solve the resultant non-convex optimization problem. We instantiate our latent maximum margin clustering framework with tag-based video clustering tasks, where each video is represented by a latent tag model describing the presence or absence of video tags. Experimental results obtained on three standard datasets show that the proposed method outperforms non-latent maximum margin clustering as well as conventional clustering approaches. | ['Guang-Tong Zhou', 'Tian Lan', 'Arash Vahdat', 'Greg Mori'] | Latent Maximum Margin Clustering | 321,126 |
Enhance the Word Vector with Prosodic Information for the Recurrent Neural Network Based TTS System | ['Xin Wang', 'Shinji Takaki', 'Junichi Yamagishi'] | Enhance the Word Vector with Prosodic Information for the Recurrent Neural Network Based TTS System | 882,755 |
In this paper we apply 3D printing and genetic algorithm-generated anticipatory system dynamics models to a homeland security challenge, namely understanding the interface between transnational organized criminal networks and local gangs. We apply 3D printing to visualize the complex criminal networks involved. This allows better communication of the network structures and clearer understanding of possible interventions. We are applying genetic programming to automatically generate anticipatory system dynamics models. This will allow both the structure and the parameters of system dynamics models to evolve. This paper reports the status of work in progress. This paper builds on previous work that introduced the use of genetic programs to automatically generate system dynamics models. This paper's contributions are that it introduces the use of 3D printing techniques to visualize complex networks and that it presents in more detail our emerging approach to automatically generating anticipatory system dynamics in weakly constrained, data-sparse domains. | ['Michael J. North', 'Pam Sydelko', 'Ignacio J. Martinez-Moyano'] | Applying 3D printing and genetic algorithm-generated anticipatory system dynamics models to a homeland security challenge | 660,818 |
This paper presents the performance of a Chinese-English cross-language information retrieval (CLIR) system equipped with topic-based pseudo relevance feedback. The web-based workflow simulates a real multilingual retrieval environment, and the feedback mechanism improves retrieval results automatically without putting an excessive burden on users. | ['Xuwen Wang', 'Xiaojie Wang', 'Qiang Zhang'] | A Web-Based CLIR System with Cross-Lingual Topical Pseudo Relevance Feedback | 352,314 |
Multi-robot coordination improves the working efficiency of a robot team and allows tasks to be completed more effectively; it is one of the important issues in mobile-robot research. Formation control, in turn, improves the efficiency of multi-robot coordination, and communication plays a key role in mobile robot systems that address real-world applications. Mobile ad-hoc networks (MANETs) are characterized by self-organization, rapid deployment and fault tolerance, so a multi-robot formation supported by a MANET is suitable for special situations where the communication devices of mobile networks cannot be preinstalled. The leader-follower method is mature, effective and fast to respond, but one issue with the typical leader-follower strategy is the lack of inter-robot information feedback throughout the group; moreover, existing formation-control research pays little attention to real-time communication. For these reasons, this paper combines the leader-follower method with an ad-hoc network to solve the real-time formation problem, so that the robots can take up their positions in the formation correctly and quickly and arrive at the goal position. Computer simulation results show that the approach is feasible. | ['Yi Zhang', 'Li Zeng', 'Yanhua Li', 'Quanjie Liu'] | Multi-robot formation control using leader-follower for MANET | 1,363 |
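A leader-follower formation controller of the kind described above can be sketched minimally as follows; the range/bearing parameterization and all numeric values are illustrative assumptions, not details taken from the paper:

```python
import math

def follower_goal(leader_xy, leader_heading, desired_range, desired_bearing):
    """Goal position for a follower that keeps a fixed range and
    bearing (expressed in the leader's frame) from the leader."""
    lx, ly = leader_xy
    angle = leader_heading + desired_bearing
    return (lx + desired_range * math.cos(angle),
            ly + desired_range * math.sin(angle))

# Leader at the origin heading along +x; the follower holds 2 m
# directly behind it (bearing pi in the leader's frame).
goal = follower_goal((0.0, 0.0), 0.0, 2.0, math.pi)
```

In a MANET-supported formation, each follower would receive `leader_xy` and `leader_heading` over the ad-hoc network and servo toward its computed goal.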
Environmental protection from productive investments becomes a major task for enterprises and constitutes a critical competitiveness factor. The region of Central Greece presents many serious and particular environmental problems. An Environmental Geographic Information System is under development that will maintain necessary and available information, including existing environmental legislation, specific data rules, regulations, restrictions and actions of the primary sector, existing activities of the secondary and tertiary sectors and their influences. The system will provide information about the environmental status in each location with respect to water resources, soil and atmosphere, the existence of significant pollution sources, existing surveys, studies and measurements for high risk areas, the land use and legal status of locations and the infrastructure networks. In this paper, we present a Database Design that supports the above mentioned objectives and information provision. More specifically, we present examples of user queries that the system should be able to answer for extraction of useful information, the basic categorization of data that will be maintained by the system, a data model that is able to support such data maintenance and examine how existing indexing structures can be utilized for efficient processing of such queries. | ['George Roumelis', 'Thanasis Loukopoulos', 'Michael Vassilakopoulos'] | Database Design of a Geo-environmental Information System | 747,600 |
This paper proposes a new method for constructing binary classification trees. The aim is to build simple trees, i.e. trees of minimal complexity, thereby facilitating interpretation and favouring the balance between optimization and generalization on test data sets. The proposed method is based on the metaheuristic strategy known as GRASP, in conjunction with optimization tasks. Essentially, the method modifies the criterion for selecting the attribute that determines the split at each node, incorporating a certain amount of randomisation in a controlled way. We compare our method with the traditional one by means of a set of computational experiments and conclude that the GRASP method (for small levels of randomness) significantly reduces tree complexity without decreasing classification accuracy. | ['Joaquín A. Pacheco', 'Esteban Alfaro', 'Silvia Casado', 'Matías Gámez', 'Noelia García'] | A GRASP method for building classification trees | 271,667 |
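The GRASP modification described above, choosing the split attribute from a restricted candidate list (RCL) instead of purely greedily, can be sketched as follows; the gain values and the linear RCL threshold are illustrative assumptions, not the paper's exact rule:

```python
import random

def grasp_select(gains, alpha, rng=random):
    """Pick a split attribute GRASP-style: keep attributes whose gain
    is within a fraction alpha of the best-to-worst gap, then choose
    one of them uniformly at random (alpha = 0 is pure greedy)."""
    best, worst = max(gains.values()), min(gains.values())
    threshold = best - alpha * (best - worst)
    rcl = [attr for attr, g in gains.items() if g >= threshold]
    return rng.choice(rcl)

# Hypothetical split gains at one tree node.
gains = {"age": 0.40, "income": 0.38, "zip": 0.10}
greedy_pick = grasp_select(gains, 0.0)   # alpha = 0 recovers the greedy choice
```

Small alpha values keep the candidate list short, which matches the paper's finding that low levels of randomness suffice to reduce tree complexity.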
Algorithms for Combining Rooted Triplets into a Galled Phylogenetic Network. | ['Jesper Jansson', 'Wing-Kin Sung'] | Algorithms for Combining Rooted Triplets into a Galled Phylogenetic Network. | 989,716 |
Beyond the Ad-Hoc and the Impractically Formal: Lessons from the Implementation of Formalisms of Intention. | ['Sean A. Lisse', 'Robert E. Wray', 'Marcus J. Huber'] | Beyond the Ad-Hoc and the Impractically Formal: Lessons from the Implementation of Formalisms of Intention. | 753,672 |
Due to the unknown dead-time coefficient, time-delay system identification is a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, the First-Order-Plus-Dead-Time (FOPDT) model, using the Genetic Algorithm (GA) technique. The quality and performance of GA-based identification are compared with those of extended Least-Mean-Square (LMS) methods, considering different types of time-delay systems, excitation signals, signal-to-noise ratios, and evaluation criteria. The results show that the GA technique has a very promising capability for handling this type of non-convex system identification problem. | ['Zhenyu Yang', 'Glen Thane Seested'] | Time-Delay System Identification Using Genetic Algorithm – Part One: Precise FOPDT Model Estimation | 885,006 |
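A toy version of a GA-based FOPDT fit can be sketched as follows; the GA operators, parameter bounds and settings are illustrative assumptions, not the authors' configuration:

```python
import math
import random

def fopdt_step(K, tau, L, t):
    """Step response of a First-Order-Plus-Dead-Time model
    G(s) = K * exp(-L*s) / (tau*s + 1)."""
    return 0.0 if t < L else K * (1.0 - math.exp(-(t - L) / tau))

def fitness(params, data):
    K, tau, L = params
    return -sum((fopdt_step(K, tau, L, t) - y) ** 2 for t, y in data)

def ga_fit(data, pop_size=40, gens=60, seed=0):
    """Toy real-coded GA: tournament selection, blend crossover and
    Gaussian mutation over (K, tau, L); the bounds are guesses."""
    rng = random.Random(seed)
    lo, hi = [0.1, 0.1, 0.0], [5.0, 10.0, 5.0]
    pop = [[rng.uniform(lo[i], hi[i]) for i in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=lambda p: fitness(p, data))
            b = max(rng.sample(pop, 3), key=lambda p: fitness(p, data))
            child = [(x + y) / 2.0 + rng.gauss(0.0, 0.05) for x, y in zip(a, b)]
            nxt.append([min(max(c, lo[i]), hi[i]) for i, c in enumerate(child)])
        pop = nxt
    return max(pop, key=lambda p: fitness(p, data))

# Noise-free step-response samples from a known plant (K=2, tau=3, L=1).
data = [(0.5 * i, fopdt_step(2.0, 3.0, 1.0, 0.5 * i)) for i in range(40)]
best = ga_fit(data)
```

Because the dead time L enters the model through a discontinuous shift, the squared-error landscape is non-convex in L, which is exactly the situation where a population-based search has an edge over gradient-style LMS updates.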
A sensor network is generally composed of a set of sensors with limited computation capability and power supply; thus, a well-defined resource allocation scheme is essential for maintaining the whole network. This paper investigates the dynamic resource allocation problem in a sensor and robot network for mobile target tracking tasks. Most sensors remain in sleep mode except for those that can contribute to tracking. Resource allocation in the sensor network is achieved by a hierarchical structure: clustering. Upon detecting an interesting event, a set of sensors forms a cluster, and only cluster members are activated during the tracking task. The cluster headship and membership are updated based on the target's movement. In this paper, the clustering algorithm considers a sensing area with communication holes, and a routing tree is set up within the cluster. For a cluster with communication and/or sensing holes, mobile sensors are deployed to enhance the sensing and communication capability in the cluster area. Simulations verify the proposed algorithm. | ['Jindong Tan', 'Guofeng Tong'] | Dynamic resource allocation for target tracking in robotic sensor networks | 181,350 |
Autonomous systems are increasingly conceived as a means to allow operation in changeable or poorly understood environments. However, granting a system autonomy over its operation removes the ability of the developer to be completely sure of the system's behaviour under all operating contexts. This combination of environmental and behavioural uncertainty makes the achievement of assurance through testing very problematic. This paper focuses on a class of system, called an m-DAS, that uses run-time models to drive run-time adaptations in changing environmental conditions. We propose a testing approach which is itself model-driven, using model analysis to significantly reduce the set of test cases needed to test for emergent behaviour. Limited testing resources may therefore be prioritised for the most likely scenarios in which emergent behaviour may be observed. | ['Kristopher Welsh', 'Peter Sawyer'] | Managing Testing Complexity in Dynamically Adaptive Systems: A Model-Driven Approach | 431,789 |
We propose an environment for the evolutionary prototyping technique, a theoretical framework based on abstract interpretation for Java programs. In general, it is difficult to execute a prototype in the intermediate stages of top-down development, which prevents us from finding bugs early. The technique we propose allows programmers to execute the prototype as a whole even though it is only partially implemented. During evolutionary development, an object's interface changes repeatedly; our idea is to use an earlier runnable object in place of an unimplemented later object at runtime. The substitution of objects is based on abstract interpretation. However, Java needs a mechanism for using the alternative object despite the interface changes. To solve this problem, we introduce the proxy object, a mediator that exposes the full interface and transforms objects dynamically and automatically to keep execution going. Moreover, proxy objects can be generated automatically by the dynamic proxy class API, part of Java's reflection mechanism. As a result, our environment reduces the cost of writing glue code to adjust existing objects. | ['Hiroyuki Ozaki', 'Shingo Ban', 'Katsuhiko Gondow', 'Takuya Katayama'] | An environment for evolutionary prototyping Java programs based on abstract interpretation | 176,326 |
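The paper's mechanism relies on Java's dynamic proxy class API; the underlying substitution idea, forwarding calls to an earlier runnable object when the later object does not implement them yet, can be illustrated language-neutrally (here in Python, with made-up class and method names):

```python
class EvolutionProxy:
    """Mediator that forwards calls to the current implementation and
    falls back to an earlier, runnable stand-in for methods the
    current (partial) implementation does not define yet."""
    def __init__(self, current, fallback):
        self._current = current
        self._fallback = fallback

    def __getattr__(self, name):
        # Only reached for attributes not found on the proxy itself.
        target = self._current if hasattr(self._current, name) else self._fallback
        return getattr(target, name)

class StubV2:            # later object, only partially implemented
    def area(self):
        return 12.0

class PrototypeV1:       # earlier runnable object
    def area(self):
        return 10.0
    def perimeter(self):
        return 14.0

shape = EvolutionProxy(StubV2(), PrototypeV1())
# area() resolves to the new code, perimeter() to the old prototype,
# so the prototype stays executable as a whole throughout development.
```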
We study a generative model in which hidden causes combine competitively to produce observations. Multiple active causes combine to determine the value of an observed variable through a max function, in the place where algorithms such as sparse coding, independent component analysis, or non-negative matrix factorization would use a sum. This max rule can represent a more realistic model of non-linear interaction between basic components in many settings, including acoustic and image data. While exact maximum-likelihood learning of the parameters of this model proves to be intractable, we show that efficient approximations to expectation-maximization (EM) can be found in the case of sparsely active hidden causes. One of these approximations can be formulated as a neural network model with a generalized softmax activation function and Hebbian learning. Thus, we show that learning in recent softmax-like neural networks may be interpreted as approximate maximization of a data likelihood. We use the bars benchmark test to numerically verify our analytical results and to demonstrate the competitiveness of the resulting algorithms. Finally, we show results of learning model parameters to fit acoustic and visual data sets in which max-like component combinations arise naturally. | ['Jörg Lücke', 'Maneesh Sahani'] | Maximal Causes for Non-linear Component Extraction | 433,872 |
Compact filter based on a hybrid structure of substrate integrated waveguide and coplanar waveguide | ['Zhaosheng He', 'Chang Jiang You', 'Supeng Leng', 'Xiang Li', 'Yongmao Huang', 'Haiyan Jin'] | Compact filter based on a hybrid structure of substrate integrated waveguide and coplanar waveguide | 976,593 |
Using both modified bit-level comparators and the bitonic sorting algorithm, three multicast routing networks are introduced. The first is a dynamic network with a time complexity of O(log^2 N) and a cost complexity of O(N log N). The other two are static-topology Hypercube and 2D-MESH networks. A new type of wormhole router is adopted to achieve a general multicast time complexity of O(log^2 N) for the Hypercube and O(√N) for the 2D-MESH. | ['Majed Z. Al-Hajery', 'Kenneth E. Batcher'] | ON the role of K-Bits bitonic sorting network in multicast routing | 246,093 |
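For reference, the compare-exchange pattern of the bitonic sorting algorithm underlying these networks can be sketched in software; this is the textbook recursive formulation, not the paper's hardware design:

```python
def bitonic_sort(a, ascending=True):
    """Recursive bitonic sort; input length must be a power of two.
    The compare-exchange pattern mirrors the comparator network a
    routing fabric would implement in hardware."""
    n = len(a)
    if n <= 1:
        return list(a)
    half = n // 2
    first = bitonic_sort(a[:half], True)       # ascending half
    second = bitonic_sort(a[half:], False)     # descending half
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(a, ascending):
    """Merge a bitonic sequence into a monotonic one."""
    n = len(a)
    if n <= 1:
        return list(a)
    a = list(a)
    half = n // 2
    for i in range(half):
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return _bitonic_merge(a[:half], ascending) + _bitonic_merge(a[half:], ascending)

# Network depth is O(log^2 N), matching the multicast time complexity above.
sorted_out = bitonic_sort([7, 3, 0, 5, 6, 1, 4, 2])
```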
We present an algorithm for the application of support vector machine (SVM) learning to image compression. The algorithm combines SVMs with the discrete cosine transform (DCT). Unlike classic radial basis function networks or multilayer perceptrons, which require the topology of the network to be defined before training, an SVM selects the minimum number of training points, called support vectors, that ensure modeling of the data within a given level of accuracy (the insensitivity zone ε). It is this property that is exploited as the basis for an image compression algorithm. Here, the SVM learning algorithm performs the compression in the spectral domain of DCT coefficients, i.e., the SVM approximates the DCT coefficients. The parameters of the SVM are stored in order to recover the image. Results demonstrate that even though there is an extra lossy step compared with the baseline JPEG algorithm, the new algorithm dramatically increases compression for a given image quality; conversely, it increases image quality for a given compression ratio. The approach can be readily applied to other modeling schemes that take the form of a sum of weighted basis functions. | ['Jonathan Robinson', 'Vojislav Kecman'] | Combining support vector machine learning with the discrete cosine transform in image compression | 380,944 |
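The DCT stage the SVM operates on can be sketched in pure Python; zeroing coefficients below a threshold is only a crude stand-in for the SVM's ε-insensitive support-vector selection, and the 8-sample block is a made-up example:

```python
import math

def dct2(x):
    """Unnormalized DCT-II of a 1-D block."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse of dct2 (DCT-III with the matching scaling)."""
    N = len(X)
    return [X[0] / N + 2.0 / N * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

def compress(x, eps):
    """Keep only coefficients larger than eps: a crude analogue of
    the SVM's epsilon-insensitive selection of support vectors."""
    return [c if abs(c) > eps else 0.0 for c in dct2(x)]

block = [52, 55, 61, 66, 70, 61, 64, 73]
approx = idct2(compress(block, 5.0))
```

In the paper's scheme, the stored quantities are the SVM parameters fitted to these spectral coefficients rather than the thresholded coefficients themselves.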
In this paper, we propose a fuzzy clustering (FC) approach to improve the template matching method described in (T. Wakahara et al., Proc. IEEE, vol.80, no.7, p.1181-1194, 1992), which demonstrated high recognition rates. Our focus is on reducing recognition time. The key idea is to organize the template pool in a structured manner so that matching can be performed more quickly. Through hierarchical FC, the original template pool takes the form of a decision tree in which each internal node is represented by a cluster center. Since the number of class templates in each subcluster is smaller than in the original cluster, recognition time is effectively reduced. An approximate formula for the number of distance calculations needed by the presented scheme is derived. FC provides a mechanism for cluster overlapping, so that an almost 100% hit ratio can be obtained in the middle of the recognition process. In the experiments conducted, we obtained approximately a 3.4-times reduction in recognition time (using the simplified formula we derived) at only a 0.2% average loss of recognition rate. | ['Ming-Yen Tsai', 'Leu-Shing Lan', 'Wei-Tzen Pao'] | Online recognition of Chinese handwriting using a hierarchical fuzzy clustering approach | 474,314 |
We examine the DoS resistance of the Host Identity Protocol (HIP) and discuss a technique to deny legitimate services. To demonstrate the attack, we implement a formal model of HIP based on Timed Petri Nets and use the simulation approach provided in CPN Tools to achieve a formal analysis. By integrating adjustable puzzle difficulty, HIP can mitigate the effect of DoS attacks. However, the inability of a hash-based puzzle to protect against coordinated adversaries leaves the responder susceptible to DoS attacks at the identity-verification phase. As a result, we propose an enhanced approach that employs a time-lock puzzle instead of a hash-based scheme. Once the time-lock puzzle is adopted, the effect of coordinated attacks is removed and the throughput of legitimate users returns to the desired level. | ['Suratose Tritilanunt', 'Colin Boyd', 'Ernest Foo', 'Juan Manuel González Nieto'] | Examining the dos resistance of HIP | 97,595 |
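A time-lock puzzle of the Rivest-Shamir-Wagner repeated-squaring kind (the inherently sequential work that resists coordinated solvers) can be sketched as follows; the toy primes and the parameter t are illustrative only:

```python
def make_puzzle(p, q, t, base=2):
    """The puzzle creator, knowing the factors p and q, shortcuts the
    t sequential squarings via Euler's theorem: reduce the exponent
    2**t modulo phi(n), which is cheap."""
    n = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)
    return n, pow(base, e, n)

def solve_puzzle(n, t, base=2):
    """The solver must perform t squarings strictly in sequence; the
    chain cannot be parallelized, which is what defeats coordinated
    adversaries pooling their hardware."""
    x = base % n
    for _ in range(t):
        x = x * x % n
    return x

# Toy primes -- far too small for real use.
n, answer = make_puzzle(10007, 10009, t=1000)
```

In the HIP context, the responder would play the creator role (verifying a solution cheaply) while the initiator pays the sequential cost.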
We describe a novel system for learning and adapting type-2 fuzzy controllers for intelligent agents embedded in ubiquitous computing environments (UCEs). Our type-2 agents operate non-intrusively in an online, lifelong learning manner, learning the user's behaviour so as to control the UCE on the user's behalf. We have performed unique experiments in which the type-2 intelligent agent learnt and adapted online to the user's behaviour during a five-day stay in the intelligent dormitory (iDorm), a real UCE test bed. We show how our type-2 agent deals with the uncertainty and imprecision present in UCEs to give very good performance that outperforms type-1 fuzzy agents while using a smaller number of rules. | ['Faiyaz Doctor', 'Hani Hagras', 'Victor Callaghan'] | A type-2 fuzzy embedded agent for ubiquitous computing environments | 9,099 |
A Brief Note About Rott Contraction | ['Eduardo L. Fermé', 'Ricardo Oscar Rodríguez'] | A Brief Note About Rott Contraction | 129,034 |
Two issues in linear algebra algorithms for multicomputers are addressed: first, how to unify parallel implementations of the same algorithm in a decomposition-independent way; second, how to optimize naive parallel programs while maintaining decomposition independence. Several matrix decompositions are viewed as instances of a more general allocation function called subcube matrix decomposition. This meta-decomposition yields a programming environment characterized by general primitives that allow one to design meta-algorithms independently of a particular decomposition. The authors apply this framework to the parallel solution of dense matrices, demonstrating that most existing algorithms can be derived by suitably setting the primitives used in the meta-algorithm. A further application of this programming style concerns the optimization of parallel algorithms: the idea of overlapping communication and computation has been extended from 1-D to 2-D decompositions, providing a first attempt at a decomposition-independent definition of such optimization strategies. | ['Michele Angelaccio', 'Michele Colajanni'] | Unifying and optimizing parallel linear algebra algorithms | 233,878 |
We propose a false-negative approach to approximate the set of frequent itemsets (FIs) over a sliding window. Existing approximate algorithms use an error parameter, ε, to control the accuracy of the mining result. However, the use of ε leads to a dilemma: a smaller ε gives a more accurate mining result but higher computational complexity, while increasing ε degrades the mining accuracy. We address this dilemma by introducing a progressively increasing minimum support function. When an itemset is retained in the window longer, we require its minimum support to approach the minimum support of an FI. Thus, the number of potential FIs to be maintained is greatly reduced. Our experiments show that our algorithm not only attains highly accurate mining results, but also runs significantly faster and consumes less memory than existing algorithms for mining FIs over a sliding window. | ['James Cheng', 'Yiping Ke', 'Wilfred Ng'] | Maintaining frequent itemsets over high-speed data streams | 963,733 |
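The progressively increasing minimum support function can be sketched as a simple schedule; the linear ramp below is an illustrative choice, not the exact function used in the paper:

```python
def progressive_minsup(sigma, epsilon, age, window):
    """Required support for an itemset that has stayed `age` batches
    in a window of `window` batches: starts at the relaxed level
    epsilon * sigma and rises linearly toward the true minsup sigma."""
    frac = min(age / window, 1.0)
    return epsilon * sigma + (sigma - epsilon * sigma) * frac

# sigma = 0.10, epsilon = 0.3, window of 10 batches.
thresholds = [progressive_minsup(0.10, 0.3, age, 10) for age in (0, 5, 10)]
```

Because new itemsets only need to clear the relaxed threshold, few true FIs are missed (the false-negative guarantee), while long-lived itemsets must eventually meet the full minimum support, which bounds the number of itemsets maintained.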
An optimized SVPWM algorithm using a neural network is presented, and the characteristics of the neural network are introduced. The algorithm increases precision while reducing calculation time, additional memory, and harmonic deterioration. This paper also introduces a new FPGA, Altera's ACEX chip, and the EDA tool MAX+PLUS II. In recent years FPGAs have developed rapidly, offering high speed, large scale and easy-to-use design software. It is convenient to design, simulate and verify circuits on an FPGA, and it is easy to debug the hardware using ISP (In-System Programmability), so the algorithm is readily realized. If an improved algorithm needs to be adopted, the only work required is to modify the algorithm in software and download the file to the chip using ISP; the hardware circuit is unchanged. FPGAs are therefore well suited to product design and algorithm verification. At the end of this paper, experimental results are shown which verify these advantages. | ['Li Jianlin', 'Xu Hong-hua', 'Zhangzhong Chao'] | Based FPGA Development of Optimized SVPWM Algorithm | 113,473 |
ROSCoq: Robots Powered by Constructive Reals | ['Abhishek Anand', 'Ross A. Knepper'] | ROSCoq: Robots Powered by Constructive Reals | 300,460 |
Motivation: Segmental duplications > 1 kb in length with ≥ 90% sequence identity between copies comprise nearly 5% of the human genome. They are frequently found in large, contiguous regions known as duplication blocks that can contain mosaic patterns of thousands of segmental duplications. Reconstructing the evolutionary history of these complex genomic regions is a non-trivial, but important task. Results: We introduce parsimony and likelihood techniques to analyze the evolutionary relationships between duplication blocks. Both techniques rely on a generic model of duplication in which long, contiguous substrings are copied and reinserted over large physical distances, allowing for a duplication block to be constructed by aggregating substrings of other blocks. For the likelihood method, we give an efficient dynamic programming algorithm to compute the weighted ensemble of all duplication scenarios that account for the construction of a duplication block. Using this ensemble, we derive the probabilities of various duplication scenarios. We formalize the task of reconstructing the evolutionary history of segmental duplications as an optimization problem on the space of directed acyclic graphs. We use a simulated annealing heuristic to solve the problem for a set of segmental duplications in the human genome in both parsimony and likelihood settings. Availability: Supplementary information is available at http://www.cs.brown.edu/people/braphael/supplements/. Contact: clkahn@cs.brown.edu; braphael@cs.brown.edu. | ['Crystal L. Kahn', 'Borislav Hristov', 'Benjamin J. Raphael'] | Parsimony and likelihood reconstruction of human segmental duplications | 499,939 |
Tracking Control for Two-dimensional Overhead Crane - Feedback Linearization with Linear Observer. | ['Tamás Rózsa', 'Bálint Kiss'] | Tracking Control for Two-dimensional Overhead Crane - Feedback Linearization with Linear Observer. | 769,511 |
Spearphishing is a prominent targeted attack vector in today's Internet. By impersonating trusted email senders through carefully crafted messages and spoofed metadata, adversaries can trick victims into launching attachments containing malicious code or into clicking on malicious links that grant attackers a foothold into otherwise well-protected networks. Spearphishing is effective because it is fundamentally difficult for users to distinguish legitimate emails from spearphishing emails without additional defensive mechanisms. However, such mechanisms, such as cryptographic signatures, have found limited use in practice due to their perceived difficulty of use for normal users. In this paper, we present a novel automated approach to defending users against spearphishing attacks. The approach first builds probabilistic models of both email metadata and stylometric features of email content. Then, subsequent emails are compared to these models to detect characteristic indicators of spearphishing attacks. Several instantiations of this approach are possible, including performing model learning and evaluation solely on the receiving side, or senders publishing models that can be checked remotely by the receiver. Our evaluation of a real data set drawn from 20 email users demonstrates that the approach effectively discriminates spearphishing attacks from legitimate email while providing significant ease-of-use benefits over traditional defenses. | ['Sevtap Duman', 'Kubra Kalkan-Cakmakci', 'Manuel Egele', 'William K. Robertson', 'Engin Kirda'] | EmailProfiler: Spearphishing Filtering with Header and Stylometric Features of Emails | 877,052 |
We investigate the association between constellation shaping and bit-interleaved coded modulation with iterative decoding (BICM-ID). To this end, we consider a technique which consists of inserting shaping block codes between mapping and channel coding functions in order to achieve constellation shaping. By assuming the example of a 2-b/s/Hz 16-quadrature amplitude modulation BICM-ID, it is demonstrated using computer simulations that this technique can improve the performance of BICM-ID schemes by a few tenths of a decibel | ['Boon Kien Khoo', 'S.Y. Le Goff', 'Bayan S. Sharif', 'Charalampos C. Tsimenidis'] | Bit-interleaved coded modulation with iterative decoding using constellation shaping | 101,878 |
The practice of 'crowdsourcing' under the new technological regime has opened the doors of huge data repositories. In recent years, crowdsourcing has expanded rapidly, allowing citizens to connect with each other and governments to connect with the common mass, to coordinate disaster response work, to map political conflicts, to acquire information quickly and to participate in issues that affect the day-to-day life of citizens. Crowdsourcing has the potential to offer smart governance by gathering and analyzing massive data from citizens. As data is a key enabler of proper public governance, this paper aims to provide a picture of the potential offers that crowdsourcing could make in support of crisis governance in the post-2015 world, while illustrating some critical challenges of data protection and privacy in different service sectors. Lastly, with a brief analysis of privacy, online data protection, and the safety level of some crowdsourcing tools, this paper proposes brief guidelines for different stakeholders and some future work to avoid mismanagement of crowdsourced data and to protect the data, privacy and security of end users. | ['Buddhadeb Halder'] | Crowdsourcing collection of data for crisis governance in the post-2015 world: potential offers and crucial challenges | 483,647 |
We investigate the spectral efficiency achievable by random synchronous code-division multiple access (CDMA) with quaternary phase-shift keying (QPSK) modulation and binary error-control codes, in the large system limit where the number of users, the spreading factor, and the code block length go to infinity. For given codes, we maximize spectral efficiency assuming a minimum mean-square error (MMSE) successive stripping decoder for the cases of equal rate and equal power users. In both cases, the maximization of spectral efficiency can be formulated as a linear program and admits a simple closed-form solution that can be readily interpreted in terms of power and rate control. We provide examples of the proposed optimization methods based on off-the-shelf low-density parity-check (LDPC) codes and we investigate by simulation the performance of practical systems with finite code block length. | ['Giuseppe Caire', 'Souad Guemghar', 'Aline Roumy', 'Sergio Verdu'] | Maximizing the spectral efficiency of coded CDMA under successive decoding | 17,463 |