abstract | authors | title | __index_level_0__
---|---|---|---|
Publisher Summary: This chapter presents an architectural overview of the Hewlett-Packard (HP) Exemplar V-Class Technical Server. The Exemplar Technical Servers from HP address major computing challenges such as compute, file, product data management, storage and intranet applications, and range from entry-level systems to high-end supercomputing systems. The systems of the Exemplar product line possess very high scalability in terms of performance and the ability to provide consistent price/performance over the entire range of each product line. They also offer a consistent programming model: all of these systems, regardless of performance level, present the same application programming environment. The first generation, the Exemplar SPP1x00 family with the models SPP1000, SPP1200 and SPP1600, pioneered the use of a highly scalable operating system combined with a global shared memory (GSM) programming environment. The V-Class offers improved performance by employing the PA-8200 processor, HP's latest RISC CPU, and increased functionality by running native HP-UX 11.0, a full 64-bit implementation of UNIX. HP's PA-8200 processor, which currently delivers 800 MFLOPS (millions of floating-point operations per second) at 200 MHz, is used in the Exemplar V-Class server. This processor offers up to 4-way superscalar processing, yielding 4 operations per clock. | ['Frank P.E. Baetke', 'Henry V.A. Strauß'] | Architectural overview of the HP Exemplar V-Class Technical Server | 695,595 |
The Challenge of Creating Geo-Location Markup for Digital Books | ['Annika Hinze', 'David Bainbridge', 'Sally Jo Cunningham'] | The Challenge of Creating Geo-Location Markup for Digital Books | 900,231 |
Evaluation of localization technologies for use in the operating room. | ['Gel Han', 'Susanne Meindl', 'Sonja Vogl', 'Gudrun Klinker'] | Evaluation von Lokalisierungstechnologien für den Einsatz im Operationssaal. | 995,421 |
Prior-knowledge-based NFFT for CT metal artifact reduction. | ['Bärbel Kratz', 'May Oehler', 'Thorsten M. Buzug'] | Vorwissensbasierte NFFT zur CT-Metallartefaktreduktion. | 769,100 |
Results on i.p.s. Hypergroups. | ['Rajab Ali Borzooei', 'Abbas Hasankhani', 'H. Rezaei'] | Results on i.p.s. Hypergroups. | 735,873 |
We investigate parameterizations of both database instances and queries that make query evaluation fixed-parameter tractable in combined complexity. We introduce a new Datalog fragment with stratified negation, intensional-clique-guarded Datalog (ICG-Datalog), with linear-time evaluation on structures of bounded treewidth for programs of bounded rule size. Such programs capture in particular conjunctive queries with simplicial decompositions of bounded width, guarded negation fragment queries of bounded CQ-rank, or two-way regular path queries. Our result is shown by compiling to alternating two-way automata, whose semantics is defined via cyclic provenance circuits (cycluits) that can be tractably evaluated. Last, we prove that probabilistic query evaluation remains intractable in combined complexity under this parameterization. | ['Antoine Amarilli', 'Pierre Bourhis', 'Mikaël Monet', 'Pierre Senellart'] | Combined Tractability of Query Evaluation via Tree Automata and Cycluits. | 998,221 |
Entity Resolution constitutes a core task for data integration that, due to its quadratic complexity, typically scales to large datasets through blocking methods. These can be configured in two ways. The schema-based configuration relies on schema information in order to select signatures of high distinctiveness and low noise, while the schema-agnostic one treats every token from all attribute values as a signature. The latter approach has significant potential, as it requires no fine-tuning by human experts and it applies to heterogeneous data. Yet, there is no systematic study on its performance relative to the schema-based configuration. This work closes this gap by analytically comparing the two configurations in terms of effectiveness, time efficiency and scalability. We apply them to 9 established blocking methods and to 11 benchmarks of structured data. We provide valuable insights into the internal functionality of the blocking methods with the help of a novel taxonomy. Our studies reveal that the schema-agnostic configuration offers unsupervised and robust definition of blocking keys under versatile settings, trading a higher computational cost for a consistently higher recall than the schema-based one. It also enables the use of state-of-the-art blocking methods without schema knowledge. | ['George Papadakis', 'George Alexiou', 'George Papastefanatos', 'Georgia Koutrika'] | Schema-agnostic vs schema-based configurations for blocking methods on homogeneous data | 581,627 |
This paper proposes a method for keyframe selection from captured motion data. Motion capture systems have been widely used in movies, games and human motion analysis. Most previous methods use the rotation angles directly and measure a cost over the rotation curves to select keyframes. One drawback of these methods is that they do not directly control the positions of the joints in 3D space. Our method proposes a position-based keyframe detection scheme. In our framework, frames are decimated one by one while measuring the positions of all the joints. We use a cost that is the sum of all the joint position differences; after these costs are measured for all frames, the frame with the lowest cost is decimated. We demonstrate the method in the experimental section on several typical motions. | ['Hideyuki Togawa', 'Masahiro Okuda'] | Position-Based Keyframe Selection for Human Motion Animation | 339,574 |
Visualisation of Musical Structure by Applying Improved Editing Algorithms. | ['Pierre Hanna', 'Matthias Robine', 'Pascal Ferraro'] | Visualisation of Musical Structure by Applying Improved Editing Algorithms. | 786,581 |
The authors show how the Euclidean algorithm fits into the behavioral framework of exact modeling and how it computes solutions of the scalar minimal partial realization problem. It turns out that the Euclidean algorithm can be considered as a special instance of Wolovich's procedure (1974) to achieve row reducedness for a given polynomial 2×2 matrix. The authors show in detail how this approach yields a parameterization of all minimal solutions in terms of polynomials that are sequentially produced by the Euclidean algorithm. | ['Margreta Kuijper'] | Partial realization and the Euclidean algorithm | 475,225 |
A permanent-magnet (PM)-assisted synchronous reluctance (PMASR) machine exhibits both high efficiency and a high flux-weakening (FW) range. However, the best performance is achieved after a machine design optimization. In industrial applications, the design of PMASR machines is required to satisfy an increasing number of constraints. The key points are lamination geometry, material properties, and control strategy. This paper analyzes the influence of PM volume (flux level) on the motor performance, while lamination geometry and stack length are kept fixed. Thus, the PM volume set into the rotor is optimized. The considered PMASR motor is designed for a very high FW speed range. The study is based on finite-element (FE) analysis. The accuracy of the FE simulations is verified by comparing their results with measurements on a prototype. The FE model is then used to study the different cases. | ['Massimo Barcaro', 'Nicola Bianchi', 'Freddy Magnussen'] | Permanent-Magnet Optimization in Permanent-Magnet-Assisted Synchronous Reluctance Motor for a Wide Constant-Power Speed Range | 208,305 |
Partitioned Memory Models for Program Analysis | ['Wei Wang', 'Clark Barrett', 'Thomas Wies'] | Partitioned Memory Models for Program Analysis | 960,073 |
Inverse Light Design for High-occlusion Environments | ['Anastasios Gkaravelis', 'Georgios Papaioannou', 'Konstantinos Kalampokis'] | Inverse Light Design for High-occlusion Environments | 737,397 |
This work presents an experiment conducted in an engineering school with 10 senior students. During this experiment we asked the group to make a decision first in a face-to-face meeting, then in an asynchronous non-distributed situation, and finally in an asynchronous distributed situation. For this purpose the group participants used a group multi-criteria decision support system: the Co-oP system. Afterwards, a questionnaire was given to the group members; the results are analyzed and then discussed. | ['Pascale Zaraté', 'Jean-Luc Soubie', 'Tung X. Bui'] | Experiment of a Group Multi-Criteria Decision Support System for Distributed Decision Making Processes | 453,701 |
To ascertain the possible level of risk in a digital business ecosystem interaction, the initiating agent has to determine beforehand the probability of failure, the possible consequences of failure, and the probability of losing the resources it invests while interacting with the other agent. Out of these three constituents, the initiating agent can determine beforehand the probability of failure in interacting with an agent either by considering its past interaction history with it or by soliciting recommendations from other agents. In both cases, it is imperative for the agent who is either considering its past interaction history, or who is communicating a recommendation about the other agent, to know the accurate level of failure in interacting with the other agent. To achieve this, in this paper we propose a methodology by which the initiating agent of the interaction ascertains the level of failure in the interaction, after interacting with an agent. | ['Omar Khadeer Hussain', 'Elizabeth Chang', 'Farookh Khadeer Hussain', 'Tharam S. Dillon'] | Quantifying the level of failure in a digital business ecosystem interactions | 470,796 |
Diversity is a powerful means to increase the transmission performance of wireless communications. For the case of fountain code relaying, it has been shown previously that introducing diversity is also beneficial since it counteracts transmission losses on the channel. Instead of simply forwarding information hop by hop, each sensor node diversifies the information flow using XOR combinations of stored packets. This approach has been shown to be efficient for random linear fountain codes. However, random linear codes exhibit high decoding complexity. In this paper, we propose diversity-increasing relaying strategies for the more realistic and lower-complexity Luby Transform code in a linear network. Results are provided herein for a linear network assuming uniform imperfect channel states. | ['Anya Apavatjrut', 'Claire Goursaud', 'Katia Jaffrès-Runser', 'Cristina Comaniciu', 'Jean-Marie Gorce'] | Toward Increasing Packet Diversity for Relaying LT Fountain Codes in Wireless Sensor Networks | 32,637 |
The adoption of smartphones, which have evolved from simple communication devices into smart, multipurpose devices, is constantly increasing. Amongst the main reasons for their vast pervasiveness are their small size, their enhanced functionality, and their ability to host many useful and attractive applications. Furthermore, recent studies estimate that application installation in smartphones acquired from official application repositories, such as the Apple Store, will continue to increase. In this context, the official application repositories might become attractive to attackers trying to distribute malware via these repositories. The paper examines the security inefficiencies related to application distribution via application repositories. Our contribution focuses on surveying the application management procedures enforced during application distribution in the popular smartphone platforms (i.e. Android, BlackBerry, Apple iOS, Symbian, Windows Phone), as well as on proposing a scheme for an application management system suited for secure application distribution via application repositories. | ['Alexios Mylonas', 'Bill Tsoumas', 'Stelios Dritsas', 'Dimitris Gritzalis'] | A secure smartphone applications roll-out scheme | 603,866 |
We develop a novel approach for decision making under uncertainty in high-dimensional state spaces, considering both active unfocused and focused inference, where in the latter case reducing the uncertainty of only a subset of variables is of interest. State-of-the-art approaches typically first calculate the posterior information (or covariance) matrix, followed by its determinant calculation, and do so separately for each candidate action. In contrast, using the generalized matrix determinant lemma, we avoid calculating these posteriors and determinants of large matrices. Furthermore, as our key contribution we introduce the concept of calculation re-use, performing a one-time computation that depends on state dimensionality and system sparsity, after which evaluating the impact of each candidate action no longer depends on state dimensionality. Such a concept is derived for both active focused and unfocused inference, leading to general, non-myopic and exact approaches that are faster by orders of magnitude compared to the state of the art. We verify our approach experimentally in two scenarios, sensor deployment (focused and unfocused) and measurement selection in visual SLAM, and show its superiority over standard techniques. | ['Dmitry Kopitkov', 'Vadim Indelman'] | Computationally efficient decision making under uncertainty in high-dimensional state spaces | 956,213 |
A major issue in software testing is the automatic generation of the inputs to be applied to the programme under test. To solve this problem, a number of approaches based on search methods have been developed in the last few years, offering promising results for adequacy criteria like, for instance, branch coverage. We view branch coverage as the satisfaction of a number of constraints. This allows us to formulate test data generation as a constrained optimisation problem or as a constraint satisfaction problem. From this viewpoint, many of the generators proposed so far can be seen to follow the same particular approach. Furthermore, the constraint-handling point of view overcomes the limitations of that approach and opens the door to new designs and search strategies that, to the best of our knowledge, have not been considered yet. As a case study, we develop test data generators employing different penalty objective functions or multiobjective optimisation. The results of the conducted preliminary experiments suggest these generators can improve the performance of classical approaches. | ['Ramón Sagarna', 'Xin Yao'] | Handling Constraints for Search Based Software Test Data Generation | 202,698 |
This paper describes a prototype system for performing handover between cameras with non-overlapping views. The design is being used to identify problems that may arise in the development of a larger, more capable, and fully automatic system. If there is no information about the spatio-temporal relationship between cameras to assist in matching individuals, similarities in appearance may be used. Here, the object's appearance is represented by a vector of features calculated from its delineation. The features considered are the scale-invariant feature transform, grey-level co-occurrence matrix features, local binary patterns, Zernike moments and some simple colour features. The system has been tested on a difficult surveillance scenario, which considers opposing views of the subjects (frontal presentation in one sequence matched with rear presentation in the other, and vice versa). Several classification strategies are employed to determine the best match across presentations of the subjects in each sequence. The quality of the results was lower than expected but provides useful information for future robustification of the system. | ['Nicholas J. Redding', 'Julius Fabian Ohmer', 'Judd Kelly', 'Tristrom Cooke'] | Cross-Matching via Feature Matching for Camera Handover with Non-overlapping Fields of View | 37,273 |
In the field of software-intensive systems, the software industry needs a new method to achieve a high level of productivity through reuse of the software embedded in these systems. Product line engineering is an emerging software engineering discipline with the new approach that makes it efficient to reuse embedded software by scoping a domain of systems. Features are described in terms of system requirements, covering both functional and nonfunctional requirements. Features are arranged in a tree structure, such that a general feature consists of subordinate features. The feature tree has a central role in system design and development. The construction of a feature tree must be based on all fundamental design knowledge about a domain of systems, such as integrated circuits and mechanical devices as well as software entities, because design knowledge from all engineering disciplines in a domain of systems affects commonality and variability modeling in the feature tree. Feature modeling therefore needs a global systems engineering viewpoint. | ['Kei Kurakawa'] | Feature modeling from holistic viewpoints in product line engineering | 263,877 |
Guest Editorial: Large-scale image and video search: Challenges, technologies, and trends | ['Meng Wang', 'Nicu Sebe', 'Tao Mei', 'Js Li', 'Kiyoharu Aizawa'] | Guest Editorial: Large-scale image and video search: Challenges, technologies, and trends | 575,397 |
Future wireless multimedia systems will support a variety of services with diverse range of capabilities and bit rates. For these systems, it is highly desired for real-time conversational and non-real-time services to efficiently share the available channels and bandwidth in an optimized way. The partitioned resource shaping with either fixed or a slow changing dynamic, proposed for conventional packet scheduling techniques, proves difficult and inefficient under fast-changing dynamics of radio channel and traffic. By taking into account almost all the aspects (dimensions) of quality-of-service (QoS) provisioning, the proposed unified fast dynamic multidimensional QoS-based packet scheduler (MQPS) in this paper elegantly and efficiently encapsulates features of many possible packet scheduling strategies. MQPS applies an optimization and tuning mechanism to packet scheduling weights to adopt the most appropriate packet scheduling and channel assignment strategy in response to the varying traffic and radio channel conditions. As an example, the technique is applied to a high-speed downlink packet access (HSDPA) system. It is shown that MQPS provides significantly better performance than existing techniques by satisfying all the requirements of a successful QoS provisioning to maximum possible level simultaneously. | ['Saied Abedi'] | Efficient radio resource management for wireless multimedia communications: a multidimensional QoS-based packet scheduler | 150,970 |
Traditional information retrieval techniques require documents that share enough words to build semantic links between them. This kind of technique is greatly affected by two factors: synonymy (different words having the same meaning) and polysemy (a word with several meanings), also known as ambiguity. Synonymy may result in a loss of semantic difference, while polysemy may lead to wrong semantic links. S.J. Green (1999) proposed the concept of a synset (a set of words having the same or a close meaning) and used a synset method to solve the problems of synonymy and polysemy. Although the synonymy problem can be solved, the polysemy problem still remains, because it is not actually possible to use an entire document as a basis to identify the meaning of a word. In this paper, we propose the concept of a context-related semantic set in order to identify the meaning of a word by considering the relations between the word and its contexts. We believe that this approach can efficiently solve the ambiguity problem and hence support the automation of Web document searching and analysis. | ['Lian Wang', 'William Song', 'David W. Cheung'] | Using contextual semantics to automate the Web document search and analysis | 261,865 |
Gas identification represents a big challenge for pattern recognition systems due to several particular problems such as non-selectivity and drift. This paper proposes a gas identification committee machine (CM), which combines various gas identification algorithms to obtain a unified decision with improved accuracy. The CM combines 5 different classifiers: K nearest neighbors (KNN), multi-layer perceptron (MLP), radial basis function (RBF), Gaussian mixture model (GMM) and probabilistic PCA (PPCA). A data acquisition system using a tin-oxide gas sensor array has been designed in order to create a real gas data set. The committee machine is implemented by assembling the outputs of these gas identification algorithms based on a weighted combination rule. Experiments on real sensor data proved the effectiveness of our system, with an improved accuracy of 95.9% over the individual classifiers. | ['Minghua Shi', 'Amine Bermak'] | Committee Machine with Over 95% Classification Accuracy for Combustible Gas Identification | 132,633 |
Traditionally, the "best effort, cost free" model of Supercomputers/Grids does not consider pricing. Clouds have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on "pay-as-you-go" model. Large scale many-task workflow (MTW) may be suited for execution on Clouds due to its scale-* requirement (scale up, scale out, and scale down). In the context of scheduling, MTW execution cost must be considered based on users' budget constraints. In this paper, we address the problem of scheduling MTW on Clouds and present a budget-conscious scheduling algorithm, referred to as Scale Star (or Scale-*). Scale Star assigns the selected task to a virtual machine with higher comparative advantage which effectively balances the execution time-and-monetary cost goals. In addition, according to the actual charging model, an adjustment policy, refer to as {\em DeSlack}, is proposed to remove part of slack without adversely affecting the overall make span and the total monetary cost. We evaluate Scale Star with an extensive set of simulations and compare with the most popular HEFT-based LOSS3 algorithm and demonstrate the superior performance of Scale Star. | ['Lingfang Zeng', 'Bharadwaj Veeravalli', 'Xiaorong Li'] | ScaleStar: Budget Conscious Scheduling Precedence-Constrained Many-task Workflow Applications in Cloud | 35,238 |
Motivated by both established and new applications, we study navigational query languages for graphs (binary relations). The simplest language has only the two operators union and composition, together with the identity relation. We make more powerful languages by adding any of the following operators: intersection; set difference; projection; coprojection; converse; transitive closure; and the diversity relation. All these operators map binary relations to binary relations. We compare the expressive power of all resulting languages. We do this not only for general path queries (queries where the result may be any binary relation) but also for boolean or yes/no queries (expressed by the nonemptiness of an expression). For both cases, we present the complete Hasse diagram of relative expressiveness. In particular, the Hasse diagram for boolean queries contains nontrivial separations and a few surprising collapses. | ['George H. L. Fletcher', 'Marc Gyssens', 'Dirk Leinders', 'Jan Van den Bussche', 'Dirk Van Gucht', 'Stijn Vansummeren', 'Yuqing Wu'] | Relative expressive power of navigational querying on graphs | 3,536 |
In IEEE 802.11 wireless local area networks (WLANs), TCP suffers unfairness between uplink and downlink flows due to its asymmetric reactions towards data and ACK losses at the AP (Access Point), and the AP's inability to distinguish itself from other contending stations accessing the medium. In this paper, we propose a novel dual queue management (DQM) scheme with ECN (Explicit Congestion Notification) at the AP's downlink buffer to improve TCP performance in infrastructure WLANs. Our approach maintains two queues for TCP ACK and data respectively, with their total length controlled by a PI (Proportional Integral) controller to prevent congestion. ACK/data packets are marked/dequeued depending on the uplink/downlink time usage of the wireless channel. We also propose an opportunistic scheduling mechanism for the two queues exploiting "multi-rate capability" and "multi-user diversity" for more efficient link utilization. We evaluate the proposed approach in ns-2 and our simulation results demonstrate that this design, with few states, can significantly improve TCP congestion control, fairness performance and link utilization in WLANs. | ['Qiuyan Xia', 'Xing Jin', 'Mounir Hamdi'] | Dual Queue Management for Improving TCP Performance in Multi-Rate Infrastructure WLANs | 109,990 |
A theory of minimizing distortion in reconstructing a stationary signal under a constraint on the number of bits per sample is developed. We first analyze the optimal sampling frequency required in order to achieve the optimal distortion-rate tradeoff for a stationary bandlimited signal. To this end, we consider a combined sampling and source coding problem in which an analog Gaussian source is reconstructed from its encoded sub-Nyquist samples. We show that for processes whose energy is not uniformly distributed over the spectral band, each point on the distortion-rate curve of the process corresponds to a sampling frequency smaller than the Nyquist rate. This characterization can be seen as an extension of the classical sampling theorem for bandlimited random processes in the sense that it describes the minimal amount of excess distortion in the reconstruction due to lossy compression of the samples, and provides the minimal sampling frequency $f_{DR}$ required in order to achieve that distortion. We compare the fundamental limits of combined source coding and sampling to the performance in pulse code modulation (PCM), where each sample is quantized by a scalar quantizer using a fixed number of bits. | ['Alon Kipnis', 'Yonina C. Eldar', 'Andrea J. Goldsmith'] | Sampling stationary signals subject to bitrate constraints | 623,468 |
We describe twinning and its applications to adapting programs to alternative APIs. Twinning is a simple technique that allows programmers to specify a class of program changes, in the form of a mapping , without modifying the target program directly. Using twinning, programmers can specify changes that transition a program from using one API to using an alternative API. We describe two related mapping-based source-to-source transformations. The first applies the mapping to a program, producing a copy with the changes applied. The second generates a new API that abstracts the changes specified in the mapping. Using this API, programmers can invoke either the old (replaced) code or the new (replacement) code through a single interface. Managing program variants usually involves heavyweight tasks that can prevent the program from compiling for extended periods of time, as well as simultaneous maintenance of multiple implementations, which can make it easy to forget to add features or to fix bugs symmetrically. Our main contribution is to show that, at least in some common cases, the heavyweight work can be reduced and symmetric maintenance can be at least encouraged, and often enforced. | ['Marius Nita', 'David Notkin'] | Using twinning to adapt programs to alternative APIs | 178,656 |
Detection of living microalgae cells is very important for ballast water treatment and analysis. Chlorophyll fluorescence is an indicator of photosynthetic activity and hence the living status of plant cells. In this paper, we developed a novel microfluidic biosensor system that can quickly and accurately detect the viability of single microalgae cells based on chlorophyll fluorescence. The system is composed of a laser diode as an excitation light source, a photodiode detector, a signal analysis circuit, and a microfluidic chip as a microalgae cell transportation platform. To demonstrate the utility of this system, six different living and dead algae samples (Karenia mikimotoi Hansen, Chlorella vulgaris, Nitzschia closterium, Platymonas subcordiformis, Pyramidomonas delicatula and Dunaliella salina) were tested. The developed biosensor can distinguish clearly between living and dead microalgae cells. The smallest microalgae cells that can be detected with this biosensor are 3 μm in size; even smaller cells could be detected by increasing the excitation light power. The developed microfluidic biosensor has great potential for in situ ballast water analysis. | ['Junsheng Wang', 'Jinyang Sun', 'Yongxin Song', 'Yongyi Xu', 'Xinxiang Pan', 'Yeqing Sun', 'Dongqing Li'] | A Label-Free Microfluidic Biosensor for Activity Detection of Single Microalgae Cells Based on Chlorophyll Fluorescence | 484,152 |
Proper machine condition monitoring is crucial for any industrial and mechanical system. The efficiency of mechanical systems greatly relies on rotating components like shafts, bearings and rotors. This paper focuses on detecting different faults in roller bearings by casting the problem as a machine-learning-based pattern classification problem. The different bearing fault conditions considered are: bearing in good condition, bearing with inner race fault, bearing with outer race fault, and bearing with both inner and outer race faults. Earlier approaches used the statistical features of the vibration signals for the classification task. In this paper, the cyclostationary behavior of the vibration signals is exploited for the purpose. In the feature space the vibration signals are represented by cyclostationary feature vectors extracted from them. The extracted features were used to train and test pattern classification algorithms like decision tree J48, Sequential Minimal Optimization (SMO) and Regularized Least Squares (RLS) based classification, and a comparison of the accuracies of each method in detecting faults is provided. | ['Sachin Kumar', 'Neethu Mohan', 'Prabaharan Poornachandran', 'K. P. Soman'] | Condition Monitoring in Roller Bearings using Cyclostationary Features | 383,778 |
Is the decision to go open-source always purely altruistic? Not for many large companies, and that is not a bad thing. | ['Bryant Eastham'] | Panasonic and the OpenDOF project: open-source vision in a large company | 589,717 |
REFLECTIONS Postcards from the UK-CS for all | ['Deepak Kumar'] | REFLECTIONS Postcards from the UK-CS for all | 868,752 |
Motion estimation (ME) is one of the key elements in video coding standards, eliminating the temporal redundancies between successive frames. In recent international video coding standards, sub-pixel ME is adopted for its excellent coding performance. Compared with integer-pixel ME, sub-pixel ME needs interpolation to get the values at sub-pixel positions. Also, the Hadamard transform is applied in order to achieve better performance. Therefore, it is becoming more and more critical to develop fast sub-pixel ME algorithms. In this paper, a novel fast sub-pixel ME algorithm is proposed which makes full use of the 8 neighboring integer-pixel points. The algorithm models the error surface around the sub-pixel position with a five-parameter second-order function, applied twice, to predict the best sub-pixel position. Experimental results show that the proposed method can reduce the complexity significantly with negligible quality degradation. | ['Wei Dai', 'Oscar C. Au', 'Sijin Li', 'Lin Sun', 'Ruobing Zou'] | Fast sub-pixel motion estimation with simplified modeling in HEVC | 192,222 |
CONTEXT: Recent studies have shown that estimation accuracy can be affected by only using a window of recent projects (instead of all past projects) as training data for building an effort estimation model. The effect and its extent can be affected by the effort estimation methods used, and the windowing policy used (fixed size or fixed duration). The generality of the windowing approach remains uncertain, because only a few effort estimation methods have been examined with each policy. OBJECTIVE: To investigate the effect on estimation accuracy of using the fixed-duration window policy, particularly in comparison to the fixed-size window policy, when using Classification and Regression Trees (CART) as the estimation method. METHOD: Using a single-company ISBSG data set studied previously in similar research, we examine the effects of using a fixed-duration windowing policy on the accuracy of estimates using CART. RESULTS: Fixed-duration windows rarely improve the accuracy of estimates with CART, compared to using all past projects as training data. Few window sizes lead to statistically significant differences. The effect is smaller than when fixed-size windows are used. CONCLUSIONS: Fixed-duration windows are not helpful with this data set when using CART as the estimation method. The results support the preference for the fixed-size window policy that was found in previous research. This contributes further to understanding the effect of using windows. | ['Sousuke Amasaki', 'Chris Lokan'] | Evaluation of Moving Window Policies with CART | 730,456 |
Pervasive societal dependency on large scale, unbounded network systems, the substantial risks of such dependency, and the growing sophistication of system intruders, have focused increased attention on how to ensure network system survivability. Survivability is the capacity of a system to provide essential services even after successful intrusion and compromise, and to recover full services in a timely manner. Requirements for survivable systems must include definitions of essential and non-essential services, plus definitions of new survivability services for intrusion resistance, recognition, and recovery. Survivable system requirements must also specify both legitimate and intruder usage scenarios, and survivability practices for system development, operation, and evolution. The paper defines a framework for survivable systems requirements definition and discusses requirements for several emerging survivability strategies. Survivability must be designed into network systems, beginning with effective survivability requirements analysis and definition. | ['Richard C. Linger', 'Nancy R. Mead', 'Howard F. Lipson'] | Requirements definition for survivable network systems | 38,445 |
The new psychological disorder of Internet addiction is fast accruing both popular and professional recognition. Past studies have indicated that some patterns of Internet use are associated with loneliness, shyness, anxiety, depression, and self-consciousness, but there appears to be little consensus about Internet addiction disorder. This exploratory study attempted to examine the potential influences of personality variables, such as shyness and locus of control, online experiences, and demographics on Internet addiction. Data were gathered from a convenience sample using a combination of online and offline methods. The respondents comprised 722 Internet users, mostly from the Net generation. Results indicated that the higher the tendency of one being addicted to the Internet, the shyer the person is, the less faith the person has, the firmer belief the person holds in the irresistible power of others, and the higher trust the person places on chance in determining his or her own course of life. People who are addicted to the Internet make intense and frequent use of it both in terms of days per week and in length of each session, especially for online communication via e-mail, ICQ, chat rooms, newsgroups, and online games. Furthermore, full-time students are more likely to be addicted to the Internet, as they are considered high-risk for problems because of free and unlimited access and flexible time schedules. Implications for helping professionals and student affairs policy makers are addressed. | ['Katherine Chak', 'Louis Leung'] | Shyness and Locus of Control as Predictors of Internet Addiction and Internet Use | 111,134 |
Traditionally, ad hoc networks have been viewed as a connected graph over which end-to-end routing paths had to be established. Mobility was considered a necessary evil that invalidates paths and needs to be overcome in an intelligent way to allow for seamless communication between nodes. However, it has recently been recognized that mobility can be turned into a useful ally, by making nodes carry data around the network instead of transmitting them. This model of routing departs from the traditional paradigm and requires new theoretical tools to model its performance. A mobility-assisted protocol forwards data only when appropriate relays encounter each other, and thus the time between such encounters, called hitting or meeting time, is of high importance. In this paper, we derive accurate closed-form expressions for the expected encounter time between different nodes, under commonly used mobility models. We also propose a mobility model that can successfully capture some important real-world mobility characteristics, often ignored in popular mobility models, and calculate hitting times for this model as well. Finally, we integrate these results with a general theoretical framework that can be used to analyze the performance of mobility-assisted routing schemes. We demonstrate that derivative results concerning the delay of various routing schemes are very accurate, under all the mobility models examined. Hence, this work helps in better understanding the performance of various approaches in different settings, and can facilitate the design of new, improved protocols. | ['Thrasyvoulos Spyropoulos', 'Konstantinos Psounis', 'Cauligi S. Raghavendra'] | Performance analysis of mobility-assisted routing | 199,169 |
Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given field. In this paper, we look at the performance of an expert-constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995-1997; the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; a naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that a BN prediction is 'incorrect' if the BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an 'each way' bet on the outcome). Although the expert BN has long since become obsolete (it contains variables relating to key players who have retired or left the club), the results here tend to confirm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data is an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. | ['Anito Joseph', 'Norman Fenton', 'Martin Neil'] | Predicting football results using Bayesian nets and other machine learning techniques | 526,542 |
This paper presents a method for improving the torque ripple in permanent magnet synchronous motor (PMSM) drives under direct torque control (DTC) using a fuzzy logic controller (FLC). A DTC PMSM drive using conventional two- and three-state torque-error controllers, and an FLC, were simulated using the Simulink package. The results showed that the DTC/FLC gives better dynamic performance, regarding torque and flux ripples, and better speed performance when compared with the conventional DTC with two- and three-level hysteresis torque controllers. | ['H.F. Soliman', 'Malik E. Elbuluk'] | Improving the Torque Ripple in DTC of PMSM Using Fuzzy Logic | 380,127 |
Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface. | ['Xuehan Xiong', 'Fernando De la Torre'] | Supervised Descent Method and Its Applications to Face Alignment | 395,126 |
The performance of regular and irregular Gallager-type error-correcting codes is investigated via methods of statistical physics. The transmitted codeword comprises products of the original message bits selected by two randomly-constructed sparse matrices; the number of non-zero row/column elements in these matrices constitutes a family of codes. We show that Shannon's channel capacity may be saturated in equilibrium for many of the regular codes, while slightly lower performance is obtained for others, which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding. We show that irregular codes may saturate Shannon's capacity but with improved dynamical properties. | ['Yoshiyuki Kabashima', 'Tatsuto Murayama', 'David Saad', 'Renato Vicente'] | Regular and Irregular Gallager-type Error-Correcting Codes | 205,074 |
Agent-based modeling and simulation (ABMS) is one of the more promising simulation techniques to study the interdependencies in critical infrastructures. Moreover, federated simulation has two relevant properties, simulation model reuse and expertise sharing, that could be exploited in a multi-sectorial field such as critical infrastructure protection. In this paper we propose a new methodology, which exploits the benefits of both ABMS and federated simulation, to study interdependencies in critical infrastructures. First we discuss the advantages of federated agent-based modeling and the difficulties in implementing a federated ABMS framework. To demonstrate the relevance of our solution we take an example-driven approach that focuses on critical information infrastructures. We have also implemented a federated ABMS framework, which federates Repast, an agent-based simulation engine, and OMNeT++, an IT-systems and communication-networks modeling and simulation environment. A selection of simulation results shows how federated ABMS can shed light on system interdependencies and how it helps in quantifying them. | ['Emiliano Casalicchio', 'Emanuele Galli', 'Salvatore Tucci'] | Federated Agent-based Modeling and Simulation Approach to Study Interdependencies in IT Critical Infrastructures | 148,159 |
We consider the problem of computing the time convex hull of a set of points in the presence of a straight-line highway in the plane. The traveling speed in the plane is assumed to be much slower than that along the highway. The shortest time path between two arbitrary points is either the straight-line segment connecting these two points or a path that passes through the highway. The time convex hull, CH_t(P), of a set P of n points is the smallest set containing P such that all the shortest time paths between any two points lie in CH_t(P). In this paper we give a Θ(n log n) time algorithm for solving the time convex hull problem for a set of n points in the presence of a highway. | ['Teng-Kai Yu', 'D. T. Lee'] | Time Convex Hull with a Highway | 454,788 |
Traditional smallholder farming systems dominate the savanna range countries of sub-Saharan Africa and provide the foundation for the region’s food security. Despite continued expansion of smallholder farming into the surrounding savanna landscapes, food insecurity in the region persists. Central to the monitoring of food security in these countries, and to understanding the processes behind it, are reliable, high-quality datasets of cultivated land. Remote sensing has been frequently used for this purpose but distinguishing crops under certain stages of growth from savanna woodlands has remained a major challenge. Yet, crop production in dryland ecosystems is most vulnerable to seasonal climate variability, amplifying the need for high quality products showing the distribution and extent of cropland. The key objective in this analysis is the development of a classification protocol for African savanna landscapes, emphasizing the delineation of cropland. We integrate remote sensing techniques with probabilistic modeling into an innovative workflow. We present summary results for this methodology applied to a land cover classification of Zambia’s Southern Province. Five primary land cover categories are classified for the study area, producing an overall map accuracy of 88.18%. Omission error within the cropland class is 12.11% and commission error 9.76%. | ['Sean P. Sweeney', 'Tatyana B. Ruseva', 'Lyndon D. Estes', 'Tom P. Evans'] | Mapping Cropland in Smallholder-Dominated Savannas: Integrating Remote Sensing Techniques and Probabilistic Modeling | 568,831 |
Runtime Support for Human-in-the-Loop Feature Engineering System. | ['Michael R. Anderson', 'Dolan Antenucci', 'Michael J. Cafarella'] | Runtime Support for Human-in-the-Loop Feature Engineering System. | 987,945 |
Editorial: Secure collaboration in design and supply chain management | ['Yong Zeng', 'Lingyu Wang'] | Editorial: Secure collaboration in design and supply chain management | 698,096 |
The completion of the sequencing of the human genome and the concurrent, rapid development of high-throughput proteomic methods have resulted in an increasing need for automated approaches to archive proteomic data in a repository that enables the exchange of data among researchers and also accurate integration with genomic data. PeptideAtlas (http://www.peptideatlas.org/) addresses these needs by identifying peptides by tandem mass spectrometry (MS/MS), statistically validating those identifications and then mapping identified sequences to the genomes of eukaryotic organisms. A meaningful comparison of data across different experiments generated by different groups using different types of instruments is enabled by the implementation of a uniform analytic process. This uniform statistical validation ensures a consistent and high-quality set of peptide and protein identifications. The raw data from many diverse proteomic experiments are made available in the associated PeptideAtlas repository in several formats. Here we present a summary of our process and details about the Human, Drosophila and Yeast PeptideAtlas builds. | ['Frank Desiere', 'Eric W. Deutsch', 'Nichole L. King', 'Alexey I. Nesvizhskii', 'Parag Mallick', 'Jimmy K. Eng', 'Sharon S. Chen', 'James S. Eddes', 'Sandra N. Loevenich', 'Ruedi Aebersold'] | The PeptideAtlas project | 440,441 |
The paper considers the issue of activating inactive terminals by control signaling in the downlink in a massive MIMO system. There are two basic difficulties with this. First, the lack of CSI at the transmitter. Second, the short coherence interval, which limits the number of orthogonal pilots in the case of many antennas. The proposed scheme deals with these issues by repeating the transmission over the antennas. We show that this repetition does not affect the spectral efficiency significantly, while making it possible to estimate the channel in a standard way using MMSE. The paper also sheds some light on the uplink-downlink power balance in massive MIMO. | ['Marcus Karlsson', 'Erik G. Larsson'] | On the operation of massive MIMO with and without transmitter CSI | 262,006 |
This paper introduces a novel constraint in the master joystick workspace, termed the dynamic kinesthetic boundary, that aids a pilot to navigate an aerial robotic vehicle through a cluttered environment. The proposed approach exploits spatial cues by projecting the remote environment into a hard boundary in the master device workspace that provides a pilot with a natural representation of approaching obstacles. The approach is distinguished from classical force feedback approaches by allowing normal operation of the vehicle in free flight and only imposing constraints when approaching and interacting with the environment. A key advantage is that contact with the environment constraint is immediately perceptible to a pilot, allowing them to make suitable adjustments to their inputs. Modulation of the velocity reference for the slave robot ensures obstacle avoidance while allowing a vehicle to approach as close as desired to an object, albeit at a slow speed. A comprehensive user study was performed to systematically test the proposed algorithm and comparisons to two existing state-of-the-art approaches are provided to demonstrate the relative performance of the proposed approach. | ['Xiaolei Hou', 'Robert E. Mahony'] | Dynamic Kinesthetic Boundary for Haptic Teleoperation of VTOL Aerial Robots in Complex Environments | 713,604 |
Leveraging Semantic Annotations to Link Wikipedia and News Archives | ['Arunav Mishra', 'Klaus Berberich'] | Leveraging Semantic Annotations to Link Wikipedia and News Archives | 695,383 |
We present two process algebra models of a Kai-protein based circadian clock. Our models are represented in the Bio-PEPA and the continuous pi-calculus process algebras. The circadian clock is not based on transcription and has been shown to persist with a rhythmic signal when removed from a living cell. Our models allow us to speculate as to the mechanisms which allow for the rhythmic signals. We reproduce previous results based on ODE models and then use our models as the basis for stochastic simulation. | ['Chris Banks', 'Allan Clark', 'Anastasis Georgoulas', 'Stephen Gilmore', 'Jane Hillston', 'Dimitrios Milios', 'Ian Stark'] | Stochastic Modelling of the Kai-based Circadian Clock | 8,394 |
This paper presents a survey of data center network architectures that use both optical and packet switching components. Various proposed architectures and their corresponding network operation details are discussed. Electronic processing-based packet switch architectures and hybrid optical–electronic-based switch architectures are presented. These hybrid optical switch architectures use optical switching elements in addition to traditional electronic processing entities. The choice of components used for realizing functionality including the network interfaces, buffers, lookup elements and the switching fabrics have been analyzed. These component choices are summarized for different architectures. A qualitative comparison of the various architectures is also presented. | ['Ganesh Chennimala Sankaran', 'Krishna M. Sivalingam'] | A survey of hybrid optical data center network architectures | 840,142 |
Flip Distance between Triangulations of a Simple Polygon is NP-Complete | ['Oswin Aichholzer', 'Wolfgang Mulzer', 'Alexander Pilz'] | Flip Distance between Triangulations of a Simple Polygon is NP-Complete | 971,161 |
Extending the EmotiNet Knowledge Base to Improve the Automatic Detection of Implicitly Expressed Emotions from Text | ['Alexandra Balahur', 'Jesús M. Hermida'] | Extending the EmotiNet Knowledge Base to Improve the Automatic Detection of Implicitly Expressed Emotions from Text | 611,060 |
Motivation: Clustering protein sequence data into functionally specific families is a difficult but important problem in biological research. One useful approach for tackling this problem involves representing the sequence dataset as a protein similarity network, and afterwards clustering the network using advanced graph analysis techniques. Although a multitude of such network clustering algorithms have been developed over the past few years, comparing algorithms is often difficult because performance is affected by the specifics of network construction. We investigate an important aspect of network construction used in analyzing protein superfamilies and present a heuristic approach for improving the performance of several algorithms. Results: We analyzed how the performance of network clustering algorithms relates to thresholding the network prior to clustering. Our results, over four different datasets, show how for each input dataset there exists an optimal threshold range over which an algorithm generates its most accurate clustering output. Our results further show how the optimal threshold range correlates with the shape of the edge weight distribution for the input similarity network. We used this correlation to develop an automated threshold selection heuristic in order to most optimally filter a similarity network prior to clustering. This heuristic allows researchers to process their protein datasets with runtime-efficient network clustering algorithms without sacrificing the clustering accuracy of the final results. Availability: Python code for implementing the automated threshold selection heuristic, together with the datasets used in our analysis, are available at http://www.rbvi.ucsf.edu/Research/cytoscape/threshold_scripts.zip. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. | ['Leonard Apeltsin', 'John H. Morris', 'Patricia C. Babbitt', 'Thomas E. Ferrin'] | Improving the quality of protein similarity network clustering algorithms using the network edge weight distribution | 440,881 |
We consider abstract-argumentation-theoretic coalition formability in this work. Taking a model from political alliances among political parties, we contemplate profitability, and then formability, of a coalition. As is commonly understood, a group forms a coalition with another group for a greater good, the goodness measured against some criteria. As is also commonly understood, however, a coalition may deliver benefits to a group X at the sacrifice of something X was able to do before the coalition formed and may no longer be able to do under it. Use of the typical conflict-free sets of arguments is not very fitting for accommodating this aspect of coalitions, which prompts us to turn to a weaker notion, conflict-eliminability, as a property that a set of arguments should primarily satisfy. We require numerical quantification of attack strengths as well as of argument strengths for its characterisation. We first analyse the semantics of the profitability of a given conflict-eliminable set forming a coalition with another conflict-eliminable set, and then provide four coalition formability semantics, each of which formalises certain utility postulate(s) taking the coalition profitability into account. | ['Ryuta Arisaka', 'Ken Satoh'] | Coalition Formability Semantics with Conflict-Eliminable Sets of Arguments | 723,723 |
A System of Systems Approach to the Evolutionary Transformation of Power Management Systems. | ['Jan-Philipp Steghöfer', 'Gerrit Anders', 'Florian Siefert', 'Wolfgang Reif'] | A System of Systems Approach to the Evolutionary Transformation of Power Management Systems. | 794,710 |
This paper addresses a control allocation method to compensate for the effect of thruster dead-zones in the actuation system of an underwater vehicle. The solution concurrently considers the dead-zone effect and actuation limits in order to increase system reactivity. Moreover, the impact on energy consumption is also considered. Experimental results are given to demonstrate the effectiveness and correctness of the proposed method. | ['Benoit Ropars', 'Adrien Lasbouygues', 'Lionel Lapierre', 'David Andreu'] | Thruster's dead-zones compensation for the actuation system of an underwater vehicle | 127,771 |
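A minimal sketch of the classic inverse dead-zone idea that such compensation schemes build on, assuming a symmetric dead band and saturation limit per thruster. The parameter names are illustrative, and the paper's allocation scheme additionally weighs energy consumption, which is omitted here.

```python
def compensate_dead_zone(u_desired, dead_zone, u_max):
    """Inverse dead-zone with saturation for one thruster.

    dead_zone and u_max are assumed symmetric; both parameter names
    are illustrative, not from the paper.
    """
    if u_desired == 0.0:
        return 0.0
    sign = 1.0 if u_desired > 0 else -1.0
    # Shift the command past the dead band, then clip to actuator limits.
    u_cmd = u_desired + sign * dead_zone
    return max(-u_max, min(u_max, u_cmd))
```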
In a crowd model based on leader-follower interactions, where the positions of the leaders are viewed as the control input, up-to-date solutions rely on knowledge of the agents' coordinates. In practice, it is more realistic to exploit knowledge of statistical properties of the group of agents rather than their exact positions. To shape the crowd, we thus study the problem of controlling the moments instead, since it is well known that shape can be determined by moments. An optimal control for the moment-tracking problem is obtained by solving a modified Hamilton-Jacobi-Bellman (HJB) equation, which only uses the moments and the leaders' states as feedback. The optimal solution can be computed fast enough for on-line implementations. | ['Yuecheng Yang', 'Dimos V. Dimarogonas', 'Xiaoming Hu'] | Shaping up crowd of agents through controlling their statistical moments | 492,465 |
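A small sketch of the quantities such a moment-based controller feeds back: the first and second statistical moments of the agents' positions, computed with NumPy. This is only the measurement side; the HJB-based optimal control itself is not reproduced here.

```python
import numpy as np

def crowd_moments(positions):
    """First and second statistical moments of agent positions.

    positions: (N, 2) array of agent coordinates. Returns the mean
    (centroid) and covariance, the quantities a moment-tracking
    controller would feed back instead of individual coordinates.
    """
    pts = np.asarray(positions, dtype=float)
    mean = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    return mean, cov

mean, cov = crowd_moments([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
print(mean, cov, sep="\n")
```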
From the industrial point of view, image quality is a key issue. Many post-processing algorithms have been proposed to improve visual quality after the MPEG decoder. Most of them need the precise location of the 8×8 grid on which the blocking effect appears. However, in real-life applications, the blocking effect is rarely located on such a basic grid, due to the cascaded bit-rate or format transcoding, rescaling, etc., that occur during acquisition, compression, transmission and display of the video. Consequently, most of these methods see their efficiency largely reduced, or are simply useless. A grid detector is proposed, based on a fine modeling of blocking artifacts in the wavelet domain. It aims at providing essential information to any post-processing algorithm that requires the position of the grid. Several experiments and reliable subjective tests demonstrate the accuracy of the proposed grid detector, and highlight the added value it yields to a post-processing algorithm in terms of visual quality. | ['Estelle Lesellier', 'Joël Jung'] | Robust wavelet-based arbitrary grid detection for MPEG | 42,176 |
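A much-simplified, pixel-domain sketch of the underlying intuition, assuming a known period but unknown offset: blocking edges concentrate gradient energy at one phase of the grid. The paper's actual detector works on a fine wavelet-domain model and handles arbitrary grids; everything below is an illustrative stand-in.

```python
import numpy as np

def detect_grid_offset(frame, period=8):
    """Estimate the horizontal offset of a blocking grid.

    Simplified pixel-domain stand-in for the paper's wavelet-domain
    model: blocking edges show up as columns whose gradient energy
    peaks at one phase of the assumed period.
    """
    img = np.asarray(frame, dtype=float)
    col_energy = np.abs(np.diff(img, axis=1)).sum(axis=0)
    phase_energy = [col_energy[p::period].mean() for p in range(period)]
    return int(np.argmax(phase_energy))
```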
This article presents a blind predictive decision-feedback equalization (DFE) scheme motivated by the work of Labat et al. (see ibid., vol. 46, p. 921–30, July 1998). The proposed scheme outperforms another previously proposed blind predictive DFE scheme, and also eliminates the filter interchange required in the scheme of Labat et al. | ['Ganesh Ananthaswamy', 'Dennis Goeckel'] | A fast-acquiring blind predictive DFE | 428,273 |
This paper focuses on motion control of a piezo-based dual-stage nanopositioner where hysteresis and dynamic effects dominate the output response. A typical dual-stage micro/nano-scale positioning system consists of a long-range, low-speed actuator connected in series with a short-range, high-speed actuator. This arrangement allows for fast, long-range, and precise positioning. The main challenges, however, are that both hysteresis and dynamic effects (induced structural vibrations and creep) cause excessive positioning error, and the control design problem requires balancing the relative contributions of the individual actuators (range, speed, and precision). To enable high-performance operation, a feedforward master-slave controller with hysteresis compensation is proposed. First, the Prandtl-Ishlinskii hysteresis model is exploited for inversion-based hysteresis compensation to linearize the response of each actuator. Then, a feedforward master-slave controller is used to minimize the dynamic effects and provide a systematic way to balance the relative contributions of the two actuators. The control structure is presented in detail, followed by simulation and experimental results that demonstrate the efficacy of the approach. | ['William Nagel', 'Garrett M. Clayton', 'Kam K. Leang'] | Master-slave control with hysteresis inversion for dual-stage nanopositioning systems | 839,266 |
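The Prandtl-Ishlinskii model named in the abstract admits a compact discrete-time sketch as a weighted sum of play (backlash) operators; the thresholds and weights below would in practice be identified from measured actuator data. The inversion used for compensation is omitted, but it exploits the fact that the inverse of a PI model is again a PI model.

```python
import numpy as np

def pi_hysteresis(x, thresholds, weights):
    """Discrete-time Prandtl-Ishlinskii model: weighted sum of play
    (backlash) operators. thresholds/weights are model parameters
    that would be identified from measured actuator data.
    """
    x = np.asarray(x, dtype=float)
    y_ops = np.zeros(len(thresholds))  # operator states, assumed zero-initialized
    out = np.empty_like(x)
    for k, xk in enumerate(x):
        for i, r in enumerate(thresholds):
            # Play operator: output follows input once the backlash band is crossed.
            y_ops[i] = max(xk - r, min(xk + r, y_ops[i]))
        out[k] = np.dot(weights, y_ops)
    return out
```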
Bayesian Active Learning for Posterior Estimation - IJCAI-15 Distinguished Paper. | ['Kirthevasan Kandasamy', 'Jeff G. Schneider', 'Barnabás Póczos'] | Bayesian Active Learning for Posterior Estimation - IJCAI-15 Distinguished Paper. | 757,910 |
Disfluency in speech input to infants? the interaction of mother and child to create error-free speech input for language acquisition. | ['Melanie Soderstrom', 'James L. Morgan'] | Disfluency in speech input to infants? the interaction of mother and child to create error-free speech input for language acquisition. | 764,374 |
This article provides sound and complete logical systems for several fragments of English which go beyond syllogistic logic in that they use verbs as well as other limited syntactic material: universally and existentially quantified noun phrases, building on the work of Nishihara, Morita and Iwata (1990, Systems and Computers in Japan, 21, 96–111); complemented noun phrases, following Moss (2007, Syllogistic Logic with Complements); and noun phrases which might contain relative clauses, recursively, based on McAllester and Givan (1992, Artificial Intelligence, 56, 1–20). The logics are all syllogistic in the sense that they do not make use of individual variables. Variables in our systems range over nouns, and in the last system, over verbs as well. | ['Lawrence S. Moss'] | Syllogistic Logics with Verbs | 196,396 |
Cognitive radio (CR) is proposed as a means to use the spectrum opportunistically. A spectrum opportunity (SOP) can be defined as the possibility of a spectrum-aware communication between a CR transmitter and a CR receiver. Successful spectrum-aware communication between the communicating CRs, i.e., utilization of the SOP (USOP), depends on SOP detection and the correct transmission of a packet. Spectrum sensing performance, the physical channel, and network parameters affect the probability of USOP. In this letter, we characterize the probability of USOP under different network topologies. The network topology is determined by the relation between the transmission ranges of licensed users and CRs. We numerically study this probability for different network parameters and topologies. We find that the characteristics of USOP depend strongly on the network topology, CR sensing performance and licensed users' activities. | ['Mustafa Ozger', 'Ozgur B. Akan'] | On the Utilization of Spectrum Opportunity in Cognitive Radio Networks | 610,756 |
Many effective supervised discriminative dictionary learning methods have been developed in the literature. However, when training these algorithms, precise ground-truth of the training data is required to provide very accurate point-wise labels. Yet, in many applications, accurate labels are not always feasible. This is especially true in the case of buried object detection, in which the sizes of the objects are not consistent. In this paper, a new multiple instance dictionary learning algorithm for detecting buried objects using a handheld WEMI sensor is detailed. The new algorithm, Task-Driven Extended Functions of Multiple Instances, can overcome data that does not have very precise point-wise labels and still learn a highly discriminative dictionary. Results are presented and discussed on measured WEMI data. Keywords: Supervised Dictionary Learning, Landmine Detection, Electromagnetic Induction, Extended Functions of Multiple Instances. | ['Matthew Cook', 'Alina Zare', 'Dominic K. C. Ho'] | Buried object detection using handheld WEMI with task-driven extended functions of multiple instances | 693,263 |
Feature-preserving denoising of time-varying range data | ['Oliver Schall', 'Alexander G. Belyaev', 'Hans-Peter Seidel'] | Feature-preserving denoising of time-varying range data | 311,544 |
Region-of-Interest (ROI) location information in videos has many practical uses in the video coding field, such as video content analysis and user experience improvement. Although ROI-based coding has been studied widely by many researchers to improve coding efficiency for video contents, the ROI location information itself is seldom coded in the video bitstream. In this paper, we introduce our proposed ROI location coding tool, which has been adopted in the surveillance profile of the AVS2 video coding standard. Our tool includes three schemes: a direct-coding scheme, a differential-coding scheme, and a reconstructed-coding scheme. We illustrate the details of these schemes and analyze their respective advantages and disadvantages. | ['Mingliang Chen', 'Weiyao Lin', 'Xiaozhen Zheng'] | An efficient coding method for coding Region-of-Interest locations in AVS2 | 204,849 |
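A toy sketch of what a differential-coding scheme for ROI locations could look like, transmitting per-component deltas of an (x, y, w, h) rectangle against the previous frame. The field layout and helper names are assumptions for illustration, not the AVS2 bitstream syntax.

```python
def encode_roi_differential(prev_roi, cur_roi):
    """Differential coding of an ROI rectangle (x, y, w, h).

    Transmits per-component deltas against the previous frame's ROI;
    the field layout is illustrative, not the AVS2 bitstream syntax.
    """
    return tuple(c - p for c, p in zip(cur_roi, prev_roi))

def decode_roi_differential(prev_roi, deltas):
    return tuple(p + d for p, d in zip(prev_roi, deltas))

prev, cur = (64, 48, 32, 32), (66, 48, 32, 36)
deltas = encode_roi_differential(prev, cur)   # (2, 0, 0, 4): small values, cheap to entropy-code
assert decode_roi_differential(prev, deltas) == cur
```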
Service Providers are constantly searching for means to reduce costs, generate new revenue streams and increase their competitive advantage. The migration to a Colorless and Directionless mesh architecture will enable Service Providers to meet new scalability, availability, flexibility and dynamic reconfigurability demands for optical networks. But commodification and standardization of the underlying technologies have made it very difficult to differentiate their service offerings, especially in terms of price and performance. Hence, by leveraging the intelligence of efficient routing algorithms, Service Providers can notably reduce the total cost of ownership of the network. This paper presents a rationalized routing algorithm for a Colorless and Directionless mesh core optical network, which enables a significant CAPEX reduction by decreasing the number of active degrees of a node. | ['B. Sai Kishore.', 'G. Prasanna', 'Arutselvi Devarajan', 'K Sandesha', 'D. Nanda Venkata Gopal', 'K. Venkataramaniah', 'Pavan Voruganti', 'Ron Johnson'] | CAPEX minimization through node degree reduction in a Colorless and Directionless ROADM architecture for flexible optical networks | 555,450 |
An anytime approach for adjusting the increment in automated multicriteria auctions (short presentation). | ['Imène Brigui-Chtioui', 'Philippe Caillou', 'Suzanne Pinson'] | Approche Anytime pour l'ajustement de l'incrément dans les enchères multicritères automatisées (présentation courte). | 787,284 |
Mobile device hardware can limit the sophistication of mobile applications. One strategy for side-stepping these constraints is to opportunistically offload computations to the cloud, where more capable hardware can do the heavy lifting. We propose a platform that accomplishes this via compressive offloading, a novel application of compressive sensing in a distributed shared memory setting. Our prototype gives up to an order-of-magnitude acceleration and 60% longer battery life to the end user of an example handwriting recognition app. We argue that offloading is beneficial to both end users and cloud providers--the former experiences a performance boost and the latter receives a steady stream of small computations to fill periods of under-utilization. Such workloads, originating from ARM-based mobile devices, are especially well-suited for offloading to emerging ARM-based data centers. | ['Chit-Kwan Lin', 'H. T. Kung'] | Mobile app acceleration via fine-grain offloading to the cloud | 221,076 |
The random early detection (RED) scheme for congestion control in TCP has been well known for over a decade. Because of the number of control parameters in RED, it cannot make acceptable packet-dropping decisions that provide high throughput and a low packet loss rate, especially under heavy network load and high delay. We propose a solution to this problem using a Markov chain based decision rule. We modeled the oscillation of the average queue size as a homogeneous Markov chain with three states and simulated the system using the network simulator software NS-2. The simulations show that the proposed scheme successfully estimates the maximum packet-dropping probability for random early detection. It detects congestion very early and adjusts the packet-dropping probability so that RED can make wise packet-dropping decisions. Simulation results show that the proposed scheme provides improved connection throughput and a reduced packet loss rate. | ['Shan Suthaharan'] | Markov model based congestion control for TCP | 156,037 |
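A hedged sketch of how a three-state Markov model of the average queue size might drive the maximum drop probability. The three states follow the paper's setup; the empirical transition matrix and the specific update rule below are illustrative stand-ins.

```python
import numpy as np

def update_max_p(counts, max_p, step=0.02, lo=0.01, hi=0.5):
    """Adjust RED's maximum drop probability from queue-state statistics.

    counts: observed transitions among three average-queue-size states
    (below min_th, between thresholds, above max_th), modeled as a
    homogeneous Markov chain. The three-state model follows the paper;
    the specific update rule here is an illustrative stand-in.
    """
    P = counts / counts.sum(axis=1, keepdims=True)  # empirical transition matrix
    # If the chain tends to drift toward the congested state, drop more aggressively.
    if P[1, 2] > P[1, 0]:
        max_p = min(hi, max_p + step)
    else:
        max_p = max(lo, max_p - step)
    return max_p

counts = np.array([[8, 2, 0], [1, 5, 4], [0, 3, 7]], dtype=float)
print(update_max_p(counts, max_p=0.1))
```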
The authors define the concept of information fusion and show how they used it to estimate summer sea ice concentration in the marginal ice zone (MIZ) from single-channel SAR satellite imagery. They used data about melt stage, wind speed, and surface temperature to generate temporally-accumulated information, and fused this information with the SAR image, resulting in an interpretation of summer MIZ imagery. They also used the results of previous classifications of the same area to guide and correct future interpretations, thus fusing historical information with imagery and nonimagery data. They chose to study the summer MIZ since summer melt conditions cause classification based upon backscatter intensity to fail, as the backscatter of open water, thin ice, first-year ice, and multiyear ice overlap to a large degree. This makes it necessary to fuse various information and data to achieve proper segmentation and automated classification of the image. Their results were evaluated qualitatively and showed that their approach produces very good ice concentration estimates in the summer MIZ. | ['Donna Haverkamp', 'Costas Tsatsoulis'] | Information fusion for estimation of summer MIZ ice concentration from SAR imagery | 95,325 |
This paper discusses two related challenges faced by software engineering instructors. First, assuming that projects are necessary to produce successful computer science majors, what should be the role of projects and how best do we integrate theory and application? Second, what life cycle models and associated processes should students have the opportunity to experience, and where in the curriculum should a disciplined process first appear? We review several curriculum plans that have been employed to address these problems. We also offer recommendations based on our experiences both with undergraduate computer science majors and with high school students in project Tri-P-LETS, where beginning programming students are taught to develop games and software simulations following a process. | ['Linda Sherrell', 'Sajjan G. Shiva'] | Will earlier projects plus a disciplined process enforce SE principles throughout the CS curriculum | 391,740 |
Most results in revenue-maximizing auction design hinge on "getting the price right" --- offering goods to bidders at a price low enough to encourage a sale, but high enough to garner non-trivial revenue. Getting the price right can be hard work, especially when the seller has little or no a priori information about bidders' valuations. A simple alternative approach is to "let the market do the work", and have prices emerge from competition for scarce goods. The simplest-imaginable implementation of this idea is the following: first, if necessary, impose an artificial limit on the number of goods that can be sold; second, run the welfare-maximizing VCG mechanism subject to this limit. We prove that such "supply-limiting mechanisms" achieve near-optimal expected revenue in a range of single- and multi-parameter Bayesian settings. Indeed, despite their simplicity, we prove that they essentially match the state-of-the-art in prior-independent mechanism design. | ['Tim Roughgarden', 'Inbal Talgam-Cohen', 'Qiqi Yan'] | Supply-limiting mechanisms | 347,532 |
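For the special case of k identical units and unit-demand bidders, the supply-limiting mechanism is easy to state concretely: run VCG subject to the artificial supply limit k, which here means awarding the k highest bids at the (k+1)-th highest bid. A minimal sketch follows.

```python
def supply_limited_vcg(bids, k):
    """Sell at most k identical units to unit-demand bidders via VCG.

    With unit demand, VCG awards the k highest bids and charges each
    winner the (k+1)-th highest bid (the externality it imposes).
    The artificial supply limit k is the mechanism's one design knob.
    """
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = order[:k]
    price = bids[order[k]] if len(bids) > k else 0.0
    return winners, price

winners, price = supply_limited_vcg([9.0, 3.0, 7.0, 5.0], k=2)
print(winners, price)  # bidders 0 and 2 win, each pays 5.0
```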
Currently, although logistics systems are based mostly on the barcode system and partly on radio frequency identification (RFID) systems, they still depend on humans checking or validating products. Situation-awareness technologies can operate similarly to humans. We suggest a middleware system for logistics based on an ontology and using situation-awareness technologies. The middleware is aware of the current user state and sends the required data to an application. It can help build new applications by enhancing speed and can support automated service processing by invoking the service required in the next working step. Situation-awareness technologies provide data or services appropriate to the object's current state and allow human participation to be minimized. Moreover, they can automatically execute working processes without human intervention. Improving the process automation of logistics systems with situation-awareness technologies reduces the labor required for checking and validating objects and thus saves logistics costs. | ['Taewoo Nam', 'Keunhyuk Yeom'] | A Situation-Aware Middleware Based on Ontology Modeling and Rule | 331,420 |
A low-swing transceiver for 10mm-long 0.54µm-wide on-chip interconnects is presented. A capacitive pre-emphasis transmitter lowers the power and increases the bandwidth. The receiver uses DFE with a power-efficient continuous-time feedback filter. The transceiver, fabricated in 1.2V 90nm CMOS, achieves 2Gb/s. It consumes 0.28pJ/b, which is 7× lower than earlier work. | ['Eisse Mensink', 'Daniël Schinkel', 'Eric A. M. Klumperink', 'van Ed Tuijl', 'Bram Nauta'] | A 0.28pJ/b 2Gb/s/ch Transceiver in 90nm CMOS for 10mm On-Chip interconnects | 523,833 |
Explicit Path Tracking by Autonomous Vehicles | ['Dong Hun Shin', 'Sanjiv Singh', 'Ju Jang Lee'] | Explicit Path Tracking by Autonomous Vehicles | 487,993 |
This paper introduces a new system for real-time land mine detection using sensor data generated by a Ground Penetrating Radar (GPR). The GPR produces a three-dimensional array of intensity values, representing a volume below the surface of the ground. Features are computed from this array and two types of membership degrees are assigned to each location. A fuzzy membership value provides a degree of belongingness of a given observation to the classes of mines, false alarms, and background, while a possibilistic membership value provides a degree of typicality. Both membership degrees are combined using simple rules to assign a confidence value. The parameters of the membership functions are obtained by clustering the training data and using the statistics of each partition. Our preliminary results show that the proposed approach is simple and efficient, and yet yields results comparable to more complex detection systems. | ['Hichem Frigui', 'Kotturu Satyanarayana', 'Paul D. Gader'] | Detection of land mines using fuzzy and possibilistic membership functions | 46,247 |
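A minimal sketch of the two membership types the abstract distinguishes, assuming per-class cluster centers and spreads obtained from training data: a fuzzy (relative, sum-to-one) degree and a possibilistic (typicality) degree. The inverse-distance and Gaussian forms are illustrative choices, not the paper's exact functions.

```python
import numpy as np

def memberships(x, centers, sigmas):
    """Fuzzy (relative) and possibilistic (typicality) membership degrees.

    centers/sigmas: per-class cluster statistics from training data.
    The inverse-distance sharing and Gaussian typicality used here are
    illustrative choices standing in for the paper's functions, which
    are derived from the statistics of clustered GPR training data.
    """
    x = np.asarray(x, dtype=float)
    d2 = np.array([np.sum((x - c) ** 2) for c in centers])
    fuzzy = 1.0 / (d2 + 1e-9)
    fuzzy /= fuzzy.sum()                                            # degrees of sharing: sum to 1
    possibilistic = np.exp(-d2 / (2.0 * np.asarray(sigmas) ** 2))   # typicality: independent per class
    return fuzzy, possibilistic
```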
This paper introduces ULDBs, an extension of relational databases with simple yet expressive constructs for representing and manipulating both lineage and uncertainty. Uncertain data and data lineage are two important areas of data management that have been considered extensively in isolation; however, many applications require the features in tandem. Fundamentally, lineage enables a simple and consistent representation of uncertain data, it correlates uncertainty in query results with uncertainty in the input data, and query processing with lineage and uncertainty together presents computational benefits over treating them separately. We show that the ULDB representation is complete, and that it permits straightforward implementation of many relational operations. We define two notions of ULDB minimality--data-minimal and lineage-minimal--and study minimization of ULDB representations under both notions. With lineage, derived relations are no longer self-contained: their uncertainty depends on uncertainty in the base data. We provide an algorithm for the new operation of extracting a database subset in the presence of interconnected uncertainty. Finally, we show how ULDBs enable a new approach to query processing in probabilistic databases. ULDBs form the basis of the Trio system under development at Stanford. | ['Omar Benjelloun', 'Anish Das Sarma', 'Alon Y. Halevy', 'Jennifer Widom'] | ULDBs: databases with uncertainty and lineage | 491,334 |
Interference coordination schemes are essential for OFDM-based systems such as LTE, where the implementation of tighter frequency allocation/re-use gives rise to high intercell interference, resulting in reduced coverage and low cell-edge throughputs. Re-use partitioning is one of the most attractive schemes among the plethora of interference coordination and avoidance methods. In this paper, we present a network-level performance comparison of this scheme with the traditional re-use 1 and re-use 3 schemes for LTE-FDD systems. The results show that re-use partitioning benefits from the strengths of both re-use 1 around the cell-center and re-use 3 for the cell-edge users, thereby providing excellent coverage and throughput gains. | ['A. Lodhi', 'Akram Awad', 'Tom Jeffries', 'Petrit Nahi'] | On re-use partitioning in LTE-FDD systems | 134,753 |
This paper devises a new means of filter diversification, dubbed multi-fold filter convolution (M-FFC), for face recognition. On the assumption that M-FFC receives single-scale Gabor filters of varying orientations as input, these filters are self-cross-convolved M-fold to instantiate an offspring set. The M-FFC flexibility also permits self-cross convolution among Gabor filters and other filter banks of profoundly dissimilar traits, e.g., principal component analysis (PCA) filters and independent component analysis (ICA) filters, in our case. A 2-FFC instance therefore yields three offspring sets from: (1) Gabor filters solely, (2) Gabor and PCA filters, and (3) Gabor and ICA filters, to render the learning-free and the learning-based 2-FFC descriptors. To facilitate a sensible Gabor filter selection for M-FFC, the 40 multi-scale, multi-orientation Gabor filters are condensed into 8 elementary filters. In addition, an average pooling operator is used to leverage the 2-FFC histogram features prior to whitening PCA compression. The empirical results substantiate that the 2-FFC descriptors prevail over, or are on par with, other face descriptors on both identification and verification tasks. | ['Cheng Yaw Low', 'Andrew Beng Jin Teoh', 'Cong Jie Ng'] | Multi-Fold Gabor, PCA and ICA Filter Convolution Descriptor for Face Recognition | 716,418 |
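A minimal sketch of the 2-fold cross-convolution step: every pair of filters drawn from two banks (Gabor with Gabor, or Gabor with PCA/ICA filters) is convolved to spawn an offspring filter. The normalization choice is an illustrative assumption.

```python
import numpy as np
from scipy.signal import convolve2d

def two_fold_convolution(bank_a, bank_b):
    """2-fold filter convolution: cross-convolve every pair of filters
    from two banks to spawn an offspring filter set.

    bank_a/bank_b: lists of 2-D kernels (e.g., Gabor with Gabor, or
    Gabor with PCA/ICA filters). Unit-norm scaling is illustrative.
    """
    offspring = []
    for fa in bank_a:
        for fb in bank_b:
            f = convolve2d(fa, fb, mode="full")
            offspring.append(f / (np.linalg.norm(f) + 1e-12))
    return offspring
```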
This paper presents a network-adapted selective frame-dropping algorithm for streaming media over the Internet, which aims to address the problem of random packet losses resulting from network bandwidth mismatch. The basic idea is to first determine a sending window for each GOP according to its rendering time interval, and then selectively drop some frames with low priority to ensure that other, more important frames in the GOP can be reliably delivered to the receiver within the time limit of the GOP's sending window. For each GOP, the frame-dropping policies are determined in advance according to the transmission results of its preceding GOP, and then slightly re-adjusted after each frame has been sent according to the currently available network bandwidth. Experimental results show that the proposed algorithm can achieve error-free and fluent rendering even under poor network conditions. | ['H. Longshe Huo', 'F. Qiang Fu', 'Z. Yuanzhi Zou', 'G. Wen Gao'] | Network Adapted Selective Frame-Dropping Algorithm for Streaming Media | 156,620 |
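A toy sketch of the core idea, assuming each frame carries a priority and a size: greedily keep high-priority frames while the GOP still fits the bit budget implied by its sending window. The per-frame re-adjustment to measured bandwidth described in the abstract is omitted.

```python
def select_frames(frames, bandwidth, window):
    """Greedily drop low-priority frames so a GOP fits its sending window.

    frames: list of (priority, size_bits); bandwidth in bits/s; window in s.
    Dropping the lowest-priority frames first mirrors the idea of
    protecting important frames (e.g., I over P over B); the paper's
    actual policy is also re-adjusted per frame as bandwidth changes.
    """
    budget = bandwidth * window
    kept = sorted(frames, key=lambda f: -f[0])   # highest priority first
    sent, total = [], 0
    for prio, size in kept:
        if total + size <= budget:
            sent.append((prio, size))
            total += size
    return sent
```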
Knowledge-based natural language processing systems learn by reading, i.e., they process texts to extract knowledge. The performance of these systems crucially depends on knowledge about the domain of language itself, such as lexicons and ontologies to ground the semantics of the texts. In this paper we describe the architecture of the GIBRALTAR system, which is based on the OntoSem semantic analyzer, which learns by reading by learning to read. That is, while processing texts GIBRALTAR extracts both knowledge about the topics of the texts and knowledge about language (e.g., new ontological concepts and semantic mappings from previously unknown words to ontological concepts) that enables improved text processing. We present the results of initial experiments with GIBRALTAR and directions for future research. | ['Sergei Nirenburg', 'Tim Oates', 'Jesse English'] | Learning by Reading by Learning to Read | 266,101 |
This study examined the relationship between academic integration and self-efficacy with regard to institution types and students' majors among IM (Information Management) and CS (Computer Science) students. Academic integration is an important factor affecting student retention, and self-efficacy has also been found to influence student intention to persist. Nevertheless, research has not yet examined the relationship between academic integration and self-efficacy. Institution types imply various student cohorts and may yield different results. Information Technology (IT) related majors, which are IM and CS in Taiwan, were specifically examined in terms of the study major. A Taiwanese national survey database conducted in 2005 was used to achieve the research objective. Fourteen student attributes were extracted into four factors: 'study strategies and habits', 'academic satisfaction', 'social self-efficacy', and 'self confidence'. MANOVA was used to analyze the interaction effects between academic integration and self-efficacy. The independent variables were institution types and students' majors. One outcome of this study was the finding of a positive relationship between academic integration and self-efficacy. The results also showed that students of public institutions have higher levels of self-efficacy than students of private ones. Another finding is that IM students seem to have better study strategies and habits than CS students, while CS students were found to have better collaboration and satisfaction with their institutions than IM students. Counselling services and team projects are suggested to enhance students' levels of academic integration and self-efficacy. | ['Fumei Weng', 'France Cheong', 'Christopher Cheong'] | IT education in Taiwan: Relationship between self-efficacy and academic integration among students | 454,929 |
Fast inference using transition matrices (FITM) is a new fast algorithm for performing inferences in fuzzy systems. It is based on the assumption that fuzzy inputs can be expressed as a linear combination of the fuzzy sets used in the rule base. This representation lets us interpret a fuzzy set as a vector, so we can work with its coordinates instead of the whole set. The inference is made using transition matrices. The key to the method is that many operations can be precomputed offline to obtain the transition matrices, so actual inferences are reduced to a few online matrix additions and multiplications. The algorithm is designed for the standard additive model using the sum-product inference composition. | ['Santiago Aja-Fernández', 'Carlos Alberola-López'] | Fast inference in SAM fuzzy systems using transition matrices | 268,127 |
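A minimal sketch of the FITM idea under the stated assumption: once a fuzzy input is expressed by its coordinates over the antecedent fuzzy sets, online inference collapses to a product with a transition matrix precomputed offline. The matrix values below are illustrative.

```python
import numpy as np

def fitm_infer(input_coords, transition_matrix):
    """FITM-style inference: with a fuzzy input expressed as a linear
    combination of the rule base's antecedent fuzzy sets, inference
    reduces to a matrix product with a precomputed transition matrix.

    input_coords: coordinates of the input over the antecedent basis.
    transition_matrix: precomputed offline from the rule base
    (sum-product composition in the standard additive model).
    Returns the output's coordinates over the consequent basis.
    """
    return transition_matrix @ np.asarray(input_coords, dtype=float)

T = np.array([[0.9, 0.1, 0.0],    # illustrative 3-rule transition matrix
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
print(fitm_infer([0.2, 0.8, 0.0], T))
```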
Segmentation Through Edge-linking - Segmentation for Video-based Driver Assistance Systems. | ['Andreas Laika', 'Adrian Taruttis', 'Walter Stechele'] | Segmentation Through Edge-linking - Segmentation for Video-based Driver Assistance Systems. | 943,016 |
It is common nowadays to employ FPGAs not only as a means of rapidly prototyping and testing dedicated solutions, but also as a platform on which to implement actual production systems. Although modern FPGAs allow the designer to dynamically modify even only portions of the chip, to date there is a lack of satisfying design methodologies that, using only non-proprietary, widely available tools, make it possible to optimally implement a high-level specification as a partially dynamically reconfigurable system. The aim of this work is to propose a methodology for solving this problem. The main features of the Caronte methodology are: 1. full exploitation of partial dynamic reconfiguration; 2. internal reconfiguration; 3. a real-time Unix-like operating system that helps manage complex systems with multiple tasks and simplifies reconfiguration through an optimized device driver. | ['Alberto Donato', 'Fabrizio Ferrandi', 'Massimo Redaelli', 'Marco D. Santambrogio', 'Donatella Sciuto'] | Caronte: a complete methodology for the implementation of partially dynamically self-reconfiguring systems on FPGA platforms | 458,699 |
We consider the problem of online Min-cost Perfect Matching with Delays (MPMD), recently introduced by Emek et al. (STOC 2016). This problem is defined on an underlying $n$-point metric space. An adversary presents real-time requests online at points of the metric space, and the algorithm is required to match them, possibly after keeping them waiting for some time. The cost incurred is the sum of the distances between matched pairs of points (the connection cost), and the sum of the waiting times of the requests (the delay cost). We present an algorithm with a competitive ratio of $O(\log n)$, which improves the upper bound of $O(\log^2n+\log\Delta)$ of Emek et al. by removing the dependence on $\Delta$, the aspect ratio of the metric space (which can be unbounded as a function of $n$). The core of our algorithm is a deterministic algorithm for MPMD on metrics induced by edge-weighted trees of height $h$, whose cost is guaranteed to be at most $O(1)$ times the connection cost plus $O(h)$ times the delay cost of every feasible solution. The reduction from MPMD on arbitrary metrics to MPMD on trees is achieved using the result on embedding $n$-point metric spaces into distributions over weighted hierarchically separated trees of height $O(\log n)$, with distortion $O(\log n)$. We also prove a lower bound of $\Omega(\sqrt{\log n})$ on the competitive ratio of any randomized algorithm. This is the first lower bound which increases with $n$, and is attained on the metric of $n$ equally spaced points on a line. The problem of Min-cost Bipartite Perfect Matching with Delays (MBPMD) is the same as MPMD except that every request is either positive or negative, and requests can be matched only if they have opposite polarity. We prove an upper bound of $O(\log n)$ and a lower bound of $\Omega(\log^{1/3}n)$ on the competitive ratio of MBPMD with a more involved analysis. | ['Yossi Azar', 'Ashish Chiplunkar', 'Haim Kaplan'] | Polylogarithmic Bounds on the Competitiveness of Min-cost (Bipartite) Perfect Matching with Delays | 916,207 |
Fuzzy rules are suitable for describing uncertain phenomena and are natural for human understanding, and they are, in general, efficient for classification. In addition, fuzzy rules allow us to effectively classify data having non-axis-parallel decision boundaries, which is difficult for conventional attribute-based methods. In this paper, we propose a fuzzy rule generation method for classification optimized both for accuracy and comprehensibility (or rule complexity). We investigate the use of a genetic algorithm to determine an optimal set of membership functions for quantitative data. In our method, for a given set of membership functions, a fuzzy decision tree is constructed and its accuracy and rule complexity are evaluated and combined into the fitness function to be optimized. We have experimented with our algorithm on several benchmark data sets. The experimental results show that our method is more efficient in performance and comprehensibility of rules compared with existing methods, including C4.5 and FID3.1. | ['Myung Won Kim', 'Joung Woo Ryu'] | Optimized fuzzy classification for data mining | 877,265 |
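A one-line sketch of the kind of fitness function the abstract describes, combining accuracy with a rule-complexity penalty; the weights and the complexity measure are assumptions for illustration, not the paper's exact formulation.

```python
def fitness(accuracy, num_rules, avg_conditions, alpha=0.1, beta=0.05):
    """Combine accuracy and rule complexity into one GA fitness score.

    The penalty weights alpha/beta and the complexity measure
    (rule count plus average conditions per rule) are illustrative;
    the paper optimizes membership-function parameters against a
    fitness trading off exactly these two objectives.
    """
    return accuracy - alpha * num_rules - beta * avg_conditions

print(fitness(accuracy=0.92, num_rules=7, avg_conditions=2.4))
```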
This paper presents a new VLSI design management system. Unlike existing systems, it models and stores not only the design data but also a description of the design process itself. This information is accumulated and used to present the designer with alternative design methodologies and to suggest the most promising one, based on previous designs. Experimental results on 22 simple designs indicate that the system selected the most appropriate methodology in 81% of the cases. | ['Anas Kabbaj', 'Eduard Cerny', 'Michel Dagenais', 'François Bouthillier'] | Design by similarity using transaction modeling and statistical techniques | 141,574 |
Machines, not humans, are the world's dominant knowledge accumulators, but humans remain the dominant decision makers. Interpreting and disseminating the knowledge accumulated by machines requires expertise and time, and is prone to failure. The problem of how best to convey accumulated knowledge from computers to humans is a critical bottleneck in the broader application of machine learning. We propose an approach based on human teaching, where the problem is formalized as selecting a small subset of the data that will, with high probability, lead the human user to the correct inference. This approach, though successful for modeling human learning in simple laboratory experiments, has failed to achieve broader relevance due to challenges in formulating general and scalable algorithms. We propose general-purpose teaching via pseudo-marginal sampling and demonstrate the algorithm by teaching topic models. Simulation results show that our sampling-based approach effectively approximates the teaching probability where ground truth is computable via enumeration, produces data that are markedly different from those expected under random sampling, and speeds learning, especially for small amounts of data. Application to movie synopsis data illustrates the differences between teaching and random sampling for teaching distributions and specific topics, and demonstrates gains in scalability and applicability to real-world problems. | ['Baxter S. Eaves', 'Patrick Shafto'] | Toward a general, scaleable framework for Bayesian teaching with applications to topic models | 760,850 |
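A brute-force sketch of the underlying teaching objective over a finite hypothesis space: choose the subset that maximizes the learner's posterior on the target. The paper's contribution, pseudo-marginal sampling, replaces this enumeration so the objective scales to models such as topic models; all names below are illustrative.

```python
import numpy as np

def teaching_posterior(subset, hypotheses, target):
    """Posterior the learner assigns to the target hypothesis after
    seeing `subset`, under a uniform prior over `hypotheses`.

    hypotheses: dict name -> likelihood function over a data point.
    """
    log_post = {h: sum(np.log(lik(x)) for x in subset)
                for h, lik in hypotheses.items()}
    m = max(log_post.values())
    z = sum(np.exp(v - m) for v in log_post.values())
    return np.exp(log_post[target] - m) / z

def greedy_teach(pool, hypotheses, target, k):
    """Greedily pick k points maximizing the learner's posterior on the
    target. A brute-force stand-in for the paper's pseudo-marginal
    sampler, which handles models where this enumeration is infeasible.
    """
    pool, chosen = list(pool), []
    for _ in range(k):
        best = max(pool, key=lambda x: teaching_posterior(chosen + [x],
                                                          hypotheses, target))
        chosen.append(best)
        pool.remove(best)
    return chosen
```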
This paper presents a comprehensive critical survey of the issues of warehousing and protecting big data, which are recognized as critical challenges of emerging big data research. Indeed, both are critical aspects to consider in order to build truly high-performance and highly-flexible big data management systems. We report on state-of-the-art approaches, methodologies and trends, and finally conclude by providing open problems and challenging research directions to be considered by future efforts. | ['Alfredo Cuzzocrea'] | Warehousing and Protecting Big Data: State-Of-The-Art-Analysis, Methodologies, Future Challenges | 713,931 |
Using the case of a low-cost airline company's website, we analyze some special research questions of information technology valuation. The distinctive characteristics of this research are the ex post valuation perspective; the parallel and comparative use of accounting and business valuation approaches; and the integrated application of discounted cash flow and real option valuation. As the examined international company is a strategic user of e-technology and wants to manage and account for intangible IT assets explicitly, these specific valuation perspectives are gaining practical significance. | ['Márta Aranyossy'] | Business Value of IT Investment: The Case of a Low Cost Airline's Website | 32,985 |
Dynamic Virtual Overlay Networks for Large Scale Resource Federation Frameworks | ['Sebastian Wahle', 'André Steinbach', 'Thomas Magedanz', 'Konrad Campowsky'] | Dynamic Virtual Overlay Networks for Large Scale Resource Federation Frameworks | 832,832 |
Design and Nanomotion Control of a Noncontact Stage with Squeeze Bearings | ['Hayato Yoshioka', 'Toshimichi Gokan', 'Hidenori Shinno'] | Design and Nanomotion Control of a Noncontact Stage with Squeeze Bearings | 986,308 |
Nested datatypes are families of datatypes that are indexed over all types such that the constructors may relate different family members (unlike homogeneous lists). Moreover, the argument types of the constructors refer to indices given by expressions in which the family name may occur. Especially in this case of true nesting, termination of functions that traverse these data structures is far from obvious. A joint paper with A. Abel and T. Uustalu (Theor. Comput. Sci., 333 (1–2), 2005, pp. 3–66) proposed iteration schemes that guarantee termination not by structural requirements but just by polymorphic typing. They are generic in the sense that no specific syntactic form of the underlying datatype “functor” is required. However, there was no induction principle for the verification of the programs thus obtained, although such principles are well known in the usual model of initial algebras on endofunctor categories. The new contribution is a representation of nested datatypes in intensional type theory (more specifically, in the calculus of inductive constructions) that is still generic and covers true nesting, guarantees termination of all expressible programs, and has an induction principle that allows one to prove functoriality of monotonicity witnesses (maps for nested datatypes) and naturality properties of iteratively defined polymorphic functions. | ['Ralph Matthes'] | An induction principle for nested datatypes in intensional type theory | 44,130 |