corpus_id: string (7 to 12 characters)
paper_id: string (9 to 16 characters)
title: string (1 to 261 characters)
abstract: string (70 to 4.02k characters)
source: string (1 distinct value)
bibtex: string (208 to 20.9k characters)
citation_key: string (6 to 100 characters)
arxiv-675501
cs/0701145
Non-Repudiation in Internet Telephony
<|reference_start|>Non-Repudiation in Internet Telephony: We present a concept to achieve non-repudiation for natural language conversations over the Internet. The method rests on chained electronic signatures applied to pieces of packet-based, digital, voice communication. It establishes the integrity and authenticity of the bidirectional data stream and its temporal sequence and thus the security context of a conversation. The concept is close to the protocols for Voice over the Internet (VoIP), provides a high level of inherent security, and extends naturally to multilateral non-repudiation, e.g., for conferences. Signatures over conversations can become true declarations of will in analogy to electronically signed, digital documents. This enables binding verbal contracts, in principle between unacquainted speakers, and in particular without witnesses. A reference implementation of a secure VoIP archive is exhibited.<|reference_end|>
arxiv
@article{kuntze2007non-repudiation, title={Non-Repudiation in Internet Telephony}, author={Nicolai Kuntze, Andreas U. Schmidt, and Christian Hett}, journal={arXiv preprint arXiv:cs/0701145}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701145}, primaryClass={cs.CR} }
kuntze2007non-repudiation
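The abstract above hinges on chaining signatures over successive pieces of the packet-based voice stream so that both content and temporal order are committed to. A minimal Python sketch of that chaining idea, assuming an HMAC as a stand-in for the asymmetric signature the paper envisions (the packet payloads and the name `sign_interval` are purely illustrative):

```python
import hashlib
import hmac

def chain_digests(packets):
    """Fold each voice packet into a running hash chain.

    Yields the chained digest after every packet, so the i-th digest
    commits to packets 0..i and to their order.
    """
    digest = b"\x00" * 32  # initial chain value
    for payload in packets:
        digest = hashlib.sha256(digest + payload).digest()
        yield digest

def sign_interval(key, chained_digest):
    """Stand-in 'signature' over the current chain value.

    A real deployment would use an asymmetric signature so either party
    can prove the conversation's integrity to a third party; a keyed HMAC
    is used here only to keep the sketch self-contained.
    """
    return hmac.new(key, chained_digest, hashlib.sha256).digest()

# usage: sign every 3rd packet's chain value
packets = [b"rtp-payload-%d" % i for i in range(9)]
key = b"demo-key"
signatures = [
    sign_interval(key, d)
    for i, d in enumerate(chain_digests(packets))
    if (i + 1) % 3 == 0
]
print(len(signatures), "interval signatures")
```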
arxiv-675502
cs/0701146
State constraints and list decoding for the AVC
<|reference_start|>State constraints and list decoding for the AVC: List decoding for arbitrarily varying channels (AVCs) under state constraints is investigated. It is shown that rates within $\epsilon$ of the randomized coding capacity of AVCs with input-dependent state can be achieved under maximal error with list decoding using lists of size $O(1/\epsilon)$. Under average error an achievable rate region and converse bound are given for lists of size $L$. These bounds are based on two different notions of symmetrizability and do not coincide in general. An example is given that shows that for list size $L$ the capacity may be positive but strictly smaller than the randomized coding capacity. This behavior is different than the situation without state constraints.<|reference_end|>
arxiv
@article{sarwate2007state, title={State constraints and list decoding for the AVC}, author={Anand D. Sarwate, Michael Gastpar}, journal={arXiv preprint arXiv:cs/0701146}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701146}, primaryClass={cs.IT math.IT} }
sarwate2007state
arxiv-675503
cs/0701147
A Generic Analysis Environment for Curry Programs
<|reference_start|>A Generic Analysis Environment for Curry Programs: We present CurryBrowser, a generic analysis environment for the declarative multi-paradigm language Curry. CurryBrowser supports browsing through the program code of an application written in Curry, i.e., the main module and all directly or indirectly imported modules. Each module can be shown in different formats (e.g., source code, interface, intermediate code) and, inside each module, various properties of functions defined in this module can be analyzed. In order to support the integration of various program analyses, CurryBrowser has a generic interface to connect local and global analyses implemented in Curry. CurryBrowser is completely implemented in Curry using libraries for GUI programming and meta-programming.<|reference_end|>
arxiv
@article{hanus2007a, title={A Generic Analysis Environment for Curry Programs}, author={Michael Hanus}, journal={arXiv preprint arXiv:cs/0701147}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701147}, primaryClass={cs.PL} }
hanus2007a
arxiv-675504
cs/0701148
Proceedings of the 16th Workshop in Logic-based Methods in Programming Environments (WLPE2006)
<|reference_start|>Proceedings of the 16th Workshop in Logic-based Methods in Programming Environments (WLPE2006): This volume contains the papers presented at WLPE'06: the 16th Workshop on Logic-based Methods in Programming Environments held on August 16, 2006 in the Seattle Sheraton Hotel and Towers, Seattle, Washington (USA). It was organised as a satellite workshop of ICLP'06, the 22nd International Conference on Logic Programming.<|reference_end|>
arxiv
@article{vanhoof2007proceedings, title={Proceedings of the 16th Workshop in Logic-based Methods in Programming Environments (WLPE2006)}, author={Wim Vanhoof and Susana Munoz-Hernandez}, journal={arXiv preprint arXiv:cs/0701148}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701148}, primaryClass={cs.PL} }
vanhoof2007proceedings
arxiv-675505
cs/0701149
Power-Bandwidth Tradeoff in Dense Multi-Antenna Relay Networks
<|reference_start|>Power-Bandwidth Tradeoff in Dense Multi-Antenna Relay Networks: We consider a dense fading multi-user network with multiple active multi-antenna source-destination pair terminals communicating simultaneously through a large common set of $K$ multi-antenna relay terminals in the full spatial multiplexing mode. We use Shannon-theoretic tools to analyze the tradeoff between energy efficiency and spectral efficiency (known as the power- bandwidth tradeoff) in meaningful asymptotic regimes of signal-to-noise ratio (SNR) and network size. We design linear distributed multi-antenna relay beamforming (LDMRB) schemes that exploit the spatial signature of multi-user interference and characterize their power-bandwidth tradeoff under a system wide power constraint on source and relay transmissions. The impact of multiple users, multiple relays and multiple antennas on the key performance measures of the high and low SNR regimes is investigated in order to shed new light on the possible reduction in power and bandwidth requirements through the usage of such practical relay cooperation techniques. Our results indicate that point-to-point coded multi-user networks supported by distributed relay beamforming techniques yield enhanced energy efficiency and spectral efficiency, and with appropriate signaling and sufficient antenna degrees of freedom, can achieve asymptotically optimal power-bandwidth tradeoff with the best possible (i.e., as in the cutset bound) energy scaling of $K^{-1}$ and the best possible spectral efficiency slope at any SNR for large number of relay terminals.<|reference_end|>
arxiv
@article{oyman2007power-bandwidth, title={Power-Bandwidth Tradeoff in Dense Multi-Antenna Relay Networks}, author={Ozgur Oyman, Arogyaswami J. Paulraj}, journal={arXiv preprint arXiv:cs/0701149}, year={2007}, doi={10.1109/TWC.2007.05815}, archivePrefix={arXiv}, eprint={cs/0701149}, primaryClass={cs.IT math.IT} }
oyman2007power-bandwidth
arxiv-675506
cs/0701150
Contains and Inside relationships within combinatorial Pyramids
<|reference_start|>Contains and Inside relationships within combinatorial Pyramids: Irregular pyramids are made of a stack of successively reduced graphs embedded in the plane. Such pyramids are used within the segmentation framework to encode a hierarchy of partitions. The different graph models used within the irregular pyramid framework encode different types of relationships between regions. This paper compares different graph models used within the irregular pyramid framework according to a set of relationships between regions. We also define a new algorithm based on a pyramid of combinatorial maps which allows one to determine whether one region contains the other using only local calculus.<|reference_end|>
arxiv
@article{brun2007contains, title={Contains and Inside relationships within combinatorial Pyramids}, author={Luc Brun (GREYC), Walter G. Kropatsch (PRIP)}, journal={Pattern Recognition 39 (01/04/2006) 515-526}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701150}, primaryClass={cs.CV} }
brun2007contains
arxiv-675507
cs/0701151
Time- and Space-Efficient Evaluation of Some Hypergeometric Constants
<|reference_start|>Time- and Space-Efficient Evaluation of Some Hypergeometric Constants: The currently best known algorithms for the numerical evaluation of hypergeometric constants such as $\zeta(3)$ to $d$ decimal digits have time complexity $O(M(d) \log^2 d)$ and space complexity of $O(d \log d)$ or $O(d)$. Following work from Cheng, Gergel, Kim and Zima, we present a new algorithm with the same asymptotic complexity, but more efficient in practice. Our implementation of this algorithm improves slightly over existing programs for the computation of $\pi$, and we announce a new record of 2 billion digits for $\zeta(3)$.<|reference_end|>
arxiv
@article{cheng2007time-, title={Time- and Space-Efficient Evaluation of Some Hypergeometric Constants}, author={Howard Cheng, Guillaume Hanrot (INRIA Lorraine - LORIA), Emmanuel Thom\'e (INRIA Lorraine - LORIA), Eugene Zima, Paul Zimmermann (INRIA Lorraine - LORIA)}, journal={arXiv preprint arXiv:cs/0701151}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701151}, primaryClass={cs.SC} }
cheng2007time-
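The record above concerns fast evaluation of hypergeometric constants such as $\zeta(3)$; the paper's contribution is a refined binary-splitting algorithm. As a much simpler illustration of the kind of fast-converging series such evaluations target (this is not the paper's method, only a toy summation with exact rationals):

```python
from fractions import Fraction
from math import comb

def zeta3_apery(terms=40):
    """Approximate zeta(3) from Apery's fast-converging series
       zeta(3) = (5/2) * sum_{n>=1} (-1)^(n-1) / (n^3 * binom(2n, n)).
    Terms shrink roughly like 4^-n, so each term adds about 0.6 decimal digits."""
    s = Fraction(0)
    for n in range(1, terms + 1):
        s += Fraction((-1) ** (n - 1), n ** 3 * comb(2 * n, n))
    return Fraction(5, 2) * s

approx = zeta3_apery()
print(float(approx))  # ~1.2020569031595942
```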
arxiv-675508
cs/0701152
Characterization of Rate Region in Interference Channels with Constrained Power
<|reference_start|>Characterization of Rate Region in Interference Channels with Constrained Power: In this paper, an $n$-user Gaussian interference channel, where the powers of the transmitters are subject to upper bounds, is studied. We obtain a closed-form expression for the rate region of such a channel based on the Perron-Frobenius theorem. While the boundary of the rate region for the case of unconstrained power is a well-established result, this is the first result for the case of constrained power. We extend this result to time-varying channels and obtain a closed-form solution for the rate region of such channels.<|reference_end|>
arxiv
@article{mahdavi-doost2007characterization, title={Characterization of Rate Region in Interference Channels with Constrained Power}, author={Hajar Mahdavi-Doost, Masoud Ebrahimi, and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0701152}, year={2007}, doi={10.1109/ISIT.2007.4557585}, archivePrefix={arXiv}, eprint={cs/0701152}, primaryClass={cs.IT math.IT} }
mahdavi-doost2007characterization
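The abstract above characterizes the rate region via the Perron-Frobenius theorem. A sketch of the classical machinery that connection rests on, for the standard SINR-target feasibility question with a simple per-user power-cap check appended (the gains, targets, and caps below are hypothetical, and this is background, not the paper's closed-form constrained-power region):

```python
import numpy as np

def min_powers_for_sinr(G, gamma, noise):
    """Minimal transmit powers meeting SINR targets in an n-user
    Gaussian interference channel, via the Perron-Frobenius condition.

    G[i, j]  : gain from transmitter j to receiver i
    gamma[i] : SINR target of user i
    noise[i] : noise power at receiver i
    Returns the componentwise-minimal power vector, or None if the
    targets are infeasible even with unbounded power.
    """
    G, gamma, noise = map(np.asarray, (G, gamma, noise))
    n = len(gamma)
    F = G / np.diag(G)[:, None]        # normalize by the direct gains
    np.fill_diagonal(F, 0.0)           # keep only interference terms
    B = gamma[:, None] * F
    if max(abs(np.linalg.eigvals(B))) >= 1.0:   # Perron-Frobenius radius test
        return None
    u = gamma * noise / np.diag(G)
    return np.linalg.solve(np.eye(n) - B, u)    # minimal p with p >= B p + u

# usage: two users, then check feasibility under per-user power caps
G = [[1.0, 0.2], [0.3, 1.0]]
p = min_powers_for_sinr(G, gamma=[2.0, 1.5], noise=[0.1, 0.1])
caps = np.array([5.0, 5.0])
print(p, p is not None and bool(np.all(p <= caps)))
```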
arxiv-675509
cs/0701153
Online Bandwidth Allocation
<|reference_start|>Online Bandwidth Allocation: The paper investigates a version of the resource allocation problem arising in wireless networking, namely in the OVSF code reallocation process. In this setting a complete binary tree of a given height $n$ is considered, together with a sequence of requests which have to be served in an online manner. The requests are of two types: an insertion request requires allocating a complete subtree of a given height, and a deletion request frees a given allocated subtree. In order to serve an insertion request it might be necessary to move some already allocated subtrees to other locations in order to free a large enough subtree. We are interested in the worst case average number of such reallocations needed to serve a request. It was proved in previous work that the competitive ratio of the optimal online algorithm solving this problem is between 1.5 and O(n). We partially answer the question about its exact value by giving an O(1)-competitive online algorithm. The same model has been used in the context of memory management systems, and analyzed for the number of reallocations needed to serve a request in the worst case. In this setting, our result is a corresponding amortized analysis.<|reference_end|>
arxiv
@article{forišek2007online, title={Online Bandwidth Allocation}, author={Michal Fori\v{s}ek, Branislav Katreniak, Jana Katreniakov\'a, Rastislav Kr\'alovi\v{c}, Richard Kr\'alovi\v{c}, Vladim\'ir Koutn\'y, Dana Pardubsk\'a, Tom\'a\v{s} Plachetka, Branislav Rovan}, journal={arXiv preprint arXiv:cs/0701153}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701153}, primaryClass={cs.DS cs.NI} }
forišek2007online
arxiv-675510
cs/0701154
Logic Meets Algebra: the Case of Regular Languages
<|reference_start|>Logic Meets Algebra: the Case of Regular Languages: The study of finite automata and regular languages is a privileged meeting point of algebra and logic. Since the work of Buchi, regular languages have been classified according to their descriptive complexity, i.e. the type of logical formalism required to define them. The algebraic point of view on automata is an essential complement of this classification: by providing alternative, algebraic characterizations for the classes, it often yields the only opportunity for the design of algorithms that decide expressibility in some logical fragment. We survey the existing results relating the expressibility of regular languages in logical fragments of MSO[S] with algebraic properties of their minimal automata. In particular, we show that many of the best known results in this area share the same underlying mechanics and rely on a very strong relation between logical substitutions and block-products of pseudovarieties of monoids. We also explain the impact of these connections on circuit complexity theory.<|reference_end|>
arxiv
@article{tesson2007logic, title={Logic Meets Algebra: the Case of Regular Languages}, author={Pascal Tesson and Denis Therien}, journal={Logical Methods in Computer Science, Volume 3, Issue 1 (February 23, 2007) lmcs:2226}, year={2007}, doi={10.2168/LMCS-3(1:4)2007}, archivePrefix={arXiv}, eprint={cs/0701154}, primaryClass={cs.LO} }
tesson2007logic
arxiv-675511
cs/0701155
Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Totals
<|reference_start|>Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Totals: Data analysis applications typically aggregate data across many dimensions looking for anomalies or unusual patterns. The SQL aggregate functions and the GROUP BY operator produce zero-dimensional or one-dimensional aggregates. Applications need the N-dimensional generalization of these operators. This paper defines that operator, called the data cube or simply cube. The cube operator generalizes the histogram, cross-tabulation, roll-up, drill-down, and sub-total constructs found in most report writers. The novelty is that cubes are relations. Consequently, the cube operator can be imbedded in more complex non-procedural data analysis programs. The cube operator treats each of the N aggregation attributes as a dimension of N-space. The aggregate of a particular set of attribute values is a point in this space. The set of points forms an N-dimensional cube. Super-aggregates are computed by aggregating the N-cube to lower dimensional spaces. This paper (1) explains the cube and roll-up operators, (2) shows how they fit in SQL, (3) explains how users can define new aggregate functions for cubes, and (4) discusses efficient techniques to compute the cube. Many of these features are being added to the SQL Standard.<|reference_end|>
arxiv
@article{gray2007data, title={Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Totals}, author={Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart, Murali Venkatrao, Frank Pellow, Hamid Pirahesh}, journal={Data Mining and Knowledge Discovery 1(1): 29-53 (1997)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701155}, primaryClass={cs.DB} }
gray2007data
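The abstract above defines the CUBE operator as the N-dimensional generalization of GROUP BY. A minimal in-memory sketch of its semantics (SUM only, with an ALL placeholder for rolled-up dimensions; the SQL syntax and efficient computation techniques discussed in the paper are not shown):

```python
from itertools import combinations
from collections import defaultdict

ALL = "ALL"  # placeholder value for a rolled-up dimension

def cube(rows, dims, measure):
    """N-dimensional generalization of GROUP BY: aggregate the measure
    over every subset of the dimension attributes (here: SUM).

    rows    : list of dicts
    dims    : dimension attribute names
    measure : numeric attribute to aggregate
    Returns {group_key_tuple: total}, where rolled-up dims show ALL.
    """
    out = defaultdict(float)
    for keep in (set(c) for r in range(len(dims) + 1)
                 for c in combinations(dims, r)):
        for row in rows:
            key = tuple(row[d] if d in keep else ALL for d in dims)
            out[key] += row[measure]
    return dict(out)

# usage: sales by (model, color), plus all roll-ups and the grand total
sales = [
    {"model": "chevy", "color": "red",  "units": 5},
    {"model": "chevy", "color": "blue", "units": 3},
    {"model": "ford",  "color": "red",  "units": 4},
]
for key, total in sorted(cube(sales, ["model", "color"], "units").items()):
    print(key, total)
```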
arxiv-675512
cs/0701156
Data Management: Past, Present, and Future
<|reference_start|>Data Management: Past, Present, and Future: Soon most information will be available at your fingertips, anytime, anywhere. Rapid advances in storage, communications, and processing allow us to move all information into Cyberspace. Software to define, search, and visualize online information is also a key to creating and accessing online information. This article traces the evolution of data management systems and outlines current trends. Data management systems began by automating traditional tasks: recording transactions in business, science, and commerce. This data consisted primarily of numbers and character strings. Today these systems provide the infrastructure for much of our society, allowing fast, reliable, secure, and automatic access to data distributed throughout the world. Increasingly these systems automatically design and manage access to the data. The next steps are to automate access to richer forms of data: images, sound, video, maps, and other media. A second major challenge is automatically summarizing and abstracting data in anticipation of user requests. These multi-media databases and tools to access them will be a cornerstone of our move to Cyberspace.<|reference_end|>
arxiv
@article{gray2007data, title={Data Management: Past, Present, and Future}, author={Jim Gray}, journal={IEEE Computer 29(10): 38-46 (1996)}, year={2007}, number={MSR-TR-96-18}, archivePrefix={arXiv}, eprint={cs/0701156}, primaryClass={cs.DB} }
gray2007data
arxiv-675513
cs/0701157
A Critique of ANSI SQL Isolation Levels
<|reference_start|>A Critique of ANSI SQL Isolation Levels: ANSI SQL-92 defines Isolation Levels in terms of phenomena: Dirty Reads, Non-Repeatable Reads, and Phantoms. This paper shows that these phenomena and the ANSI SQL definitions fail to characterize several popular isolation levels, including the standard locking implementations of the levels. Investigating the ambiguities of the phenomena leads to clearer definitions; in addition new phenomena that better characterize isolation types are introduced. An important multiversion isolation type, Snapshot Isolation, is defined.<|reference_end|>
arxiv
@article{berenson2007a, title={A Critique of ANSI SQL Isolation Levels}, author={Hal Berenson, Phil Bernstein, Jim Gray, Jim Melton, Elizabeth O'Neil, Patrick O'Neil}, journal={Proc. ACM SIGMOD 95, pp. 1-10, San Jose CA, June 1995}, year={2007}, number={MSR-TR-95-51}, archivePrefix={arXiv}, eprint={cs/0701157}, primaryClass={cs.DB} }
berenson2007a
arxiv-675514
cs/0701158
Queues Are Databases
<|reference_start|>Queues Are Databases: Message-oriented-middleware (MOM) has become a small industry. MOM offers queued transaction processing as an advance over pure client-server transaction processing. This note makes four points: Queued transaction processing is less general than direct transaction processing. Queued systems are built on top of direct systems. You cannot build a direct system atop a queued system. It is difficult to build direct, conversational, or distributed transactions atop a queued system. Queues are interesting databases with interesting concurrency control. It is best to build these mechanisms into a standard database system so other applications can use these interesting features. Queue systems need DBMS functionality. Queues need security, configuration, performance monitoring, recovery, and reorganization utilities. Database systems already have these features. A full-function MOM system duplicates these database features. Queue managers are simple TP-monitors managing server pools driven by queues. Database systems are encompassing many server pool features as they evolve to TP-lite systems.<|reference_end|>
arxiv
@article{gray2007queues, title={Queues Are Databases}, author={Jim Gray}, journal={arXiv preprint arXiv:cs/0701158}, year={2007}, number={MSR-TR-95-56}, archivePrefix={arXiv}, eprint={cs/0701158}, primaryClass={cs.DB} }
gray2007queues
arxiv-675515
cs/0701159
Supporting Finite Element Analysis with a Relational Database Backend, Part I: There is Life beyond Files
<|reference_start|>Supporting Finite Element Analysis with a Relational Database Backend, Part I: There is Life beyond Files: In this paper, we show how to use a Relational Database Management System in support of Finite Element Analysis. We believe it is a new way of thinking about data management in well-understood applications to prepare them for two major challenges - size and integration (globalization). Neither extreme size nor integration (with other applications over the Web) was a design concern 30 years ago when the paradigm for FEA implementation first was formed. On the other hand, database technology has come a long way since its inception and it is past time to highlight its usefulness to the field of scientific computing and computer based engineering. This series aims to widen the list of applications for database designers and for FEA users and application developers to reap some of the benefits of database development.<|reference_end|>
arxiv
@article{heber2007supporting, title={Supporting Finite Element Analysis with a Relational Database Backend, Part I: There is Life beyond Files}, author={Gerd Heber, Jim Gray}, journal={arXiv preprint arXiv:cs/0701159}, year={2007}, number={MSR-TR-2005-49}, archivePrefix={arXiv}, eprint={cs/0701159}, primaryClass={cs.DB cs.CE} }
heber2007supporting
arxiv-675516
cs/0701160
Supporting Finite Element Analysis with a Relational Database Backend, Part II: Database Design and Access
<|reference_start|>Supporting Finite Element Analysis with a Relational Database Backend, Part II: Database Design and Access: This is Part II of a three-article series on using databases for Finite Element Analysis (FEA). It discusses (1) db design, (2) data loading, (3) typical use cases during grid building, (4) typical use cases during simulation (get and put), (5) typical use cases during analysis (also done in Part III) and some performance measures of these cases. It argues that using a database is simpler to implement than custom data schemas, has better performance because it can use data parallelism, and better supports FEA modularity and tool evolution because of database schema evolution, data independence, and self-defining data.<|reference_end|>
arxiv
@article{heber2007supporting, title={Supporting Finite Element Analysis with a Relational Database Backend, Part II: Database Design and Access}, author={Gerd Heber, Jim Gray}, journal={arXiv preprint arXiv:cs/0701160}, year={2007}, number={MSR-TR-2006-21}, archivePrefix={arXiv}, eprint={cs/0701160}, primaryClass={cs.DB cs.CE} }
heber2007supporting
arxiv-675517
cs/0701161
Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive
<|reference_start|>Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive: A $2k computer can execute about 8k transactions per second. This is 80x more than one of the largest US bank's 1970's traffic - it approximates the total US 1970's financial transaction volume. Very modest modern computers can easily solve yesterday's problems.<|reference_end|>
arxiv
@article{gray2007thousands, title={Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive}, author={Jim Gray, Charles Levine}, journal={arXiv preprint arXiv:cs/0701161}, year={2007}, number={MSR-TR-2005-39}, archivePrefix={arXiv}, eprint={cs/0701161}, primaryClass={cs.DB cs.PF} }
gray2007thousands
arxiv-675518
cs/0701162
A Measure of Transaction Processing 20 Years Later
<|reference_start|>A Measure of Transaction Processing 20 Years Later: This provides a retrospective of the paper "A Measure of Transaction Processing" published in 1985. It shows that transaction processing peak performance and price-performance have improved about 100,000x respectively and that sort/sequential performance has approximately doubled each year (so a million-fold improvement) even though processor performance plateaued in 1995.<|reference_end|>
arxiv
@article{gray2007a, title={A Measure of Transaction Processing 20 Years Later}, author={Jim Gray}, journal={arXiv preprint arXiv:cs/0701162}, year={2007}, number={MSR-TR-2005-57}, archivePrefix={arXiv}, eprint={cs/0701162}, primaryClass={cs.DB cs.PF} }
gray2007a
arxiv-675519
cs/0701163
Using Table Valued Functions in SQL Server 2005 To Implement a Spatial Data Library
<|reference_start|>Using Table Valued Functions in SQL Server 2005 To Implement a Spatial Data Library: This article explains how to add spatial search functions (point-near-point and point in polygon) to Microsoft SQL Server 2005 using C# and table-valued functions. It is possible to use this library to add spatial search to your application without writing any special code. The library implements the public-domain C# Hierarchical Triangular Mesh (HTM) algorithms from Johns Hopkins University. That C# library is connected to SQL Server 2005 via a set of scalar-valued and table-valued functions. These functions act as a spatial index.<|reference_end|>
arxiv
@article{gray2007using, title={Using Table Valued Functions in SQL Server 2005 To Implement a Spatial Data Library}, author={Jim Gray, Alex Szalay, Gyorgy Fekete}, journal={arXiv preprint arXiv:cs/0701163}, year={2007}, number={MSR-TR-2005-122}, archivePrefix={arXiv}, eprint={cs/0701163}, primaryClass={cs.DB cs.CE} }
gray2007using
arxiv-675520
cs/0701164
Indexing the Sphere with the Hierarchical Triangular Mesh
<|reference_start|>Indexing the Sphere with the Hierarchical Triangular Mesh: We describe a method to subdivide the surface of a sphere into spherical triangles of similar, but not identical, shapes and sizes. The Hierarchical Triangular Mesh (HTM) is a quad-tree that is particularly good at supporting searches at different resolutions, from arc seconds to hemispheres. The subdivision scheme is universal, providing the basis for addressing and for fast lookups. The HTM provides the basis for an efficient geospatial indexing scheme in relational databases where the data have an inherent location on either the celestial sphere or the Earth. The HTM index is superior to cartographical methods using coordinates with singularities at the poles. We also describe a way to specify surface regions that efficiently represent spherical query areas. This article presents the algorithms used to identify the HTM triangles covering such regions.<|reference_end|>
arxiv
@article{szalay2007indexing, title={Indexing the Sphere with the Hierarchical Triangular Mesh}, author={Alexander S. Szalay, Jim Gray, George Fekete, Peter Z. Kunszt, Peter Kukol, Ani Thakar}, journal={arXiv preprint arXiv:cs/0701164}, year={2007}, number={MSR-TR-2005-123}, archivePrefix={arXiv}, eprint={cs/0701164}, primaryClass={cs.DB cs.DS} }
szalay2007indexing
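The abstract above describes recursive subdivision of the sphere into similar spherical triangles (trixels). A compact sketch of the octahedron-based subdivision and point location under those conventions (naming and depth handling are simplified relative to the production HTM library):

```python
import numpy as np

# The 8 initial HTM triangles come from the octahedron vertices; each level
# splits a triangle into 4 children via the normalized edge midpoints.
V = [np.array(v, float) for v in
     [(0, 0, 1), (1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]]
BASE = [("N0", (V[1], V[0], V[4])), ("N1", (V[4], V[0], V[3])),
        ("N2", (V[3], V[0], V[2])), ("N3", (V[2], V[0], V[1])),
        ("S0", (V[1], V[5], V[2])), ("S1", (V[2], V[5], V[3])),
        ("S2", (V[3], V[5], V[4])), ("S3", (V[4], V[5], V[1]))]

def _inside(p, a, b, c, eps=-1e-12):
    # p lies in the spherical triangle iff it is on the inner side of the
    # three great circles through its edges (small eps for boundary noise)
    return (np.dot(np.cross(a, b), p) >= eps and
            np.dot(np.cross(b, c), p) >= eps and
            np.dot(np.cross(c, a), p) >= eps)

def htm_name(p, depth=5):
    """Return an HTM-style trixel name containing the direction p."""
    p = np.asarray(p, float)
    p = p / np.linalg.norm(p)
    name, (a, b, c) = next((n, t) for n, t in BASE if _inside(p, *t))
    for _ in range(depth):
        w0 = (b + c) / np.linalg.norm(b + c)   # edge midpoints projected
        w1 = (a + c) / np.linalg.norm(a + c)   # back onto the sphere
        w2 = (a + b) / np.linalg.norm(a + b)
        for child, tri in (("0", (a, w2, w1)), ("1", (b, w0, w2)),
                           ("2", (c, w1, w0)), ("3", (w0, w1, w2))):
            if _inside(p, *tri):
                name, (a, b, c) = name + child, tri
                break
    return name

print(htm_name((0.3, 0.4, 0.866)))  # a name starting with 'N3' (x,y,z > 0 octant)
```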
arxiv-675521
cs/0701165
Petascale Computational Systems
<|reference_start|>Petascale Computational Systems: Computational science is changing to be data intensive. Super-Computers must be balanced systems; not just CPU farms but also petascale IO and networking arrays. Anyone building CyberInfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.<|reference_end|>
arxiv
@article{bell2007petascale, title={Petascale Computational Systems}, author={Gordon Bell, Jim Gray, Alex Szalay}, journal={arXiv preprint arXiv:cs/0701165}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701165}, primaryClass={cs.DB cs.AR} }
bell2007petascale
arxiv-675522
cs/0701166
Empirical Measurements of Disk Failure Rates and Error Rates
<|reference_start|>Empirical Measurements of Disk Failure Rates and Error Rates: The SATA advertised bit error rate of one error in 10 terabytes is frightening. We moved 2 PB through low-cost hardware and saw five disk read error events, several controller failures, and many system reboots caused by security patches. We conclude that SATA uncorrectable read errors are not yet a dominant system-fault source - they happen, but are rare compared to other problems. We also conclude that UER (uncorrectable error rate) is not the relevant metric for our needs. When an uncorrectable read error happens, there are typically several damaged storage blocks (and many uncorrectable read errors.) Also, some uncorrectable read errors may be masked by the operating system. The more meaningful metric for data architects is Mean Time To Data Loss (MTTDL.)<|reference_end|>
arxiv
@article{gray2007empirical, title={Empirical Measurements of Disk Failure Rates and Error Rates}, author={Jim Gray, Catharine van Ingen}, journal={arXiv preprint arXiv:cs/0701166}, year={2007}, number={MSR-TR-2005-166}, archivePrefix={arXiv}, eprint={cs/0701166}, primaryClass={cs.DB cs.AR} }
gray2007empirical
arxiv-675523
cs/0701167
Large-Scale Query and XMatch, Entering the Parallel Zone
<|reference_start|>Large-Scale Query and XMatch, Entering the Parallel Zone: Current and future astronomical surveys are producing catalogs with millions and billions of objects. On-line access to such big datasets for data mining and cross-correlation is usually as highly desired as unfeasible. Providing these capabilities is becoming critical for the Virtual Observatory framework. In this paper we present various performance tests that show how using Relational Database Management Systems (RDBMS) and a Zoning algorithm to partition and parallelize the computation, we can facilitate large-scale query and cross-match.<|reference_end|>
arxiv
@article{nieto-santisteban2007large-scale, title={Large-Scale Query and XMatch, Entering the Parallel Zone}, author={Maria A. Nieto-Santisteban, Aniruddha R. Thakar, Alexander S. Szalay, Jim Gray}, journal={arXiv preprint arXiv:cs/0701167}, year={2007}, number={MSR-TR-2005-169}, archivePrefix={arXiv}, eprint={cs/0701167}, primaryClass={cs.DB cs.CE} }
nieto-santisteban2007large-scale
arxiv-675524
cs/0701168
To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?
<|reference_start|>To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?: Application designers often face the question of whether to store large objects in a filesystem or in a database. Often this decision is made for application design simplicity. Sometimes, performance measurements are also used. This paper looks at the question of fragmentation - one of the operational issues that can affect the performance and/or manageability of the system as deployed long term. As expected from the common wisdom, objects smaller than 256KB are best stored in a database while objects larger than 1M are best stored in the filesystem. Between 256KB and 1MB, the read:write ratio and rate of object overwrite or replacement are important factors. We used the notion of "storage age" or number of object overwrites as a way of normalizing wall clock time. Storage age allows our results or similar such results to be applied across a number of read:write ratios and object replacement rates.<|reference_end|>
arxiv
@article{sears2007to, title={To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?}, author={Russell Sears, Catharine Van Ingen, Jim Gray}, journal={CIDR 2007}, year={2007}, number={MSR-TR-2006-45}, archivePrefix={arXiv}, eprint={cs/0701168}, primaryClass={cs.DB} }
sears2007to
arxiv-675525
cs/0701169
A Framework for Designing MIMO systems with Decision Feedback Equalization or Tomlinson-Harashima Precoding
<|reference_start|>A Framework for Designing MIMO systems with Decision Feedback Equalization or Tomlinson-Harashima Precoding: We consider joint transceiver design for general Multiple-Input Multiple-Output communication systems that implement interference (pre-)subtraction, such as those based on Decision Feedback Equalization (DFE) or Tomlinson-Harashima precoding (THP). We develop a unified framework for joint transceiver design by considering design criteria that are expressed as functions of the Mean Square Error (MSE) of the individual data streams. By deriving two inequalities that involve the logarithms of the individual MSEs, we obtain optimal designs for two classes of communication objectives, namely those that are Schur-convex and Schur-concave functions of these logarithms. For Schur-convex objectives, the optimal design results in data streams with equal MSEs. This design simultaneously minimizes the total MSE and maximizes the mutual information for the DFE-based model. For Schur-concave objectives, the optimal DFE design results in linear equalization and the optimal THP design results in linear precoding. The proposed framework embraces a wide range of design objectives and can be regarded as a counterpart of the existing framework of linear transceiver design.<|reference_end|>
arxiv
@article{shenouda2007a, title={A Framework for Designing MIMO systems with Decision Feedback Equalization or Tomlinson-Harashima Precoding}, author={Michael Botros Shenouda and T. N. Davidson}, journal={arXiv preprint arXiv:cs/0701169}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701169}, primaryClass={cs.IT math.IT} }
shenouda2007a
arxiv-675526
cs/0701170
Life Under Your Feet: An End-to-End Soil Ecology Sensor Network, Database, Web Server, and Analysis Service
<|reference_start|>Life Under Your Feet: An End-to-End Soil Ecology Sensor Network, Database, Web Server, and Analysis Service: Wireless sensor networks can revolutionize soil ecology by providing measurements at temporal and spatial granularities previously impossible. This paper presents a soil monitoring system we developed and deployed at an urban forest in Baltimore as a first step towards realizing this vision. Motes in this network measure and save soil moisture and temperature in situ every minute. Raw measurements are periodically retrieved by a sensor gateway and stored in a central database where calibrated versions are derived and stored. The measurement database is published through Web Services interfaces. In addition, analysis tools let scientists analyze current and historical data and help manage the sensor network. The article describes the system design, what we learned from the deployment, and initial results obtained from the sensors. The system measures soil factors with unprecedented temporal precision. However, the deployment required device-level programming, sensor calibration across space and time, and cross-referencing measurements with external sources. The database, web server, and data analysis design required considerable innovation and expertise. So, the ratio of computer-scientists to ecologists was 3:1. Before sensor networks can fulfill their potential as instruments that can be easily deployed by scientists, these technical problems must be addressed so that the ratio is one nerd per ten ecologists.<|reference_end|>
arxiv
@article{szlavecz2007life, title={Life Under Your Feet: An End-to-End Soil Ecology Sensor Network, Database, Web Server, and Analysis Service}, author={Katalin Szlavecz, Andreas Terzis, Stuart Ozer, Razvan Musaloiu-E, Joshua Cogan, Sam Small, Randal Burns, Jim Gray, Alex Szalay}, journal={arXiv preprint arXiv:cs/0701170}, year={2007}, number={MSR TR 2006 90}, archivePrefix={arXiv}, eprint={cs/0701170}, primaryClass={cs.DB cs.CE} }
szlavecz2007life
arxiv-675527
cs/0701171
The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching Spatial Datasets
<|reference_start|>The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching Spatial Datasets: Zones index an N-dimensional Euclidian or metric space to efficiently support points-near-a-point queries either within a dataset or between two datasets. The approach uses relational algebra and the B-Tree mechanism found in almost all relational database systems. Hence, the Zones Algorithm gives a portable-relational implementation of points-near-point, spatial cross-match, and self-match queries. This article corrects some mistakes in an earlier article we wrote on the Zones Algorithm and describes some algorithmic improvements. The Appendix includes an implementation of point-near-point, self-match, and cross-match using the USGS city and stream gauge database.<|reference_end|>
arxiv
@article{gray2007the, title={The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching Spatial Datasets}, author={Jim Gray, Maria A. Nieto-Santisteban, Alexander S. Szalay}, journal={arXiv preprint arXiv:cs/0701171}, year={2007}, number={MSR TR 2006 52}, archivePrefix={arXiv}, eprint={cs/0701171}, primaryClass={cs.DB cs.DS} }
gray2007the
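The abstract above rests on bucketing declination into zones so that a B-tree range scan replaces a 2-D spatial index. An in-memory Python sketch of that idea (the paper's implementation is relational SQL over a Zone table; the small-angle distance test here is only an approximate pre-filter-style check):

```python
import math
from collections import defaultdict

def build_zone_index(points, zone_height):
    """Bucket (id, ra, dec) points by integer zone = floor(dec / zone_height);
    within a zone, points are kept sorted by ra for range scanning."""
    index = defaultdict(list)
    for pid, ra, dec in points:
        index[math.floor(dec / zone_height)].append((ra, dec, pid))
    for bucket in index.values():
        bucket.sort()
    return index

def points_near(index, zone_height, ra0, dec0, radius):
    """All indexed points within `radius` degrees of (ra0, dec0),
    using a planar small-angle approximation (ra scaled by cos(dec))."""
    hits = []
    ra_pad = radius / max(math.cos(math.radians(dec0)), 1e-9)
    for z in range(math.floor((dec0 - radius) / zone_height),
                   math.floor((dec0 + radius) / zone_height) + 1):
        for ra, dec, pid in index.get(z, []):
            if abs(ra - ra0) > ra_pad:
                continue
            d_ra = (ra - ra0) * math.cos(math.radians(dec0))
            if d_ra * d_ra + (dec - dec0) ** 2 <= radius * radius:
                hits.append(pid)
    return hits

# usage
pts = [(1, 10.0, 20.0), (2, 10.02, 20.01), (3, 50.0, -5.0)]
idx = build_zone_index(pts, zone_height=0.05)
print(points_near(idx, 0.05, 10.0, 20.0, radius=0.05))  # -> [1, 2]
```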
arxiv-675528
cs/0701172
Cross-Matching Multiple Spatial Observations and Dealing with Missing Data
<|reference_start|>Cross-Matching Multiple Spatial Observations and Dealing with Missing Data: Cross-match spatially clusters and organizes several astronomical point-source measurements from one or more surveys. Ideally, each object would be found in each survey. Unfortunately, the observation conditions and the objects themselves change continually. Even some stationary objects are missing in some observations; sometimes objects have a variable light flux and sometimes the seeing is worse. In most cases we are faced with a substantial number of differences in object detections between surveys and between observations taken at different times within the same survey or instrument. Dealing with such missing observations is a difficult problem. The first step is to classify misses as ephemeral - when the object moved or simply disappeared, masked - when noise hid or corrupted the object observation, or edge - when the object was near the edge of the observational field. This classification and a spatial library to represent and manipulate observational footprints help construct a Match table recording both hits and misses. Transitive closure clusters friends-of-friends into object bundles. The bundle summary statistics are recorded in a Bundle table. This design is an evolution of the Sloan Digital Sky Survey cross-match design that compared overlapping observations taken at different times. Cross-Matching Multiple Spatial Observations and Dealing with Missing Data.<|reference_end|>
arxiv
@article{gray2007cross-matching, title={Cross-Matching Multiple Spatial Observations and Dealing with Missing Data}, author={Jim Gray, Alex Szalay, Tamas Budavari, Robert Lupton, Maria Nieto-Santisteban, Ani Thakar}, journal={arXiv preprint arXiv:cs/0701172}, year={2007}, number={MSR TR 2006-175}, archivePrefix={arXiv}, eprint={cs/0701172}, primaryClass={cs.DB cs.CE} }
gray2007cross-matching
arxiv-675529
cs/0701173
SkyServer Traffic Report - The First Five Years
<|reference_start|>SkyServer Traffic Report - The First Five Years: The SkyServer is an Internet portal to the Sloan Digital Sky Survey Catalog Archive Server. From 2001 to 2006, there were a million visitors in 3 million sessions generating 170 million Web hits, 16 million ad-hoc SQL queries, and 62 million page views. The site currently averages 35 thousand visitors and 400 thousand sessions per month. The Web and SQL logs are public. We analyzed traffic and sessions by duration, usage pattern, data product, and client type (mortal or bot) over time. The analysis shows (1) the site's popularity, (2) the educational website that delivered nearly fifty thousand hours of interactive instruction, (3) the relative use of interactive, programmatic, and batch-local access, (4) the success of offering ad-hoc SQL, personal database, and batch job access to scientists as part of the data publication, (5) the continuing interest in "old" datasets, (6) the usage of SQL constructs, and (7) a novel approach of using the corpus of correct SQL queries to suggest similar but correct statements when a user presents an incorrect SQL statement.<|reference_end|>
arxiv
@article{singh2007skyserver, title={SkyServer Traffic Report - The First Five Years}, author={Vik Singh, Jim Gray, Ani Thakar, Alexander S. Szalay, Jordan Raddick, Bill Boroski, Svetlana Lebedeva, Brian Yanny}, journal={arXiv preprint arXiv:cs/0701173}, year={2007}, number={MSR TR-2006-190}, archivePrefix={arXiv}, eprint={cs/0701173}, primaryClass={cs.DB cs.CE} }
singh2007skyserver
arxiv-675530
cs/0701174
A Prototype for Educational Planning Using Course Constraints to Simulate Student Populations
<|reference_start|>A Prototype for Educational Planning Using Course Constraints to Simulate Student Populations: Distance learning universities usually afford their students the flexibility to advance their studies at their own pace. This can lead to a considerable fluctuation of student populations within a program's courses, possibly affecting the academic viability of a program as well as the related required resources. Providing a method that estimates this population could be of substantial help to university management and academic personnel. We describe how to use course precedence constraints to calculate alternative tuition paths and then use Markov models to estimate future populations. In doing so, we identify key issues of a large scale potential deployment.<|reference_end|>
arxiv
@article{hadzilacos2007a, title={A Prototype for Educational Planning Using Course Constraints to Simulate Student Populations}, author={T. Hadzilacos, D. Kalles, D. Koumanakos, V. Mitsionis}, journal={arXiv preprint arXiv:cs/0701174}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701174}, primaryClass={cs.AI cs.CY cs.DS cs.SC} }
hadzilacos2007a
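The abstract above uses Markov models to project student populations through a program's courses. A minimal sketch of that projection step (the transition matrix below is hypothetical, and deriving it from course precedence constraints, as the paper describes, is not shown):

```python
import numpy as np

def project_populations(P, initial, years):
    """Project student head counts per state forward in time.

    P[i, j]  : probability that a student in state i this year is in
               state j next year (states = courses, graduated, dropped);
               each row sums to 1.
    initial  : head count per state now.
    Returns an array of shape (years + 1, n_states).
    """
    P = np.asarray(P, float)
    counts = [np.asarray(initial, float)]
    for _ in range(years):
        counts.append(counts[-1] @ P)   # row vector times row-stochastic matrix
    return np.vstack(counts)

# usage: two courses feeding graduation/drop-out, made-up transition rates
P = [[0.20, 0.70, 0.00, 0.10],   # course A: retake / pass to B / graduate / drop
     [0.00, 0.25, 0.65, 0.10],   # course B: retake / - / graduate / drop
     [0.00, 0.00, 1.00, 0.00],   # graduated is absorbing
     [0.00, 0.00, 0.00, 1.00]]   # dropped is absorbing
print(project_populations(P, initial=[200, 150, 0, 0], years=3).round(1))
```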
arxiv-675531
cs/0701175
On the Software and Knowledge Engineering Aspects of the Educational Process
<|reference_start|>On the Software and Knowledge Engineering Aspects of the Educational Process: The Hellenic Open University has embarked on a large-scale effort to enhance its textbook-based material with content that demonstrably supports the basic tenets of distance learning. The challenge is to set up a framework that allows for the production-level creation, distribution and consumption of content, and at the same time, evaluate the effort in terms of technological, educational and organizational knowledge gained. This paper presents a model of the educational process that is used as a development backbone and argues about its conceptual and technical practicality at large.<|reference_end|>
arxiv
@article{hadzilacos2007on, title={On the Software and Knowledge Engineering Aspects of the Educational Process}, author={Th. Hadzilacos, D. Kalles, M. Pouliopoulou}, journal={arXiv preprint arXiv:cs/0701175}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701175}, primaryClass={cs.SE cs.DL} }
hadzilacos2007on
arxiv-675532
cs/0701176
Towards Practical Typechecking for Macro Tree Transducers
<|reference_start|>Towards Practical Typechecking for Macro Tree Transducers: Macro tree transducers (mtt) are an important model that both covers many useful XML transformations and allows decidable exact typechecking. This paper reports our first step toward an implementation of mtt typechecker that has a practical efficiency. Our approach is to represent an input type obtained from a backward inference as an alternating tree automaton, in a style similar to Tozawa's XSLT0 typechecking. In this approach, typechecking reduces to checking emptiness of an alternating tree automaton. We propose several optimizations (Cartesian factorization, state partitioning) on the backward inference process in order to produce much smaller alternating tree automata than the naive algorithm, and we present our efficient algorithm for checking emptiness of alternating tree automata, where we exploit the explicit representation of alternation for local optimizations. Our preliminary experiments confirm that our algorithm has a practical performance that can typecheck simple transformations with respect to the full XHTML in a reasonable time.<|reference_end|>
arxiv
@article{frisch2007towards, title={Towards Practical Typechecking for Macro Tree Transducers}, author={Alain Frisch (INRIA Rocquencourt), Haruo Hosoya (CST)}, journal={arXiv preprint arXiv:cs/0701176}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701176}, primaryClass={cs.PL} }
frisch2007towards
arxiv-675533
cs/0701177
Pitch Tracking of Acoustic Signals based on Average Squared Mean Difference Function
<|reference_start|>Pitch Tracking of Acoustic Signals based on Average Squared Mean Difference Function: In this paper, a method of pitch tracking based on variance minimization of locally periodic subsamples of an acoustic signal is presented. Replicates along the length of the periodically sampled data of the signal vector are taken and locally averaged sample variances are minimized to estimate the fundamental frequency. Using this method, pitch tracking of any text independent voiced signal is possible for different speakers.<|reference_end|>
arxiv
@article{chakraborty2007pitch, title={Pitch Tracking of Acoustic Signals based on Average Squared Mean Difference Function}, author={Roudra Chakraborty, Debapriya Sengupta, Sagnik Sinha}, journal={arXiv preprint arXiv:cs/0701177}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701177}, primaryClass={cs.SD} }
chakraborty2007pitch
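The abstract above estimates pitch by minimizing locally averaged squared differences between the signal and a lagged copy of itself. A small sketch of that kind of estimator (the parameter names and the roughly one-octave search range are illustrative choices, not the paper's exact formulation; a wide lag range would also pick up period multiples):

```python
import numpy as np

def pitch_asmdf(frame, fs, fmin, fmax):
    """Estimate the fundamental frequency of one voiced frame by minimizing
    an average squared mean difference function over candidate lags:
        D(tau) = mean_n (x[n] - x[n + tau])^2
    The lag minimizing D gives f0 = fs / tau. Keep [fmin, fmax] to roughly
    one octave, otherwise integer multiples of the period also minimize D."""
    frame = np.asarray(frame, float)
    lo, hi = int(fs // fmax), int(fs // fmin)
    lags = np.arange(lo, min(hi, len(frame) // 2) + 1)
    d = np.array([np.mean((frame[:-t] - frame[t:]) ** 2) for t in lags])
    return fs / lags[np.argmin(d)]

# usage: a synthetic 220 Hz tone with a little noise
fs = 8000
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size)
print(round(pitch_asmdf(x, fs, fmin=150.0, fmax=300.0), 1))
# about 222.2: the nearest integer-lag estimate of the true 220 Hz
```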
arxiv-675534
cs/0701178
Distributed Detection in Sensor Networks with Limited Range Sensors
<|reference_start|>Distributed Detection in Sensor Networks with Limited Range Sensors: We consider a multi-object detection problem over a sensor network (SNET) with limited range sensors. This problem complements the widely considered decentralized detection problem where all sensors observe the same object. While the necessity for global collaboration is clear in the decentralized detection problem, the benefits of collaboration with limited range sensors is unclear and has not been widely explored. In this paper we develop a distributed detection approach based on recent development of the false discovery rate (FDR). We first extend the FDR procedure and develop a transformation that exploits complete or partial knowledge of either the observed distributions at each sensor or the ensemble (mixture) distribution across all sensors. We then show that this transformation applies to multi-dimensional observations, thus extending FDR to multi-dimensional settings. We also extend FDR theory to cases where distributions under both null and positive hypotheses are uncertain. We then propose a robust distributed algorithm to perform detection. We further demonstrate scalability to large SNETs by showing that the upper bound on the communication complexity scales linearly with the number of sensors that are in the vicinity of objects and is independent of the total number of sensors. Finally, we deal with situations where the sensing model may be uncertain and establish robustness of our techniques to such uncertainties.<|reference_end|>
arxiv
@article{ermis2007distributed, title={Distributed Detection in Sensor Networks with Limited Range Sensors}, author={Erhan B. Ermis, Venkatesh Saligrama}, journal={arXiv preprint arXiv:cs/0701178}, year={2007}, doi={10.1109/TSP.2009.2033300}, archivePrefix={arXiv}, eprint={cs/0701178}, primaryClass={cs.IT math.IT} }
ermis2007distributed
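The abstract above builds on the false discovery rate. For reference, the basic Benjamini-Hochberg step-up procedure that FDR control refers to (the paper's distributed, transformed variant for sensor networks is not reproduced here):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure controlling the false discovery
    rate at level alpha: sort the p-values, find the largest k with
    p_(k) <= (k / m) * alpha, and reject hypotheses 1..k.
    Returns a boolean rejection mask over the original ordering."""
    p = np.asarray(p_values, float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest sorted index passing the test
        reject[order[: k + 1]] = True      # reject everything up to it
    return reject

# usage: sensors near an object tend to produce small p-values
p_vals = [0.001, 0.8, 0.03, 0.5, 0.004, 0.6]
print(benjamini_hochberg(p_vals, alpha=0.05))  # rejects the 0.001 and 0.004 tests
```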
arxiv-675535
cs/0701179
Scatter of Weak Robots
<|reference_start|>Scatter of Weak Robots: In this paper, we first formalize the problem to be solved, i.e., the Scatter Problem (SP). We then show that SP cannot be deterministically solved. Next, we propose a randomized algorithm for this problem. The proposed solution is trivially self-stabilizing. We then show how to design a self-stabilizing version of any deterministic solution for the Pattern Formation and the Gathering problems.<|reference_end|>
arxiv
@article{dieudonné2007scatter, title={Scatter of Weak Robots}, author={Yoann Dieudonn\'e (LaRIA), Franck Petit (LaRIA)}, journal={arXiv preprint arXiv:cs/0701179}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701179}, primaryClass={cs.DC} }
dieudonné2007scatter
arxiv-675536
cs/0701180
Ontology from Local Hierarchical Structure in Text
<|reference_start|>Ontology from Local Hierarchical Structure in Text: We study the notion of hierarchy in the context of visualizing textual data and navigating text collections. A formal framework for ``hierarchy'' is given by an ultrametric topology. This provides us with a theoretical foundation for concept hierarchy creation. A major objective is {\em scalable} annotation or labeling of concept maps. Serendipitously we pursue other objectives such as deriving common word pair (and triplet) phrases, i.e., word 2- and 3-grams. We evaluate our approach using (i) a collection of texts, (ii) a single text subdivided into successive parts (for which we provide an interactive demonstrator), and (iii) a text subdivided at the sentence or line level. While detailing a generic framework, a distinguishing feature of our work is that we focus on {\em locality} of hierarchic structure in order to extract semantic information.<|reference_end|>
arxiv
@article{murtagh2007ontology, title={Ontology from Local Hierarchical Structure in Text}, author={F. Murtagh, J. Mothe and K. Englmeier}, journal={arXiv preprint arXiv:cs/0701180}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701180}, primaryClass={cs.IR} }
murtagh2007ontology
arxiv-675537
cs/0701181
A Note on Local Ultrametricity in Text
<|reference_start|>A Note on Local Ultrametricity in Text: High dimensional, sparsely populated data spaces have been characterized in terms of ultrametric topology. This implies that there are natural, not necessarily unique, tree or hierarchy structures defined by the ultrametric topology. In this note we study the extent of local ultrametric topology in texts, with the aim of finding unique ``fingerprints'' for a text or corpus, discriminating between texts from different domains, and opening up the possibility of exploiting hierarchical structures in the data. We use coherent and meaningful collections of over 1000 texts, comprising over 1.3 million words.<|reference_end|>
arxiv
@article{murtagh2007a, title={A Note on Local Ultrametricity in Text}, author={Fionn Murtagh}, journal={arXiv preprint arXiv:cs/0701181}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701181}, primaryClass={cs.CL} }
murtagh2007a
arxiv-675538
cs/0701182
Supplement to: Code Spectrum and Reliability Function: Binary Symmetric Channel
<|reference_start|>Supplement to: Code Spectrum and Reliability Function: Binary Symmetric Channel: A much simpler proof of Theorem 1 from M.Burnashev "Code spectrum and reliability function: Binary symmetric channel" is presented.<|reference_end|>
arxiv
@article{burnashev2007supplement, title={Supplement to: Code Spectrum and Reliability Function: Binary Symmetric Channel}, author={Marat Burnashev}, journal={arXiv preprint arXiv:cs/0701182}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701182}, primaryClass={cs.IT math.IT} }
burnashev2007supplement
arxiv-675539
cs/0701183
Certification of the QR factor R, and of lattice basis reducedness
<|reference_start|>Certification of the QR factor R, and of lattice basis reducedness: Given a lattice basis of n vectors in Z^n, we propose an algorithm using 12n^3+O(n^2) floating point operations for checking whether the basis is LLL-reduced. If the basis is reduced then the algorithm will hopefully answer ''yes''. If the basis is not reduced, or if the precision used is not sufficient with respect to n, and to the numerical properties of the basis, the algorithm will answer ''failed''. Hence a positive answer is a rigorous certificate. For implementing the certificate itself, we propose a floating point algorithm for computing (certified) error bounds for the entries of the R factor of the QR matrix factorization. This algorithm takes into account all possible approximation and rounding errors. The cost 12n^3+O(n^2) of the certificate is only six times more than the cost of numerical algorithms for computing the QR factorization itself, and the certificate may be implemented using matrix library routines only. We report experiments that show that for a reduced basis of adequate dimension and quality the certificate succeeds, and establish the effectiveness of the certificate. This effectiveness is applied for certifying the output of fastest existing floating point heuristics of LLL reduction, without slowing down the whole process.<|reference_end|>
arxiv
@article{villard2007certification, title={Certification of the QR factor R, and of lattice basis reducedness}, author={Gilles Villard (LIP)}, journal={arXiv preprint arXiv:cs/0701183}, year={2007}, number={Vil06-1}, archivePrefix={arXiv}, eprint={cs/0701183}, primaryClass={cs.SC cs.NA} }
villard2007certification
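The abstract above certifies LLL-reducedness using rigorous floating-point error bounds on the R factor of the QR factorization. As plain background, the conditions being certified can be written with ordinary floating-point Gram-Schmidt; unlike the paper's certificate, this naive check accounts for no rounding error and so gives only a numerical indication:

```python
import numpy as np

def is_lll_reduced(basis, delta=0.99, eta=0.501):
    """Check the LLL conditions on the rows of `basis` with plain
    floating-point Gram-Schmidt:
      size reduction:  |mu[i, j]| <= eta                            for j < i
      Lovasz:          ||b*_i||^2 >= (delta - mu[i, i-1]^2) * ||b*_{i-1}||^2"""
    B = np.asarray(basis, float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    mu = np.zeros((n, n))
    norms2 = np.zeros(n)
    for i in range(n):
        Bstar[i] = B[i].copy()
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / norms2[j]
            Bstar[i] -= mu[i, j] * Bstar[j]
        norms2[i] = np.dot(Bstar[i], Bstar[i])
        if any(abs(mu[i, j]) > eta for j in range(i)):
            return False
        if i > 0 and norms2[i] < (delta - mu[i, i - 1] ** 2) * norms2[i - 1]:
            return False
    return True

# usage: the canonical basis is reduced; a highly skewed one is not
print(is_lll_reduced(np.eye(3)))            # True
print(is_lll_reduced([[1, 0], [1000, 1]]))  # False (not size-reduced)
```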
arxiv-675540
cs/0701184
Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning
<|reference_start|>Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning: In Verification and in (optimal) AI Planning, a successful method is to formulate the application as boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior -- the proof arguments -- illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.<|reference_end|>
arxiv
@article{hoffmann2007structure, title={Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning}, author={Joerg Hoffmann and Carla Gomes and Bart Selman}, journal={Logical Methods in Computer Science, Volume 3, Issue 1 (February 26, 2007) lmcs:2228}, year={2007}, doi={10.2168/LMCS-3(1:6)2007}, archivePrefix={arXiv}, eprint={cs/0701184}, primaryClass={cs.AI} }
hoffmann2007structure
arxiv-675541
cs/0701185
Graph Operations on Clique-Width Bounded Graphs
<|reference_start|>Graph Operations on Clique-Width Bounded Graphs: Clique-width is a well-known graph parameter. Many NP-hard graph problems admit polynomial-time solutions when restricted to graphs of bounded clique-width. The same holds for NLC-width. In this paper we study the behavior of clique-width and NLC-width under various graph operations and graph transformations. We give upper and lower bounds for the clique-width and NLC-width of the modified graphs in terms of the clique-width and NLC-width of the involved graphs.<|reference_end|>
arxiv
@article{gurski2007graph, title={Graph Operations on Clique-Width Bounded Graphs}, author={Frank Gurski}, journal={arXiv preprint arXiv:cs/0701185}, year={2007}, doi={10.1007/s00224-016-9685-1}, archivePrefix={arXiv}, eprint={cs/0701185}, primaryClass={cs.DS cs.DM} }
gurski2007graph
arxiv-675542
cs/0701186
Certification of bounds on expressions involving rounded operators
<|reference_start|>Certification of bounds on expressions involving rounded operators: Gappa uses interval arithmetic to certify bounds on mathematical expressions that involve rounded as well as exact operators. Gappa generates a theorem with its proof for each bound treated. The proof can be checked with a higher order logic automatic proof checker, either Coq or HOL Light, and we have developed a large companion library of verified facts for Coq dealing with the addition, multiplication, division, and square root, in fixed- and floating-point arithmetics. Gappa uses multiple-precision dyadic fractions for the endpoints of intervals and performs forward error analysis on rounded operators when necessary. When asked, Gappa reports the best bounds it is able to reach for a given expression in a given context. This feature is used to quickly obtain coarse bounds. It can also be used to identify where the set of facts and automatic techniques implemented in Gappa becomes insufficient. Gappa handles seamlessly additional properties expressed as interval properties or rewriting rules in order to establish more intricate bounds. Recent work showed that Gappa is perfectly suited to the proof of correctness of small pieces of software. Proof obligations can be written by designers, produced by third-party tools or obtained by overloading arithmetic operators.<|reference_end|>
arxiv
@article{daumas2007certification, title={Certification of bounds on expressions involving rounded operators}, author={Marc Daumas (LIRMM, LP2A), Guillaume Melquiond (LIP, INRIA Rh^one-Alpes)}, journal={arXiv preprint arXiv:cs/0701186}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701186}, primaryClass={cs.MS} }
daumas2007certification
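A minimal illustration of the idea behind the entry above (cs/0701186), not Gappa's actual algorithm: propagate intervals through an expression and widen each result to absorb the rounding error of one binary64 operation. The helper names, the relative-error model, and the example bounds are assumptions made for this sketch; a real certifier also handles underflow, directed rounding of the interval endpoints themselves, and proof generation.

EPS = 2.0 ** -53  # unit roundoff of IEEE-754 binary64 under round-to-nearest

def widen(lo, hi):
    # Toy model: absorb one rounding error by relative widening of both endpoints.
    # Ignores underflow, and the endpoint arithmetic below is not directed-rounded.
    return (lo - abs(lo) * EPS, hi + abs(hi) * EPS)

def iadd(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return widen(min(products), max(products))

# Enclose fl(fl(x*y) + z) for x in [1, 2], y in [3, 4], z in [-1, 1] (made-up ranges).
x, y, z = (1.0, 2.0), (3.0, 4.0), (-1.0, 1.0)
print(iadd(imul(x, y), z))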
arxiv-675543
cs/0701187
Verification Across Intellectual Property Boundaries
<|reference_start|>Verification Across Intellectual Property Boundaries: In many industries, the importance of software components provided by third-party suppliers is steadily increasing. As the suppliers seek to secure their intellectual property (IP) rights, the customer usually has no direct access to the suppliers' source code, and is able to enforce the use of verification tools only by legal requirements. In turn, the supplier has no means to convince the customer about successful verification without revealing the source code. This paper presents an approach to resolve the conflict between the IP interests of the supplier and the quality interests of the customer. We introduce a protocol in which a dedicated server (called the "amanat") is controlled by both parties: the customer controls the verification task performed by the amanat, while the supplier controls the communication channels of the amanat to ensure that the amanat does not leak information about the source code. We argue that the protocol is both practically useful and mathematically sound. As the protocol is based on well-known (and relatively lightweight) cryptographic primitives, it allows a straightforward implementation on top of existing verification tool chains. To substantiate our security claims, we establish the correctness of the protocol by cryptographic reduction proofs.<|reference_end|>
arxiv
@article{chaki2007verification, title={Verification Across Intellectual Property Boundaries}, author={Sagar Chaki, Christian Schallhart, Helmut Veith}, journal={arXiv preprint arXiv:cs/0701187}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701187}, primaryClass={cs.OH} }
chaki2007verification
arxiv-675544
cs/0701188
Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections
<|reference_start|>Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections: Block projections have been used, in [Eberly et al. 2006], to obtain an efficient algorithm to find solutions for sparse systems of linear equations. A bound of softO(n^(2.5)) machine operations is obtained assuming that the input matrix can be multiplied by a vector with constant-sized entries in softO(n) machine operations. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections, and this has been conjectured. In this paper we establish the correctness of the algorithm from [Eberly et al. 2006] by proving the existence of efficient block projections over sufficiently large fields. We demonstrate the usefulness of these projections by deriving improved bounds for the cost of several matrix problems, considering, in particular, ``sparse'' matrices that can be multiplied by a vector using softO(n) field operations. We show how to compute the inverse of a sparse matrix over a field F using an expected number of softO(n^(2.27)) operations in F. A basis for the null space of a sparse matrix, and a certification of its rank, are obtained at the same cost. An application to Kaltofen and Villard's Baby-Steps/Giant-Steps algorithms for the determinant and Smith Form of an integer matrix yields algorithms requiring softO(n^(2.66)) machine operations. The derived algorithms are all probabilistic of the Las Vegas type.<|reference_end|>
arxiv
@article{eberly2007faster, title={Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections}, author={Wayne Eberly (UCALGARY), Mark Giesbrecht (UWO), Pascal Giorgi (LP2A), Arne Storjohann (UWO), Gilles Villard (LIP)}, journal={arXiv preprint arXiv:cs/0701188}, year={2007}, number={EGGSV07-1}, archivePrefix={arXiv}, eprint={cs/0701188}, primaryClass={cs.SC cs.NA} }
eberly2007faster
arxiv-675545
cs/0701189
A New Self-Stabilizing Maximal Matching Algorithm
<|reference_start|>A New Self-Stabilizing Maximal Matching Algorithm: The maximal matching problem has received considerable attention in the self-stabilizing community. Previous work has given different self-stabilizing algorithms that solve the problem for both the adversarial and fair distributed daemon, the sequential adversarial daemon, as well as the synchronous daemon. In the following we present a single self-stabilizing algorithm for this problem that unites all of these algorithms in that it stabilizes in the same number of moves as the previous best algorithms for the sequential adversarial, the distributed fair, and the synchronous daemon. In addition, the algorithm improves the previous best move complexities for the distributed adversarial daemon from O(n^2) and O(delta m) to O(m) where n is the number of processes, m is the number of edges, and delta is the maximum degree in the graph.<|reference_end|>
arxiv
@article{manne2007a, title={A New Self-Stabilizing Maximal Matching Algorithm}, author={Fredrik Manne, Morten Mjelde, Laurence Pilard, Sébastien Tixeuil (LRI)}, journal={arXiv preprint arXiv:cs/0701189}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701189}, primaryClass={cs.DS cs.DC} }
manne2007a
arxiv-675546
cs/0701190
A Peer-to-Peer Browsable File Index using a Popularity Based Global Namespace
<|reference_start|>A Peer-to-Peer Browsable File Index using a Popularity Based Global Namespace: The distribution of files using decentralized, peer-to-peer (P2P) systems has significant advantages over centralized approaches. It is however more difficult to settle on the best approach for file sharing. Most file sharing systems are based on query string searches, leading to a relatively simple but inefficient broadcast or to an efficient but relatively complicated index in a structured environment. In this paper we use a browsable peer-to-peer file index consisting of files which serve as directory nodes, interconnecting to form a directory network. We implemented the system based on BitTorrent and Kademlia. The directory network inherits all of the advantages of decentralization and provides browsable, efficient searching. To avoid conflict between users in the P2P system while also imposing no additional restrictions, we allow multiple versions of each directory node to simultaneously exist -- using popularity as the basis for default browsing behavior. Users can freely add files and directory nodes to the network. We show, using a simulation of user behavior and file quality, that the popularity based system consistently leads users to a high quality directory network, above the average quality of user updates.<|reference_end|>
arxiv
@article{jacobs2007a, title={A Peer-to-Peer Browsable File Index using a Popularity Based Global Namespace}, author={Thomas Jacobs and Aaron Harwood}, journal={arXiv preprint arXiv:cs/0701190}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701190}, primaryClass={cs.DC cs.NI} }
jacobs2007a
arxiv-675547
cs/0701191
The parallel implementation of the Astr\'ee static analyzer
<|reference_start|>The parallel implementation of the Astr\'ee static analyzer: The Astr\'{e}e static analyzer is a specialized tool that can prove the absence of runtime errors, including arithmetic overflows, in large critical programs. Keeping analysis times reasonable for industrial use is one of the design objectives. In this paper, we discuss the parallel implementation of the analysis.<|reference_end|>
arxiv
@article{monniaux2007the, title={The parallel implementation of the Astr\'{e}e static analyzer}, author={David Monniaux (LIENS)}, journal={APLAS: Programming languages and systems (2005) 86-96}, year={2007}, doi={10.1007/11575467_7}, archivePrefix={arXiv}, eprint={cs/0701191}, primaryClass={cs.PL cs.PF} }
monniaux2007the
arxiv-675548
cs/0701192
The pitfalls of verifying floating-point computations
<|reference_start|>The pitfalls of verifying floating-point computations: Current critical systems commonly use a lot of floating-point computations, and thus the testing or static analysis of programs containing floating-point operators has become a priority. However, correctly defining the semantics of common implementations of floating-point is tricky, because semantics may change with many factors beyond source-code level, such as choices made by compilers. We here give concrete examples of problems that can appear and solutions to implement in analysis software.<|reference_end|>
arxiv
@article{monniaux2007the, title={The pitfalls of verifying floating-point computations}, author={David Monniaux (LIENS, Verimag - Imag)}, journal={ACM Transactions on Programming Languages and Systems 30, 3 (2008) 12}, year={2007}, doi={10.1145/1353445.1353446}, archivePrefix={arXiv}, eprint={cs/0701192}, primaryClass={cs.PL cs.NA} }
monniaux2007the
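Two generic illustrations, not taken from the paper above (cs/0701192), of why source-level reasoning about floating-point is delicate: binary64 addition is not associative, so anything that reorders operations can change results, and decimal literals are not represented exactly.

# Non-associativity: in other languages or with aggressive compiler flags, reassociation
# may legally change the result; Python simply evaluates left to right.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0

# Decimal literals are rounded to the nearest binary64 value.
print(0.1 + 0.2 == 0.3)   # False
print(repr(0.1 + 0.2))    # '0.30000000000000004'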
arxiv-675549
cs/0701193
A Static Analyzer for Large Safety-Critical Software
<|reference_start|>A Static Analyzer for Large Safety-Critical Software: We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization, the symbolic manipulation of expressions to improve the precision of abstract transfer functions, the octagon, ellipsoid, and decision tree abstract domains, all with sound handling of rounding errors in floating point computations, widening strategies (with thresholds, delayed) and the automatic determination of the parameters (parametrized packing).<|reference_end|>
arxiv
@article{blanchet2007a, title={A Static Analyzer for Large Safety-Critical Software}, author={Bruno Blanchet (LIENS), Patrick Cousot (LIENS), Radhia Cousot (STIX), Jérôme Feret (LIENS), Laurent Mauborgne (LIENS), Antoine Miné (LIENS), David Monniaux (LIENS), Xavier Rival (LIENS)}, journal={PLDI: Conference on Programming Language Design and Implementation (2003) 196 - 207}, year={2007}, doi={10.1145/781131.781153}, archivePrefix={arXiv}, eprint={cs/0701193}, primaryClass={cs.PL cs.PF} }
blanchet2007a
arxiv-675550
cs/0701194
Menzerath-Altmann Law for Syntactic Structures in Ukrainian
<|reference_start|>Menzerath-Altmann Law for Syntactic Structures in Ukrainian: In the paper, the definition of clause suitable for an automated processing of a Ukrainian text is proposed. The Menzerath-Altmann law is verified on the sentence level and the parameters for the dependences of the clause length counted in words and syllables on the sentence length counted in clauses are calculated for "Perekhresni Stezhky" ("The Cross-Paths"), a novel by Ivan Franko.<|reference_end|>
arxiv
@article{buk2007menzerath-altmann, title={Menzerath-Altmann Law for Syntactic Structures in Ukrainian}, author={Solomija Buk and Andrij Rovenchak}, journal={Glottotheory. Vol. 1, No. 1, pp 10-17 (2008)}, year={2007}, doi={10.1515/glot-2008-0002}, archivePrefix={arXiv}, eprint={cs/0701194}, primaryClass={cs.CL} }
buk2007menzerath-altmann
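For reference, the Menzerath-Altmann law is usually fitted in its full form y = A * x^b * exp(-c*x), with y the mean constituent size (here, clause length) and x the construct size (sentence length in clauses). The sketch below fits that form to made-up data with scipy; the data values and starting parameters are assumptions for illustration, not the measurements reported in the entry above (cs/0701194).

import numpy as np
from scipy.optimize import curve_fit

def menzerath(x, A, b, c):
    # Full form of the Menzerath-Altmann law: y = A * x**b * exp(-c*x)
    return A * np.power(x, b) * np.exp(-c * x)

# Hypothetical data: sentence length in clauses (x) vs. mean clause length in words (y).
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([9.1, 8.2, 7.8, 7.5, 7.3, 7.2, 7.1, 7.0])

(A, b, c), _ = curve_fit(menzerath, x, y, p0=(9.0, -0.1, 0.01))
print(f"A = {A:.3f}, b = {b:.3f}, c = {c:.3f}")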
arxiv-675551
cs/0701195
An Abstract Monte-Carlo Method for the Analysis of Probabilistic Programs
<|reference_start|>An Abstract Monte-Carlo Method for the Analysis of Probabilistic Programs: We introduce a new method, combination of random testing and abstract interpretation, for the analysis of programs featuring both probabilistic and non-probabilistic nondeterminism. After introducing "ordinary" testing, we show how to combine testing and abstract interpretation and give formulas linking the precision of the results to the number of iterations. We then discuss complexity and optimization issues and end with some experimental results.<|reference_end|>
arxiv
@article{monniaux2007an, title={An Abstract Monte-Carlo Method for the Analysis of Probabilistic Programs}, author={David Monniaux (LIENS)}, journal={POPL: Annual Symposium on Principles of Programming Languages (2001) 93 - 101}, year={2007}, doi={10.1145/360204.360211}, archivePrefix={arXiv}, eprint={cs/0701195}, primaryClass={cs.PL cs.PF} }
monniaux2007an
arxiv-675552
cs/0701196
One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks
<|reference_start|>One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks: This paper formulates and studies a general distributed field reconstruction problem using a dense network of noisy one-bit randomized scalar quantizers in the presence of additive observation noise of unknown distribution. A constructive quantization, coding, and field reconstruction scheme is developed and an upper-bound to the associated mean squared error (MSE) at any point and any snapshot is derived in terms of the local spatio-temporal smoothness properties of the underlying field. It is shown that when the noise, sensor placement pattern, and the sensor schedule satisfy certain weak technical requirements, it is possible to drive the MSE to zero with increasing sensor density at points of field continuity while ensuring that the per-sensor bitrate and sensing-related network overhead rate simultaneously go to zero. The proposed scheme achieves the order-optimal MSE versus sensor density scaling behavior for the class of spatially constant spatio-temporal fields.<|reference_end|>
arxiv
@article{wang2007one-bit, title={One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks}, author={Ye Wang, Prakash Ishwar and Venkatesh Saligrama}, journal={arXiv preprint arXiv:cs/0701196}, year={2007}, doi={10.1109/TSP.2008.926192}, archivePrefix={arXiv}, eprint={cs/0701196}, primaryClass={cs.IT math.IT} }
wang2007one-bit
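One classical way to realize the kind of one-bit randomized scalar quantization described in the entry above (cs/0701196) is dithered thresholding: each sensor reports whether its noisy sample exceeds an independent threshold drawn uniformly from [-A, A]. If the noisy samples stay in [-A, A] and the noise has zero mean, the bit average is an unbiased estimate of (f + A)/(2A), so rescaling recovers the field value f, with the error shrinking as sensors are added. This is a generic sketch of that principle, not necessarily the paper's exact scheme; A, the noise law, and f are assumed values.

import numpy as np

rng = np.random.default_rng(0)

A = 10.0           # known dynamic range: |field + noise| <= A (assumed)
f = 2.7            # unknown field value at the point of interest (assumed)
n_sensors = 100_000

noise = rng.uniform(-1.0, 1.0, n_sensors)      # zero-mean bounded noise, law unknown to the decoder
thresholds = rng.uniform(-A, A, n_sensors)     # randomized one-bit quantizer thresholds (dither)
bits = (f + noise > thresholds).astype(float)  # one bit per sensor

# Since E[noise] = 0 and the thresholds are uniform on [-A, A],
# E[bit] = (f + A) / (2A), so the rescaled bit average estimates f without bias.
f_hat = 2 * A * bits.mean() - A
print(f"true f = {f}, estimate = {f_hat:.3f}")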
arxiv-675553
cs/0701197
On Delayed Sequential Coding of Correlated Sources
<|reference_start|>On Delayed Sequential Coding of Correlated Sources: Motivated by video coding applications, the problem of sequential coding of correlated sources with encoding and/or decoding frame-delays is studied. The fundamental tradeoffs between individual frame rates, individual frame distortions, and encoding/decoding frame-delays are derived in terms of a single-letter information-theoretic characterization of the rate-distortion region for general inter-frame source correlations and certain types of potentially frame specific and coupled single-letter fidelity criteria. The sum-rate-distortion region is characterized in terms of generalized directed information measures highlighting their role in delayed sequential source coding problems. For video sources which are spatially stationary memoryless and temporally Gauss-Markov, MSE frame distortions, and a sum-rate constraint, our results expose the optimality of idealized differential predictive coding among all causal sequential coders, when the encoder uses a positive rate to describe each frame. Somewhat surprisingly, causal sequential encoding with one-frame-delayed noncausal sequential decoding can exactly match the sum-rate-MSE performance of joint coding for all nontrivial MSE-tuples satisfying certain positive semi-definiteness conditions. Thus, even a single frame-delay holds potential for yielding significant performance improvements. Generalizations to higher order Markov sources are also presented and discussed. A rate-distortion performance equivalence between, causal sequential encoding with delayed noncausal sequential decoding, and, delayed noncausal sequential encoding with causal sequential decoding, is also established.<|reference_end|>
arxiv
@article{ma2007on, title={On Delayed Sequential Coding of Correlated Sources}, author={Nan Ma, Ye Wang, and Prakash Ishwar}, journal={arXiv preprint arXiv:cs/0701197}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701197}, primaryClass={cs.IT math.IT} }
ma2007on
arxiv-675554
cs/0701198
Fitting the WHOIS Internet data
<|reference_start|>Fitting the WHOIS Internet data: We consider the RIPE WHOIS Internet data as characterized by the Cooperative Association for Internet Data Analysis (CAIDA), and show that the Tempered Preferential Attachment model [1] provides an excellent fit to this data. [1] D'Souza, Borgs, Chayes, Berger and Kleinberg, to appear PNAS USA, 2007.<|reference_end|>
arxiv
@article{d'souza2007fitting, title={Fitting the WHOIS Internet data}, author={R. M. D'Souza, C. Borgs, J. T. Chayes, N. Berger and R. D. Kleinberg}, journal={arXiv preprint arXiv:cs/0701198}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701198}, primaryClass={cs.NI} }
d'souza2007fitting
arxiv-675555
cs/0701199
A Virtual Logo Keyboard for People with Motor Disabilities
<|reference_start|>A Virtual Logo Keyboard for People with Motor Disabilities: In our society, people with motor impairments are oftentimes socially excluded from their environment. This is unfortunate because every human being should have the possibility to obtain the necessary conditions to live a normal life. Although there is technology to assist people with motor impairments, few systems are targeted for programming environments. We have created a system, called Logo Keyboard, to assist people with motor disabilities to program with the Logo programming language. With this special keyboard we can help more people to get involved into computer programming and to develop projects in different areas.<|reference_end|>
arxiv
@article{norte2007a, title={A Virtual Logo Keyboard for People with Motor Disabilities}, author={Stephane Norte and Fernando G. Lobo}, journal={arXiv preprint arXiv:cs/0701199}, year={2007}, number={200701}, archivePrefix={arXiv}, eprint={cs/0701199}, primaryClass={cs.HC} }
norte2007a
arxiv-675556
cs/0701200
Reasoning from a schema and from an analog in software code reuse
<|reference_start|>Reasoning from a schema and from an analog in software code reuse: The activity of design involves the decomposition of problems into subproblems and the development and evaluation of solutions. In many cases, solution development is not done from scratch. Designers often evoke and adapt solutions developed in the past. These solutions may come from an internal source, i.e. the memory of the designers, and/or from an external source. The goal of this paper is to analyse the characteristics of the cognitive mechanisms, the knowledge and the representations involved in the code reuse activity performed by experienced programmers. More generally, the focus is the control structure of the reuse activity. Data collected in an experiment in which programmers had to design programs are analyzed. Two code reuse situations are distinguished depending on whether or not the processes involved in reuse start before the elaboration of what acts as a source-solution. Our analysis highlights the use of reasoning from a schema and from an analog in the code reuse activity.<|reference_end|>
arxiv
@article{detienne2007reasoning, title={Reasoning from a schema and from an analog in software code reuse}, author={Françoise Détienne (INRIA)}, journal={In Fourth Workshop on Empirical Studies of Programmers, ESP91 (1991) 5-22}, year={2007}, archivePrefix={arXiv}, eprint={cs/0701200}, primaryClass={cs.SE} }
detienne2007reasoning
arxiv-675557
cs/0702001
Measuring Cognitive Activities in Software Engineering
<|reference_start|>Measuring Cognitive Activities in Software Engineering: This paper presents an approach to the study of cognitive activities in collaborative software development. This approach has been developed by a multidisciplinary team made up of software engineers and cognitive psychologists. The basis of this approach is to improve our understanding of software development by observing professionals at work. The goal is to derive lines of conduct or good practices based on observations and analyses of the processes that are naturally used by software engineers. The strategy involved is derived from a standard approach in cognitive science. It is based on the videotaping of the activities of software engineers, transcription of the videos, coding of the transcription, defining categories from the coded episodes and defining cognitive behaviors or dialogs from the categories. This project presents two original contributions that make this approach generic in software engineering. The first contribution is the introduction of a formal hierarchical coding scheme, which will enable comparison of various types of observations. The second is the merging of psychological and statistical analysis approaches to build a cognitive model. The details of this new approach are illustrated with the initial data obtained from the analysis of technical review meetings.<|reference_end|>
arxiv
@article{robillard2007measuring, title={Measuring Cognitive Activities in Software Engineering}, author={Pierre Robillard, Patrick D'Astous, Françoise Détienne (INRIA), Willemien Visser (INRIA)}, journal={In ICSE98, 20th International Conference on Software Engineering (1998)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702001}, primaryClass={cs.HC} }
robillard2007measuring
arxiv-675558
cs/0702002
The Effect of Object-Oriented Programming Expertise in Several Dimensions of Comprehension Strategies
<|reference_start|>The Effect of Object-Oriented Programming Expertise in Several Dimensions of Comprehension Strategies: This study analyzes object-oriented (OO) program comprehension by experts and novices. We examine the effect of expertise in three dimensions of comprehension strategies: the scope of the comprehension, the top-down versus bottom-up direction of the processes, and the guidance of the comprehension activity. Overall, subjects were similar in the scope of their comprehension, although the experts tended to consult more files. We found strong evidence of top-down, inference-driven behaviors, as well as multiple guidance in expert comprehension. We also found evidence of execution-based guidance and less use of top-down processes in novice comprehension. Guidance by inheritance and composition relationships in the OO program was not dominant, but nevertheless played a substantial role in expert program comprehension. However, these static relationships more closely tied to the OO nature of the program were exploited poorly by novices. To conclude, these results are discussed with respect to the literature on procedural program comprehension.<|reference_end|>
arxiv
@article{burkhardt2007the, title={The Effect of Object-Oriented Programming Expertise in Several Dimensions of Comprehension Strategies}, author={Jean-Marie Burkhardt (INRIA, LEI), Françoise Détienne (INRIA), Susan Wiedenbeck}, journal={In IWPC'98, Sixth International Workshop on Program Comprehension (1998)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702002}, primaryClass={cs.HC} }
burkhardt2007the
arxiv-675559
cs/0702003
Expert Programming Knowledge: a Schema-Based Approach
<|reference_start|>Expert Programming Knowledge: a Schema-Based Approach: The topic of this chapter is the role of expert programming knowledge in the understanding activity. In the "schema-based approach", the role of semantic structures is emphasized whereas, in the "control-flow approach", the role of syntactic structures is emphasized. Data which support schema-based models of understanding are presented. Data which are more consistent with the "control-flow approach" allow to discuss the limits of the former kind of models.<|reference_end|>
arxiv
@article{détienne2007expert, title={Expert Programming Knowledge: a Schema-Based Approach}, author={Françoise Détienne (INRIA)}, journal={Psychology of Programming, Academic Press (Ed.) (1990) 205-222}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702003}, primaryClass={cs.HC} }
détienne2007expert
arxiv-675560
cs/0702004
What model(s) for program understanding?
<|reference_start|>What model(s) for program understanding?: The first objective of this paper is to present and discuss various types of models of program understanding. They are discussed in relation to models of text understanding. The second objective of this paper is to assess the effect of purpose for reading, or more specifically programming task, on the cognitive processes involved and representations constructed in program understanding. This is done in the theoretical framework of van Dijk and Kintsch's model of text understanding (1983).<|reference_end|>
arxiv
@article{détienne2007what, title={What model(s) for program understanding?}, author={Françoise Détienne (INRIA)}, journal={In UCIS'96, Conference on Using Complex Information Systems (1996)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702004}, primaryClass={cs.HC} }
détienne2007what
arxiv-675561
cs/0702005
An empirical study of software reuse by experts in object-oriented design
<|reference_start|>An empirical study of software reuse by experts in object-oriented design: This paper presents an empirical study of the software reuse activity by expert designers in the context of object-oriented design. Our study focuses on the three following aspects of reuse : (1) the interaction between some design processes, e.g. constructing a problem representation, searching for and evaluating solutions, and reuse processes, i.e. retrieving and using previous solutions, (2) the mental processes involved in reuse, e.g. example-based retrieval or bottom-up versus top-down expanding of the solution, and (3) the mental representations constructed throughout the reuse activity, e.g. dynamic versus static representations. Some implications of these results for the specification of software reuse support environments are discussed.<|reference_end|>
arxiv
@article{burkhardt2007an, title={An empirical study of software reuse by experts in object-oriented design}, author={Jean-Marie Burkhardt (INRIA, LEI), Françoise Détienne (INRIA)}, journal={In INTERACT'95 (1995)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702005}, primaryClass={cs.HC} }
burkhardt2007an
arxiv-675562
cs/0702006
Negotiation in collaborative assessment of design solutions: an empirical study on a Concurrent Engineering process
<|reference_start|>Negotiation in collaborative assessment of design solutions: an empirical study on a Concurrent Engineering process: In Concurrent engineering, design solutions are not only produced by individuals specialized in a given field. Due to the team nature of the design activity, solutions are negotiated. Our objective is to analyse the argumentation processes leading to these negotiated solutions. These processes take place in the meetings which group together specialists with a co-design aim. We conducted cognitive ergonomics research work during the definition phase of an aeronautical design project in which the participants work in Concurrent Engineering. We recorded, retranscribed and analysed 7 multi-speciality meetings. These meetings were organised, as needed, to assess the integration of the solutions of each speciality into a global solution. We found that there are three main design proposal assessment modes which can be combined in these meetings: (a) analytical assessment mode, (b) comparative assessment mode, and (c) analogical assessment mode. Within these assessment modes, different types of arguments are used. Furthermore, we found a typical temporal negotiation process.<|reference_end|>
arxiv
@article{martin2007negotiation, title={Negotiation in collaborative assessment of design solutions: an empirical study on a Concurrent Engineering process}, author={Géraldine Martin, Françoise Détienne (INRIA), Elisabeth Lavigne}, journal={In CE'2000, International Conference on Concurrent Engineering (2000)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702006}, primaryClass={cs.HC} }
martin2007negotiation
arxiv-675563
cs/0702007
Power Optimal Scheduling for Guaranteed Throughput in Multi-access Fading Channels
<|reference_start|>Power Optimal Scheduling for Guaranteed Throughput in Multi-access Fading Channels: A power optimal scheduling algorithm that guarantees desired throughput and bounded delay to each user is developed for fading multi-access multi-band systems. The optimization is over the joint space of all rate allocation and coding strategies. The proposed scheduling assigns rates on each band based only on the current system state, and subsequently uses optimal multi-user signaling to achieve these rates. The scheduling is computationally simple, and hence scalable. Due to uplink-downlink duality, all the results extend in straightforward fashion to the broadcast channels.<|reference_end|>
arxiv
@article{chaporkar2007power, title={Power Optimal Scheduling for Guaranteed Throughput in Multi-access Fading Channels}, author={Prasanna Chaporkar, Kimmo Kansanen, Ralf R. Müller}, journal={arXiv preprint arXiv:cs/0702007}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702007}, primaryClass={cs.IT math.IT} }
chaporkar2007power
arxiv-675564
cs/0702008
MMSE Optimal Algebraic Space-Time Codes
<|reference_start|>MMSE Optimal Algebraic Space-Time Codes: Design of Space-Time Block Codes (STBCs) for Maximum Likelihood (ML) reception has been the predominant focus of researchers. However, the ML decoding complexity of STBCs becomes prohibitively large as the number of transmit and receive antennas increases. Hence it is natural to resort to a suboptimal reception technique like the linear Minimum Mean Squared Error (MMSE) receiver. Barbarossa et al and Liu et al have independently derived necessary and sufficient conditions for a full rate linear STBC to be MMSE optimal, i.e., achieve least Symbol Error Rate (SER). Motivated by this problem, certain existing high rate STBC constructions from crossed product algebras are identified to be MMSE optimal. Also, it is shown that a certain class of codes from cyclic division algebras which are special cases of crossed product algebras are MMSE optimal. Hence, these STBCs achieve least SER when MMSE reception is employed and are fully diverse when ML reception is employed.<|reference_end|>
arxiv
@article{rajan2007mmse, title={MMSE Optimal Algebraic Space-Time Codes}, author={G. Susinder Rajan and B. Sundar Rajan}, journal={arXiv preprint arXiv:cs/0702008}, year={2007}, doi={10.1109/TWC.2008.070172}, archivePrefix={arXiv}, eprint={cs/0702008}, primaryClass={cs.IT math.IT} }
rajan2007mmse
arxiv-675565
cs/0702009
On Evaluating the Rate-Distortion Function of Sources with Feed-Forward and the Capacity of Channels with Feedback
<|reference_start|>On Evaluating the Rate-Distortion Function of Sources with Feed-Forward and the Capacity of Channels with Feedback: We study the problem of computing the rate-distortion function for sources with feed-forward and the capacity for channels with feedback. The formulas (involving directed information) for the optimal rate-distortion function with feed-forward and channel capacity with feedback are multi-letter expressions and cannot be computed easily in general. In this work, we derive conditions under which these can be computed for a large class of sources/channels with memory and distortion/cost measures. Illustrative examples are also provided.<|reference_end|>
arxiv
@article{venkataramanan2007on, title={On Evaluating the Rate-Distortion Function of Sources with Feed-Forward and the Capacity of Channels with Feedback}, author={Ramji Venkataramanan, S. Sandeep Pradhan}, journal={arXiv preprint arXiv:cs/0702009}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702009}, primaryClass={cs.IT math.IT} }
venkataramanan2007on
arxiv-675566
cs/0702010
A canonical form for some piecewise defined functions
<|reference_start|>A canonical form for some piecewise defined functions: We define a canonical form for piecewise defined functions. We show that this has a wider range of application as well as better complexity properties than previous work.<|reference_end|>
arxiv
@article{carette2007a, title={A canonical form for some piecewise defined functions}, author={Jacques Carette}, journal={arXiv preprint arXiv:cs/0702010}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702010}, primaryClass={cs.SC cs.MS} }
carette2007a
arxiv-675567
cs/0702011
Dealing With Logical Omniscience: Expressiveness and Pragmatics
<|reference_start|>Dealing With Logical Omniscience: Expressiveness and Pragmatics: We examine four approaches for dealing with the logical omniscience problem and their potential applicability: the syntactic approach, awareness, algorithmic knowledge, and impossible possible worlds. Although in some settings these approaches are equi-expressive and can capture all epistemic states, in other settings of interest (especially with probability in the picture), we show that they are not equi-expressive. We then consider the pragmatics of dealing with logical omniscience-- how to choose an approach and construct an appropriate model.<|reference_end|>
arxiv
@article{halpern2007dealing, title={Dealing With Logical Omniscience: Expressiveness and Pragmatics}, author={Joseph Y. Halpern, Riccardo Pucella}, journal={arXiv preprint arXiv:cs/0702011}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702011}, primaryClass={cs.LO cs.AI} }
halpern2007dealing
arxiv-675568
cs/0702012
Plagiarism Detection in arXiv
<|reference_start|>Plagiarism Detection in arXiv: We describe a large-scale application of methods for finding plagiarism in research document collections. The methods are applied to a collection of 284,834 documents collected by arXiv.org over a 14 year period, covering a few different research disciplines. The methodology efficiently detects a variety of problematic author behaviors, and heuristics are developed to reduce the number of false positives. The methods are also efficient enough to implement as a real-time submission screen for a collection many times larger.<|reference_end|>
arxiv
@article{sorokina2007plagiarism, title={Plagiarism Detection in arXiv}, author={Daria Sorokina, Johannes Gehrke, Simeon Warner, Paul Ginsparg}, journal={arXiv preprint arXiv:cs/0702012}, year={2007}, doi={10.1109/ICDM.2006.126}, archivePrefix={arXiv}, eprint={cs/0702012}, primaryClass={cs.DB cs.DL cs.IR} }
sorokina2007plagiarism
arxiv-675569
cs/0702013
A polynomial time algorithm to approximate the mixed volume within a simply exponential factor
<|reference_start|>A polynomial time algorithm to approximate the mixed volume within a simply exponential factor: Let ${\bf K} = (K_1, ..., K_n)$ be an $n$-tuple of convex compact subsets in the Euclidean space $\R^n$, and let $V(\cdot)$ be the Euclidean volume in $\R^n$. The Minkowski polynomial $V_{{\bf K}}$ is defined as $V_{{\bf K}}(\lambda_1, ... ,\lambda_n) = V(\lambda_1 K_1 + ... + \lambda_n K_n)$ and the mixed volume $V(K_1, ..., K_n)$ as $$ V(K_1, ..., K_n) = \frac{\partial^n}{\partial \lambda_1...\partial \lambda_n} V_{{\bf K}}(\lambda_1 K_1 + ... + \lambda_n K_n). $$ Our main result is a poly-time algorithm which approximates $V(K_1, ..., K_n)$ with multiplicative error $e^n$ and with better rates if the affine dimensions of most of the sets $K_i$ are small. Our approach is based on a particular approximation of $\log(V(K_1, ..., K_n))$ by a solution of some convex minimization problem. We prove the mixed volume analogues of the Van der Waerden and Schrijver-Valiant conjectures on the permanent. These results, interesting on their own, allow us to justify the abovementioned approximation by a convex minimization, which is solved using the ellipsoid method and a randomized poly-time algorithm for the approximation of the volume of a convex set.<|reference_end|>
arxiv
@article{gurvits2007a, title={A polynomial time algorithm to approximate the mixed volume within a simply exponential factor}, author={Leonid Gurvits}, journal={arXiv preprint arXiv:cs/0702013}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702013}, primaryClass={cs.CG cs.CC math.CO} }
gurvits2007a
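A standard two-dimensional worked example of the definitions used in the entry above (cs/0702013), under its unnormalized definition of the mixed volume; it is not taken from the paper itself. For axis-parallel boxes the Minkowski polynomial factors, and the mixed volume is exactly a permanent, which is the connection behind the Van der Waerden-type statements mentioned in the abstract. For $K_1 = [0,a_1]\times[0,a_2]$ and $K_2 = [0,b_1]\times[0,b_2]$ in $\R^2$,
$$ V_{{\bf K}}(\lambda_1,\lambda_2) = V(\lambda_1 K_1 + \lambda_2 K_2) = (\lambda_1 a_1 + \lambda_2 b_1)(\lambda_1 a_2 + \lambda_2 b_2), $$
so that
$$ V(K_1,K_2) = \frac{\partial^2}{\partial \lambda_1 \partial \lambda_2} V_{{\bf K}}(\lambda_1,\lambda_2) = a_1 b_2 + a_2 b_1, $$
the permanent of the $2\times 2$ matrix with rows $(a_1,a_2)$ and $(b_1,b_2)$.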
arxiv-675570
cs/0702014
Probabilistic Analysis of Linear Programming Decoding
<|reference_start|>Probabilistic Analysis of Linear Programming Decoding: We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman et al. succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous non-asymptotic results for LDPC codes, and in particular exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a generalized matching. An interesting by-product of our analysis is to establish the existence of ``probabilistic expansion'' in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst-case setting.<|reference_end|>
arxiv
@article{daskalakis2007probabilistic, title={Probabilistic Analysis of Linear Programming Decoding}, author={Constantinos Daskalakis, Alexandros G. Dimakis, Richard M. Karp, Martin J. Wainwright}, journal={arXiv preprint arXiv:cs/0702014}, year={2007}, doi={10.1109/TIT.2008.926452}, archivePrefix={arXiv}, eprint={cs/0702014}, primaryClass={cs.IT cs.DM math.IT} }
daskalakis2007probabilistic
arxiv-675571
cs/0702015
Network Coding for Distributed Storage Systems
<|reference_start|>Network Coding for Distributed Storage Systems: Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called Regenerating Codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, Regenerating Codes can reduce maintenance bandwidth use by 25 percent or more compared with the best previous design--a hybrid of replication and erasure codes--while simplifying system architecture.<|reference_end|>
arxiv
@article{dimakis2007network, title={Network Coding for Distributed Storage Systems}, author={Alexandros G. Dimakis, P. Brighten Godfrey, Martin J. Wainwright, Kannan Ramchandran}, journal={arXiv preprint arXiv:cs/0702015}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702015}, primaryClass={cs.IT cs.NI math.IT} }
dimakis2007network
arxiv-675572
cs/0702016
A multivariate interlace polynomial
<|reference_start|>A multivariate interlace polynomial: We define a multivariate polynomial that generalizes several interlace polynomials defined by Arratia, Bollobas and Sorkin on the one hand, and Aigner and van der Holst on the other. We follow the route traced by Sokal, who defined a multivariate generalization of Tutte's polynomial. We also show that bounded portions of our interlace polynomial can be evaluated in polynomial time for graphs of bounded clique-width. Its full evaluation is necessarily exponential just because of the size of the result.<|reference_end|>
arxiv
@article{courcelle2007a, title={A multivariate interlace polynomial}, author={Bruno Courcelle (LaBRI)}, journal={Electronic Journal of Combinatorics 15, 1 (2008) R69}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702016}, primaryClass={cs.LO cs.DM} }
courcelle2007a
arxiv-675573
cs/0702017
Comment on Improved Analysis of List Decoding and Its Application to Convolutional Codes and Turbo Codes
<|reference_start|>Comment on Improved Analysis of List Decoding and Its Application to Convolutional Codes and Turbo Codes: In a recent paper [1] an improved analysis of List Decoding was presented. The event that the correct codeword is excluded from the list is central. For the additive white Gaussian noise (AWGN) channel an important quantity is what [1] calls the effective Euclidean distance. This quantity was considered earlier in [2] under the name Vector Euclidean Distance, where a simple mathematical expression for it was also easily derived for any list size. In [1], a geometrical analysis gives this quantity when the list size is 1, 2 or 3.<|reference_end|>
arxiv
@article{aulin2007comment, title={Comment on Improved Analysis of List Decoding and Its Application to Convolutional Codes and Turbo Codes}, author={Tor M. Aulin}, journal={arXiv preprint arXiv:cs/0702017}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702017}, primaryClass={cs.IT math.IT} }
aulin2007comment
arxiv-675574
cs/0702018
Estimation of the Rate-Distortion Function
<|reference_start|>Estimation of the Rate-Distortion Function: Motivated by questions in lossy data compression and by theoretical considerations, we examine the problem of estimating the rate-distortion function of an unknown (not necessarily discrete-valued) source from empirical data. Our focus is the behavior of the so-called "plug-in" estimator, which is simply the rate-distortion function of the empirical distribution of the observed data. Sufficient conditions are given for its consistency, and examples are provided to demonstrate that in certain cases it fails to converge to the true rate-distortion function. The analysis of its performance is complicated by the fact that the rate-distortion function is not continuous in the source distribution; the underlying mathematical problem is closely related to the classical problem of establishing the consistency of maximum likelihood estimators. General consistency results are given for the plug-in estimator applied to a broad class of sources, including all stationary and ergodic ones. A more general class of estimation problems is also considered, arising in the context of lossy data compression when the allowed class of coding distributions is restricted; analogous results are developed for the plug-in estimator in that case. Finally, consistency theorems are formulated for modified (e.g., penalized) versions of the plug-in, and for estimating the optimal reproduction distribution.<|reference_end|>
arxiv
@article{harrison2007estimation, title={Estimation of the Rate-Distortion Function}, author={M. T. Harrison and I. Kontoyiannis}, journal={IEEE Transactions on Information Theory, 54 (2008): 3757-3762}, year={2007}, doi={10.1109/TIT.2008.926387}, archivePrefix={arXiv}, eprint={cs/0702018}, primaryClass={cs.IT math.IT math.ST stat.TH} }
harrison2007estimation
arxiv-675575
cs/0702019
A Dynamic I/O Model for TRACON Traffic Management
<|reference_start|>A Dynamic I/O Model for TRACON Traffic Management: This work investigates the TRACON flow management around a major airport. Aircraft flows are analyzed through a study of TRACON trajectory records. Rerouting and queuing processes are highlighted and airport characteristics are shown as a function of the number of planes in the TRACON. Then, a simple input-output TRACON queuing and landing model is proposed. This model is calibrated and validated using available TRACON data. It reproduces the same phenomenon as the real system. This model is used to show the impact of limiting the number of aircraft in the TRACON. A limited number of aircraft does not increase delays but reduces the controller's workload and increases safety.<|reference_end|>
arxiv
@article{gariel2007a, title={A Dynamic I/O Model for TRACON Traffic Management}, author={Maxime Gariel, John-Paul Clarke and Eric Feron}, journal={arXiv preprint arXiv:cs/0702019}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702019}, primaryClass={cs.OH} }
gariel2007a
arxiv-675576
cs/0702020
Construction of Minimal Tail-Biting Trellises for Codes over Finite Abelian Groups
<|reference_start|>Construction of Minimal Tail-Biting Trellises for Codes over Finite Abelian Groups: A definition of atomic codeword for a group code is presented. Some properties of atomic codewords of group codes are investigated. Using these properties, it is shown that every minimal tail-biting trellis for a group code over a finite abelian group can be constructed from its characteristic generators, which extends the work of Koetter and Vardy who treated the case of a linear code over a field. We also present an efficient algorithm for constructing the minimal tail-biting trellis of a group code over a finite abelian group, given a generator matrix.<|reference_end|>
arxiv
@article{yang2007construction, title={Construction of Minimal Tail-Biting Trellises for Codes over Finite Abelian Groups}, author={Qinqin Yang and Zhongping Qin}, journal={arXiv preprint arXiv:cs/0702020}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702020}, primaryClass={cs.IT math.IT} }
yang2007construction
arxiv-675577
cs/0702021
Probability Bracket Notation, Markov Chains, Stochastic Processes, and Microscopic Probabilistic Processes
<|reference_start|>Probability Bracket Notation, Markov Chains, Stochastic Processes, and Microscopic Probabilistic Processes: Inspired by the Dirac vector probability notation (VPN), we propose the Probability Bracket Notation (PBN), a new set of symbols defined similarly (but not identically) to those in the VPN. Applying the PBN to fundamental definitions and theorems for discrete and continuous random variables, we show that the PBN could play a similar role in the probability space as the VPN does in the Hilbert vector space. Our system P-kets are identified with the probability vectors in Markov chains (MC). The master equation of homogeneous MC in the Schrodinger picture can be basis-independent. Our system P-bra is linked to the Doi state function and the Peliti standard bra. Transformed from the Schrodinger picture to the Heisenberg picture, the time dependence of the system P-ket of a homogeneous MC (HMC) is shifted to the observable as a stochastic process. Using the correlations established by the special Wick rotation (SWR), the microscopic probabilistic processes (MPPs) are investigated for single and many-particle systems. The expected occupation number of particles in quantum statistics is reproduced by associating time with temperature (the Wick-Matsubara relation).<|reference_end|>
arxiv
@article{wang2007probability, title={Probability Bracket Notation, Markov Chains, Stochastic Processes, and Microscopic Probabilistic Processes}, author={Xing M. Wang}, journal={arXiv preprint arXiv:cs/0702021}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702021}, primaryClass={cs.OH math.PR} }
wang2007probability
arxiv-675578
cs/0702022
Gnutella: Topology Dynamics On Phase Space
<|reference_start|>Gnutella: Topology Dynamics On Phase Space: In this paper, the topology dynamics of Gnutella are studied through phase space. The dynamic changes in peer degree are studied as a time series in a two-dimensional phase space defined by the number of connected leaves and the number of connected ultras. The reported degrees are concentrated in three special software-related regions that we name the Ultra Stable Region, the Leaf Stable Region, and the Transition Belt. A method is proposed to classify degree traces in phase space into different classes. Connection churn is then studied along with the churn in degree. This shows that the topological structure of Gnutella is rather stable in its connection degrees, but the topology itself is not. The connection drop rate is estimated and the lifetime of connections is then inferred. An M/M/m/m loss queue system is introduced to model the degree-keeping process in Gnutella. This model reveals that degree stability is ensured by a large number of new connection attempts. In other words, the stability of Gnutella's topological structure is a result of essential instability in its topology. This poses a challenge to the basic design philosophy of the network.<|reference_end|>
arxiv
@article{li2007gnutella:, title={Gnutella: Topology Dynamics On Phase Space}, author={Chunxi Li and Changjia Chen}, journal={arXiv preprint arXiv:cs/0702022}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702022}, primaryClass={cs.NI} }
li2007gnutella:
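The entry above (cs/0702022) models degree keeping with an M/M/m/m loss queue; for such a queue the stationary blocking probability is given by the Erlang-B formula, which the standard recursion below evaluates stably. The slot count and offered load are made-up values for illustration, not measurements from the paper.

def erlang_b(m, a):
    # Blocking probability of an M/M/m/m loss queue with offered load a = lambda/mu,
    # via the standard stable recursion B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

# Hypothetical ultrapeer with m = 30 connection slots and an offered load of
# a = 25 "Erlangs" of new-connection attempts (made-up numbers).
m, a = 30, 25.0
blocking = erlang_b(m, a)
print(f"blocking probability: {blocking:.4f}")
print(f"mean occupied slots:  {a * (1 - blocking):.2f}")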
arxiv-675579
cs/0702023
High-rate, Multi-Symbol-Decodable STBCs from Clifford Algebras
<|reference_start|>High-rate, Multi-Symbol-Decodable STBCs from Clifford Algebras: It is well known that Space-Time Block Codes (STBCs) obtained from Orthogonal Designs (ODs) are single-symbol decodable (SSD) and that those obtained from Quasi-Orthogonal Designs (QODs) are double-symbol decodable (DSD). However, there are SSD codes that are not obtainable from ODs and DSD codes that are not obtainable from QODs. In this paper a method of constructing $g$-symbol decodable ($g$-SD) STBCs using representations of Clifford algebras is presented which when specialized to $g=1,2$ gives SSD and DSD codes respectively. For the number of transmit antennas $2^a$ the rate (in complex symbols per channel use) of the $g$-SD codes presented in this paper is $\frac{a+1-g}{2^{a-g}}$. The maximum rate of the DSD STBCs from QODs reported in the literature is $\frac{a}{2^{a-1}}$ which is smaller than the rate $\frac{a-1}{2^{a-2}}$ of the DSD codes of this paper, for $2^a$ transmit antennas. In particular, the reported DSD codes for 8 and 16 transmit antennas offer rates 1 and 3/4 respectively whereas the known STBCs from QODs offer only 3/4 and 1/2 respectively. The construction of this paper is applicable for any number of transmit antennas.<|reference_end|>
arxiv
@article{karmakar2007high-rate,, title={High-rate, Multi-Symbol-Decodable STBCs from Clifford Algebras}, author={Sanjay Karmakar and B.Sundar Rajan}, journal={arXiv preprint arXiv:cs/0702023}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702023}, primaryClass={cs.IT math.IT} }
karmakar2007high-rate,
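Plugging numbers into the rate formula quoted in the entry above (cs/0702023) reproduces the figures stated there for $g = 2$ (double-symbol decodable codes):
$$ \frac{a+1-g}{2^{a-g}} = \frac{a-1}{2^{a-2}} = \begin{cases} \frac{2}{2} = 1 & a = 3 \ (8 \text{ antennas}), \\ \frac{3}{4} & a = 4 \ (16 \text{ antennas}), \end{cases} $$
versus the QOD rates $a/2^{a-1} = 3/4$ and $1/2$ for $a = 3, 4$ respectively.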
arxiv-675580
cs/0702024
Searching for low weight pseudo-codewords
<|reference_start|>Searching for low weight pseudo-codewords: Belief Propagation (BP) and Linear Programming (LP) decodings of Low Density Parity Check (LDPC) codes are discussed. We summarize results of instanton/pseudo-codeword approach developed for analysis of the error-floor domain of the codes. Instantons are special, code and decoding specific, configurations of the channel noise contributing most to the Frame-Error-Rate (FER). Instantons are decoded into pseudo-codewords. Instanton/pseudo-codeword with the lowest weight describes the largest Signal-to-Noise-Ratio (SNR) asymptotic of FER, while the whole spectra of the low weight instantons is descriptive of the FER vs SNR profile in the extended error-floor domain. First, we describe a general optimization method that allows to find the instantons for any coding/decoding. Second, we introduce LP-specific pseudo-codeword search algorithm that allows efficient calculations of the pseudo-codeword spectra. Finally, we discuss results of combined BP/LP error-floor exploration experiments for two model codes.<|reference_end|>
arxiv
@article{chertkov2007searching, title={Searching for low weight pseudo-codewords}, author={M. Chertkov (Los Alamos), M. Stepanov (UA, Tucson)}, journal={arXiv preprint arXiv:cs/0702024}, year={2007}, number={LA-UR # 07-0509}, archivePrefix={arXiv}, eprint={cs/0702024}, primaryClass={cs.IT math.IT} }
chertkov2007searching
arxiv-675581
cs/0702025
Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs
<|reference_start|>Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs: This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before.<|reference_end|>
arxiv
@article{pueschel2007algebraic, title={Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs}, author={Markus Pueschel and Jose M. F. Moura}, journal={IEEE Transactions on Signal Processing, Vol. 56, No. 4, pp. 1502-1521, 2008}, year={2007}, doi={10.1109/TSP.2007.907919}, archivePrefix={arXiv}, eprint={cs/0702025}, primaryClass={cs.IT cs.DS math.IT} }
pueschel2007algebraic
arxiv-675582
cs/0702026
Shape preservation behavior of spline curves
<|reference_start|>Shape preservation behavior of spline curves: Shape preservation behavior of a spline consists of criterial conditions for preserving convexity, inflection, collinearity, torsion and coplanarity shapes of the data polygonal arc. We present results which improve the definitions of, and provide geometrical insight into, each of the above shape preservation criteria. We also investigate the effect of various results from the literature on the various shape preservation criteria. These results have not previously been referred to in the context of the shape preservation behavior of splines. We point out that each curve segment needs to satisfy more than one shape preservation criterion. We investigate the conflict between different shape preservation criteria (1) on each curve segment and (2) across adjacent curve segments. We derive simplified formulas for the shape preservation criteria for cubic curve segments. We study the shape preservation behavior of cubic Catmull-Rom splines and see that, though being a very simple spline curve, it indeed satisfies all the shape preservation criteria.<|reference_end|>
arxiv
@article{gautam2007shape, title={Shape preservation behavior of spline curves}, author={Ravi Shankar Gautam}, journal={arXiv preprint arXiv:cs/0702026}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702026}, primaryClass={cs.GR} }
gautam2007shape
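For reference alongside the entry above (cs/0702026), a segment of the uniform cubic Catmull-Rom spline through control points P0..P3 has the standard closed form evaluated below; the sample control points are arbitrary, chosen only to show that the segment interpolates P1 and P2.

import numpy as np

def catmull_rom_segment(p0, p1, p2, p3, t):
    # Uniform cubic Catmull-Rom segment between p1 and p2, t in [0, 1] (standard form).
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Arbitrary 2D control points; the segment interpolates p1 at t=0 and p2 at t=1.
pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
for t in (0.0, 0.5, 1.0):
    print(t, catmull_rom_segment(*pts, t))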
arxiv-675583
cs/0702027
The Suspension Calculus and its Relationship to Other Explicit Treatments of Substitution in Lambda Calculi
<|reference_start|>The Suspension Calculus and its Relationship to Other Explicit Treatments of Substitution in Lambda Calculi: The intrinsic treatment of binding in the lambda calculus makes it an ideal data structure for representing syntactic objects with binding such as formulas, proofs, types, and programs. Supporting such a data structure in an implementation is made difficult by the complexity of the substitution operation relative to lambda terms. In this paper we present the suspension calculus, an explicit treatment of meta level binding in the lambda calculus. We prove properties of this calculus which make it a suitable replacement for the lambda calculus in implementation. Finally, we compare the suspension calculus with other explicit treatments of substitution.<|reference_end|>
arxiv
@article{gacek2007the, title={The Suspension Calculus and its Relationship to Other Explicit Treatments of Substitution in Lambda Calculi}, author={Andrew Gacek}, journal={arXiv preprint arXiv:cs/0702027}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702027}, primaryClass={cs.LO cs.PL} }
gacek2007the
arxiv-675584
cs/0702028
Uniform and Partially Uniform Redistribution Rules
<|reference_start|>Uniform and Partially Uniform Redistribution Rules: This short paper introduces two new fusion rules for combining quantitative basic belief assignments. These rules, although very simple, have not been proposed in the literature so far and could serve as useful alternatives because of their low computation cost with respect to the recent advanced Proportional Conflict Redistribution rules developed in the DSmT framework.<|reference_end|>
arxiv
@article{smarandache2007uniform, title={Uniform and Partially Uniform Redistribution Rules}, author={Florentin Smarandache, Jean Dezert}, journal={International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems (IJUFKS), World Scientific, Vol. 19, No. 6, 921-937, 2011}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702028}, primaryClass={cs.AI} }
smarandache2007uniform
arxiv-675585
cs/0702029
On the variance of subset sum estimation
<|reference_start|>On the variance of subset sum estimation: For high volume data streams and large data warehouses, sampling is used for efficient approximate answers to aggregate queries over selected subsets. Mathematically, we are dealing with a set of weighted items and want to support queries to arbitrary subset sums. With unit weights, we can compute subset sizes which together with the previous sums provide the subset averages. The question addressed here is which sampling scheme we should use to get the most accurate subset sum estimates. We present a simple theorem on the variance of subset sum estimation and use it to prove variance optimality and near-optimality of subset sum estimation with different known sampling schemes. This variance is measured as the average over all subsets of any given size. By optimal we mean there is no set of input weights for which any sampling scheme can have a better average variance. Such powerful results can never be established experimentally. The results of this paper are derived mathematically. For example, we show that appropriately weighted systematic sampling is simultaneously optimal for all subset sizes. More standard schemes such as uniform sampling and probability-proportional-to-size sampling with replacement can be arbitrarily bad. Knowing the variance optimality of different sampling schemes can help decide which sampling scheme to apply in a given context.<|reference_end|>
arxiv
@article{szegedy2007on, title={On the variance of subset sum estimation}, author={Mario Szegedy and Mikkel Thorup}, journal={arXiv preprint arXiv:cs/0702029}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702029}, primaryClass={cs.DS} }
szegedy2007on
arxiv-675586
cs/0702030
Optimizing the SINR operating point of spatial networks
<|reference_start|>Optimizing the SINR operating point of spatial networks: This paper addresses the following question, which is of interest in the design and deployment of a multiuser decentralized network. Given a total system bandwidth of W Hz and a fixed data rate constraint of R bps for each transmission, how many frequency slots N of size W/N should the band be partitioned into to maximize the number of simultaneous transmissions in the network? In an interference-limited ad-hoc network, dividing the available spectrum results in two competing effects: on the positive side, it reduces the number of users on each band and therefore decreases the interference level which leads to an increased SINR, while on the negative side the SINR requirement for each transmission is increased because the same information rate must be achieved over a smaller bandwidth. Exploring this tradeoff between bandwidth and SINR and determining the optimum value of N in terms of the system parameters is the focus of the paper. Using stochastic geometry, we analytically derive the optimal SINR threshold (which directly corresponds to the optimal spectral efficiency) on this tradeoff curve and show that it is a function of only the path loss exponent. Furthermore, the optimal SINR point lies between the low-SINR (power-limited) and high-SINR (bandwidth-limited) regimes. In order to operate at this optimal point, the number of frequency bands (i.e., the reuse factor) should be increased until the threshold SINR, which is an increasing function of the reuse factor, is equal to the optimal value.<|reference_end|>
arxiv
@article{jindal2007optimizing, title={Optimizing the SINR operating point of spatial networks}, author={Nihar Jindal, Jeffrey Andrews, Steven Weber}, journal={arXiv preprint arXiv:cs/0702030}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702030}, primaryClass={cs.IT math.IT} }
jindal2007optimizing
arxiv-675587
cs/0702031
Quantized vs Analog Feedback for the MIMO Downlink: A Comparison between Zero-Forcing Based Achievable Rates
<|reference_start|>Quantized vs Analog Feedback for the MIMO Downlink: A Comparison between Zero-Forcing Based Achievable Rates: We consider a MIMO fading broadcast channel and compare the achievable ergodic rates when the channel state information at the transmitter is provided by analog noisy feedback or by quantized (digital) feedback. The superiority of digital feedback is shown, with perfect or imperfect CSIR, whenever the number of feedback channel uses per channel coefficient is larger than 1. Also, we show that by proper design of the digital feedback link, errors in the feedback have a minor effect even when very simple uncoded modulation is used. Finally, we show that analog feedback achieves a fraction 1 - 2F of the optimal multiplexing gain even in the presence of a feedback delay, when the fading belongs to the class of Doppler processes with normalized maximum Doppler frequency shift 0 <= F <= 1/2.<|reference_end|>
arxiv
@article{caire2007quantized, title={Quantized vs. Analog Feedback for the MIMO Downlink: A Comparison between Zero-Forcing Based Achievable Rates}, author={Giuseppe Caire, Nihar Jindal, Mari Kobayashi, Niranjay Ravindran}, journal={arXiv preprint arXiv:cs/0702031}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702031}, primaryClass={cs.IT math.IT} }
caire2007quantized
arxiv-675588
cs/0702032
Finding large and small dense subgraphs
<|reference_start|>Finding large and small dense subgraphs: We consider two optimization problems related to finding dense subgraphs. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph of highest average degree among all subgraphs with at least k vertices, and the densest at-most-k-subgraph problem (DamkS) is defined similarly. These problems are related to the well-known densest k-subgraph problem (DkS), which is to find the densest subgraph on exactly k vertices. We show that DalkS can be approximated efficiently, while DamkS is nearly as hard to approximate as the densest k-subgraph problem.<|reference_end|>
arxiv
@article{andersen2007finding, title={Finding large and small dense subgraphs}, author={Reid Andersen}, journal={arXiv preprint arXiv:cs/0702032}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702032}, primaryClass={cs.DS} }
andersen2007finding
arxiv-675589
cs/0702033
Bounds on ordered codes and orthogonal arrays
<|reference_start|>Bounds on ordered codes and orthogonal arrays: We derive new estimates of the size of codes and orthogonal arrays in the ordered Hamming space (the Niederreiter-Rosenbloom-Tsfasman space). We also show that the eigenvalues of the ordered Hamming scheme, the association scheme that describes the combinatorics of the space, are given by the multivariable Krawtchouk polynomials, and establish some of their properties.<|reference_end|>
arxiv
@article{barg2007bounds, title={Bounds on ordered codes and orthogonal arrays}, author={Alexander Barg and Punarbasu Purkayastha}, journal={Moscow Mathematical Journal, vol. 9, no. 2, 2009, pp. 211-243.}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702033}, primaryClass={cs.IT math.CO math.IT} }
barg2007bounds
arxiv-675590
cs/0702034
Graph Splicing System
<|reference_start|>Graph Splicing System: String splicing was introduced by Tom Head and stands as an abstract model for DNA recombination under the influence of restriction enzymes. The complex chemical process of three-dimensional molecules in three-dimensional space can be modeled using graphs. The graph splicing systems studied so far can only be applied to a particular type of graph, which could be interpreted as linear or circular graphs. In this paper, we take a different and novel approach to splicing two graphs and introduce a splicing system for graphs that can be applied to all types of graphs. Splicing two graphs can be thought of as a new operation among graphs that generates many new graphs from the given two graphs. Taking a different line of thinking, we study some of the graph-theoretical results of the splicing.<|reference_end|>
arxiv
@article{jeganathan2007graph, title={Graph Splicing System}, author={L. Jeganathan and R. Rama}, journal={arXiv preprint arXiv:cs/0702034}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702034}, primaryClass={cs.DM} }
jeganathan2007graph
arxiv-675591
cs/0702035
New Models for the Correlation in Sensor Data
<|reference_start|>New Models for the Correlation in Sensor Data: In this paper, we propose two new models of spatial correlations in sensor data in a data-gathering sensor network. A particular property of these models is that if a sensor node knows in \textit{how many} bits it needs to transmit its data, then it also knows \textit{which} bits of its data it needs to transmit.<|reference_end|>
arxiv
@article{agnihotri2007new, title={New Models for the Correlation in Sensor Data}, author={Samar Agnihotri}, journal={arXiv preprint arXiv:cs/0702035}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702035}, primaryClass={cs.IT math.IT} }
agnihotri2007new
arxiv-675592
cs/0702036
Efficient First-Order Temporal Logic for Infinite-State Systems
<|reference_start|>Efficient First-Order Temporal Logic for Infinite-State Systems: In this paper we consider the specification and verification of infinite-state systems using temporal logic. In particular, we describe parameterised systems using a new variety of first-order temporal logic that is both powerful enough for this form of specification and tractable enough for practical deductive verification. Importantly, the power of the temporal language allows us to describe (and verify) asynchronous systems, communication delays and more complex properties such as liveness and fairness properties. These aspects appear difficult for many other approaches to infinite-state verification.<|reference_end|>
arxiv
@article{dixon2007efficient, title={Efficient First-Order Temporal Logic for Infinite-State Systems}, author={Clare Dixon, Michael Fisher, Boris Konev, Alexei Lisitsa}, journal={arXiv preprint arXiv:cs/0702036}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702036}, primaryClass={cs.LO} }
dixon2007efficient
arxiv-675593
cs/0702037
Evolutionary Approaches to Minimizing Network Coding Resources
<|reference_start|>Evolutionary Approaches to Minimizing Network Coding Resources: We wish to minimize the resources used for network coding while achieving the desired throughput in a multicast scenario. We employ evolutionary approaches, based on a genetic algorithm, that avoid the computational complexity that makes the problem NP-hard. Our experiments show great improvements over the sub-optimal solutions of prior methods. Our new algorithms improve over our previously proposed algorithm in three ways. First, whereas the previous algorithm can be applied only to acyclic networks, our new method works also with networks with cycles. Second, we enrich the set of components used in the genetic algorithm, which improves the performance. Third, we develop a novel distributed framework. Combining distributed random network coding with our distributed optimization yields a network coding protocol where the resources used for coding are optimized in the setup phase by running our evolutionary algorithm at each node of the network. We demonstrate the effectiveness of our approach by carrying out simulations on a number of different sets of network topologies.<|reference_end|>
arxiv
@article{kim2007evolutionary, title={Evolutionary Approaches to Minimizing Network Coding Resources}, author={Minkyu Kim, Muriel Medard, Varun Aggarwal, Una-May O'Reilly, Wonsik Kim, Chang Wook Ahn, Michelle Effros}, journal={arXiv preprint arXiv:cs/0702037}, year={2007}, doi={10.1109/INFCOM.2007.231}, archivePrefix={arXiv}, eprint={cs/0702037}, primaryClass={cs.NI cs.IT math.IT} }
kim2007evolutionary
arxiv-675594
cs/0702038
Genetic Representations for Evolutionary Minimization of Network Coding Resources
<|reference_start|>Genetic Representations for Evolutionary Minimization of Network Coding Resources: We demonstrate how a genetic algorithm solves the problem of minimizing the resources used for network coding, subject to a throughput constraint, in a multicast scenario. A genetic algorithm avoids the computational complexity that makes the problem NP-hard and, for our experiments, greatly improves on sub-optimal solutions of established methods. We compare two different genotype encodings, which trade off search space size against fitness landscape, as well as the associated genetic operators. Our finding favors a smaller encoding despite its fewer intermediate solutions and demonstrates the impact of the modularity enforced by genetic operators on the performance of the algorithm.<|reference_end|>
arxiv
@article{kim2007genetic, title={Genetic Representations for Evolutionary Minimization of Network Coding Resources}, author={Minkyu Kim, Varun Aggarwal, Una-May O'Reilly, Muriel Medard, Wonsik Kim}, journal={arXiv preprint arXiv:cs/0702038}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702038}, primaryClass={cs.NE cs.NI} }
kim2007genetic
arxiv-675595
cs/0702039
Hadwiger and Helly-type theorems for disjoint unit spheres
<|reference_start|>Hadwiger and Helly-type theorems for disjoint unit spheres: We prove Helly-type theorems for line transversals to disjoint unit balls in $\R^{d}$. In particular, we show that a family of $n \geq 2d$ disjoint unit balls in $\R^d$ has a line transversal if, for some ordering $\prec$ of the balls, any subfamily of 2d balls admits a line transversal consistent with $\prec$. We also prove that a family of $n \geq 4d-1$ disjoint unit balls in $\R^d$ admits a line transversal if any subfamily of size $4d-1$ admits a transversal.<|reference_end|>
arxiv
@article{cheong2007hadwiger, title={Hadwiger and Helly-type theorems for disjoint unit spheres}, author={Otfried Cheong, Xavier Goaoc (INRIA Lorraine - LORIA), Andreas Holmsen, Sylvain Petitjean (INRIA Lorraine - LORIA)}, journal={Discrete and Computational Geometry (2006)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702039}, primaryClass={cs.CG} }
cheong2007hadwiger
arxiv-675596
cs/0702040
Polygraphs for termination of left-linear term rewriting systems
<|reference_start|>Polygraphs for termination of left-linear term rewriting systems: We present a methodology for proving termination of left-linear term rewriting systems (TRSs) by using Albert Burroni's polygraphs, a kind of rewriting system on algebraic circuits. We translate the considered TRS into a polygraph of minimal size whose termination is proven with a polygraphic interpretation, and then carry the property back to the TRS. We recall Yves Lafont's general translation of TRSs into polygraphs and known links between their termination properties. We give several conditions on the original TRS, including being a first-order functional program, that ensure that we can reduce the size of the polygraphic translation. We also prove sufficient conditions on the polygraphic interpretations of a minimal translation to imply termination of the original TRS. Examples are given to compare this method with usual polynomial interpretations.<|reference_end|>
arxiv
@article{guiraud2007polygraphs, title={Polygraphs for termination of left-linear term rewriting systems}, author={Yves Guiraud}, journal={arXiv preprint arXiv:cs/0702040}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702040}, primaryClass={cs.LO math.CT} }
guiraud2007polygraphs
arxiv-675597
cs/0702041
The Fibers and Range of Reduction Graphs in Ciliates
<|reference_start|>The Fibers and Range of Reduction Graphs in Ciliates: The biological process of gene assembly has been modeled based on three types of string rewriting rules, called string pointer rules, defined on so-called legal strings. It has been shown that reduction graphs, graphs that are based on the notion of breakpoint graph in the theory of sorting by reversal, for legal strings provide valuable insights into the gene assembly process. We characterize which legal strings obtain the same reduction graph (up to isomorphism), and moreover we characterize which graphs are (isomorphic to) reduction graphs.<|reference_end|>
arxiv
@article{brijder2007the, title={The Fibers and Range of Reduction Graphs in Ciliates}, author={Robert Brijder and Hendrik Jan Hoogeboom}, journal={Acta Informatica, Volume 45, Number 5 / July, 2008, Pages 383-402}, year={2007}, doi={10.1007/s00236-008-0074-3}, number={LIACS Technical Report 2007-01}, archivePrefix={arXiv}, eprint={cs/0702041}, primaryClass={cs.LO} }
brijder2007the
arxiv-675598
cs/0702042
A Formal Model for Programming Wireless Sensor Networks
<|reference_start|>A Formal Model for Programming Wireless Sensor Networks: In this paper we present new developments in the expressiveness and in the theory of a Calculus for Sensor Networks (CSN). We combine a network layer of sensor devices with a local object model to describe sensor devices with state. The resulting calculus is quite small and yet very expressive. We also present a type system and a type invariance result for the calculus. These results provide the fundamental framework for the development of programming languages and run-time environments.<|reference_end|>
arxiv
@article{lopes2007a, title={A Formal Model for Programming Wireless Sensor Networks}, author={Luis Lopes, Francisco Martins, Miguel S. Silva, Joao Barros}, journal={arXiv preprint arXiv:cs/0702042}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702042}, primaryClass={cs.DC cs.PL} }
lopes2007a
arxiv-675599
cs/0702043
Deciding k-colourability of $P_5$-free graphs in polynomial time
<|reference_start|>Deciding k-colourability of $P_5$-free graphs in polynomial time: The problem of computing the chromatic number of a $P_5$-free graph is known to be NP-hard. In contrast to this negative result, we show that determining whether or not a $P_5$-free graph admits a $k$-colouring, for each fixed number of colours $k$, can be done in polynomial time. If such a colouring exists, our algorithm produces it.<|reference_end|>
arxiv
@article{hoàng2007deciding, title={Deciding k-colourability of $P_5$-free graphs in polynomial time}, author={Chính T. Hoàng, Marcin Kamiński, Vadim Lozin, J. Sawada, X. Shu}, journal={arXiv preprint arXiv:cs/0702043}, year={2007}, archivePrefix={arXiv}, eprint={cs/0702043}, primaryClass={cs.DS} }
hoàng2007deciding
arxiv-675600
cs/0702044
Transmission Capacity of Ad Hoc Networks with Spatial Diversity
<|reference_start|>Transmission Capacity of Ad Hoc Networks with Spatial Diversity: This paper derives the outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple antenna diversity techniques, for a general class of signal distributions. This analysis allows system performance to be quantified for fading or non-fading environments. The transmission capacity is given for interference-limited uniformly random networks on the entire plane with path loss exponent $\alpha>2$ in which nodes use: (1) static beamforming through $M$ sectorized antennas, for which the increase in transmission capacity is shown to be $\Theta(M^2)$ if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) dynamic eigen-beamforming (maximal ratio transmission/combining), in which the increase is shown to be $\Theta(M^{\frac{2}{\alpha}})$; (3) various transmit antenna selection and receive antenna selection combining schemes, which give appreciable but rapidly diminishing gains; and (4) orthogonal space-time block coding, for which there is only a small gain due to channel hardening, equivalent to Nakagami-$m$ fading for increasing $m$. It is concluded that in ad hoc networks, static and dynamic beamforming perform best, selection combining performs well but with rapidly diminishing returns with added antennas, and that space-time block coding offers only marginal gains.<|reference_end|>
arxiv
@article{hunter2007transmission, title={Transmission Capacity of Ad Hoc Networks with Spatial Diversity}, author={Andrew M. Hunter, Jeffrey G. Andrews, Steven Weber}, journal={arXiv preprint arXiv:cs/0702044}, year={2007}, doi={10.1109/T-WC.2008.071047}, archivePrefix={arXiv}, eprint={cs/0702044}, primaryClass={cs.IT math.IT} }
hunter2007transmission