Dataset schema (field · dtype · length range):
  corpus_id      stringlengths   7–12
  paper_id       stringlengths   9–16
  title          stringlengths   1–261
  abstract       stringlengths   70–4.02k
  source         stringclasses   1 value
  bibtex         stringlengths   208–20.9k
  citation_key   stringlengths   6–100
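The records below follow this schema, one field per line, with each abstract wrapped in `<|reference_start|>`/`<|reference_end|>` sentinels that repeat the title before the abstract body. As orientation, here is a minimal Python sketch of parsing one record; the dict literal is a hypothetical example assembled from the rows below, and the actual loading mechanism is left out.

```python
import re

# A minimal sketch of reading one record under the schema above. The dict
# literal is a hypothetical example assembled from the rows below; how the
# records are actually loaded (e.g. via a datasets library) is left out.
record = {
    "corpus_id": "arxiv-675202",
    "paper_id": "cs/0611155",
    "title": "Zig-zag and Replacement Product Graphs and LDPC Codes",
    "abstract": ("<|reference_start|>Zig-zag and Replacement Product Graphs "
                 "and LDPC Codes: The performance of codes defined from graphs "
                 "...<|reference_end|>"),
    "source": "arxiv",
    "bibtex": "@article{kelley2006zig-zag, ...}",
    "citation_key": "kelley2006zig-zag",
}

# Each abstract field wraps "<title>: <abstract body>" in sentinel tokens;
# strip the sentinels, then split off the repeated title. (Titles that
# themselves contain ": " would need the known title field instead.)
ABSTRACT_RE = re.compile(r"<\|reference_start\|>(.*?)<\|reference_end\|>", re.DOTALL)

def parse_abstract(field_value: str) -> str:
    m = ABSTRACT_RE.search(field_value)
    body = m.group(1) if m else field_value
    return body.split(": ", 1)[-1]          # abstract proper, title dropped

print(parse_abstract(record["abstract"])[:40])
```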
arxiv-675201
cs/0611154
Assessing the cognitive consequences of the object-oriented approach: a survey of empirical research on object-oriented design by individuals and teams
<|reference_start|>Assessing the cognitive consequences of the object-oriented approach: a survey of empirical research on object-oriented design by individuals and teams: This paper presents a state-of-the-art review of empirical research on object-oriented (OO) design. Many claims about the cognitive benefits of the OO paradigm have been made by its advocates. These claims concern the ease of designing and reusing software at the individual level as well as the benefits of this paradigm at the team level. Since these claims are cognitive in nature, it seems important to assess them empirically. After a brief presentation of the main concepts of the OO paradigm, the claims about the superiority of OO design are outlined. The core of this paper consists of a review of empirical studies of OOD. We first discuss results concerning OOD by individuals. On the basis of empirical work, we (1) analyse the design activity of novice OO designers, (2) compare OO design with procedural design, and (3) discuss a typology of problems relevant for the OO approach. Then we assess the claims about naturalness and ease of OO design. The next part discusses results on OO software reuse. On the basis of empirical work, we (1) compare reuse in the OO versus procedural paradigm, (2) discuss the potential for OO software reuse and (3) analyse reuse activity in the OO paradigm. Then we assess claims on reusability. The final part reviews empirical work on OO design by teams. We present results on communication, coordination, knowledge dissemination and interactions with clients. Then we assess claims about OOD at the software design team level. In a general conclusion, we discuss the limitations of these studies and give some directions for future research.<|reference_end|>
arxiv
@article{détienne2006assessing, title={Assessing the cognitive consequences of the object-oriented approach: a survey of empirical research on object-oriented design by individuals and teams}, author={Fran\c{c}oise D\'etienne (INRIA)}, journal={Interacting with Computers 9 (1997) 47-72}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611154}, primaryClass={cs.HC} }
détienne2006assessing
arxiv-675202
cs/0611155
Zig-zag and Replacement Product Graphs and LDPC Codes
<|reference_start|>Zig-zag and Replacement Product Graphs and LDPC Codes: The performance of codes defined from graphs depends on the expansion property of the underlying graph in a crucial way. Graph products, such as the zig-zag product and replacement product provide new infinite families of constant degree expander graphs. The paper investigates the use of zig-zag and replacement product graphs for the construction of codes on graphs. A modification of the zig-zag product is also introduced, which can operate on two unbalanced biregular bipartite graphs.<|reference_end|>
arxiv
@article{kelley2006zig-zag, title={Zig-zag and Replacement Product Graphs and LDPC Codes}, author={Christine A. Kelley, Deepak Sridhara, and Joachim Rosenthal}, journal={arXiv preprint arXiv:cs/0611155}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611155}, primaryClass={cs.IT math.IT} }
kelley2006zig-zag
arxiv-675203
cs/0611156
D-MG Tradeoff and Optimal Codes for a Class of AF and DF Cooperative Communication Protocols
<|reference_start|>D-MG Tradeoff and Optimal Codes for a Class of AF and DF Cooperative Communication Protocols: We consider cooperative relay communication in a fading channel environment under the Orthogonal Amplify and Forward (OAF) and Orthogonal and Non-Orthogonal Selection Decode and Forward (OSDF and NSDF) protocols. For all these protocols, we compute the Diversity-Multiplexing Gain Tradeoff (DMT). We construct DMT optimal codes for the protocols which are sphere decodable and, in certain cases, incur minimum possible delay. Our results establish that the DMT of the OAF protocol is identical to the DMT of the Non-Orthogonal Amplify and Forward (NAF) protocol. Two variants of the NSDF protocol are considered: fixed-NSDF and variable-NSDF protocol. In the variable-NSDF protocol, the fraction of time duration for which the source alone transmits is allowed to vary with the rate of communication. Among the class of static amplify-and-forward and decode-and-forward protocols, the variable-NSDF protocol is shown to have the best known DMT for any number of relays apart from the two-relay case. When there are two relays, the variable-NSDF protocol is shown to improve on the DMT of the best previously-known protocol for higher values of the multiplexing gain. Our results also establish that the fixed-NSDF protocol has a better DMT than the NAF protocol for any number of relays. Finally, we present a DMT optimal code construction for the NAF protocol.<|reference_end|>
arxiv
@article{elia2006d-mg, title={D-MG Tradeoff and Optimal Codes for a Class of AF and DF Cooperative Communication Protocols}, author={Petros Elia, K. Vinodh, M. Anand and P. Vijay Kumar}, journal={arXiv preprint arXiv:cs/0611156}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611156}, primaryClass={cs.IT math.IT} }
elia2006d-mg
arxiv-675204
cs/0611157
Bounding the Bias of Tree-Like Sampling in IP Topologies
<|reference_start|>Bounding the Bias of Tree-Like Sampling in IP Topologies: It is widely believed that the Internet's AS-graph degree distribution obeys a power-law form. Most of the evidence showing the power-law distribution is based on BGP data. However, it was recently argued that since BGP collects data in a tree-like fashion, it only produces a sample of the degree distribution, and this sample may be biased. This argument was backed by simulation data and mathematical analysis, which demonstrated that under certain conditions a tree sampling procedure can produce an artificial power-law in the degree distribution. Thus, although the observed degree distribution of the AS-graph follows a power-law, this phenomenon may be an artifact of the sampling process. In this work we provide some evidence to the contrary. We show, by analysis and simulation, that when the underlying graph degree distribution obeys a power-law with an exponent larger than 2, a tree-like sampling process produces a negligible bias in the sampled degree distribution. Furthermore, recent data collected from the DIMES project, which is not based on BGP sampling, indicates that the underlying AS-graph indeed obeys a power-law degree distribution with an exponent larger than 2. By combining this empirical data with our analysis, we conclude that the bias in the degree distribution calculated from BGP data is negligible.<|reference_end|>
arxiv
@article{cohen2006bounding, title={Bounding the Bias of Tree-Like Sampling in IP Topologies}, author={Reuven Cohen, Mira Gonen, Avishai Wool}, journal={arXiv preprint arXiv:cs/0611157}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611157}, primaryClass={cs.NI} }
cohen2006bounding
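The Cohen–Gonen–Wool abstract above makes a concrete, checkable claim: tree-like (BGP-style) sampling of a power-law graph with exponent above 2 biases the observed degree distribution only negligibly. A rough way to probe the claim, offered as an illustration rather than the authors' code, is to crawl a synthetic configuration-model graph with a single BFS tree and compare degree histograms; `networkx` is assumed available.

```python
import collections
import random
import networkx as nx

# Illustrative experiment, not the paper's analysis: build a power-law
# configuration-model graph with degree exponent gamma > 2, sample it with a
# single BFS tree (a crude stand-in for BGP-style collection), and compare
# the degree distribution seen in the tree against the true one.
random.seed(0)
n, gamma = 20_000, 2.5
degs = [max(1, int(random.paretovariate(gamma - 1))) for _ in range(n)]
if sum(degs) % 2:                      # configuration model needs an even sum
    degs[0] += 1
G = nx.Graph(nx.configuration_model(degs))   # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

root = max(G, key=G.degree)            # start the crawl at a high-degree node
T = nx.bfs_tree(G, root)               # tree-like sample of the graph

true_hist = collections.Counter(d for _, d in G.degree())
tree_hist = collections.Counter(T.degree(v) for v in T)

for d in (1, 2, 4, 8, 16):             # compare tail fractions by eye
    print(d, true_hist[d] / n, tree_hist[d] / T.number_of_nodes())
```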
arxiv-675205
cs/0611158
Articulation entre élaboration de solutions et argumentation polyphonique
<|reference_start|>Articulation entre \'elaboration de solutions et argumentation polyphonique: In this paper, we propose an analytical framework that aims to bring out the nature of participants' contributions to co-design meetings, in a way that synthesises content and function dimensions, together with the dimension of dialogicality. We term the resulting global vision of contribution, the "interactive profile".<|reference_end|>
arxiv
@article{baker2006articulation, title={Articulation entre \'{e}laboration de solutions et argumentation polyphonique}, author={Michael Baker, Fran\c{c}oise D\'etienne (INRIA), Kristine Lundt, Arnauld S\'ejourn\'e}, journal={Dans EPIQUE'2003 (2003)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611158}, primaryClass={cs.HC} }
baker2006articulation
arxiv-675206
cs/0611159
Cognitive Effort in Collective Software Design: Methodological Perspectives in Cognitive Ergonomics
<|reference_start|>Cognitive Effort in Collective Software Design: Methodological Perspectives in Cognitive Ergonomics: Empirical software engineering is concerned with measuring, or estimating, both the effort put into the software process and the quality of its product. We defend the idea that measuring process effort and product quality and establishing a relation between the two cannot be performed without a model of cognitive and collective activities involved in software design, and without measurement of these activities. This is the object of our field, i.e. Cognitive Ergonomics of design. After a brief presentation of its theoretical and methodological foundations, we will discuss a cognitive approach to design activities and its potential to provide new directions in ESE. Then we will present and discuss an illustration of the methodological directions we have proposed for the analysis and measurement of cognitive activities in the context of collective software design. The two situations analysed are technical review meetings, and Request For Comments-like procedures in Open Source Software design.<|reference_end|>
arxiv
@article{détienne2006cognitive, title={Cognitive Effort in Collective Software Design: Methodological Perspectives in Cognitive Ergonomics}, author={Fran\c{c}oise D\'etienne (INRIA), Jean-Marie Burkhardt, Willemien Visser (INRIA)}, journal={Dans 2nd Workshop on Empirical Software Engineering (2003) 17-25}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611159}, primaryClass={cs.HC} }
détienne2006cognitive
arxiv-675207
cs/0611160
Complementary Sets, Generalized Reed-Muller Codes, and Power Control for OFDM
<|reference_start|>Complementary Sets, Generalized Reed-Muller Codes, and Power Control for OFDM: The use of error-correcting codes for tight control of the peak-to-mean envelope power ratio (PMEPR) in orthogonal frequency-division multiplexing (OFDM) transmission is considered in this correspondence. By generalizing a result by Paterson, it is shown that each q-phase (q is even) sequence of length 2^m lies in a complementary set of size 2^{k+1}, where k is a nonnegative integer that can be easily determined from the generalized Boolean function associated with the sequence. For small k this result provides a reasonably tight bound for the PMEPR of q-phase sequences of length 2^m. A new 2^h-ary generalization of the classical Reed-Muller code is then used together with the result on complementary sets to derive flexible OFDM coding schemes with low PMEPR. These codes include the codes developed by Davis and Jedwab as a special case. In certain situations the codes in the present correspondence are similar to Paterson's code constructions and often outperform them.<|reference_end|>
arxiv
@article{schmidt2006complementary, title={Complementary Sets, Generalized Reed-Muller Codes, and Power Control for OFDM}, author={Kai-Uwe Schmidt}, journal={IEEE Trans. Inf. Theory, vol. 53, no. 2, pp. 808-814, 2007}, year={2006}, doi={10.1109/TIT.2006.889723}, archivePrefix={arXiv}, eprint={cs/0611160}, primaryClass={cs.IT math.IT} }
schmidt2006complementary
arxiv-675208
cs/0611161
On the Peak-to-Mean Envelope Power Ratio of Phase-Shifted Binary Codes
<|reference_start|>On the Peak-to-Mean Envelope Power Ratio of Phase-Shifted Binary Codes: The peak-to-mean envelope power ratio (PMEPR) of a code employed in orthogonal frequency-division multiplexing (OFDM) systems can be reduced by permuting its coordinates and by rotating each coordinate by a fixed phase shift. Motivated by some previous designs of phase shifts using suboptimal methods, the following question is considered in this paper. For a given binary code, how much PMEPR reduction can be achieved when the phase shifts are taken from a 2^h-ary phase-shift keying (2^h-PSK) constellation? A lower bound on the achievable PMEPR is established, which is related to the covering radius of the binary code. Generally speaking, the achievable region of the PMEPR shrinks as the covering radius of the binary code decreases. The bound is then applied to some well understood codes, including nonredundant BPSK signaling, BCH codes and their duals, Reed-Muller codes, and convolutional codes. It is demonstrated that most (presumably not optimal) phase-shift designs from the literature attain or approach our bound.<|reference_end|>
arxiv
@article{schmidt2006on, title={On the Peak-to-Mean Envelope Power Ratio of Phase-Shifted Binary Codes}, author={Kai-Uwe Schmidt}, journal={IEEE Trans. Commun., vol. 56, no. 11, pp. 1816-1823, 2008}, year={2006}, doi={10.1109/TCOMM.2008.060652}, archivePrefix={arXiv}, eprint={cs/0611161}, primaryClass={cs.IT math.IT} }
schmidt2006on
arxiv-675209
cs/0611162
Quaternary Constant-Amplitude Codes for Multicode CDMA
<|reference_start|>Quaternary Constant-Amplitude Codes for Multicode CDMA: A constant-amplitude code is a code that reduces the peak-to-average power ratio (PAPR) in multicode code-division multiple access (MC-CDMA) systems to the favorable value 1. In this paper quaternary constant-amplitude codes (codes over Z_4) of length 2^m with error-correction capabilities are studied. These codes exist for every positive integer m, while binary constant-amplitude codes cannot exist if m is odd. Every word of such a code corresponds to a function from the binary m-tuples to Z_4 having the bent property, i.e., its Fourier transform has magnitudes 2^{m/2}. Several constructions of such functions are presented, which are exploited in connection with algebraic codes over Z_4 (in particular quaternary Reed-Muller, Kerdock, and Delsarte-Goethals codes) to construct families of quaternary constant-amplitude codes. Mappings from binary to quaternary constant-amplitude codes are presented as well.<|reference_end|>
arxiv
@article{schmidt2006quaternary, title={Quaternary Constant-Amplitude Codes for Multicode CDMA}, author={Kai-Uwe Schmidt}, journal={IEEE Trans. Inf. Theory, vol. 55, no. 4, pp. 1824-1832, April 2009}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611162}, primaryClass={cs.IT math.IT} }
schmidt2006quaternary
arxiv-675210
cs/0611163
On Measuring the Impact of Human Actions in the Machine Learning of a Board Game's Playing Policies
<|reference_start|>On Measuring the Impact of Human Actions in the Machine Learning of a Board Game's Playing Policies: We investigate systematically the impact of human intervention in the training of computer players in a strategy board game. In that game, computer players utilise reinforcement learning with neural networks for evolving their playing strategies and demonstrate a slow learning speed. Human intervention can significantly enhance learning performance, but carrying it out systematically seems to be more of a problem of an integrated game development environment as opposed to automatic evolutionary learning.<|reference_end|>
arxiv
@article{kalles2006on, title={On Measuring the Impact of Human Actions in the Machine Learning of a Board Game's Playing Policies}, author={Dimitris Kalles (Hellenic Open University)}, journal={arXiv preprint arXiv:cs/0611163}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611163}, primaryClass={cs.AI cs.GT cs.NE} }
kalles2006on
arxiv-675211
cs/0611164
Player co-modelling in a strategy board game: discovering how to play fast
<|reference_start|>Player co-modelling in a strategy board game: discovering how to play fast: In this paper we experiment with a 2-player strategy board game where playing models are evolved using reinforcement learning and neural networks. The models are evolved to speed up automatic game development based on human involvement at varying levels of sophistication and density when compared to fully autonomous playing. The experimental results suggest a clear and measurable association between the ability to win games and the ability to do that fast, while at the same time demonstrating that there is a minimum level of human involvement beyond which no learning really occurs.<|reference_end|>
arxiv
@article{kalles2006player, title={Player co-modelling in a strategy board game: discovering how to play fast}, author={Dimitris Kalles (Hellenic Open University)}, journal={arXiv preprint arXiv:cs/0611164}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611164}, primaryClass={cs.AI cs.LG} }
kalles2006player
arxiv-675212
cs/0611165
Partially ordered distributed computations on asynchronous point-to-point networks
<|reference_start|>Partially ordered distributed computations on asynchronous point-to-point networks: Asynchronous executions of a distributed algorithm differ from each other due to the nondeterminism in the order in which the messages exchanged are handled. In many situations of interest, the asynchronous executions induced by restricting nondeterminism are more efficient, in an application-specific sense, than the others. In this work, we define partially ordered executions of a distributed algorithm as the executions satisfying some restricted orders of their actions in two different frameworks, those of the so-called event- and pulse-driven computations. The aim of these restrictions is to characterize asynchronous executions that are likely to be more efficient for some important classes of applications. Also, an asynchronous algorithm that ensures the occurrence of partially ordered executions is given for each case. Two of the applications that we believe may benefit from the restricted nondeterminism are backtrack search, in the event-driven case, and iterative algorithms for systems of linear equations, in the pulse-driven case.<|reference_end|>
arxiv
@article{correa2006partially, title={Partially ordered distributed computations on asynchronous point-to-point networks}, author={Ricardo C. Correa, Valmir C. Barbosa}, journal={Parallel Computing 35 (2009), 12-28}, year={2006}, doi={10.1016/j.parco.2008.09.011}, archivePrefix={arXiv}, eprint={cs/0611165}, primaryClass={cs.DC} }
correa2006partially
arxiv-675213
cs/0611166
Lossless fitness inheritance in genetic algorithms for decision trees
<|reference_start|>Lossless fitness inheritance in genetic algorithms for decision trees: When genetic algorithms are used to evolve decision trees, key tree quality parameters can be recursively computed and re-used across generations of partially similar decision trees. Simply storing instance indices at leaves is enough for fitness to be piecewise computed in a lossless fashion. We show the derivation of the (substantial) expected speed-up on two bounding case problems and trace the attractive property of lossless fitness inheritance to the divide-and-conquer nature of decision trees. The theoretical results are supported by experimental evidence.<|reference_end|>
arxiv
@article{kalles2006lossless, title={Lossless fitness inheritance in genetic algorithms for decision trees}, author={Dimitris Kalles, Athanassios Papagelis}, journal={arXiv preprint arXiv:cs/0611166}, year={2006}, archivePrefix={arXiv}, eprint={cs/0611166}, primaryClass={cs.AI cs.DS cs.NE} }
kalles2006lossless
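The Kalles–Papagelis abstract above rests on one mechanism: if each node caches the indices of the training instances that reach it, a local mutation invalidates only the mutated subtree's cache, so the fitness of the rest of the tree is inherited unchanged. Below is a minimal sketch of my reading of that idea; the names and structure are mine, not the paper's.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of lossless fitness inheritance as described in the abstract above
# (structure is mine, not the paper's). Routing into a node depends only on
# its ancestors, so mutating a subtree leaves its cached instance indices
# valid and only that subtree needs re-evaluation.

@dataclass
class Node:
    feature: Optional[int] = None          # None => leaf
    threshold: float = 0.0
    label: int = 0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    idx: List[int] = field(default_factory=list)  # instances reaching this node
    correct: int = 0                               # cached #correct in this subtree

def evaluate(node: Node, X, y, idx: List[int]) -> int:
    """Route instances in `idx` through `node`, caching per-subtree fitness."""
    node.idx = idx
    if node.feature is None:                       # leaf: majority label
        labels = [y[i] for i in idx]
        node.label = max(set(labels), key=labels.count) if labels else 0
        node.correct = labels.count(node.label)
        return node.correct
    left = [i for i in idx if X[i][node.feature] <= node.threshold]
    right = [i for i in idx if X[i][node.feature] > node.threshold]
    node.correct = evaluate(node.left, X, y, left) + evaluate(node.right, X, y, right)
    return node.correct

def refit_mutated(mutated: Node, X, y) -> int:
    """Re-evaluate only the mutated subtree; return the fitness delta that
    ancestors add to their cached `correct` -- the rest is inherited."""
    old = mutated.correct
    return evaluate(mutated, X, y, mutated.idx) - old
```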
arxiv-675214
cs/0612001
Polynomial Time Symmetry and Isomorphism Testing for Connected Graphs
<|reference_start|>Polynomial Time Symmetry and Isomorphism Testing for Connected Graphs: We use the concept of a Kirchhoff resistor network (alternatively, random walk on a network) to probe connected graphs and produce symmetry-revealing canonical labelings of the nodes and edges of the graph(s).<|reference_end|>
arxiv
@article{delacorte2006polynomial, title={Polynomial Time Symmetry and Isomorphism Testing for Connected Graphs}, author={Matthew Delacorte}, journal={arXiv preprint arXiv:cs/0612001}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612001}, primaryClass={cs.DM} }
delacorte2006polynomial
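The Delacorte abstract above probes a graph with a Kirchhoff resistor network. One standard way to realize that probe, offered here only as an illustration of the idea and not necessarily the author's algorithm, is via effective resistances computed from the Laplacian pseudoinverse: they are invariant under automorphisms, so sorted per-node resistance profiles expose symmetry classes.

```python
import numpy as np

# Illustration of the resistor-network probe (one realization of the idea):
# effective resistances are preserved by graph automorphisms, so each node's
# sorted vector of resistances to all other nodes is a symmetry-revealing
# signature.

def resistance_profiles(adj: np.ndarray):
    L = np.diag(adj.sum(axis=1)) - adj            # graph Laplacian
    Lp = np.linalg.pinv(L)                        # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2 * Lp          # R[u, v] = effective resistance
    return [tuple(np.round(np.sort(row), 9)) for row in R]

# 4-cycle: every node is equivalent, so all profiles coincide.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(len(set(resistance_profiles(C4))))          # -> 1 symmetry class
```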
arxiv-675215
cs/0612002
Reuse of designs: Desperately seeking an interdisciplinary cognitive approach
<|reference_start|>Reuse of designs: Desperately seeking an interdisciplinary cognitive approach: This text analyses the papers accepted for the workshop "Reuse of designs: an interdisciplinary cognitive approach". Several dimensions and questions considered as important (by the authors and/or by us) are addressed: What about the "interdisciplinary cognitive" character of the approaches adopted by the authors? Is design indeed a domain where the use of CBR is particularly suitable? Are there important distinctions between CBR and other approaches? Which types of knowledge -other than cases- are being, or might be, used in CBR systems? With respect to cases: are there different "types" of case and different types of case use? which formats are adopted for their representation? do cases have "components"? how are cases organised in the case memory? Concerning their retrieval: which types of index are used? on which types of relation is retrieval based? how does one retrieve only a selected number of cases, i.e., how does one retrieve only the "best" cases? which processes and strategies are used, by the system and by its user? Finally, some important aspects of CBR system development are briefly discussed: should CBR systems be assistance or autonomous systems? how can case knowledge be "acquired"? what about the empirical evaluation of CBR systems? The conclusion points out some gaps: not much attention is paid to the user, and few papers have indeed adopted an interdisciplinary cognitive approach.<|reference_end|>
arxiv
@article{visser2006reuse, title={Reuse of designs: Desperately seeking an interdisciplinary cognitive approach}, author={Willemien Visser (INRIA Rocquencourt), Brigitte Trousse (INRIA Sophia Antipolis)}, journal={Dans IJCAI Thirteenth International Joint Conference on Artificial Intelligence Workshop "Reuse of designs: An interdisciplinary cognitive approach" (1993)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612002}, primaryClass={cs.HC cs.AI} }
visser2006reuse
arxiv-675216
cs/0612003
Predicate Abstraction via Symbolic Decision Procedures
<|reference_start|>Predicate Abstraction via Symbolic Decision Procedures: We present a new approach for performing predicate abstraction based on symbolic decision procedures. Intuitively, a symbolic decision procedure for a theory takes a set of predicates in the theory and symbolically executes a decision procedure on all the subsets over the set of predicates. The result of the symbolic decision procedure is a shared expression (represented by a directed acyclic graph) that implicitly represents the answer to a predicate abstraction query. We present symbolic decision procedures for the logic of Equality and Uninterpreted Functions (EUF) and Difference logic (DIFF) and show that these procedures run in pseudo-polynomial (rather than exponential) time. We then provide a method to construct symbolic decision procedures for simple mixed theories (including the two theories mentioned above) using an extension of the Nelson-Oppen combination method. We present preliminary evaluation of our Procedure on predicate abstraction benchmarks from device driver verification in SLAM.<|reference_end|>
arxiv
@article{lahiri2006predicate, title={Predicate Abstraction via Symbolic Decision Procedures}, author={Shuvendu K. Lahiri, Thomas Ball, Byron Cook}, journal={Logical Methods in Computer Science, Volume 3, Issue 2 (April 24, 2007) lmcs:2218}, year={2006}, doi={10.2168/LMCS-3(2:1)2007}, archivePrefix={arXiv}, eprint={cs/0612003}, primaryClass={cs.LO cs.PL cs.SC} }
lahiri2006predicate
arxiv-675217
cs/0612004
Object-Oriented Program Comprehension: Effect of Expertise, Task and Phase
<|reference_start|>Object-Oriented Program Comprehension: Effect of Expertise, Task and Phase: The goal of our study is to evaluate the effect on program comprehension of three factors that have not previously been studied in a single experiment. These factors are programmer expertise (expert vs. novice), programming task (documentation vs. reuse), and the development of understanding over time (phase 1 vs. phase 2). This study is carried out in the context of the mental model approach to comprehension based on van Dijk and Kintsch's model (1983). One key aspect of this model is the distinction between two kinds of representation the reader might construct from a text: 1) the textbase, which refers to what is said in the text and how it is said, and 2) the situation model, which represents the situation referred to by the text. We have evaluated the effect of the three factors mentioned above on the development of both the textbase (or program model) and the situation model in object-oriented program comprehension. We found a four-way interaction of expertise, phase, task and type of model. For the documentation group we found that experts and novices differ in the elaboration of their situation model but not their program model. There was no interaction of expertise with phase and type of model in the documentation group. For the reuse group, there was a three-way interaction between phase, expertise and type of model. For the novice reuse group, the effect of the phase was to increase the construction of the situation model but not the program model. With respect to the task, our results show that novices do not spontaneously construct a strong situation model but are able to do so if the task demands it.<|reference_end|>
arxiv
@article{burkhardt2006object-oriented, title={Object-Oriented Program Comprehension: Effect of Expertise, Task and Phase}, author={Jean-Marie Burkhardt (LEI), Fran\c{c}oise D\'etienne (INRIA), Susan Wiedenbeck}, journal={Empirical Software Engineering (2002)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612004}, primaryClass={cs.HC} }
burkhardt2006object-oriented
arxiv-675218
cs/0612005
Quantitative Measurements of the Influence of Participant Roles during Peer Review Meetings
<|reference_start|>Quantitative Measurements of the Influence of Participant Roles during Peer Review Meetings: Peer review meetings (PRMs) are formal meetings during which peers systematically analyze artifacts to improve their quality and report on non-conformities. This paper presents an approach based on protocol analysis for quantifying the influence of participant roles during PRMs. Three views are used to characterize the seven defined participant roles. The project view defines three roles: supervisor, procedure expert and developer. The meeting view defines two roles: author and reviewer, and the task view defines the roles reflecting direct and indirect interest in the artifact under review. The analysis, based on log-linear modeling, shows that review activities have different patterns, depending on their focus: form or content. The influence of each role is analyzed with respect to this focus. Interpretation of the quantitative data leads to the suggestion that PRMs could be improved by creating three different types of reviews, each of which collects together specific roles: form review, cognitive synchronization review and content review.<|reference_end|>
arxiv
@article{d'astous2006quantitative, title={Quantitative Measurements of the Influence of Participant Roles during Peer Review Meetings}, author={Patrick D'Astous, Pierre Robillard, Fran\c{c}oise D\'etienne (INRIA), Willemien Visser (INRIA Rocquencourt)}, journal={Empirical Software Engineering 6 (2001) 143-159}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612005}, primaryClass={cs.HC} }
d'astous2006quantitative
arxiv-675219
cs/0612006
Evocation and elaboration of solutions: Different types of problem-solving actions. An empirical study on the design of an aerospace artifact
<|reference_start|>Evocation and elaboration of solutions: Different types of problem-solving actions. An empirical study on the design of an aerospace artifact: An observational study was conducted on a professional designer working on a design project in the aerospace industry. The protocol data were analyzed in order to gain insight into the actions the designer used for the development of a solution to the corresponding problem. Different processes are described: from the "simple" evocation of a solution existing in memory, to the elaboration of a "new" solution out of mnesic entities without any clear link to the current problem. Control is addressed in so far as it concerns the priority among the different types of development processes: the progression from evocation of a "standard" solution to elaboration of a "new" solution is supposed to correspond to the resulting order, that is, the one in which the designer's activity proceeds. Short discussions of (1) the double status of "problem" and "solution," (2) the problem/solution knowledge units in memory and their access, and (3) the different abstraction levels on which problem and solution representations are developed are illustrated by the results.<|reference_end|>
arxiv
@article{visser2006evocation, title={Evocation and elaboration of solutions: Different types of problem-solving actions. An empirical study on the design of an aerospace artifact}, author={Willemien Visser (INRIA Rocquencourt)}, journal={Dans COGNITIVA 90. At the crossroads of Artificial Intelligence, Cognitive science, and Neuroscience (1991) 689-696}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612006}, primaryClass={cs.HC} }
visser2006evocation
arxiv-675220
cs/0612007
High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs Linear Precoding
<|reference_start|>High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs Linear Precoding: We study the MIMO broadcast channel and compare the achievable throughput for the optimal strategy of dirty paper coding to that achieved with sub-optimal and lower complexity linear precoding (e.g., zero-forcing and block diagonalization) transmission. Both strategies utilize all available spatial dimensions and therefore have the same multiplexing gain, but an absolute difference in terms of throughput does exist. The sum rate difference between the two strategies is analytically computed at asymptotically high SNR, and it is seen that this asymptotic statistic provides an accurate characterization at even moderate SNR levels. Furthermore, the difference is not affected by asymmetric channel behavior when each user has a different average SNR. Weighted sum rate maximization is also considered, and a similar quantification of the throughput difference between the two strategies is performed. In the process, it is shown that allocating user powers in direct proportion to user weights asymptotically maximizes weighted sum rate. For multiple antenna users, uniform power allocation across the receive antennas is applied after distributing power proportional to the user weight.<|reference_end|>
arxiv
@article{lee2006high, title={High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs. Linear Precoding}, author={Juyul Lee and Nihar Jindal}, journal={arXiv preprint arXiv:cs/0612007}, year={2006}, doi={10.1109/ISIT.2005.1523760}, archivePrefix={arXiv}, eprint={cs/0612007}, primaryClass={cs.IT math.IT} }
lee2006high
arxiv-675221
cs/0612008
Design Strategies and Knowledge in Object-Oriented Programming: Effects of Experience
<|reference_start|>Design Strategies and Knowledge in Object-Oriented Programming: Effects of Experience: An empirical study was conducted to analyse design strategies and knowledge used in object-oriented software design. Eight professional programmers, experienced with procedural programming languages and either experienced or not experienced in object-oriented programming, were studied with respect to design strategies related to two central aspects of the object-oriented paradigm: (1) associating actions, i.e., execution steps, of a complex plan to different objects and revising a complex plan, and (2) defining simple plans at different levels in the class hierarchy. As regards the development of complex plan elements attached to different objects, our results show that, for beginners in OOP, the description of objects and the description of actions are not always integrated in an early design phase, particularly for the declarative problem, whereas, for the programmers experienced in OOP, the description of objects and the description of actions tend to be integrated in their first drafts of solutions whatever the problem type. The analysis of design strategies reveals the use of different knowledge according to subjects' language experience: (1) schemas related to procedural languages, in which actions are organized in an execution order, or (2) schemas related to object-oriented languages, in which actions and objects are integrated and actions are organised around objects.<|reference_end|>
arxiv
@article{détienne2006design, title={Design Strategies and Knowledge in Object-Oriented Programming: Effects of Experience}, author={Fran\c{c}oise D\'etienne (INRIA)}, journal={Human-Computer Interaction 10, 2-3 (1995) 129-170}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612008}, primaryClass={cs.HC} }
détienne2006design
arxiv-675222
cs/0612009
Users' participation to the design process in an Open Source Software online community
<|reference_start|>Users' participation to the design process in an Open Source Software online community: The objective of this research is to analyse the ways members of open-source software communities participate in design. In particular we focus on how users of an Open Source (OS) programming language (Python) participate in adding new functionalities to the language. Indeed, in the OS communities, users are highly skilled in computer science; they do not correspond to the common representation of end-users and can potentially participate in the design process. Our study characterizes the Python galaxy and analyses a formal process to introduce new functionalities to the language, called Python Enhancement Proposal (PEP), from the idea of language evolution to the PEP implementation. The analysis of a particular pushed-by-users PEP from one application domain community (financial) shows that the design process is distributed and specialized between online and physical interaction spaces, and that there is some cross-participation between the user and developer communities, which may reveal boundary-spanner roles.<|reference_end|>
arxiv
@article{barcellini2006users', title={Users' participation to the design process in an Open Source Software online community}, author={Flore Barcellini (INRIA Rocquencourt), Fran\c{c}oise D\'etienne (INRIA Rocquencourt), Jean-Marie Burkhardt (LEI)}, journal={Dans 18th Annual Workshop on Psychology of Programming Interest Group PPIG'05 (2006) 99-114}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612009}, primaryClass={cs.HC} }
barcellini2006users'
arxiv-675223
cs/0612010
Articulation entre composantes verbale et graphico-gestuelle de l'interaction dans des réunions de conception architecturale
<|reference_start|>Articulation entre composantes verbale et graphico-gestuelle de l'interaction dans des réunions de conception architecturale: This study is focused on the role of external representations, e.g., sketches, in collaborative architectural design. In particular, we analyse (1) the use of graphico-gestural modalities and (2) the articulation modes between graphico-gestural and verbal modalities in design interaction. We have elaborated a first classification which distinguishes between two modes of articulation, articulation in integrated activities versus articulation in parallel activities.<|reference_end|>
arxiv
@article{visser2006articulation, title={Articulation entre composantes verbale et graphico-gestuelle de l'interaction dans des r\'{e}unions de conception architecturale}, author={Willemien Visser (INRIA Rocquencourt, INRIA), Fran\c{c}oise D\'etienne (INRIA Rocquencourt, INRIA)}, journal={Dans SCAN'05 (2005)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612010}, primaryClass={cs.HC} }
visser2006articulation
arxiv-675224
cs/0612011
Estimation of Bit and Frame Error Rates of Low-Density Parity-Check Codes on Binary Symmetric Channels
<|reference_start|>Estimation of Bit and Frame Error Rates of Low-Density Parity-Check Codes on Binary Symmetric Channels: A method for estimating the performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms on binary symmetric channels (BSC) is proposed. Based on the enumeration of the smallest weight error patterns that can not be all corrected by the decoder, this method estimates both the frame error rate (FER) and the bit error rate (BER) of a given LDPC code with very good precision for all crossover probabilities of practical interest. Through a number of examples, we show that the proposed method can be effectively applied to both regular and irregular LDPC codes and to a variety of hard-decision iterative decoding algorithms. Compared with the conventional Monte Carlo simulation, the proposed method has a much smaller computational complexity, particularly for lower error rates.<|reference_end|>
arxiv
@article{xiao2006estimation, title={Estimation of Bit and Frame Error Rates of Low-Density Parity-Check Codes on Binary Symmetric Channels}, author={Hua Xiao, Amir H. Banihashemi}, journal={arXiv preprint arXiv:cs/0612011}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612011}, primaryClass={cs.IT math.IT} }
xiao2006estimation
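The Xiao–Banihashemi abstract above estimates FER and BER from the smallest-weight error patterns the decoder fails on. The sketch below shows an assumed general shape of such an estimate, not the paper's exact expressions: weight the count of weight-w uncorrectable patterns by the BSC probability of that error weight.

```python
# Schematic of the estimation idea (an assumed general form, not the
# paper's exact expressions): let N_w count the weight-w error patterns
# that the hard-decision iterative decoder fails to correct, enumerated
# for the few smallest weights w. On a BSC with crossover probability p,
# the frame error rate is then dominated by these low-weight failures:
#   FER(p) ~= sum_w N_w * p**w * (1-p)**(n-w)
def fer_estimate(n: int, failing_counts: dict, p: float) -> float:
    return sum(N_w * p**w * (1 - p) ** (n - w) for w, N_w in failing_counts.items())

# Hypothetical counts for a length-1000 code whose decoder corrects all
# weight-1 and weight-2 patterns but fails on some of weight 3 and 4.
n, failing = 1000, {3: 120, 4: 5600}
for p in (1e-3, 1e-4, 1e-5):
    print(p, fer_estimate(n, failing, p))
```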
arxiv-675225
cs/0612012
Geographic Gossip on Geometric Random Graphs via Affine Combinations
<|reference_start|>Geographic Gossip on Geometric Random Graphs via Affine Combinations: In recent times, a considerable amount of work has been devoted to the development and analysis of gossip algorithms in Geometric Random Graphs. In a recently introduced model termed "Geographic Gossip," each node is aware of its position but possesses no further information. Traditionally, gossip protocols have always used convex linear combinations to achieve averaging. We develop a new protocol for Geographic Gossip, in which counter-intuitively, we use {\it non-convex affine combinations} as updates in addition to convex combinations to accelerate the averaging process. The dependence of the number of transmissions used by our algorithm on the number of sensors $n$ is $n \exp(O(\log \log n)^2) = n^{1 + o(1)}$. For the previous algorithm, this dependence was $\tilde{O}(n^{1.5})$. The exponent $1 + o(1)$ of our algorithm is asymptotically optimal. Our algorithm involves a hierarchical structure of $\log \log n$ depth and is not completely decentralized. However, the extent of control exercised by a sensor on another is restricted to switching the other on or off.<|reference_end|>
arxiv
@article{narayanan2006geographic, title={Geographic Gossip on Geometric Random Graphs via Affine Combinations}, author={Hariharan Narayanan}, journal={arXiv preprint arXiv:cs/0612012}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612012}, primaryClass={cs.MA cs.IT math.IT} }
narayanan2006geographic
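The Narayanan abstract above hinges on using non-convex affine combinations as gossip updates. The toy below (not the paper's hierarchical protocol) shows why such updates are admissible at all: any pairwise update whose weights sum to one preserves the network sum, and hence the average being computed, even when a weight lies outside [0,1].

```python
import random

# Toy illustration (not the paper's hierarchical protocol): the pairwise
# gossip update x_i <- (1-t) x_i + t x_j, x_j <- t x_i + (1-t) x_j keeps
# the network sum -- and hence the average -- invariant for ANY affine
# weight t, including non-convex choices outside [0,1]. The paper exploits
# such non-convex updates, inside a hierarchy, to average faster; here we
# only verify the invariant that makes them legal.
random.seed(1)
x = [random.random() for _ in range(10)]
s0 = sum(x)

def gossip_step(x, i, j, t):
    xi, xj = x[i], x[j]
    x[i] = (1 - t) * xi + t * xj
    x[j] = t * xi + (1 - t) * xj

for _ in range(1000):
    i, j = random.sample(range(len(x)), 2)
    t = random.choice([0.5, 1.25, -0.25])   # convex and non-convex weights
    gossip_step(x, i, j, t)

print(abs(sum(x) - s0) < 1e-9)   # True: the average is preserved throughout
```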
arxiv-675226
cs/0612013
Economy-based Content Replication for Peering Content Delivery Networks
<|reference_start|>Economy-based Content Replication for Peering Content Delivery Networks: Existing Content Delivery Networks (CDNs) are closed delivery networks that do not cooperate with other CDNs; in practice, islands of CDNs are formed. The logical separation between contents and services in this context results in two content networking domains. In addition to that, meeting the Quality of Service requirements of users according to the negotiated Service Level Agreement is crucial for a CDN. Present trends in content networks and content networking capabilities give rise to the interest in interconnecting content networks. Hence, in this paper, we present an open, scalable, and Service-Oriented Architecture (SOA)-based system that assists the creation of open Content and Service Delivery Networks (CSDNs), which scale and support sharing of resources through peering with other CSDNs. To encourage resource sharing and peering arrangements between different CDN providers at the global level, we propose using market-based models by introducing an economy-based strategy for content replication.<|reference_end|>
arxiv
@article{pathan2006economy-based, title={Economy-based Content Replication for Peering Content Delivery Networks}, author={Al-Mukaddim Khan Pathan, Rajkumar Buyya, James Broberg and Kris Bubendorfer}, journal={arXiv preprint arXiv:cs/0612013}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612013}, primaryClass={cs.DC} }
pathan2006economy-based
arxiv-675227
cs/0612014
Going Stupid with EcoLab
<|reference_start|>Going Stupid with EcoLab: In 2005, Railsback et al. proposed a very simple model (the "Stupid Model") that could be implemented within a couple of hours, and later extended to demonstrate the use of common ABM platform functionality. They provided implementations of the model in several agent based modelling platforms, and compared the platforms for ease of implementation of this simple model, and performance. In this paper, I implement Railsback et al.'s Stupid Model in the EcoLab simulation platform, a C++ based modelling platform, demonstrating that it is a feasible platform for these sorts of models, and compare the performance of the implementation with Repast, Mason and Swarm versions.<|reference_end|>
arxiv
@article{standish2006going, title={Going Stupid with EcoLab}, author={Russell K. Standish}, journal={Simulation, Vol 84, 611-618 (2008)}, year={2006}, doi={10.1177/0037549708097}, archivePrefix={arXiv}, eprint={cs/0612014}, primaryClass={cs.MA} }
standish2006going
arxiv-675228
cs/0612015
On the intersection of additive perfect codes
<|reference_start|>On the intersection of additive perfect codes: The intersection problem for additive (extended and non-extended) perfect codes, i.e., the possible values for the number of codewords in the intersection of two additive codes C1 and C2 of the same length, is investigated. Lower and upper bounds for the intersection number are computed and, for any value between these bounds, codes which have this given intersection value are constructed. For all these codes the abelian group structure of the intersection is characterized. The parameters of this abelian group structure corresponding to the intersection codes are computed and lower and upper bounds for these parameters are established. Finally, constructions of codes the intersection of which fits any parameters between these bounds are given.<|reference_end|>
arxiv
@article{rifà2006on, title={On the intersection of additive perfect codes}, author={J. Rif\`a, F. Solov\'eva, M. Villanueva}, journal={arXiv preprint arXiv:cs/0612015}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612015}, primaryClass={cs.IT math.IT} }
rifà2006on
arxiv-675229
cs/0612016
Memory of past designs: distinctive roles in individual and collective design
<|reference_start|>Memory of past designs: distinctive roles in individual and collective design: Empirical studies on design have emphasised the role of memory of past solutions. Design involves the use of generic knowledge as well as episodic knowledge about past designs for analogous problems: in this way, it involves the reuse of past designs. We analyse this mechanism of reuse from a socio-cognitive viewpoint. According to a purely cognitive approach, reuse involves cognitive mechanisms linked to the problem solving activity itself. Our socio-cognitive approach accounts for these phenomena as well as reuse mechanisms linked to cooperation, in particular coordination, and confrontation/integration of viewpoints.<|reference_end|>
arxiv
@article{détienne2006memory, title={Memory of past designs: distinctive roles in individual and collective design}, author={Fran\c{c}oise D\'etienne (INRIA)}, journal={Cognitive Technology Journal 1, 8 (2003) 16-24}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612016}, primaryClass={cs.HC} }
détienne2006memory
arxiv-675230
cs/0612017
Confrontation of viewpoints in a concurrent engineering process
<|reference_start|>Confrontation of viewpoints in a concurrent engineering process: We present an empirical study aimed at analysing the use of viewpoints in an industrial Concurrent Engineering context. Our focus is on the viewpoints expressed in the argumentative process taking place in evaluation meetings. Our results show that arguments enabling a viewpoint or proposal to be defended are often characterized by the use of constraints. One result involved the way in which the proposals for solutions are assessed during these meetings. We have revealed the existence of specific assessment modes in these meetings as well as their combination. Then, we show that, even if some constraints are apparently identically used by the different specialists involved in meetings, various meanings and weightings are associated with these constraints by these different specialists.<|reference_end|>
arxiv
@article{martin2006confrontation, title={Confrontation of viewpoints in a concurrent engineering process}, author={G\'eraldine Martin, Fran\c{c}oise D\'etienne (INRIA), Elisabeth Lavigne}, journal={Integrated design and manufacturing in mechanical engineering, Kluwer Academic Publishers (2002)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612017}, primaryClass={cs.HC} }
martin2006confrontation
arxiv-675231
cs/0612018
Mental Representations Constructed by Experts and Novices in Object-Oriented Program Comprehension
<|reference_start|>Mental Representations Constructed by Experts and Novices in Object-Oriented Program Comprehension: Previous studies on program comprehension were carried out largely in the context of procedural languages. Our purpose is to develop and evaluate a cognitive model of object-oriented (OO) program understanding. Our model is based on the van Dijk and Kintsch's model of text understanding (1983). One key aspect of this theoretical approach is the distinction between two kinds of representation the reader might construct from a text: the textbase and the situation model. On the basis of results of an experiment we have conducted, we evaluate the cognitive validity of this distinction in OO program understanding. We examine how the construction of these two representations is differentially affected by the programmer's expertise and how they evolve differentially over time.<|reference_end|>
arxiv
@article{burkhardt2006mental, title={Mental Representations Constructed by Experts and Novices in Object-Oriented Program Comprehension}, author={Jean-Marie Burkhardt (INRIA, LEI), Fran\c{c}oise D\'etienne (INRIA), Susan Wiedenbeck}, journal={Dans Human-Computer Interaction, INTERACT'97 (1997)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612018}, primaryClass={cs.HC} }
burkhardt2006mental
arxiv-675232
cs/0612019
On Finite Memory Universal Data Compression and Classification of Individual Sequences
<|reference_start|>On Finite Memory Universal Data Compression and Classification of Individual Sequences: Consider the case where consecutive blocks of N letters of a semi-infinite individual sequence X over a finite-alphabet are being compressed into binary sequences by some one-to-one mapping. No a priori information about X is available at the encoder, which must therefore adopt a universal data-compression algorithm. It is known that if the universal LZ77 data compression algorithm is successively applied to N-blocks then the best error-free compression for the particular individual sequence X is achieved, as $N$ tends to infinity. The best possible compression that may be achieved by any universal data compression algorithm for finite N-blocks is discussed. It is demonstrated that context tree coding essentially achieves it. Next, consider a device called classifier (or discriminator) that observes an individual training sequence X. The classifier's task is to examine individual test sequences of length N and decide whether the test N-sequence has the same features as those that are captured by the training sequence X, or is sufficiently different, according to some appropriate criterion. Here again, it is demonstrated that a particular universal context classifier with a storage-space complexity that is linear in N, is essentially optimal. This may contribute a theoretical "individual sequence" justification for the Probabilistic Suffix Tree (PST) approach in learning theory and in computational biology.<|reference_end|>
arxiv
@article{ziv2006on, title={On Finite Memory Universal Data Compression and Classification of Individual Sequences}, author={Jacob Ziv}, journal={arXiv preprint arXiv:cs/0612019}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612019}, primaryClass={cs.IT math.IT} }
ziv2006on
arxiv-675233
cs/0612020
Analysing viewpoints in design through the argumentation process
<|reference_start|>Analysing viewpoints in design through the argumentation process: We present an empirical study aimed at analysing the use of viewpoints in an industrial Concurrent Engineering context. Our focus is on the viewpoints expressed in the argumentative process taking place in evaluation meetings. Our results show that arguments enabling a viewpoint or proposal to be defended are often characterized by the use of constraints. Firstly, we show that, even if some constraints are apparently identically used by the different specialists involved in meetings, various meanings and weightings are associated with these constraints by these different specialists. Secondly, we show that the implicit or explicit nature of constraints depends on several interlocutive factors. Thirdly, we show that an argument often covers not only one constraint but a network of constraints. The type of combination reflects viewpoints which have specific status in the meeting. Then, we will propose a first model of the dynamics of viewpoints confrontation/integration.<|reference_end|>
arxiv
@article{martin2006analysing, title={Analysing viewpoints in design through the argumentation process}, author={G\'eraldine Martin, Fran\c{c}oise D\'etienne (INRIA), Elisabeth Lavigne}, journal={Dans INTERACT 2001 (2001) 521-529}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612020}, primaryClass={cs.OH} }
martin2006analysing
arxiv-675234
cs/0612021
Multimodality and parallelism in design interaction: co-designers' alignment and coalitions
<|reference_start|>Multimodality and parallelism in design interaction: co-designers' alignment and coalitions: This paper presents an analysis of various forms of articulation between graphico-gestural and verbal modalities in parallel interactions between designers in a collaborative design situation. Based on our methodological framework, we illustrate several forms of multimodal articulations, that is, integrated and non-integrated, through extracts from a corpus on an architectural design meeting. These modes reveal alignment or disalignment between designers, with respect to the focus of their activities. They also show different forms of coalition.<|reference_end|>
arxiv
@article{détienne2006multimodality, title={Multimodality and parallelism in design interaction: co-designers' alignment and coalitions}, author={Fran\c{c}oise D\'etienne, Willemien Visser}, journal={Dans COOP'2006 Volume 137 (2006) 118-131}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612021}, primaryClass={cs.HC} }
détienne2006multimodality
arxiv-675235
cs/0612022
Both Generic Design and Different Forms of Designing
<|reference_start|>Both Generic Design and Different Forms of Designing: This paper defends an augmented cognitively oriented "generic-design hypothesis": There are both significant similarities between the design activities implemented in different situations and crucial differences between these and other cognitive activities; yet, characteristics of a design situation (i.e., related to the designers, the artefact, and other task variables influencing these two) introduce specificities in the corresponding design activities and cognitive structures that are used. We thus combine the generic-design hypothesis with that of different "forms" of designing. In this paper, outlining a number of directions that need further elaboration, we propose a series of candidate dimensions underlying such forms of design.<|reference_end|>
arxiv
@article{visser2006both, title={Both Generic Design and Different Forms of Designing}, author={Willemien Visser (INRIA Rocquencourt)}, journal={Dans Wonderground, the 2006 DRS (Design Research Society) International Conference (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612022}, primaryClass={cs.HC} }
visser2006both
arxiv-675236
cs/0612023
Reusing processes and documenting processes: toward an integrated framework
<|reference_start|>Reusing processes and documenting processes: toward an integrated framework: This paper presents a cognitive typology of reuse processes, and a cognitive typology of documenting processes. Empirical studies on design with reuse and on software documenting provide evidence for a generalized cognitive model. First, these studies emphasize the cyclical nature of design: cycles of planning, writing and revising occur. Second, natural language documentation follows the hierarchy of cognitive entities manipulated during design. Similarly software reuse involves exploiting various types of knowledge depending on the phase of design in which reuse is involved. We suggest that these observations can be explained based on cognitive models of text processing: the van Dijk and Kintsch (1983) model of text comprehension, and the Hayes and Flower (1980) model of text production. Based on our generalized cognitive model, we suggest a framework for documenting reusable components.<|reference_end|>
arxiv
@article{détienne2006reusing, title={Reusing processes and documenting processes: toward an integrated framework}, author={Fran\c{c}oise D\'etienne (INRIA), Jean-Fran\c{c}ois Rouet, Jean-Marie Burkhardt (INRIA, LEI), Catherine Deleuze-Dordron}, journal={Dans ECCE8 (1996) 139-144}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612023}, primaryClass={cs.HC} }
détienne2006reusing
arxiv-675237
cs/0612024
On the Maximum Sum-rate Capacity of Cognitive Multiple Access Channel
<|reference_start|>On the Maximum Sum-rate Capacity of Cognitive Multiple Access Channel: We consider the communication scenario where multiple cognitive users wish to communicate to the same receiver, in the presence of primary transmission. The cognitive transmitters are assumed to have side information about the primary transmission. The capacity region of the cognitive users is formulated under the constraint that the capacity of the primary transmission is unchanged, as if no cognitive users existed. Moreover, the maximum sum-rate point of the capacity region is characterized, by optimally allocating the power of each cognitive user to transmit its own information.<|reference_end|>
arxiv
@article{cheng2006on, title={On the Maximum Sum-rate Capacity of Cognitive Multiple Access Channel}, author={Peng Cheng, Guanding Yu, Zhaoyang Zhang, Hsiao-Hwa Chen and Peiliang Qiu}, journal={arXiv preprint arXiv:cs/0612024}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612024}, primaryClass={cs.IT math.IT} }
cheng2006on
arxiv-675238
cs/0612025
Registers
<|reference_start|>Registers: Entry in: Encyclopedia of Algorithms, Ming-Yang Kao, Ed., Springer, To appear. Synonyms: Wait-free registers, wait-free shared variables, asynchronous communication hardware. Problem Definition: Consider a system of asynchronous processes that communicate among themselves by only executing read and write operations on a set of shared variables (also known as shared registers). The system has no global clock or other synchronization primitives.<|reference_end|>
arxiv
@article{vitanyi2006registers, title={Registers}, author={Paul M.B. Vitanyi (CWI and University of Amsterdam)}, journal={arXiv preprint arXiv:cs/0612025}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612025}, primaryClass={cs.DC} }
vitanyi2006registers
arxiv-675239
cs/0612026
A disk-covering problem with application in optical interferometry
<|reference_start|>A disk-covering problem with application in optical interferometry: Given a disk O in the plane called the objective, we want to find n small disks P_1,...,P_n called the pupils such that $\bigcup_{i,j=1}^n P_i \ominus P_j \supseteq O$, where $\ominus$ denotes the Minkowski difference operator, while minimizing the number of pupils, the sum of the radii or the total area of the pupils. This problem is motivated by the construction of very large telescopes from several smaller ones by so-called Optical Aperture Synthesis. In this paper, we provide exact, approximate and heuristic solutions to several variations of the problem.<|reference_end|>
arxiv
@article{nguyen2006a, title={A disk-covering problem with application in optical interferometry}, author={Trung Nguyen, Jean-Daniel Boissonnat, Frederic Falzon and Christian Knauer}, journal={arXiv preprint arXiv:cs/0612026}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612026}, primaryClass={cs.CG} }
nguyen2006a
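A small check of the covering condition in the abstract above can help fix intuition. Reading $\ominus$ as the difference set $A \ominus B = \{a - b : a \in A, b \in B\}$ (the natural reading for aperture synthesis, where baselines are differences of aperture points), the difference of two disks is again a disk, centered at $c_i - c_j$ with radius $r_i + r_j$. The Python sketch below tests the condition by Monte Carlo sampling; the pupil layout and all function names are illustrative, not taken from the paper.

import itertools
import math
import random

def pair_difference_disk(pi, pj):
    # Reading (-) as the difference set {a - b : a in P_i, b in P_j},
    # the difference of two disks is the disk centered at c_i - c_j
    # with radius r_i + r_j.
    (ci, ri), (cj, rj) = pi, pj
    return (ci[0] - cj[0], ci[1] - cj[1]), ri + rj

def covers_objective(pupils, obj_radius, samples=20000, seed=0):
    # Monte Carlo test of the condition from the abstract: the union of
    # P_i (-) P_j over all ordered pairs (i, j) contains the objective
    # disk O, taken here to be centered at the origin.
    diffs = [pair_difference_disk(pi, pj)
             for pi, pj in itertools.product(pupils, repeat=2)]
    rng = random.Random(seed)
    for _ in range(samples):
        while True:  # rejection-sample a uniform point of O
            x = obj_radius * (2 * rng.random() - 1)
            y = obj_radius * (2 * rng.random() - 1)
            if math.hypot(x, y) <= obj_radius:
                break
        if not any(math.hypot(x - c[0], y - c[1]) <= r for c, r in diffs):
            return False
    return True

# Hypothetical layout: three pupils of radius 0.5; note that each
# self-difference P_i (-) P_i alone covers a disk of radius 2 * r_i.
pupils = [((0.0, 0.0), 0.5), ((2.0, 0.0), 0.5), ((0.0, 2.0), 0.5)]
print(covers_objective(pupils, obj_radius=1.0))  # True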
arxiv-675240
cs/0612027
Experimental Information and Statistical Modeling of Physical Laws
<|reference_start|>Experimental Information and Statistical Modeling of Physical Laws: Statistical modeling of physical laws connects experiments with mathematical descriptions of natural phenomena. The modeling is based on the probability density of measured variables expressed by experimental data via a kernel estimator. The scattering function determined by calibration of the instrument is introduced as an objective kernel. This function provides for a new definition of experimental information and redundancy of experimentation in terms of information entropy. The redundancy increases with the number of experiments, while the experimental information converges to a value that describes the complexity of the data. The difference between the redundancy and the experimental information is proposed as the model cost function. From its minimum, a proper number of data in the model is estimated. The conditional average extracted from the kernel estimator is proposed as an optimal, nonparametric estimator of the relation between measured variables. The modeling is demonstrated on noisy chaotic data.<|reference_end|>
arxiv
@article{grabec2006experimental, title={Experimental Information and Statistical Modeling of Physical Laws}, author={Igor Grabec}, journal={arXiv preprint arXiv:cs/0612027}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612027}, primaryClass={cs.IT cs.IR math.IT} }
grabec2006experimental
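The "conditional average extracted from the kernel estimator" mentioned above has a compact closed form when a Gaussian kernel is used: it is the kernel-weighted mean of the observed outputs (the Nadaraya-Watson form). The sketch below is a minimal illustration; the smoothing width sigma stands in for the calibration-determined scattering function, and the sine data are made up.

import numpy as np

def conditional_average(x_data, y_data, x_query, sigma):
    # Conditional average E[y | x] extracted from a Gaussian kernel
    # estimator of the joint density: a weighted mean of the y samples,
    # weighted by kernels centered on the x samples.
    w = np.exp(-0.5 * ((x_query[:, None] - x_data[None, :]) / sigma) ** 2)
    return (w @ y_data) / w.sum(axis=1)

# Toy noisy data (hypothetical): y = sin(x) + noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 200)
y = np.sin(x) + 0.2 * rng.normal(size=200)
xq = np.linspace(0, 2 * np.pi, 5)
print(conditional_average(x, y, xq, sigma=0.3))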
arxiv-675241
cs/0612028
Using Combinatorics to Prune Search Trees: Independent and Dominating Set
<|reference_start|>Using Combinatorics to Prune Search Trees: Independent and Dominating Set: This paper has been withdrawn by the author.<|reference_end|>
arxiv
@article{fomin2006using, title={Using Combinatorics to Prune Search Trees: Independent and Dominating Set}, author={Fedor V. Fomin, Serge Gaspers, Saket Saurabh, Alexey A. Stepanov}, journal={arXiv preprint arXiv:cs/0612028}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612028}, primaryClass={cs.DS cs.DM} }
fomin2006using
arxiv-675242
cs/0612029
A Classification of 6R Manipulators
<|reference_start|>A Classification of 6R Manipulators: This paper presents a classification of generic 6-revolute jointed (6R) manipulators using the homotopy class of their critical point manifold. Only a part of the classification is listed in this paper because of the complexity of the homotopy classes of the 4-torus. The results of this classification will serve future research on the classification and topological properties of manipulator joint spaces and workspaces.<|reference_end|>
arxiv
@article{chen2006a, title={A Classification of 6R Manipulators}, author={Ming-Zhe Chen}, journal={arXiv preprint arXiv:cs/0612029}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612029}, primaryClass={cs.RO} }
chen2006a
arxiv-675243
cs/0612030
Loop corrections for approximate inference
<|reference_start|>Loop corrections for approximate inference: We propose a method for improving approximate inference methods that corrects for the influence of loops in the graphical model. The method is applicable to arbitrary factor graphs, provided that the size of the Markov blankets is not too large. It is an alternative implementation of an idea introduced recently by Montanari and Rizzo (2005). In its simplest form, which amounts to the assumption that no loops are present, the method reduces to the minimal Cluster Variation Method approximation (which uses maximal factors as outer clusters). On the other hand, using estimates of the effect of loops (obtained by some approximate inference algorithm) and applying the Loop Correcting (LC) method usually gives significantly better results than applying the approximate inference algorithm directly without loop corrections. Indeed, we often observe that the loop corrected error is approximately the square of the error of the approximate inference method used to estimate the effect of loops. We compare different variants of the Loop Correcting method with other approximate inference methods on a variety of graphical models, including "real world" networks, and conclude that the LC approach generally obtains the most accurate results.<|reference_end|>
arxiv
@article{mooij2006loop, title={Loop corrections for approximate inference}, author={Joris Mooij and Bert Kappen}, journal={Journal of Machine Learning Research 8(May):1113-1143, 2007}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612030}, primaryClass={cs.AI cs.IT cs.LG math.IT} }
mooij2006loop
arxiv-675244
cs/0612031
Estimating Aggregate Properties on Probabilistic Streams
<|reference_start|>Estimating Aggregate Properties on Probabilistic Streams: The probabilistic-stream model was introduced by Jayram et al. \cite{JKV07}. It is a generalization of the data stream model that is suited to handling ``probabilistic'' data, where each item of the stream represents a probability distribution over a set of possible events. Therefore, a probabilistic stream determines a distribution over potentially a very large number of classical ``deterministic'' streams, where each item is deterministically one of the domain values. The probabilistic model is applicable not only for analyzing streams where the input has uncertainties (such as sensor data streams that measure physical processes) but also where the streams are derived from the input data by post-processing, such as tagging or reconciling inconsistent and poor quality data. We present streaming algorithms for computing commonly used aggregates on a probabilistic stream. We present the first known one-pass streaming algorithm for estimating the AVG, improving results in \cite{JKV07}. We present the first known streaming algorithms for estimating the number of DISTINCT items on probabilistic streams. Further, we present extensions to other aggregates such as the repeat rate, quantiles, etc. In all cases, our algorithms work with provable accuracy guarantees and within the space constraints of the data stream model.<|reference_end|>
arxiv
@article{mcgregor2006estimating, title={Estimating Aggregate Properties on Probabilistic Streams}, author={Andrew McGregor and S. Muthukrishnan}, journal={arXiv preprint arXiv:cs/0612031}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612031}, primaryClass={cs.DS cs.DB} }
mcgregor2006estimating
arxiv-675245
cs/0612032
Code Spectrum and Reliability Function: Binary Symmetric Channel
<|reference_start|>Code Spectrum and Reliability Function: Binary Symmetric Channel: A new approach for upper bounding the channel reliability function using the code spectrum is described. It allows one to treat both the low-rate and the high-rate cases in a unified way. In particular, the earlier known upper bounds are improved, and a new derivation of the sphere-packing bound is presented.<|reference_end|>
arxiv
@article{burnashev2006code, title={Code Spectrum and Reliability Function: Binary Symmetric Channel}, author={Marat V. Burnashev}, journal={Problems of Information Transmission, vol. 42, no. 4, pp. 3-22, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612032}, primaryClass={cs.IT math.IT} }
burnashev2006code
arxiv-675246
cs/0612033
Acronym-Meaning Extraction from Corpora Using Multi-Tape Weighted Finite-State Machines
<|reference_start|>Acronym-Meaning Extraction from Corpora Using Multi-Tape Weighted Finite-State Machines: The automatic extraction of acronyms and their meaning from corpora is an important sub-task of text mining. It can be seen as a special case of string alignment, where a text chunk is aligned with an acronym. Alternative alignments have different cost, and ideally the least costly one should give the correct meaning of the acronym. We show how this approach can be implemented by means of a 3-tape weighted finite-state machine (3-WFSM) which reads a text chunk on tape 1 and an acronym on tape 2, and generates all alternative alignments on tape 3. The 3-WFSM can be automatically generated from a simple regular expression. No additional algorithms are required at any stage. Our 3-WFSM has a size of 27 states and 64 transitions, and finds the best analysis of an acronym in a few milliseconds.<|reference_end|>
arxiv
@article{kempe2006acronym-meaning, title={Acronym-Meaning Extraction from Corpora Using Multi-Tape Weighted Finite-State Machines}, author={Andr\'e Kempe}, journal={arXiv preprint arXiv:cs/0612033}, year={2006}, number={2006/019 (at Xerox Research Centre Europe, France)}, archivePrefix={arXiv}, eprint={cs/0612033}, primaryClass={cs.CL cs.DS cs.SC} }
kempe2006acronym-meaning
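To make the alignment idea in the abstract above concrete without reproducing the 3-WFSM itself, here is a plain dynamic-programming stand-in that scores alignments of an acronym against a text chunk and returns the cheapest one. The cost scheme (free word-initial matches, unit cost for word-internal matches and skipped words) is an assumption chosen for illustration, not the paper's weighting.

def best_alignment_cost(chunk, acronym):
    # Cost of the cheapest alignment between a text chunk and an acronym,
    # in the spirit of the weighted alignment the 3-WFSM computes.
    # Each acronym letter is matched inside one chunk word, left to right;
    # unmatched chunk words may be skipped at unit cost.
    words = chunk.lower().split()
    a = acronym.lower()
    INF = float('inf')
    # dp[i][j] = min cost of explaining acronym[:j] with words[:i]
    dp = [[INF] * (len(a) + 1) for _ in range(len(words) + 1)]
    dp[0][0] = 0
    for i in range(len(words)):
        for j in range(len(a) + 1):
            if dp[i][j] == INF:
                continue
            dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + 1)  # skip a word
            if j < len(a):
                w = words[i]
                if w.startswith(a[j]):          # word-initial match: free
                    dp[i + 1][j + 1] = min(dp[i + 1][j + 1], dp[i][j])
                elif a[j] in w:                 # word-internal match: cost 1
                    dp[i + 1][j + 1] = min(dp[i + 1][j + 1], dp[i][j] + 1)
    return dp[len(words)][len(a)]

print(best_alignment_cost("weighted finite state machines", "WFSM"))  # 0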
arxiv-675247
cs/0612034
Predictable Disruption Tolerant Networks and Delivery Guarantees
<|reference_start|>Predictable Disruption Tolerant Networks and Delivery Guarantees: This article studies disruption tolerant networks (DTNs) where each node knows the probabilistic distribution of contacts with other nodes. It proposes a framework that allows one to formalize the behaviour of such a network. It generalizes extreme cases that have been studied before where (a) either nodes only know their contact frequency with each other or (b) they have a perfect knowledge of who meets who and when. This paper then gives an example of how this framework can be used; it shows how one can find a packet forwarding algorithm optimized to meet the 'delay/bandwidth consumption' trade-off: packets are duplicated so as to (statistically) guarantee a given delay or delivery probability, but not too much so as to reduce the bandwidth, energy, and memory consumption.<|reference_end|>
arxiv
@article{francois2006predictable, title={Predictable Disruption Tolerant Networks and Delivery Guarantees}, author={Jean-Marc Francois and Guy Leduc}, journal={arXiv preprint arXiv:cs/0612034}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612034}, primaryClass={cs.NI} }
francois2006predictable
arxiv-675248
cs/0612035
Distributed Slicing in Dynamic Systems
<|reference_start|>Distributed Slicing in Dynamic Systems: Peer to peer (P2P) systems are moving from application-specific architectures to a generic service-oriented design philosophy. This raises interesting problems in connection with providing useful P2P middleware services that are capable of dealing with resource assignment and management in a large-scale, heterogeneous and unreliable environment. One such service, the slicing service, has been proposed to allow for an automatic partitioning of P2P networks into groups (slices) that represent a controllable amount of some resource and that are also relatively homogeneous with respect to that resource, in the face of churn and other failures. In this report we propose two algorithms to solve the distributed slicing problem. The first algorithm improves upon an existing algorithm that is based on gossip-based sorting of a set of uniform random numbers. We speed up convergence via a heuristic for gossip peer selection. The second algorithm is based on a different approach: statistical approximation of the rank of nodes in the ordering. The scalability, efficiency and resilience to dynamics of both algorithms relies on their gossip-based models. We present theoretical and experimental results to prove the viability of these algorithms.<|reference_end|>
arxiv
@article{fernandez2006distributed, title={Distributed Slicing in Dynamic Systems}, author={Antonio Fernandez (LADYR), Vincent Gramoli (IRISA), Ernesto Jimenez (EUI), Anne-Marie Kermarrec (IRISA), Michel Raynal (IRISA)}, journal={arXiv preprint arXiv:cs/0612035}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612035}, primaryClass={cs.DC} }
fernandez2006distributed
arxiv-675249
cs/0612036
Revisiting Matrix Product on Master-Worker Platforms
<|reference_start|>Revisiting Matrix Product on Master-Worker Platforms: This paper is aimed at designing efficient parallel matrix-product algorithms for heterogeneous master-worker platforms. While matrix-product is well-understood for homogeneous 2D-arrays of processors (e.g., Cannon algorithm and ScaLAPACK outer product algorithm), there are three key hypotheses that render our work original and innovative: - Centralized data. We assume that all matrix files originate from, and must be returned to, the master. - Heterogeneous star-shaped platforms. We target fully heterogeneous platforms, where computational resources have different computing powers. - Limited memory. Because we investigate the parallelization of large problems, we cannot assume that full matrix panels can be stored in the worker memories and re-used for subsequent updates (as in ScaLAPACK). We have devised efficient algorithms for resource selection (deciding which workers to enroll) and communication ordering (both for input and result messages), and we report a set of numerical experiments on various platforms at Ecole Normale Superieure de Lyon and the University of Tennessee. However, we point out that in this first version of the report, experiments are limited to homogeneous platforms.<|reference_end|>
arxiv
@article{dongarra2006revisiting, title={Revisiting Matrix Product on Master-Worker Platforms}, author={Jack Dongarra, Jean-Francois Pineau, Yves Robert, Zhiao Shi, and Frederic Vivien}, journal={arXiv preprint arXiv:cs/0612036}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612036}, primaryClass={cs.DC cs.MS} }
dongarra2006revisiting
arxiv-675250
cs/0612037
Least Significant Digit First Presburger Automata
<|reference_start|>Least Significant Digit First Presburger Automata: Since 1969 \cite{C-MST69,S-SMJ77}, we have known that any Presburger-definable set \cite{P-PCM29} (a set of integer vectors satisfying a formula in the first-order additive theory of the integers) can be represented by a state-based symbolic representation, called in this paper Finite Digit Vector Automata (FDVA). Efficient algorithms for manipulating these sets have recently been developed. However, deciding whether an FDVA represents such a set is a well-known hard problem, first solved by Muchnik in 1991 with a quadruply-exponential time algorithm. In this paper, we show how to determine in polynomial time whether an FDVA represents a Presburger-definable set, and in this positive case we provide a polynomial time algorithm that constructs a Presburger formula that defines the same set.<|reference_end|>
arxiv
@article{leroux2006least, title={Least Significant Digit First Presburger Automata}, author={J\'er\^ome Leroux (LaBRI)}, journal={arXiv preprint arXiv:cs/0612037}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612037}, primaryClass={cs.DS} }
leroux2006least
arxiv-675251
cs/0612038
Non-Archimedean analysis, T-functions, and cryptography
<|reference_start|>Non-Archimedean analysis, T-functions, and cryptography: These are lecture notes of a 20-hour course at the International Summer School \emph{Mathematical Methods and Technologies in Computer Security} at Lomonosov Moscow State University, July 9--23, 2006. Loosely speaking, a $T$-function is a map of $n$-bit words into $n$-bit words such that each $i$-th bit of the image depends only on the low-order bits $0,..., i$ of the pre-image. For example, all arithmetic operations (addition, multiplication) are $T$-functions, and all bitwise logical operations (XOR, AND, etc.) are $T$-functions. Any composition of $T$-functions is a $T$-function as well. Thus $T$-functions are natural computer word-oriented functions. It turns out that $T$-functions are continuous (and often differentiable!) functions with respect to the so-called 2-adic distance. This observation gives a powerful tool to apply 2-adic analysis to construct wide classes of $T$-functions with provable cryptographic properties (long period, balance, uniform distribution, high linear complexity, etc.); these functions are currently being used in a new generation of fast stream ciphers. We consider these ciphers as specific automata that can be associated with dynamical systems on the space of 2-adic integers. From this viewpoint, the lectures can be considered a course in cryptographic applications of non-Archimedean dynamics; the latter has recently attracted significant attention in connection with applications to physics, biology and cognitive sciences. During the course, listeners study the non-Archimedean machinery and its applications to stream cipher design.<|reference_end|>
arxiv
@article{anashin2006non-archimedean, title={Non-Archimedean analysis, T-functions, and cryptography}, author={Vladimir Anashin}, journal={arXiv preprint arXiv:cs/0612038}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612038}, primaryClass={cs.CR math.DS} }
anashin2006non-archimedean
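The defining property quoted above (bit i of the image depends only on bits 0,...,i of the pre-image) is easy to verify exhaustively for small word sizes, since it is equivalent to compatibility with reduction modulo 2^(i+1). The sketch below checks it for a Klimov-Shamir-style map built from the operations listed in the abstract; the 8-bit word size is an arbitrary demo choice.

def is_T_function(f, nbits):
    # Check: bit i of f(x) depends only on bits 0..i of x. Equivalently,
    # x = y (mod 2^(i+1)) implies f(x) = f(y) (mod 2^(i+1)) for all x, y.
    size = 1 << nbits
    for i in range(nbits):
        m = (1 << (i + 1)) - 1
        buckets = {}
        for x in range(size):
            lo, out = x & m, f(x) & m
            if buckets.setdefault(lo, out) != out:
                return False
    return True

nbits = 8
mask = (1 << nbits) - 1
f = lambda x: (x + ((x * x) | 5)) & mask      # composition of +, *, OR
print(is_T_function(f, nbits))                # True
print(is_T_function(lambda x: x >> 1, nbits)) # False: bit 0 needs bit 1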
arxiv-675252
cs/0612039
Computing the Equilibria of Bimatrix Games using Dominance Heuristics
<|reference_start|>Computing the Equilibria of Bimatrix Games using Dominance Heuristics: We propose a formulation of a general-sum bimatrix game as a bipartite directed graph with the objective of establishing a correspondence between the set of the relevant structures of the graph (in particular elementary cycles) and the set of the Nash equilibria of the game. We show that finding the set of elementary cycles of the graph permits the computation of the set of equilibria. For games whose graphs have a sparse adjacency matrix, this serves as a good heuristic for computing the set of equilibria. The heuristic also allows the discarding of sections of the support space that do not yield any equilibrium, thus serving as a useful pre-processing step for algorithms that compute the equilibria through support enumeration.<|reference_end|>
arxiv
@article{aras2006computing, title={Computing the Equilibria of Bimatrix Games using Dominance Heuristics}, author={Raghav Aras (INRIA Lorraine - LORIA), Alain Dutech (INRIA Lorraine - LORIA), Fran\c{c}ois Charpillet (INRIA Lorraine - LORIA)}, journal={arXiv preprint arXiv:cs/0612039}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612039}, primaryClass={cs.GT} }
aras2006computing
arxiv-675253
cs/0612040
The Workshop on Internet Topology (WIT) Report
<|reference_start|>The Workshop on Internet Topology (WIT) Report: Internet topology analysis has recently experienced a surge of interest in computer science, physics, and the mathematical sciences. However, researchers from these different disciplines tend to approach the same problem from different angles. As a result, the field of Internet topology analysis and modeling must untangle sets of inconsistent findings, conflicting claims, and contradicting statements. On May 10-12, 2006, CAIDA hosted the Workshop on Internet topology (WIT). By bringing together a group of researchers spanning the areas of computer science, physics, and the mathematical sciences, the workshop aimed to improve communication across these scientific disciplines, enable interdisciplinary cross-fertilization, identify commonalities in the different approaches, promote synergy where it exists, and utilize the richness that results from exploring similar problems from multiple perspectives. This report describes the findings of the workshop, outlines a set of relevant open research problems identified by participants, and concludes with recommendations that can benefit all scientific communities interested in Internet topology research.<|reference_end|>
arxiv
@article{krioukov2006the, title={The Workshop on Internet Topology (WIT) Report}, author={Dmitri Krioukov, Fan Chung, kc claffy, Marina Fomenkov, Alessandro Vespignani, Walter Willinger}, journal={ACM SIGCOMM Computer Communication Review (CCR), v.37, n.1, p.69-73, 2007}, year={2006}, doi={10.1145/1198255.1198267}, archivePrefix={arXiv}, eprint={cs/0612040}, primaryClass={cs.NI} }
krioukov2006the
arxiv-675254
cs/0612041
Viterbi Algorithm Generalized for n-Tape Best-Path Search
<|reference_start|>Viterbi Algorithm Generalized for n-Tape Best-Path Search: We present a generalization of the Viterbi algorithm for identifying the path with minimal (resp. maximal) weight in an n-tape weighted finite-state machine (n-WFSM) that accepts a given n-tuple of input strings (s_1,... s_n). It also allows us to compile the best transduction of a given input n-tuple by a weighted (n+m)-WFSM (transducer) with n input and m output tapes. Our algorithm has a worst-case time complexity of O(|s|^n |E| log (|s|^n |Q|)), where n and |s| are the number and average length of the strings in the n-tuple, and |Q| and |E| are the number of states and transitions in the n-WFSM, respectively. A straightforward alternative, consisting of intersection followed by classical shortest-distance search, operates in O(|s|^n (|E|+|Q|) log (|s|^n |Q|)) time.<|reference_end|>
arxiv
@article{kempe2006viterbi, title={Viterbi Algorithm Generalized for n-Tape Best-Path Search}, author={Andr\'e Kempe}, journal={Proc. FSMNLP 2009, Pretoria, South Africa. July 21-24. (improved version).}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612041}, primaryClass={cs.CL cs.DS cs.SC} }
kempe2006viterbi
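For contrast with the n-tape generalization above, the classical single-tape best-path search being generalized fits in a few lines: dynamic programming over (position in the string, automaton state), with backpointers to recover the path. The two-state machine at the bottom is a made-up example; tropical (min, +) weights are assumed.

import math

def viterbi_best_path(transitions, start, finals, s):
    # Minimum-weight path through a weighted automaton that accepts s.
    # transitions: dict (state, symbol) -> list of (next_state, weight).
    dist = {start: 0.0}
    back = {}
    for t, sym in enumerate(s):
        nxt = {}
        for q, d in dist.items():
            for q2, w in transitions.get((q, sym), []):
                if d + w < nxt.get(q2, math.inf):
                    nxt[q2] = d + w
                    back[(t + 1, q2)] = q
        dist = nxt
    end = min((q for q in dist if q in finals), key=dist.get, default=None)
    if end is None:
        return None, math.inf
    path = [end]
    for t in range(len(s), 0, -1):   # follow backpointers to the start
        path.append(back[(t, path[-1])])
    return path[::-1], dist[end]

# Hypothetical 2-state machine over {a, b}, final state 1.
T = {(0, 'a'): [(0, 1.0), (1, 0.2)], (1, 'a'): [(1, 0.5)], (1, 'b'): [(1, 0.1)]}
print(viterbi_best_path(T, 0, {1}, "aab"))   # ([0, 1, 1, 1], 0.8)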
arxiv-675255
cs/0612042
Decentralized Maximum Likelihood Estimation for Sensor Networks Composed of Nonlinearly Coupled Dynamical Systems
<|reference_start|>Decentralized Maximum Likelihood Estimation for Sensor Networks Composed of Nonlinearly Coupled Dynamical Systems: In this paper we propose a decentralized sensor network scheme capable of reaching a globally optimal maximum likelihood (ML) estimate through self-synchronization of nonlinearly coupled dynamical systems. Each node of the network is composed of a sensor and a first-order dynamical system initialized with the local measurements. Nearby nodes interact with each other by exchanging their state values, and the final estimate is associated with the state derivative of each dynamical system. We derive the conditions on the coupling mechanism guaranteeing that, if the network observes one common phenomenon, each node converges to the globally optimal ML estimate. We prove that the synchronized state is globally asymptotically stable if the coupling strength exceeds a given threshold. Acting on a single parameter, the coupling strength, we show how, in the case of nonlinear coupling, the network behavior can switch from a global consensus system to a spatial clustering system. Finally, we show the effect of the network topology on the scalability properties of the network and we validate our theoretical findings with simulation results.<|reference_end|>
arxiv
@article{barbarossa2006decentralized, title={Decentralized Maximum Likelihood Estimation for Sensor Networks Composed of Nonlinearly Coupled Dynamical Systems}, author={Sergio Barbarossa and Gesualdo Scutari}, journal={arXiv preprint arXiv:cs/0612042}, year={2006}, doi={10.1109/TSP.2007.893921}, archivePrefix={arXiv}, eprint={cs/0612042}, primaryClass={cs.DC cs.IT math.IT} }
barbarossa2006decentralized
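The simplest instance of the scheme described above is linear coupling on a fixed graph: each node integrates a first-order system initialized with its measurement, and the coupled dynamics dx/dt = -Lx (L the graph Laplacian) drive all states to the sample mean, which is the global ML estimate for a common value observed in i.i.d. Gaussian noise. The paper's contribution concerns nonlinear coupling and the threshold on the coupling strength; the sketch below only illustrates the linear consensus mechanism, on a hypothetical ring topology.

import numpy as np

rng = np.random.default_rng(1)
n = 6
A = np.array([[0, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [1, 0, 0, 0, 1, 0]], float)   # ring adjacency (hypothetical)
L = np.diag(A.sum(1)) - A                    # graph Laplacian
x = 5.0 + rng.normal(0, 1, n)                # noisy measurements of a common value
eps = 0.2                                    # coupling strength / Euler step size
for _ in range(200):
    x = x - eps * (L @ x)                    # Euler step of dx/dt = -L x
print(x)          # all states close to the sample mean ...
print(x.mean())   # ... which is the ML estimate under Gaussian noise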
arxiv-675256
cs/0612043
About the Lifespan of Peer to Peer Networks
<|reference_start|>About the Lifespan of Peer to Peer Networks: We analyze the ability of peer-to-peer networks to deliver a complete file among the peers. Early on, we motivate a broad generalization of network behavior, organizing it into two successive phases. According to this view the network has two main states: first a centralized state, where a few sources (roots) hold the complete file, and then a distributed state, where peers hold some parts (chunks) of the file such that the entire network has the whole file, but no individual has it. In the distributed state we study two scenarios: first, when the peers are ``patient'', i.e., they do not leave the system until they obtain the complete file; second, when peers are ``impatient'' and almost always leave the network before obtaining the complete file.<|reference_end|>
arxiv
@article{cilibrasi2006about, title={About the Lifespan of Peer to Peer Networks}, author={R. Cilibrasi (CWI), Z. Lotker (CWI), A. Navarra (LaBRI - Univ. Bordeaux 1), S. Perennes (CNRS/INRIA/Univ. Nice), P. Vitanyi (CWI/Univ. Amsterdam)}, journal={arXiv preprint arXiv:cs/0612043}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612043}, primaryClass={cs.DC cs.IR} }
cilibrasi2006about
arxiv-675257
cs/0612044
The Relay-Eavesdropper Channel: Cooperation for Secrecy
<|reference_start|>The Relay-Eavesdropper Channel: Cooperation for Secrecy: This paper establishes the utility of user cooperation in facilitating secure wireless communications. In particular, the four-terminal relay-eavesdropper channel is introduced and an outer bound on the optimal rate-equivocation region is derived. Several cooperation strategies are then devised and the corresponding achievable rate-equivocation regions are characterized. Of particular interest is the novel Noise-Forwarding (NF) strategy, where the relay node sends codewords independent of the source message to confuse the eavesdropper. This strategy is used to illustrate the deaf helper phenomenon, where the relay is able to facilitate secure communications while being totally ignorant of the transmitted messages. Furthermore, NF is shown to increase the secrecy capacity in the reversely degraded scenario, where the relay node fails to offer performance gains in the classical setting. The gain offered by the proposed cooperation strategies is then proved theoretically and validated numerically in the additive white Gaussian noise (AWGN) channel.<|reference_end|>
arxiv
@article{lai2006the, title={The Relay-Eavesdropper Channel: Cooperation for Secrecy}, author={Lifeng Lai and Hesham El Gamal}, journal={arXiv preprint arXiv:cs/0612044}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612044}, primaryClass={cs.IT math.IT} }
lai2006the
arxiv-675258
cs/0612045
SIMPS: Using Sociology for Personal Mobility
<|reference_start|>SIMPS: Using Sociology for Personal Mobility: Assessing mobility in a thorough fashion is a crucial step toward more efficient mobile network design. Recent research on mobility has focused on two main points: analyzing models and studying their impact on data transport. These works investigate the consequences of mobility. In this paper, instead, we focus on the causes of mobility. Starting from established research in sociology, we propose SIMPS, a mobility model of human crowd motion. This model defines two complementary behaviors, namely socialize and isolate, that regulate an individual with regard to her/his own sociability level. SIMPS leads to results that agree with scaling laws observed both in small-scale and large-scale human motion. Although our model defines only two simple individual behaviors, we observe many emerging collective behaviors (group formation/splitting, path formation, and evolution). To our knowledge, SIMPS is the first model in the networking community that tackles the roots governing mobility.<|reference_end|>
arxiv
@article{borrel2006simps:, title={SIMPS: Using Sociology for Personal Mobility}, author={Vincent Borrel, Franck Legendre, Marcelo Dias de Amorim, Serge Fdida}, journal={arXiv preprint arXiv:cs/0612045}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612045}, primaryClass={cs.NI} }
borrel2006simps:
arxiv-675259
cs/0612046
Social Networks and Social Information Filtering on Digg
<|reference_start|>Social Networks and Social Information Filtering on Digg: The new social media sites -- blogs, wikis, Flickr and Digg, among others -- underscore the transformation of the Web to a participatory medium in which users are actively creating, evaluating and distributing information. Digg is a social news aggregator which allows users to submit links to, vote on and discuss news stories. Each day Digg selects a handful of stories to feature on its front page. Rather than rely on the opinion of a few editors, Digg aggregates opinions of thousands of its users to decide which stories to promote to the front page. Digg users can designate other users as ``friends'' and easily track friends' activities: what new stories they submitted, commented on or read. The friends interface acts as a \emph{social filtering} system, recommending to a user the stories his or her friends liked or found interesting. By tracking the votes received by newly submitted stories over time, we showed that social filtering is an effective information filtering approach. Specifically, we showed that (a) users tend to like stories submitted by friends and (b) users tend to like stories their friends read and liked. As a byproduct of social filtering, social networks also play a role in promoting stories to Digg's front page, potentially leading to a ``tyranny of the minority'' situation where a disproportionate number of front page stories comes from the same small group of interconnected users. Despite this, social filtering is a promising new technology that can be used to personalize and tailor information to individual users: for example, through personal front pages.<|reference_end|>
arxiv
@article{lerman2006social, title={Social Networks and Social Information Filtering on Digg}, author={Kristina Lerman}, journal={arXiv preprint arXiv:cs/0612046}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612046}, primaryClass={cs.HC cs.AI cs.IR} }
lerman2006social
arxiv-675260
cs/0612047
Social Browsing on Flickr
<|reference_start|>Social Browsing on Flickr: The new social media sites - blogs, wikis, del.icio.us and Flickr, among others - underscore the transformation of the Web to a participatory medium in which users are actively creating, evaluating and distributing information. The photo-sharing site Flickr, for example, allows users to upload photographs, view photos created by others, comment on those photos, etc. As is common to other social media sites, Flickr allows users to designate others as ``contacts'' and to track their activities in real time. The contacts (or friends) lists form the social network backbone of social media sites. We claim that these social networks facilitate new ways of interacting with information, e.g., through what we call social browsing. The contacts interface on Flickr enables users to see latest images submitted by their friends. Through an extensive analysis of Flickr data, we show that social browsing through the contacts' photo streams is one of the primary methods by which users find new images on Flickr. This finding has implications for creating personalized recommendation systems based on the user's declared contacts lists.<|reference_end|>
arxiv
@article{lerman2006social, title={Social Browsing on Flickr}, author={Kristina Lerman and Laurie Jones}, journal={arXiv preprint arXiv:cs/0612047}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612047}, primaryClass={cs.HC cs.AI} }
lerman2006social
arxiv-675261
cs/0612048
Queue Model of Leaf Degree Keeping Process in Gnutella Network
<|reference_start|>Queue Model of Leaf Degree Keeping Process in Gnutella Network: The leaf degree keeping process of Gnutella is discussed in this paper. A queue system based on the rules of the Gnutella protocol is introduced to model this process. The leaf degree distributions resulting from the queue system and from our real measurements are compared. The close match of those distributions reveals that the leaf degree distribution in the Gnutella network should not be power law or power-law-like as reported before. It is more likely a distribution driven by a certain queue process specified by the protocol.<|reference_end|>
arxiv
@article{li2006queue, title={Queue Model of Leaf Degree Keeping Process in Gnutella Network}, author={Chunxi Li and Changjia Chen}, journal={arXiv preprint arXiv:cs/0612048}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612048}, primaryClass={cs.NI cs.PF} }
li2006queue
arxiv-675262
cs/0612049
Power Control in Distributed Cooperative OFDMA Cellular Networks
<|reference_start|>Power Control in Distributed Cooperative OFDMA Cellular Networks: This paper has been withdrawn by the author.<|reference_end|>
arxiv
@article{pischella2006power, title={Power Control in Distributed Cooperative OFDMA Cellular Networks}, author={Mylene Pischella, Jean-Claude Belfiore}, journal={arXiv preprint arXiv:cs/0612049}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612049}, primaryClass={cs.IT math.IT} }
pischella2006power
arxiv-675263
cs/0612050
Explicit factors of some iterated resultants and discriminants
<|reference_start|>Explicit factors of some iterated resultants and discriminants: In this paper, the result of applying iterative univariate resultant constructions to multivariate polynomials is analyzed. We consider the input polynomials as generic polynomials of a given degree and exhibit explicit decompositions into irreducible factors of several constructions involving twice-iterated univariate resultants and discriminants over the integer universal ring of coefficients of the input polynomials. Cases involving from two to four generic polynomials and resultants or discriminants in one of their variables are treated. The decompositions into irreducible factors we obtain are derived by exploiting fundamental properties of the univariate resultants and discriminants and induction on the degree of the polynomials. As a consequence, each irreducible factor can be separately and explicitly computed in terms of a certain multivariate resultant. With this approach, we also obtain as direct corollaries some results conjectured by Collins and McCallum which correspond to the case of polynomials whose coefficients are themselves generic polynomials in other variables. Finally, a geometric interpretation of the algebraic factorization of the iterated discriminant of a single polynomial is detailed.<|reference_end|>
arxiv
@article{busé2006explicit, title={Explicit factors of some iterated resultants and discriminants}, author={Laurent Bus\'e (INRIA Sophia Antipolis), Bernard Mourrain (INRIA Sophia Antipolis)}, journal={Mathematics of Computation / Mathematics of Computation of the American Mathematical Society 78 (2009) 345--386}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612050}, primaryClass={cs.SC math.AC math.AG} }
busé2006explicit
arxiv-675264
cs/0612051
On the Decoder Error Probability of Bounded Rank-Distance Decoders for Maximum Rank Distance Codes
<|reference_start|>On the Decoder Error Probability of Bounded Rank-Distance Decoders for Maximum Rank Distance Codes: In this paper, we first introduce the concept of elementary linear subspace, which has similar properties to those of a set of coordinates. We then use elementary linear subspaces to derive properties of maximum rank distance (MRD) codes that parallel those of maximum distance separable codes. Using these properties, we show that, for MRD codes with error correction capability t, the decoder error probability of bounded rank distance decoders decreases exponentially with t^2 based on the assumption that all errors with the same rank are equally likely.<|reference_end|>
arxiv
@article{gadouleau2006on, title={On the Decoder Error Probability of Bounded Rank-Distance Decoders for Maximum Rank Distance Codes}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:cs/0612051}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612051}, primaryClass={cs.IT math.IT} }
gadouleau2006on
arxiv-675265
cs/0612052
Budget Optimization in Search-Based Advertising Auctions
<|reference_start|>Budget Optimization in Search-Based Advertising Auctions: Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been a lot of attention on the auction process and its game-theoretic aspects, our focus is on the advertisers. In particular, the advertisers have to solve a complex optimization problem of how to place bids on the keywords of their interest so that they can maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least 1-1/e fraction of the maximum clicks possible. Such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.<|reference_end|>
arxiv
@article{feldman2006budget, title={Budget Optimization in Search-Based Advertising Auctions}, author={Jon Feldman, S. Muthukrishnan, Martin Pal, Cliff Stein}, journal={arXiv preprint arXiv:cs/0612052}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612052}, primaryClass={cs.DS cs.CE cs.GT} }
feldman2006budget
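The randomized uniform strategy highlighted in the abstract above is simple enough to spell out: evaluate the (cost, clicks) of bidding each candidate value uniformly on every keyword, then mix a low bid and a high bid so the expected spend equals the budget. The bid-landscape format and the numbers below are hypothetical, and landscapes are assumed monotone (higher bids cost more and win more clicks); the 1-1/e guarantee is the paper's result, not something this sketch proves.

def eval_uniform(landscapes, bid):
    # Total (cost, clicks) when bidding `bid` uniformly on every keyword.
    # Each landscape is a list of (min_bid, cost, clicks) rows sorted by
    # min_bid; the highest affordable row wins (hypothetical model).
    cost = clicks = 0.0
    for rows in landscapes.values():
        won = [(c, k) for b, c, k in rows if b <= bid]
        if won:
            c, k = won[-1]
            cost += c
            clicks += k
    return cost, clicks

def two_bid_mixture(landscapes, budget, bids):
    # Pick bids b_low <= b_high with cost(b_low) <= budget <= cost(b_high)
    # and mix them so the expected spend equals the budget, as in the
    # abstract's randomized uniform strategy.
    pts = sorted((eval_uniform(landscapes, b), b) for b in bids)
    lo = max((p for p in pts if p[0][0] <= budget), default=None)
    hi = min((p for p in pts if p[0][0] >= budget), default=None)
    if hi is None:          # every bid is affordable: play the best one
        hi = lo
    if lo is None:          # even the cheapest bid exceeds the budget
        lo = hi
    (c1, k1), b1 = lo
    (c2, k2), b2 = hi
    p = 1.0 if c1 == c2 else (c2 - budget) / (c2 - c1)  # prob. of low bid
    return p, (b1, b2), p * k1 + (1 - p) * k2

landscapes = {"kw1": [(0.5, 1.0, 10), (1.0, 3.0, 25)],
              "kw2": [(0.8, 2.0, 15)]}
print(two_bid_mixture(landscapes, budget=4.0, bids=[0.5, 0.8, 1.0]))
# (0.5, (0.8, 1.0), 32.5): mix bids 0.8 and 1.0 with equal probability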
arxiv-675266
cs/0612053
Deriving Schrodinger Equation From A Soft-Decision Iterative Decoding Algorithm
<|reference_start|>Deriving Schrodinger Equation From A Soft-Decision Iterative Decoding Algorithm: The belief propagation algorithm has been recognized in the information theory community as a soft-decision iterative decoding algorithm. It is the most powerful algorithm found so far for attacking hard optimization problems in channel decoding. Quantum mechanics is the foundation of modern physics, with the time-independent Schrodinger equation being one of its most important equations. This paper shows that the equation can be derived from a generalized belief propagation algorithm. Such a mathematical connection might offer new insights into the foundations of quantum mechanics and quantum computing.<|reference_end|>
arxiv
@article{huang2006deriving, title={Deriving Schrodinger Equation From A Soft-Decision Iterative Decoding Algorithm}, author={Xiaofei Huang, Xiaowu Huang}, journal={arXiv preprint arXiv:cs/0612053}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612053}, primaryClass={cs.IT math.IT} }
huang2006deriving
arxiv-675267
cs/0612054
Lightweight security mechanism for PSTN-VoIP cooperation
<|reference_start|>Lightweight security mechanism for PSTN-VoIP cooperation: In this paper we describe a new, lightweight security mechanism for PSTN-VoIP cooperation that is based on two information hiding techniques: digital watermarking and steganography. The proposed scheme is especially suitable for the PSTN-IP-PSTN (toll-bypassing) scenario, which is nowadays a very popular application of IP telephony systems. With the use of this mechanism we authenticate end-to-end transmitted voice between PSTN users. Additionally, we improve the security of the traffic in the IP part (both the media stream and the VoIP signalling messages). An exemplary scenario is presented for the SIP signalling protocol along with the SIP-T extension and the H.248/Megaco protocol.<|reference_end|>
arxiv
@article{mazurczyk2006lightweight, title={Lightweight security mechanism for PSTN-VoIP cooperation}, author={Wojciech Mazurczyk, Zbigniew Kotulski}, journal={arXiv preprint arXiv:cs/0612054}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612054}, primaryClass={cs.CR cs.MM} }
mazurczyk2006lightweight
arxiv-675268
cs/0612055
Linear Probing with Constant Independence
<|reference_start|>Linear Probing with Constant Independence: Hashing with linear probing dates back to the 1950s and is among the most studied algorithms. In recent years it has become one of the most important hash table organizations, since it uses the cache of modern computers very well. Unfortunately, previous analyses rely either on complicated and space-consuming hash functions, or on the unrealistic assumption of free access to a truly random hash function. Already Carter and Wegman, in their seminal paper on universal hashing, raised the question of extending their analysis to linear probing. However, we show in this paper that linear probing using a pairwise independent family may have expected {\em logarithmic} cost per operation. On the positive side, we show that 5-wise independence is enough to ensure constant expected time per operation. This resolves the question of finding a space and time efficient hash function that provably ensures good performance for linear probing.<|reference_end|>
arxiv
@article{pagh2006linear, title={Linear Probing with Constant Independence}, author={Anna Pagh and Rasmus Pagh and Milan Ruzic}, journal={arXiv preprint arXiv:cs/0612055}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612055}, primaryClass={cs.DS cs.DB} }
pagh2006linear
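The positive result above is concrete enough to sketch: linear probing driven by a 5-wise independent family. The textbook such family is a uniformly random polynomial of degree 4 over a prime field; the prime, table size, and keys below are illustrative choices, not the paper's.

import random

P = (1 << 61) - 1                      # Mersenne prime larger than any key
SIZE = 1 << 10                         # table size (illustrative)

coeffs = [random.randrange(P) for _ in range(5)]

def h(x):
    # Degree-4 polynomial with uniformly random coefficients over GF(P),
    # evaluated by Horner's rule: a 5-wise independent hash family.
    v = 0
    for c in coeffs:
        v = (v * x + c) % P
    return v % SIZE

table = [None] * SIZE

def insert(key):
    i = h(key)
    while table[i] is not None and table[i] != key:
        i = (i + 1) % SIZE             # linear probe to the next slot
    table[i] = key

def lookup(key):
    i = h(key)
    while table[i] is not None:
        if table[i] == key:
            return True
        i = (i + 1) % SIZE
    return False                       # hit an empty slot: key absent

for k in range(700):                   # fill to roughly 70% load
    insert(k * k + 1)
print(lookup(10), lookup(7))           # True False (10 = 3*3 + 1)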
arxiv-675269
cs/0612056
Conscious Intelligent Systems - Part 1 : I X I
<|reference_start|>Conscious Intelligent Systems - Part 1 : I X I: Did natural consciousness and intelligent systems arise out of a path that was co-evolutionary with evolution? Can we explain human self-consciousness as having arisen out of such an evolutionary path? If so, how could it have been? In this first part of a two-part paper (titled IXI), we take a learning system perspective on the problem of consciousness and intelligent systems, an approach that may look unseasonable in this age of fMRIs and high-tech neuroscience. We posit conscious intelligent systems in natural environments and wonder how natural factors influence their design paths. Such a perspective allows us to explain seamlessly a variety of natural factors, ranging from the rise and presence of the human mind, man's sense of I, his self-consciousness and his looping thought processes to factors like reproduction, incubation, extinction, sleep, the richness of natural behavior, etc. It even allows us to speculate on a possible human evolution scenario and other natural phenomena.<|reference_end|>
arxiv
@article{gayathree2006conscious, title={Conscious Intelligent Systems - Part 1 : I X I}, author={U. Gayathree}, journal={arXiv preprint arXiv:cs/0612056}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612056}, primaryClass={cs.AI} }
gayathree2006conscious
arxiv-675270
cs/0612057
Conscious Intelligent Systems - Part II - Mind, Thought, Language and Understanding
<|reference_start|>Conscious Intelligent Systems - Part II - Mind, Thought, Language and Understanding: This is the second part of a paper on Conscious Intelligent Systems. We use the understanding gained in the first part (Conscious Intelligent Systems Part 1: IXI (arxiv id cs.AI/0612056)) to look at understanding. We see how the presence of mind affects understanding and intelligent systems; we see that the presence of mind necessitates language. The rise of language in turn has important effects on understanding. We discuss the humanoid question and how the question of self-consciousness (and by association mind/thought/language) would affect humanoids too.<|reference_end|>
arxiv
@article{gayathree2006conscious, title={Conscious Intelligent Systems - Part II - Mind, Thought, Language and Understanding}, author={U. Gayathree}, journal={arXiv preprint arXiv:cs/0612057}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612057}, primaryClass={cs.AI} }
gayathree2006conscious
arxiv-675271
cs/0612058
Adaptive Simulated Annealing: A Near-optimal Connection between Sampling and Counting
<|reference_start|>Adaptive Simulated Annealing: A Near-optimal Connection between Sampling and Counting: We present a near-optimal reduction from approximately counting the cardinality of a discrete set to approximately sampling elements of the set. An important application of our work is to approximating the partition function $Z$ of a discrete system, such as the Ising model, matchings or colorings of a graph. The typical approach to estimating the partition function $Z(\beta^*)$ at some desired inverse temperature $\beta^*$ is to define a sequence, which we call a {\em cooling schedule}, $\beta_0=0<\beta_1<...<\beta_\ell=\beta^*$ where Z(0) is trivial to compute and the ratios $Z(\beta_{i+1})/Z(\beta_i)$ are easy to estimate by sampling from the distribution corresponding to $Z(\beta_i)$. Previous approaches required a cooling schedule of length $O^*(\ln{A})$ where $A=Z(0)$, thereby ensuring that each ratio $Z(\beta_{i+1})/Z(\beta_i)$ is bounded. We present a cooling schedule of length $\ell=O^*(\sqrt{\ln{A}})$. For well-studied problems such as estimating the partition function of the Ising model, or approximating the number of colorings or matchings of a graph, our cooling schedule is of length $O^*(\sqrt{n})$, which implies an overall savings of $O^*(n)$ in the running time of the approximate counting algorithm (since roughly $\ell$ samples are needed to estimate each ratio).<|reference_end|>
arxiv
@article{stefankovic2006adaptive, title={Adaptive Simulated Annealing: A Near-optimal Connection between Sampling and Counting}, author={Daniel Stefankovic, Santosh Vempala, Eric Vigoda}, journal={arXiv preprint arXiv:cs/0612058}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612058}, primaryClass={cs.DS cs.DM} }
stefankovic2006adaptive
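The role of the cooling schedule above is easiest to see from the telescoping identity it rests on. Writing the partition function in Gibbs form, Z(\beta) = \sum_x e^{-\beta H(x)} (an assumption matching the Ising-type examples in the abstract), each ratio in the product is an expectation under the current Gibbs distribution and can be estimated by sampling:

\[
  Z(\beta^*) \;=\; Z(0)\,\prod_{i=0}^{\ell-1}\frac{Z(\beta_{i+1})}{Z(\beta_i)},
  \qquad
  \frac{Z(\beta_{i+1})}{Z(\beta_i)}
  \;=\; \mathbb{E}_{x \sim \pi_{\beta_i}}\!\left[e^{-(\beta_{i+1}-\beta_i)\,H(x)}\right],
  \qquad
  \pi_{\beta}(x) \propto e^{-\beta H(x)}.
\]

Fewer schedule steps means fewer ratios to estimate (and, as the abstract notes, roughly \ell samples are needed per ratio), which is where the overall savings of the shorter schedule comes from.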
arxiv-675272
cs/0612059
Synchronization recovery and state model reduction for soft decoding of variable length codes
<|reference_start|>Synchronization recovery and state model reduction for soft decoding of variable length codes: Variable length codes exhibit de-synchronization problems when transmitted over noisy channels. Trellis decoding techniques based on Maximum A Posteriori (MAP) estimators are often used to minimize the error rate on the estimated sequence. If the number of symbols and/or bits transmitted is known by the decoder, termination constraints can be incorporated in the decoding process. All the paths in the trellis which do not lead to a valid sequence length are suppressed. This paper presents an analytic method to assess the expected error resilience of a VLC when trellis decoding with a sequence length constraint is used. The approach is based on the computation, for a given code, of the amount of information brought by the constraint. It is then shown that this quantity, as well as the probability that the VLC decoder does not re-synchronize in the strict sense, is not significantly altered by appropriate aggregation of trellis states. This proves that the performance obtained by running a length-constrained Viterbi decoder on aggregated state models approaches the one obtained with the bit/symbol trellis, with a significantly reduced complexity. It is then shown that the complexity can be further decreased by projecting the state model on two state models of reduced size.<|reference_end|>
arxiv
@article{malinowski2006synchronization, title={Synchronization recovery and state model reduction for soft decoding of variable length codes}, author={Simon Malinowski (IRISA / INRIA Rennes), Herv\'e J\'egou (IRISA / INRIA Rennes, INRIA Rh\^one-Alpes / GRAVIR-IMAG), Christine Guillemot (IRISA / INRIA Rennes)}, journal={IEEE transactions on information theory (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612059}, primaryClass={cs.NI cs.IT math.IT} }
malinowski2006synchronization
arxiv-675273
cs/0612060
The Common Prefix Problem On Trees
<|reference_start|>The Common Prefix Problem On Trees: We present a theoretical study of a problem arising in database query optimization, which we call the Common Prefix Problem. We present a $(1-o(1))$ factor approximation algorithm for this problem when the underlying graph is a binary tree. We then use a result of Feige and Kogan to show that even on stars, the problem is hard to approximate.<|reference_end|>
arxiv
@article{kenkre2006the, title={The Common Prefix Problem On Trees}, author={Sreyash Kenkre, Sundar Vishwanathan}, journal={arXiv preprint arXiv:cs/0612060}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612060}, primaryClass={cs.DS cs.CC} }
kenkre2006the
arxiv-675274
cs/0612061
Trustworthy content push
<|reference_start|>Trustworthy content push: Delivery of content to mobile devices gains increasing importance in industrial environments to support employees in the field. An important application is e-mail push services like the fashionable Blackberry. These systems face security challenges regarding the transport of data to, and its storage on, the end-user equipment. The emerging Trusted Computing technology offers new answers to these open questions.<|reference_end|>
arxiv
@article{kuntze2006trustworthy, title={Trustworthy content push}, author={Nicolai Kuntze and Andreas U. Schmidt}, journal={Wireless Communications and Networking Conference, 2007. WCNC 2007. IEEE, March 2007, pp. 2909-2912}, year={2006}, doi={10.1109/WCNC.2007.539}, archivePrefix={arXiv}, eprint={cs/0612061}, primaryClass={cs.CR} }
kuntze2006trustworthy
arxiv-675275
cs/0612062
Unifying Lexicons in view of a Phonological and Morphological Lexical DB
<|reference_start|>Unifying Lexicons in view of a Phonological and Morphological Lexical DB: The present work falls in the line of activities promoted by the European Language Resources Association (ELRA) Production Committee (PCom) and raises issues in methods, procedures and tools for the reusability, creation, and management of Language Resources. A two-fold purpose lies behind this experiment. The first aim is to investigate the feasibility of, and define methods and procedures for, combining two Italian lexical resources that have incompatible formats and complementary information into a Unified Lexicon (UL). The adopted strategy and procedures are described together with the driving criterion of the merging task, where a balance between human and computational efforts is pursued. The coverage of the UL has been maximized by making use of simple and fast matching procedures. The second aim is to exploit this newly obtained resource for implementing the phonological and morphological layers of the CLIPS lexical database. Implementing these new layers and linking them with the already existing syntactic and semantic layers is not a trivial task. The constraints imposed by the model, the impact at the architectural level and the solution adopted in order to make the whole database `speak' efficiently are presented. Advantages vs. disadvantages are discussed.<|reference_end|>
arxiv
@article{calzolari2006unifying, title={Unifying Lexicons in view of a Phonological and Morphological Lexical DB}, author={Federico Calzolari, Michele Mammini, Monica Monachini}, journal={arXiv preprint arXiv:cs/0612062}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612062}, primaryClass={cs.IR} }
calzolari2006unifying
arxiv-675276
cs/0612063
Improving Precision of Type Analysis Using Non-Discriminative Union
<|reference_start|>Improving Precision of Type Analysis Using Non-Discriminative Union: This paper presents a new type analysis for logic programs. The analysis is performed with a priori type definitions, and type expressions are formed from a fixed alphabet of type constructors. Non-discriminative union is used to join type information from different sources without loss of precision. An operation that is performed repeatedly during an analysis is to detect if a fixpoint has been reached. This is reduced to checking the emptiness of types. Due to the use of non-discriminative union, the fundamental problem of checking the emptiness of types is more complex in the proposed type analysis than in other type analyses with a priori type definitions. The experimental results, however, show that use of tabling reduces the effect to a small fraction of analysis time on a set of benchmarks. Keywords: Type analysis, Non-discriminative union, Abstract interpretation, Tabling<|reference_end|>
arxiv
@article{lu2006improving, title={Improving Precision of Type Analysis Using Non-Discriminative Union}, author={Lunjin Lu}, journal={Theory and Practice of Logic Programming, 8 (1): 33-80, 2008}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612063}, primaryClass={cs.PL} }
lu2006improving
arxiv-675277
cs/0612064
Bounds on Key Appearance Equivocation for Substitution Ciphers
<|reference_start|>Bounds on Key Appearance Equivocation for Substitution Ciphers: The average conditional entropy of the key given the message and its corresponding cryptogram, H(K|M,C), which is referred to as the key appearance equivocation, was proposed by Dunham in 1980 as a theoretical measure of the strength of a cipher system under a known-plaintext attack. In the same work (among other things), lower and upper bounds for H(S_{M}|M^L,C^L) are found and its asymptotic behaviour as a function of the cryptogram length L is described for simple substitution ciphers, i.e., when the key space S_{M} is the symmetric group acting on a discrete alphabet M. In the present paper we consider the same problem when the key space is an arbitrary subgroup K of S_{M} and generalize Dunham's result.<|reference_end|>
arxiv
@article{borissov2006bounds, title={Bounds on Key Appearance Equivocation for Substitution Ciphers}, author={Yuri Borissov and Moon Ho Lee}, journal={arXiv preprint arXiv:cs/0612064}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612064}, primaryClass={cs.IT cs.CR math.IT} }
borissov2006bounds
arxiv-675278
cs/0612065
An equilibrium model for matching impatient demand and patient supply over time
<|reference_start|>An equilibrium model for matching impatient demand and patient supply over time: We present a simple dynamic equilibrium model for an online exchange where both buyers and sellers arrive according to an exogenously defined stochastic process. The structure of this exchange is motivated by the limit order book mechanism used in stock markets. Both buyers and sellers are elastic in the price-quantity space; however, only the sellers are assumed to be patient, i.e., only the sellers have a price-time elasticity, whereas the buyers are assumed to be impatient. Sellers select their selling price as a best response to all the other sellers' strategies. We define and establish the existence of the equilibrium in this model and show how to numerically compute this equilibrium. We also show how to compute other relevant quantities such as the equilibrium expected time to sale and the equilibrium expected order density, as well as the expected order density conditioned on the current selling price. We derive a closed form for the equilibrium distribution when the demand is price independent. At this equilibrium the selling (limit order) price distribution is power-tailed, as is empirically observed in order-driven financial markets.<|reference_end|>
arxiv
@article{iyengar2006an, title={An equilibrium model for matching impatient demand and patient supply over time}, author={Garud Iyengar and Anuj Kumar}, journal={arXiv preprint arXiv:cs/0612065}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612065}, primaryClass={cs.GT q-fin.TR} }
iyengar2006an
arxiv-675279
cs/0612066
Optimal Filtering for DDoS Attacks
<|reference_start|>Optimal Filtering for DDoS Attacks: Distributed Denial-of-Service (DDoS) attacks are a major problem in the Internet today. In one form of a DDoS attack, a large number of compromised hosts send unwanted traffic to the victim, thus exhausting the resources of the victim and preventing it from serving its legitimate clients. One of the main mechanisms that have been proposed to deal with DDoS is filtering, which allows routers to selectively block unwanted traffic. Given the magnitude of DDoS attacks and the high cost of filters in the routers today, the successful mitigation of a DDoS attack using filtering crucially depends on the efficient allocation of filtering resources. In this paper, we consider a single router, typically the gateway of the victim, with a limited number of available filters. We study how to optimally allocate filters to attack sources, or entire domains of attack sources, so as to maximize the amount of good traffic preserved, under a constraint on the number of filters. We formulate the problem as an optimization problem and solve it optimally using dynamic programming, study the properties of the optimal allocation, experiment with a simple heuristic and evaluate our solutions for a range of realistic attack scenarios. First, we look at the single-tier problem, where the collateral damage is high due to filtering at the granularity of domains. Second, we look at the two-tier problem, where we have an additional constraint on the number of filters and filtering is performed at the granularity of attackers and domains.<|reference_end|>
arxiv
@article{defrawy2006optimal, title={Optimal Filtering for DDoS Attacks}, author={Karim El Defrawy, Athina Markopoulou and Katerina Argyraki}, journal={arXiv preprint arXiv:cs/0612066}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612066}, primaryClass={cs.NI} }
defrawy2006optimal
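A toy version of the dynamic program mentioned above fits in a few lines. The simplification assumed here: the gateway has F filters and integer downstream capacity C; domain i carries good traffic g[i] and attack traffic b[i]; filtering a domain drops both (that is the collateral damage), while keeping it preserves g[i] but loads the link with g[i] + b[i]; the goal is to maximize surviving good traffic subject to the load fitting in C. The capacity model and all numbers are hypothetical, not the paper's exact formulation.

from functools import lru_cache

def best_allocation(g, b, F, C):
    n = len(g)

    @lru_cache(maxsize=None)
    def dp(i, f, c):
        # Best surviving good traffic from domains i..n-1, with f filters
        # and c capacity units left.
        if c < 0:
            return float('-inf')       # capacity exceeded: infeasible
        if i == n:
            return 0
        keep = dp(i + 1, f, c - (g[i] + b[i])) + g[i]
        drop = dp(i + 1, f - 1, c) if f > 0 else float('-inf')
        return max(keep, drop)

    return dp(0, F, C)

# Hypothetical traffic mix: domains 1 and 3 are mostly attack traffic.
g = [5, 1, 4, 0]
b = [1, 9, 2, 8]
print(best_allocation(g, b, F=2, C=12))   # 9: filter domains 1 and 3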
arxiv-675280
cs/0612067
Retrieving Reed-Solomon coded data under interpolation-based list decoding
<|reference_start|>Retrieving Reed-Solomon coded data under interpolation-based list decoding: A transform that enables generator-matrix-based Reed-Solomon (RS) coded data to be recovered under interpolation-based list decoding is presented. The transform matrix needs to be computed only once and the transformation of an element from the output list to the desired RS coded data block incurs $k^{2}$ field multiplications, given a code of dimension $k$.<|reference_end|>
arxiv
@article{zhang2006retrieving, title={Retrieving Reed-Solomon coded data under interpolation-based list decoding}, author={Jianwen Zhang, Marc A. Armand}, journal={arXiv preprint arXiv:cs/0612067}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612067}, primaryClass={cs.IT math.IT} }
zhang2006retrieving
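Since the transform matrix is computed once offline and applying it is a k x k matrix-vector product, the claimed cost of $k^{2}$ field multiplications per block is immediate. A minimal sketch, using a toy prime field GF(p) for readability (practical RS codes typically work over GF(2^m), and the matrix T below is a placeholder, not a transform derived from an actual code):

```python
P = 929  # toy prime field GF(929); real RS codes usually live in GF(2^m)

def apply_transform(T, y, p=P):
    """Multiply the precomputed k x k transform matrix T by a length-k block y
    from the list decoder's output: exactly k^2 field multiplications."""
    k = len(y)
    return [sum(T[i][j] * y[j] for j in range(k)) % p for i in range(k)]

T = [[1, 2], [3, 4]]               # placeholder transform, computed once
print(apply_transform(T, [5, 6]))  # -> [17, 39]
```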
arxiv-675281
cs/0612068
Interactive Configuration by Regular String Constraints
<|reference_start|>Interactive Configuration by Regular String Constraints: A product configurator which is complete, backtrack free and able to compute the valid domains at any state of the configuration can be constructed by building a Binary Decision Diagram (BDD). Despite the fact that the size of the BDD is exponential in the number of variables in the worst case, BDDs have proved to work very well in practice. Current BDD-based techniques can only handle interactive configuration with small finite domains. In this paper we extend the approach to handle string variables constrained by regular expressions. The user is allowed to change the strings by adding letters at the end of the string. We show how to make a data structure that can perform fast valid domain computations given some assignment on the set of string variables. We first show how to do this by using one large DFA. Since this approach is too space consuming to be of practical use, we construct a data structure that simulates the large DFA and in most practical cases is much more space efficient. As an example, for a configuration problem on $n$ string variables with only one solution, in which each string variable is assigned a value of length $k$, the former structure will use $\Omega(k^n)$ space whereas the latter needs only $O(kn)$. We also show how this framework can easily be combined with the recent BDD techniques to allow boolean, integer and string variables in the configuration problem.<|reference_end|>
arxiv
@article{hansen2006interactive, title={Interactive Configuration by Regular String Constraints}, author={Esben Rune Hansen and Henrik Reif Andersen}, journal={arXiv preprint arXiv:cs/0612068}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612068}, primaryClass={cs.AI} }
hansen2006interactive
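For a single string variable, the valid-domain computation described above reduces to: run the DFA on the current prefix, and offer exactly those letters from which an accepting state is still reachable. A minimal sketch under that single-variable simplification (the paper's contribution is simulating the much larger product DFA implicitly; the DFA encoding below is an illustrative assumption):

```python
ALPHABET = "ab"

def accepting_reachable(delta, accepting, state):
    """True iff some accepting state is reachable from `state`."""
    seen, stack = set(), [state]
    while stack:
        s = stack.pop()
        if s in accepting:
            return True
        if s in seen:
            continue
        seen.add(s)
        stack.extend(delta[(s, a)] for a in ALPHABET if (s, a) in delta)
    return False

def valid_next_letters(delta, start, accepting, prefix):
    """Letters the user may append to `prefix` without losing all solutions."""
    s = start
    for c in prefix:                  # run the DFA over the current prefix
        s = delta[(s, c)]
    return {a for a in ALPHABET
            if (s, a) in delta
            and accepting_reachable(delta, accepting, delta[(s, a)])}

# Toy DFA for the regular expression a*b
delta = {(0, "a"): 0, (0, "b"): 1}
print(valid_next_letters(delta, 0, accepting={1}, prefix="aa"))  # -> {'a', 'b'}
```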
arxiv-675282
cs/0612069
Cores of Countably Categorical Structures
<|reference_start|>Cores of Countably Categorical Structures: A relational structure is a core, if all its endomorphisms are embeddings. This notion is important for the computational complexity classification of constraint satisfaction problems. It is a fundamental fact that every finite structure has a core, i.e., has an endomorphism such that the structure induced by its image is a core; moreover, the core is unique up to isomorphism. We prove that every \omega-categorical structure has a core. Moreover, every \omega-categorical structure is homomorphically equivalent to a model-complete core, which is unique up to isomorphism, and which is finite or \omega-categorical. We discuss consequences for constraint satisfaction with \omega-categorical templates.<|reference_end|>
arxiv
@article{bodirsky2006cores, title={Cores of Countably Categorical Structures}, author={Manuel Bodirsky}, journal={Logical Methods in Computer Science, Volume 3, Issue 1 (January 25, 2007) lmcs:2224}, year={2006}, doi={10.2168/LMCS-3(1:2)2007}, archivePrefix={arXiv}, eprint={cs/0612069}, primaryClass={cs.LO} }
bodirsky2006cores
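The finite-structure fact quoted above can be made concrete by brute force: while the structure admits a non-injective endomorphism, retract onto its image. For finite digraphs an injective endomorphism is automatically an automorphism, so the loop below stops exactly at a core. This is an illustrative exponential-time sketch for tiny digraphs only; it says nothing about the \omega-categorical case, which is the paper's subject.

```python
from itertools import product

def is_hom(f, edges):
    """f preserves every edge of the digraph."""
    return all((f[u], f[v]) in edges for (u, v) in edges)

def core(vertices, edges):
    """Brute-force core of a finite digraph: while some endomorphism is
    non-injective (hence not an embedding), restrict to its image.
    Feasible only for very small graphs."""
    vs, es = list(vertices), set(edges)
    while True:
        shrunk = False
        for values in product(vs, repeat=len(vs)):
            f = dict(zip(vs, values))
            if is_hom(f, es) and len(set(f.values())) < len(vs):
                img = set(f.values())
                vs = [v for v in vs if v in img]
                es = {(u, v) for (u, v) in es if u in img and v in img}
                shrunk = True
                break
        if not shrunk:
            return vs, es

# Two mutually linked vertices with a loop at 0 retract onto the loop:
print(core([0, 1], {(0, 1), (1, 0), (0, 0)}))  # -> ([0], {(0, 0)})
```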
arxiv-675283
cs/0612070
Generalizations of the Hanoi Towers Problem
<|reference_start|>Generalizations of the Hanoi Towers Problem: Our work is based on the classical Hanoi Towers Problem. In this paper we define a new problem that permits some positions that were not legal in the classical problem. Our goal is to find an optimal (shortest possible) sequence of disc moves. In addition, we study all versions of the classical 3-peg problem under special constraints, where some types of moves are disallowed.<|reference_end|>
arxiv
@article{benditkis2006generalizations, title={Generalizations of the Hanoi Towers Problem}, author={Sergey Benditkis, Illya Safro}, journal={arXiv preprint arXiv:cs/0612070}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612070}, primaryClass={cs.DM} }
benditkis2006generalizations
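For reference, the classical 3-peg recursion that the paper generalizes; it produces the provably optimal $2^n - 1$ moves. The constrained variants studied above would change which moves such a generator is allowed to emit.

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Classical 3-peg solution: move the top n-1 discs aside, move the
    largest disc, then move the n-1 discs on top of it. 2^n - 1 moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)
        moves.append((src, dst))          # move disc n from src to dst
        hanoi(n - 1, aux, dst, src, moves)
    return moves

print(len(hanoi(4)))  # -> 15 == 2**4 - 1
```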
arxiv-675284
cs/0612071
Why is Open Access Development so Successful? Stigmergic organization and the economics of information
<|reference_start|>Why is Open Access Development so Successful? Stigmergic organization and the economics of information: The explosive development of "free" or "open source" information goods contravenes the conventional wisdom that markets and commercial organizations are necessary to efficiently supply products. This paper proposes a theoretical explanation for this phenomenon, using concepts from economics and theories of self-organization. Once available on the Internet, information is intrinsically not a scarce good, as it can be replicated virtually without cost. Moreover, freely distributing information is profitable to its creator, since it improves the quality of the information, and enhances the creator's reputation. This provides a sufficient incentive for people to contribute to open access projects. Unlike traditional organizations, open access communities are open, distributed and self-organizing. Coordination is achieved through stigmergy: listings of "work-in-progress" direct potential contributors to the tasks where their contribution is most likely to be fruitful. This obviates the need both for centralized planning and for the "invisible hand" of the market.<|reference_end|>
arxiv
@article{heylighen2006why, title={Why is Open Access Development so Successful? Stigmergic organization and the economics of information}, author={Francis Heylighen}, journal={in: B. Lutterbeck, M. Baerwolff & R. A. Gehring (eds.), Open Source Jahrbuch 2007, Lehmanns Media, 2007}, year={2006}, number={ECCO Working Paper 2006-06}, archivePrefix={arXiv}, eprint={cs/0612071}, primaryClass={cs.CY cs.DL physics.soc-ph} }
heylighen2006why
arxiv-675285
cs/0612072
Stochastic Models for Budget Optimization in Search-Based Advertising
<|reference_start|>Stochastic Models for Budget Optimization in Search-Based Advertising: Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and address two questions that arise. [Evaluation] Given a solution, can we evaluate the expected value of the objective function? [Optimization] Can we find a solution that maximizes the objective function in expectation? Our main results are approximation and complexity results for these two problems in our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.<|reference_end|>
arxiv
@article{muthukrishnan2006stochastic, title={Stochastic Models for Budget Optimization in Search-Based Advertising}, author={S. Muthukrishnan, Martin Pal and Zoya Svitkina}, journal={arXiv preprint arXiv:cs/0612072}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612072}, primaryClass={cs.DS cs.GT} }
muthukrishnan2006stochastic
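A hedged sketch of the prefix strategies highlighted above: sort keywords by cost-per-click, bid on every keyword up to some threshold, and keep the best threshold. The deterministic click model and the way the last partially affordable keyword is pro-rated are deliberate simplifications of the paper's stochastic query models.

```python
def best_prefix(keywords, budget):
    """Evaluate every prefix strategy 'bid on all keywords with cost <= c'.
    keywords: list of (cost_per_click, expected_clicks).
    Returns (prefix length, expected clicks) of the best prefix under a
    crude deterministic spend model -- a stand-in for the stochastic models."""
    kws = sorted(keywords)                     # cheapest keywords first
    best_clicks, best_len = 0.0, 0
    for i in range(1, len(kws) + 1):
        prefix = kws[:i]
        spend = sum(c * q for c, q in prefix)
        clicks = sum(q for _, q in prefix)
        if spend > budget:                     # budget runs out part-way:
            clicks -= (spend - budget) / prefix[-1][0]  # drop marginal clicks
        if clicks > best_clicks:
            best_clicks, best_len = clicks, i
    return best_len, best_clicks

print(best_prefix([(0.5, 100), (1.0, 80), (2.0, 60)], budget=120))
```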
arxiv-675286
cs/0612073
On the Fingerprinting Capacity Under the Marking Assumption
<|reference_start|>On the Fingerprinting Capacity Under the Marking Assumption: We address the maximum attainable rate of fingerprinting codes under the marking assumption, studying lower and upper bounds on the value of the rate for various sizes of the attacker coalition. Lower bounds are obtained by considering typical coalitions, which represents a new idea in the area of fingerprinting and enables us to improve the previously known lower bounds for coalitions of size two and three. For upper bounds, the fingerprinting problem is modelled as a communications problem. It is shown that the maximum code rate is bounded above by the capacity of a certain class of channels, which are similar to the multiple-access channel. Converse coding theorems proved in the paper provide new upper bounds on fingerprinting capacity. It is proved that capacity for fingerprinting against coalitions of size two and three over the binary alphabet satisfies $0.25 \leq C_{2,2} \leq 0.322$ and $0.083 \leq C_{3,2} \leq 0.199$ respectively. For coalitions of an arbitrary fixed size $t,$ we derive an upper bound $(t\ln2)^{-1}$ on fingerprinting capacity in the binary case. Finally, for general alphabets, we establish upper bounds on the fingerprinting capacity involving only single-letter mutual information quantities.<|reference_end|>
arxiv
@article{anthapadmanabhan2006on, title={On the Fingerprinting Capacity Under the Marking Assumption}, author={N. Prasanth Anthapadmanabhan, Alexander Barg and Ilya Dumer}, journal={arXiv preprint arXiv:cs/0612073}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612073}, primaryClass={cs.IT cs.CR math.IT} }
anthapadmanabhan2006on
arxiv-675287
cs/0612074
Energy Efficient Randomized Communication in Unknown AdHoc Networks
<|reference_start|>Energy Efficient Randomized Communication in Unknown AdHoc Networks: This paper studies broadcasting and gossiping algorithms in random and general ad hoc networks. Our goal is not only to minimise the broadcasting and gossiping time, but also to minimise the energy consumption, which is measured in terms of the total number of messages (or transmissions) sent. We assume that the nodes of the network do not know the network, and that they can only send with a fixed power, meaning they cannot adjust the sizes of the areas that their messages cover. We believe that under these circumstances the number of transmissions is a very good measure for the overall energy consumption. For random networks, we present a broadcasting algorithm where every node transmits at most once. We show that our algorithm broadcasts in $O(\log n)$ steps, w.h.p., where $n$ is the number of nodes. We then present an $O(d \log n)$ ($d$ is the expected degree) gossiping algorithm using $O(\log n)$ messages per node. For general networks with known diameter $D$, we present a randomised broadcasting algorithm with optimal broadcasting time $O(D \log (n/D) + \log^2 n)$ that uses an expected number of $O(\log^2 n / \log (n/D))$ transmissions per node. We also show a tradeoff result between the broadcasting time and the number of transmissions: we construct a network such that any oblivious algorithm using a time-invariant distribution requires $\Omega(\log^2 n / \log (n/D))$ messages per node in order to finish broadcasting in optimal time. This demonstrates the tightness of our upper bound. We also show that no oblivious algorithm can complete broadcasting w.h.p. using $o(\log n)$ messages per node.<|reference_end|>
arxiv
@article{berenbrink2006energy, title={Energy Efficient Randomized Communication in Unknown AdHoc Networks}, author={Petra Berenbrink and Colin Cooper and Zengjian Hu}, journal={arXiv preprint arXiv:cs/0612074}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612074}, primaryClass={cs.DC cs.DS} }
berenbrink2006energy
arxiv-675288
cs/0612075
Intermediate Performance of Rateless Codes
<|reference_start|>Intermediate Performance of Rateless Codes: Rateless/fountain codes are designed so that all input symbols can be recovered from a slightly larger number of coded symbols, with high probability using an iterative decoder. In this paper we investigate the number of input symbols that can be recovered by the same decoder, but when the number of coded symbols available is less than the total number of input symbols. Of course recovery of all inputs is not possible, and the fraction that can be recovered will depend on the output degree distribution of the code. In this paper we (a) outer bound the fraction of inputs that can be recovered for any output degree distribution of the code, and (b) design degree distributions which meet/perform close to this bound. Our results are of interest for real-time systems using rateless codes, and for Raptor-type two-stage designs.<|reference_end|>
arxiv
@article{sanghavi2006intermediate, title={Intermediate Performance of Rateless Codes}, author={Sujay Sanghavi}, journal={arXiv preprint arXiv:cs/0612075}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612075}, primaryClass={cs.IT math.IT} }
sanghavi2006intermediate
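The iterative decoder in question is the standard peeling process: repeatedly take a coded symbol of residual degree one, recover the corresponding input, and XOR it out of every other equation. Running it with fewer coded symbols than inputs and counting how many entries come back is precisely the intermediate-performance quantity studied above. A minimal sketch (the tiny code instance below is an assumption for illustration):

```python
def peel(coded, n_inputs):
    """Peeling decoder for an LT-style rateless code.
    coded: list of (set of input indices, XOR of those input symbols).
    Returns recovered inputs (None where decoding stalled); the count of
    non-None entries is the intermediate performance."""
    recovered = [None] * n_inputs
    coded = [[set(s), v] for s, v in coded]
    while True:
        hit = next((c for c in coded if len(c[0]) == 1), None)
        if hit is None:
            return recovered          # decoder stalls here
        i, x = next(iter(hit[0])), hit[1]
        recovered[i] = x
        for c in coded:               # subtract the new input everywhere
            if i in c[0]:
                c[0].remove(i)
                c[1] ^= x

# 3 coded symbols for 4 inputs (fewer equations than unknowns):
coded = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2, 3}, 9 ^ 3 ^ 7)]
print(peel(coded, 4))  # -> [5, 9, None, None]
```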
arxiv-675289
cs/0612076
A New Approach for Capacity Analysis of Large Dimensional Multi-Antenna Channels
<|reference_start|>A New Approach for Capacity Analysis of Large Dimensional Multi-Antenna Channels: This paper addresses the behaviour of the mutual information of correlated MIMO Rayleigh channels when the numbers of transmit and receive antennas converge to infinity at the same rate. Using a new and simple approach based on Poincar\'{e}-Nash inequality and on an integration by parts formula, it is rigorously established that the mutual information converges to a Gaussian random variable whose mean and variance are evaluated. These results confirm previous evaluations based on the powerful but non-rigorous replica method. It is believed that the tools that are used in this paper are simple, robust, and of interest for the communications engineering community.<|reference_end|>
arxiv
@article{hachem2006a, title={A New Approach for Capacity Analysis of Large Dimensional Multi-Antenna Channels}, author={Walid Hachem (LTCI), Oleksiy Khorunzhiy, Philippe Loubaton (IGM-LabInfo), Jamal Najim (LTCI), Leonid Pastur}, journal={arXiv preprint arXiv:cs/0612076}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612076}, primaryClass={cs.IT math.IT math.PR} }
hachem2006a
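A quick Monte Carlo sanity check of the Gaussianity claim is easy in the uncorrelated special case, using the standard mutual information expression $I = \log\det(I_r + (\mathrm{snr}/t)\,HH^*)$ in nats. The paper's contribution is the rigorous analysis, including correlation; this simulation only illustrates the statement.

```python
import numpy as np

def mimo_mi(t, r, snr, trials=20_000, seed=0):
    """Monte Carlo samples of I = log det(I_r + (snr/t) H H^*) in nats for an
    i.i.d. (uncorrelated) Rayleigh channel. A histogram of the samples looks
    Gaussian, and the fluctuations stay O(1) as t, r grow together."""
    rng = np.random.default_rng(seed)
    mis = np.empty(trials)
    for k in range(trials):
        H = (rng.standard_normal((r, t))
             + 1j * rng.standard_normal((r, t))) / np.sqrt(2)
        mis[k] = np.linalg.slogdet(np.eye(r) + (snr / t) * H @ H.conj().T)[1]
    return mis

mis = mimo_mi(t=8, r=8, snr=10.0)
print(mis.mean(), mis.std())
```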
arxiv-675290
cs/0612077
Algebraic Signal Processing Theory
<|reference_start|>Algebraic Signal Processing Theory: This paper presents an algebraic theory of linear signal processing. At the core of algebraic signal processing is the concept of a linear signal model defined as a triple (A, M, phi), where familiar concepts like the filter space and the signal space are cast as an algebra A and a module M, respectively, and phi generalizes the concept of the z-transform to bijective linear mappings from a vector space of, e.g., signal samples, into the module M. A signal model provides the structure for a particular linear signal processing application, such as infinite and finite discrete time, or infinite or finite discrete space, or the various forms of multidimensional linear signal processing. As soon as a signal model is chosen, basic ingredients follow, including the associated notions of filtering, spectrum, and Fourier transform. The shift operator is a key concept in the algebraic theory: it is the generator of the algebra of filters A. Once the shift is chosen, a well-defined methodology leads to the associated signal model. Different shifts correspond to infinite and finite time models with associated infinite and finite z-transforms, and to infinite and finite space models with associated infinite and finite C-transforms (that we introduce). In particular, we show that the 16 discrete cosine and sine transforms are Fourier transforms for the finite space models. Other definitions of the shift naturally lead to new signal models and to new transforms as associated Fourier transforms in one and higher dimensions, separable and non-separable. We explain in algebraic terms shift-invariance (the algebra of filters A is commutative), the role of boundary conditions and signal extensions, the connections between linear transforms and linear finite Gauss-Markov fields, and several other concepts and connections.<|reference_end|>
arxiv
@article{püschel2006algebraic, title={Algebraic Signal Processing Theory}, author={Markus P"uschel and Jos'e M. F. Moura}, journal={Algebraic Signal Processing Theory: Foundation and 1-D Time & Algebraic Signal Processing Theory: 1-D Space, both IEEE Trans. SP, 56(8), 2008; Algebraic Signal Processing Theory: 1-D Nearest-Neighbor Models in IEEE Trans. SP, 60(5), 2012}, year={2006}, doi={10.1109/TSP.2008.925261; 10.1109/TSP.2008.925259; 10.1109/TSP.2012.2186133}, archivePrefix={arXiv}, eprint={cs/0612077}, primaryClass={cs.IT math.IT} }
püschel2006algebraic
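As one concrete instance of this framework: in the finite discrete-time model, the filter algebra is $A = \mathbb{C}[x]/(x^n - 1)$, filtering is polynomial multiplication modulo $x^n - 1$ (i.e., circular convolution), and the associated Fourier transform, the DFT, diagonalizes it. The choice of this particular model for illustration is ours; the paper treats many such models. A short numerical check:

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
h, s = rng.standard_normal(n), rng.standard_normal(n)  # filter and signal

# Filtering in the finite time model A = C[x]/(x^n - 1):
# polynomial multiplication modulo x^n - 1 == circular convolution.
circ = np.array([sum(h[k] * s[(i - k) % n] for k in range(n))
                 for i in range(n)])

# The model's Fourier transform (the DFT) diagonalizes filtering:
via_dft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(s)).real

print(np.allclose(circ, via_dft))  # -> True
```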
arxiv-675291
cs/0612078
Effect of Finite Rate Feedback on CDMA Signature Optimization and MIMO Beamforming Vector Selection
<|reference_start|>Effect of Finite Rate Feedback on CDMA Signature Optimization and MIMO Beamforming Vector Selection: We analyze the effect of finite rate feedback on CDMA (code-division multiple access) signature optimization and MIMO (multi-input-multi-output) beamforming vector selection. In CDMA signature optimization, for a particular user, the receiver selects a signature vector from a codebook to best avoid interference from other users, and then feeds the corresponding index back to the specified user. For MIMO beamforming vector selection, the receiver chooses a beamforming vector from a given codebook to maximize throughput, and feeds back the corresponding index to the transmitter. These two problems are dual: both can be modeled as selecting a unit norm vector from a finite size codebook to "match" a randomly generated Gaussian matrix. In signature optimization, the least match is required while the maximum match is preferred for beamforming selection. Assuming that the feedback link is rate limited, our main result is an exact asymptotic performance formula where the length of the signature/beamforming vector, the dimensions of the interference/channel matrix, and the feedback rate approach infinity with constant ratios. The proof rests on a large deviation principle over a random matrix ensemble. Further, we show that random codebooks generated from the isotropic distribution are asymptotically optimal not only on average, but also with probability one.<|reference_end|>
arxiv
@article{dai2006effect, title={Effect of Finite Rate Feedback on CDMA Signature Optimization and MIMO Beamforming Vector Selection}, author={Wei Dai, Youjian Liu, Brian Rider}, journal={arXiv preprint arXiv:cs/0612078}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612078}, primaryClass={cs.IT math.IT} }
dai2006effect
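On the beamforming side of the duality, the selection step is a simple maximization over the codebook, and the asymptotic-optimality result above justifies drawing the codebook isotropically at random. A minimal sketch using received power $\|Hw\|^2$ as the throughput proxy; the codebook size, dimensions, and metric here are illustrative assumptions.

```python
import numpy as np

def pick_beamformer(H, codebook):
    """Select the codebook entry maximizing ||H w||^2. The receiver feeds
    back only the winning index: log2(len(codebook)) bits, matching the
    finite-rate-feedback setting."""
    gains = [np.linalg.norm(H @ w) ** 2 for w in codebook]
    return int(np.argmax(gains)), max(gains)

rng = np.random.default_rng(0)
t, r, bits = 4, 2, 6
# Random isotropic codebook: normalized complex Gaussian vectors.
codebook = []
for _ in range(2 ** bits):
    v = rng.standard_normal(t) + 1j * rng.standard_normal(t)
    codebook.append(v / np.linalg.norm(v))
H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)
print(pick_beamformer(H, codebook))
```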
arxiv-675292
cs/0612079
Executing the same binary on several operating systems
<|reference_start|>Executing the same binary on several operating systems: We describe a way to execute the same binary file on both Windows and ELF-based systems. It can be used to create software installers and other applications not exceeding 64 kilobytes.<|reference_end|>
arxiv
@article{grønneberg2006executing, title={Executing the same binary on several operating systems}, author={Steffen Gr{\o}nneberg}, journal={arXiv preprint arXiv:cs/0612079}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612079}, primaryClass={cs.OS} }
grønneberg2006executing
arxiv-675293
cs/0612080
On the Decrease Rate of the Non-Gaussianness of the Sum of Independent Random Variables
<|reference_start|>On the Decrease Rate of the Non-Gaussianness of the Sum of Independent Random Variables: Several proofs of the monotonicity of the non-Gaussianness (divergence with respect to a Gaussian random variable with identical second order statistics) of the sum of n independent and identically distributed (i.i.d.) random variables were published. We give an upper bound on the decrease rate of the non-Gaussianness which is proportional to the inverse of n, for large n. The proof is based on the relationship between non-Gaussianness and minimum mean-square error (MMSE) and causal minimum mean-square error (CMMSE) in the time-continuous Gaussian channel.<|reference_end|>
arxiv
@article{binia2006on, title={On the Decrease Rate of the Non-Gaussianness of the Sum of Independent Random Variables}, author={Jacob Binia}, journal={arXiv preprint arXiv:cs/0612080}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612080}, primaryClass={cs.IT math.IT} }
binia2006on
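For background on the MMSE connection invoked above: the non-Gaussianness of $X$ is $D(P_X\|P_G) = h(G) - h(X)$, where $G$ is Gaussian with the same second-order statistics, and a known integral representation in the style of Guo-Shamai-Verdu (recalled here as context, not as the paper's derivation) expresses it through the MMSE gap in the scalar Gaussian channel $Y = \sqrt{\mathrm{snr}}\,X + N$, $N \sim \mathcal{N}(0,1)$:

```latex
% The first term inside the brackets is the MMSE of a Gaussian input of the
% same variance sigma^2; a bound on the decrease of the non-Gaussianness of
% the normalized n-fold sum corresponds to a bound on this MMSE gap integral.
\[
  D\!\left(P_X \,\middle\|\, P_G\right)
  \;=\; \frac{1}{2}\int_{0}^{\infty}
        \left[\frac{\sigma^2}{1+\sigma^2\,\mathrm{snr}}
              \;-\; \mathrm{mmse}_X(\mathrm{snr})\right] d\,\mathrm{snr},
  \qquad \sigma^2 = \operatorname{Var}(X).
\]
```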
arxiv-675294
cs/0612081
Personal Information Ecosystems and Implications for Design
<|reference_start|>Personal Information Ecosystems and Implications for Design: Today, people use multiple devices to fulfill their information needs. However, designers design each device individually, without accounting for the other devices that users may also use. In many cases, the applications on all these devices are designed to be functional replicates of each other. We argue that this results in an over-reliance on data synchronization across devices, version control nightmares, and increased burden of file management. In this paper, we present the idea of a \textit{personal information ecosystem}, an analogy to biological ecosystems, which allows us to discuss the inter-relationships among these devices to fulfill the information needs of the user. There is a need for designers to design devices as part of a complete ecosystem, not as independent devices that simply share data replicated across them. To help us understand this domain and to facilitate the dialogue and study of such systems, we present the terminology, classifications of the interdependencies among different devices, and resulting implications for design.<|reference_end|>
arxiv
@article{tungare2006personal, title={Personal Information Ecosystems and Implications for Design}, author={Manas Tungare, Pardha S. Pyla, Manuel P'erez-Qui~nones, and Steve Harrison}, journal={arXiv preprint arXiv:cs/0612081}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612081}, primaryClass={cs.HC} }
tungare2006personal
arxiv-675295
cs/0612082
Developing efficient parsers in Prolog: the CLF manual (v1.0)
<|reference_start|>Developing efficient parsers in Prolog: the CLF manual (v1.0): This document describes a couple of tools that help to quickly design and develop computer (formalized) languages. The first one uses Flex to perform lexical analysis and the second is an extension of Prolog DCGs to perform syntactic analysis. Initially designed as a new component for the Centaur system, these tools are now available independently and can be used to construct efficient Prolog parsers that can be integrated in Prolog or heterogeneous systems. This is the initial version of the CLF documentation. Updated versions will be made available online when necessary.<|reference_end|>
arxiv
@article{despeyroux2006developing, title={Developing efficient parsers in Prolog: the CLF manual (v1.0)}, author={Thierry Despeyroux (INRIA Rocquencourt / INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:cs/0612082}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612082}, primaryClass={cs.SE} }
despeyroux2006developing
arxiv-675296
cs/0612083
A Byzantine Fault Tolerant Distributed Commit Protocol
<|reference_start|>A Byzantine Fault Tolerant Distributed Commit Protocol: In this paper, we present a Byzantine fault tolerant distributed commit protocol for transactions running over untrusted networks. The traditional two-phase commit protocol is enhanced by replicating the coordinator and by running a Byzantine agreement algorithm among the coordinator replicas. Our protocol can tolerate Byzantine faults at the coordinator replicas and a subset of malicious faults at the participants. A decision certificate, which includes a set of registration records and a set of votes from participants, is used to facilitate the coordinator replicas to reach a Byzantine agreement on the outcome of each transaction. The certificate also limits the ways in which a faulty replica can cause non-atomic termination of transactions or semantically incorrect transaction outcomes.<|reference_end|>
arxiv
@article{zhao2006a, title={A Byzantine Fault Tolerant Distributed Commit Protocol}, author={Wenbing Zhao}, journal={arXiv preprint arXiv:cs/0612083}, year={2006}, doi={10.1109/DASC.2007.10}, archivePrefix={arXiv}, eprint={cs/0612083}, primaryClass={cs.DC cs.DB} }
zhao2006a
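The role of the decision certificate can be illustrated with a toy outcome rule: a transaction commits only if the certificate carries a yes-vote from every registered participant, so a replica cannot justify a divergent outcome to its peers without producing the corresponding votes. The record layout and field names below are illustrative assumptions, not the paper's wire format.

```python
def decide(certificate):
    """Toy outcome rule over a decision certificate (illustrative only):
    certificate = {"registered": set of participant ids,
                   "votes": dict of participant id -> "yes"/"no"}.
    Commit iff exactly the registered participants voted and all voted yes."""
    regs, votes = certificate["registered"], certificate["votes"]
    if set(votes) != regs:
        return "abort"          # missing or spurious vote
    return "commit" if all(v == "yes" for v in votes.values()) else "abort"

print(decide({"registered": {"p1", "p2"},
              "votes": {"p1": "yes", "p2": "yes"}}))  # -> commit
```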
arxiv-675297
cs/0612084
Achievable Rates for the General Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy
<|reference_start|>Achievable Rates for the General Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy: We consider the General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed eavesdropper who is as capable as the intended receiver, but has different channel parameters. We aim to provide perfect secrecy for the transmitters in this multi-access environment. Using Gaussian codebooks, an achievable secrecy region is determined and the power allocation that maximizes the achievable sum-rate is found. Numerical results showing the new rate region are presented. It is shown that the multiple-access nature of the channel may be utilized to allow users with zero single-user secrecy capacity to be able to transmit in perfect secrecy. In addition, a new collaborative scheme is shown that may increase the achievable sum-rate. In this scheme, a user who would not transmit to maximize the sum rate can help another user who (i) has positive secrecy capacity to increase its rate, or (ii) has zero secrecy capacity to achieve a positive secrecy capacity.<|reference_end|>
arxiv
@article{tekin2006achievable, title={Achievable Rates for the General Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy}, author={Ender Tekin and Aylin Yener}, journal={arXiv preprint arXiv:cs/0612084}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612084}, primaryClass={cs.IT cs.CR math.IT} }
tekin2006achievable
arxiv-675298
cs/0612085
The Parma Polyhedra Library: Toward a Complete Set of Numerical Abstractions for the Analysis and Verification of Hardware and Software Systems
<|reference_start|>The Parma Polyhedra Library: Toward a Complete Set of Numerical Abstractions for the Analysis and Verification of Hardware and Software Systems: Since its inception as a student project in 2001, initially just for the handling (as the name implies) of convex polyhedra, the Parma Polyhedra Library has been continuously improved and extended by joining scrupulous research on the theoretical foundations of (possibly non-convex) numerical abstractions to a total adherence to the best available practices in software development. Even though it is still not fully mature and functionally complete, the Parma Polyhedra Library already offers a combination of functionality, reliability, usability and performance that is not matched by similar, freely available libraries. In this paper, we present the main features of the current version of the library, emphasizing those that distinguish it from other similar libraries and those that are important for applications in the field of analysis and verification of hardware and software systems.<|reference_end|>
arxiv
@article{bagnara2006the, title={The Parma Polyhedra Library: Toward a Complete Set of Numerical Abstractions for the Analysis and Verification of Hardware and Software Systems}, author={Roberto Bagnara, Patricia M. Hill, Enea Zaffanella}, journal={arXiv preprint arXiv:cs/0612085}, year={2006}, number={Quaderno 457}, archivePrefix={arXiv}, eprint={cs/0612085}, primaryClass={cs.MS cs.PL} }
bagnara2006the
arxiv-675299
cs/0612086
An asynchronous, decentralised commitment protocol for semantic optimistic replication
<|reference_start|>An asynchronous, decentralised commitment protocol for semantic optimistic replication: We study large-scale distributed cooperative systems that use optimistic replication. We represent a system as a graph of actions (operations) connected by edges that reify semantic constraints between actions. Constraint types include conflict, execution order, dependence, and atomicity. The local state is some schedule that conforms to the constraints; because of conflicts, client state is only tentative. For consistency, site schedules should converge; we designed a decentralised, asynchronous commitment protocol. Each client makes a proposal, reflecting its tentative and/or preferred schedules. Our protocol distributes the proposals, which it decomposes into semantically meaningful units called candidates, and runs an election between comparable candidates. A candidate wins when it receives a majority or a plurality. The protocol is fully asynchronous: each site executes its tentative schedule independently, and determines locally when a candidate has won an election. The committed schedule is as close as possible to the preferences expressed by clients.<|reference_end|>
arxiv
@article{sutra2006an, title={An asynchronous, decentralised commitment protocol for semantic optimistic replication}, author={Pierre Sutra (INRIA Rocquencourt), Marc Shapiro (INRIA Rocquencourt), Jo~ao Pedro Barreto (INESC-ID)}, journal={arXiv preprint arXiv:cs/0612086}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612086}, primaryClass={cs.DB cs.NI} }
sutra2006an
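The election step can be sketched as a purely local tally: a candidate wins as soon as it holds a majority of all sites, and a plurality suffices once every ballot has been received, so each site decides without further coordination. The representation below is an illustrative assumption; proposals, candidate decomposition, and constraint handling are elided.

```python
from collections import Counter

def elect(ballots, n_sites):
    """Decide one election among comparable candidates (illustrative sketch).
    ballots: the candidate chosen by each site heard from so far.
    A majority of all sites wins immediately; once every site has been
    heard, a plurality suffices. Returns None while the outcome is open."""
    tally = Counter(ballots)
    leader, votes = tally.most_common(1)[0]
    if votes > n_sites // 2:
        return leader                 # outright majority
    if len(ballots) == n_sites:
        return leader                 # all ballots in: plurality wins
    return None                       # keep waiting, fully asynchronously

print(elect(["s1", "s1", "s2"], n_sites=5))        # -> None (still open)
print(elect(["s1", "s1", "s2", "s1"], n_sites=5))  # -> 's1' (majority)
```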
arxiv-675300
cs/0612087
Statistical mechanics of neocortical interactions: Portfolio of Physiological Indicators
<|reference_start|>Statistical mechanics of neocortical interactions: Portfolio of Physiological Indicators: There are several kinds of non-invasive imaging methods that are used to collect data from the brain, e.g., EEG, MEG, PET, SPECT, fMRI, etc. It is difficult to get resolution of information processing using any one of these methods. Approaches to integrate data sources may help to get better resolution of data and better correlations to behavioral phenomena ranging from attention to diagnoses of disease. The approach taken here is to use algorithms developed for the author's Trading in Risk Dimensions (TRD) code using modern methods of copula portfolio risk management, with joint probability distributions derived from the author's model of statistical mechanics of neocortical interactions (SMNI). The author's Adaptive Simulated Annealing (ASA) code is for optimizations of training sets, as well as for importance-sampling. Marginal distributions will be evolved to determine their expected duration and stability using algorithms developed by the author, i.e., PATHTREE and PATHINT codes.<|reference_end|>
arxiv
@article{ingber2006statistical, title={Statistical mechanics of neocortical interactions: Portfolio of Physiological Indicators}, author={Lester Ingber}, journal={arXiv preprint arXiv:cs/0612087}, year={2006}, archivePrefix={arXiv}, eprint={cs/0612087}, primaryClass={cs.CE cs.IT cs.NE math.IT q-bio.QM} }
ingber2006statistical