corpus_id
stringlengths
7
12
paper_id
stringlengths
9
16
title
stringlengths
1
261
abstract
stringlengths
70
4.02k
source
stringclasses
1 value
bibtex
stringlengths
208
20.9k
citation_key
stringlengths
6
100
arxiv-3901
0806.0080
Outer Bounds for Multiple Access Channels with Feedback using Dependence Balance
<|reference_start|>Outer Bounds for Multiple Access Channels with Feedback using Dependence Balance: We use the idea of dependence balance to obtain a new outer bound for the capacity region of the discrete memoryless multiple access channel with noiseless feedback (MAC-FB). We consider a binary additive noisy MAC-FB whose feedback capacity is not known. The binary additive noisy MAC considered in this paper can be viewed as the discrete counterpart of the Gaussian MAC-FB. Ozarow established that the capacity region of the two-user Gaussian MAC-FB is given by the cut-set bound. Our result shows that for the discrete version of the channel considered by Ozarow, this is not the case. Direct evaluation of our outer bound is intractable due to an involved auxiliary random variable whose large cardinality prohibits an exhaustive search. We overcome this difficulty by using functional analysis to explicitly evaluate our outer bound. Our outer bound is strictly less than the cut-set bound at all points on the capacity region where feedback increases capacity. In addition, we explicitly evaluate the Cover-Leung achievable rate region for the binary additive noisy MAC-FB in consideration. Furthermore, using the tools developed for the evaluation of our outer bound, we also explicitly characterize the boundary of the feedback capacity region of the binary erasure MAC, for which the Cover-Leung achievable rate region is known to be tight. This last result confirms that the feedback strategies developed by Kramer for the binary erasure MAC are capacity achieving.<|reference_end|>
arxiv
@article{tandon2008outer, title={Outer Bounds for Multiple Access Channels with Feedback using Dependence Balance}, author={Ravi Tandon, Sennur Ulukus}, journal={arXiv preprint arXiv:0806.0080}, year={2008}, doi={10.1109/TIT.2009.2027532}, archivePrefix={arXiv}, eprint={0806.0080}, primaryClass={cs.IT math.IT} }
tandon2008outer
arxiv-3902
0806.0081
Canonical calculi with (n,k)-ary quantifiers
<|reference_start|>Canonical calculi with (n,k)-ary quantifiers: Propositional canonical Gentzen-type systems, introduced in 2001 by Avron and Lev, are systems which in addition to the standard axioms and structural rules have only logical rules in which exactly one occurrence of a connective is introduced and no other connective is mentioned. A constructive coherence criterion for the non-triviality of such systems was defined and it was shown that a system of this kind admits cut-elimination iff it is coherent. The semantics of such systems is provided using two-valued non-deterministic matrices (2Nmatrices). In 2005 Zamansky and Avron extended these results to systems with unary quantifiers of a very restricted form. In this paper we substantially extend the characterization of canonical systems to (n,k)-ary quantifiers, which bind k distinct variables and connect n formulas, and show that the coherence criterion remains constructive for such systems. Then we focus on the case of k&#8712;{0,1} and for a canonical calculus G show that it is coherent precisely when it has a strongly characteristic 2Nmatrix, which in turn is equivalent to admitting strong cut-elimination.<|reference_end|>
arxiv
@article{avron2008canonical, title={Canonical calculi with (n,k)-ary quantifiers}, author={Arnon Avron and Anna Zamansky}, journal={Logical Methods in Computer Science, Volume 4, Issue 3 (August 6, 2008) lmcs:1139}, year={2008}, doi={10.2168/LMCS-4(3:2)2008}, archivePrefix={arXiv}, eprint={0806.0081}, primaryClass={cs.LO} }
avron2008canonical
arxiv-3903
0806.0103
A note on clique-width and tree-width for structures
<|reference_start|>A note on clique-width and tree-width for structures: We give a simple proof that the straightforward generalisation of clique-width to arbitrary structures can be unbounded on structures of bounded tree-width. This can be corrected by allowing fusion of elements.<|reference_end|>
arxiv
@article{adler2008a, title={A note on clique-width and tree-width for structures}, author={Hans Adler and Isolde Adler}, journal={arXiv preprint arXiv:0806.0103}, year={2008}, archivePrefix={arXiv}, eprint={0806.0103}, primaryClass={cs.LO} }
adler2008a
arxiv-3904
0806.0128
QoS Challenges and Opportunities in Wireless Sensor/Actuator Networks
<|reference_start|>QoS Challenges and Opportunities in Wireless Sensor/Actuator Networks: A wireless sensor/actuator network (WSAN) is a group of sensors and actuators that are geographically distributed and interconnected by wireless networks. Sensors gather information about the state of physical world. Actuators react to this information by performing appropriate actions. WSANs thus enable cyber systems to monitor and manipulate the behavior of the physical world. WSANs are growing at a tremendous pace, just like the exploding evolution of Internet. Supporting quality of service (QoS) will be of critical importance for pervasive WSANs that serve as the network infrastructure of diverse applications. To spark new research and development interests in this field, this paper examines and discusses the requirements, critical challenges, and open research issues on QoS management in WSANs. A brief overview of recent progress is given.<|reference_end|>
arxiv
@article{xia2008qos, title={QoS Challenges and Opportunities in Wireless Sensor/Actuator Networks}, author={Feng Xia}, journal={Sensors 2008, 8(2), 1099-1110}, year={2008}, archivePrefix={arXiv}, eprint={0806.0128}, primaryClass={cs.NI} }
xia2008qos
arxiv-3905
0806.0130
Feedback Scheduling of Priority-Driven Control Networks
<|reference_start|>Feedback Scheduling of Priority-Driven Control Networks: With traditional open-loop scheduling of network resources, the quality-of-control (QoC) of networked control systems (NCSs) may degrade significantly in the presence of limited bandwidth and variable workload. The goal of this work is to maximize the overall QoC of NCSs through dynamically allocating available network bandwidth. Based on codesign of control and scheduling, an integrated feedback scheduler is developed to enable flexible QoC management in dynamic environments. It encompasses a cascaded feedback scheduling module for sampling period adjustment and a direct feedback scheduling module for priority modification. The inherent characteristics of priority-driven control networks make it feasible to implement the proposed feedback scheduler in real-world systems. Extensive simulations show that the proposed approach leads to significant QoC improvement over the traditional open-loop scheduling scheme under both underloaded and overloaded network conditions.<|reference_end|>
arxiv
@article{xia2008feedback, title={Feedback Scheduling of Priority-Driven Control Networks}, author={Feng Xia, Youxian Sun, Yu-Chu Tian}, journal={arXiv preprint arXiv:0806.0130}, year={2008}, archivePrefix={arXiv}, eprint={0806.0130}, primaryClass={cs.NI} }
xia2008feedback
arxiv-3906
0806.0132
Control-theoretic dynamic voltage scaling for embedded controllers
<|reference_start|>Control-theoretic dynamic voltage scaling for embedded controllers: For microprocessors used in real-time embedded systems, minimizing power consumption is difficult due to the timing constraints. Dynamic voltage scaling (DVS) has been incorporated into modern microprocessors as a promising technique for exploring the trade-off between energy consumption and system performance. However, it remains a challenge to realize the potential of DVS in unpredictable environments where the system workload cannot be accurately known. Addressing system-level power-aware design for DVS-enabled embedded controllers, this paper establishes an analytical model for the DVS system that encompasses multiple real-time control tasks. From this model, a feedback control based approach to power management is developed to reduce dynamic power consumption while achieving good application performance. With this approach, the unpredictability and variability of task execution times can be attacked. Thanks to the use of feedback control theory, predictable performance of the DVS system is achieved, which is favorable to real-time applications. Extensive simulations are conducted to evaluate the performance of the proposed approach.<|reference_end|>
arxiv
@article{xia2008control-theoretic, title={Control-theoretic dynamic voltage scaling for embedded controllers}, author={Feng Xia, Yu-Chu Tian, Youxian Sun, Jinxiang Dong}, journal={arXiv preprint arXiv:0806.0132}, year={2008}, archivePrefix={arXiv}, eprint={0806.0132}, primaryClass={cs.OS} }
xia2008control-theoretic
arxiv-3907
0806.0134
Fuzzy Logic Control Based QoS Management in Wireless Sensor/Actuator Networks
<|reference_start|>Fuzzy Logic Control Based QoS Management in Wireless Sensor/Actuator Networks: Wireless sensor/actuator networks (WSANs) are emerging rapidly as a new generation of sensor networks. Despite intensive research in wireless sensor networks (WSNs), limited work has been found in the open literature in the field of WSANs. In particular, quality-of-service (QoS) management in WSANs remains an important issue yet to be investigated. As an attempt in this direction, this paper develops a fuzzy logic control based QoS management (FLC-QM) scheme for WSANs with constrained resources and in dynamic and unpredictable environments. Taking advantage of the feedback control technology, this scheme deals with the impact of unpredictable changes in traffic load on the QoS of WSANs. It utilizes a fuzzy logic controller inside each source sensor node to adapt sampling period to the deadline miss ratio associated with data transmission from the sensor to the actuator. The deadline miss ratio is maintained at a pre-determined desired level so that the required QoS can be achieved. The FLC-QM has the advantages of generality, scalability, and simplicity. Simulation results show that the FLC-QM can provide WSANs with QoS support.<|reference_end|>
arxiv
@article{xia2008fuzzy, title={Fuzzy Logic Control Based QoS Management in Wireless Sensor/Actuator Networks}, author={Feng Xia, Wenhong Zhao, Youxian Sun and Yu-Chu Tian}, journal={Sensors 2007, 7(12), 3179-3191}, year={2008}, archivePrefix={arXiv}, eprint={0806.0134}, primaryClass={cs.NI} }
xia2008fuzzy
arxiv-3908
0806.0142
Regularization of Invers Problem for M-Ary Channel
<|reference_start|>Regularization of Invers Problem for M-Ary Channel: The problem of computation of parameters of m-ary channel is considered. It is demonstrated that although the problem is ill-posed, it is possible turning of the parameters of the system and transform the problem to well-posed one.<|reference_end|>
arxiv
@article{filimonova2008regularization, title={Regularization of Invers Problem for M-Ary Channel}, author={N. A. Filimonova}, journal={arXiv preprint arXiv:0806.0142}, year={2008}, archivePrefix={arXiv}, eprint={0806.0142}, primaryClass={cs.IT math.IT} }
filimonova2008regularization
arxiv-3909
0806.0172
EuSpRIG TEAM work:Tools, Education, Audit, Management
<|reference_start|>EuSpRIG TEAM work:Tools, Education, Audit, Management: Research on spreadsheet errors began over fifteen years ago. During that time, there has been ample evidence demonstrating that spreadsheet errors are common and nontrivial. Quite simply, spreadsheet error rates are comparable to error rates in other human cognitive activities and are caused by fundamental limitations in human cognition, not mere sloppiness. Nor does ordinary "being careful" eliminate errors or reduce them to acceptable levels.<|reference_end|>
arxiv
@article{chadwick2008eusprig, title={EuSpRIG TEAM work:Tools, Education, Audit, Management}, author={David Chadwick}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 1-6 ISBN 1 86166 199 1}, year={2008}, archivePrefix={arXiv}, eprint={0806.0172}, primaryClass={cs.HC cs.CY} }
chadwick2008eusprig
arxiv-3910
0806.0182
Training Gamble leads to Corporate Grumble?
<|reference_start|>Training Gamble leads to Corporate Grumble?: Fifteen years of research studies have concluded unanimously that spreadsheet errors are both common and non-trivial. Now we must seek ways to reduce spreadsheet errors. Several approaches have been suggested, some of which are promising and others, while appealing because they are easy to do, are not likely to be effective. To date, only one technique, cell-by-cell code inspection, has been demonstrated to be effective. We need to conduct further research to determine the degree to which other techniques can reduce spreadsheet errors.<|reference_end|>
arxiv
@article{chadwick2008training, title={Training Gamble leads to Corporate Grumble?}, author={David R. Chadwick}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2002 1-11 ISBN 1 86166 182}, year={2008}, archivePrefix={arXiv}, eprint={0806.0182}, primaryClass={cs.HC cs.CY} }
chadwick2008training
arxiv-3911
0806.0189
Investigating the use of Software Agents to Reduce The Risk of Undetected Errors in Strategic Spreadsheet Applications
<|reference_start|>Investigating the use of Software Agents to Reduce The Risk of Undetected Errors in Strategic Spreadsheet Applications: There is an overlooked iceberg of problems in end user computing. Spreadsheets are developed by people who are very skilled in their main job function, be it finance, procurement, or production planning, but often have had no formal training in spreadsheet use. IT auditors focus on mainstream information systems but regard spreadsheets as user problems, outside their concerns. Internal auditors review processes, but not the tools that support decision making in these processes. This paper highlights the gaps between risk management and end user awareness in spreadsheet research. In addition the potential benefits of software agent technologies to the management of risk in spreadsheets are explored. This paper discusses the current research into end user computing and spreadsheet use awareness.<|reference_end|>
arxiv
@article{cleary2008investigating, title={Investigating the use of Software Agents to Reduce The Risk of Undetected Errors in Strategic Spreadsheet Applications}, author={Pat Cleary, Dr David Ball, Mukul Madahar, Simon Thorne, Christopher Gosling, Karen Fernandez}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 147-159 ISBN 1 86166 199 1}, year={2008}, archivePrefix={arXiv}, eprint={0806.0189}, primaryClass={cs.HC} }
cleary2008investigating
arxiv-3912
0806.0250
Checking the Quality of Clinical Guidelines using Automated Reasoning Tools
<|reference_start|>Checking the Quality of Clinical Guidelines using Automated Reasoning Tools: Requirements about the quality of clinical guidelines can be represented by schemata borrowed from the theory of abductive diagnosis, using temporal logic to model the time-oriented aspects expressed in a guideline. Previously, we have shown that these requirements can be verified using interactive theorem proving techniques. In this paper, we investigate how this approach can be mapped to the facilities of a resolution-based theorem prover, Otter, and a complementary program that searches for finite models of first-order statements, Mace. It is shown that the reasoning required for checking the quality of a guideline can be mapped to such fully automated theorem-proving facilities. The medical quality of an actual guideline concerning diabetes mellitus 2 is investigated in this way.<|reference_end|>
arxiv
@article{hommersom2008checking, title={Checking the Quality of Clinical Guidelines using Automated Reasoning Tools}, author={Arjen Hommersom, Peter J.F. Lucas, and Patrick van Bommel}, journal={arXiv preprint arXiv:0806.0250}, year={2008}, archivePrefix={arXiv}, eprint={0806.0250}, primaryClass={cs.AI cs.LO cs.SC} }
hommersom2008checking
arxiv-3913
0806.0253
On collinear sets in straight line drawings
<|reference_start|>On collinear sets in straight line drawings: We consider straight line drawings of a planar graph $G$ with possible edge crossings. The \emph{untangling problem} is to eliminate all edge crossings by moving as few vertices as possible to new positions. Let $fix(G)$ denote the maximum number of vertices that can be left fixed in the worst case. In the \emph{allocation problem}, we are given a planar graph $G$ on $n$ vertices together with an $n$-point set $X$ in the plane and have to draw $G$ without edge crossings so that as many vertices as possible are located in $X$. Let $fit(G)$ denote the maximum number of points fitting this purpose in the worst case. As $fix(G)\le fit(G)$, we are interested in upper bounds for the latter and lower bounds for the former parameter. For each $\epsilon>0$, we construct an infinite sequence of graphs with $fit(G)=O(n^{\sigma+\epsilon})$, where $\sigma<0.99$ is a known graph-theoretic constant, namely the shortness exponent for the class of cubic polyhedral graphs. To the best of our knowledge, this is the first example of graphs with $fit(G)=o(n)$. On the other hand, we prove that $fix(G)\ge\sqrt{n/30}$ for all $G$ with tree-width at most 2. This extends the lower bound obtained by Goaoc et al. [Discrete and Computational Geometry 42:542-569 (2009)] for outerplanar graphs. Our upper bound for $fit(G)$ is based on the fact that the constructed graphs can have only few collinear vertices in any crossing-free drawing. To prove the lower bound for $fix(G)$, we show that graphs of tree-width 2 admit drawings that have large sets of collinear vertices with some additional special properties.<|reference_end|>
arxiv
@article{ravsky2008on, title={On collinear sets in straight line drawings}, author={Alexander Ravsky and Oleg Verbitsky}, journal={arXiv preprint arXiv:0806.0253}, year={2008}, archivePrefix={arXiv}, eprint={0806.0253}, primaryClass={cs.CG cs.DM} }
ravsky2008on
arxiv-3914
0806.0282
Local approximation algorithms for a class of 0/1 max-min linear programs
<|reference_start|>Local approximation algorithms for a class of 0/1 max-min linear programs: We study the applicability of distributed, local algorithms to 0/1 max-min LPs where the objective is to maximise ${\min_k \sum_v c_{kv} x_v}$ subject to ${\sum_v a_{iv} x_v \le 1}$ for each $i$ and ${x_v \ge 0}$ for each $v$. Here $c_{kv} \in \{0,1\}$, $a_{iv} \in \{0,1\}$, and the support sets ${V_i = \{v : a_{iv} > 0 \}}$ and ${V_k = \{v : c_{kv}>0 \}}$ have bounded size; in particular, we study the case $|V_k| \le 2$. Each agent $v$ is responsible for choosing the value of $x_v$ based on information within its constant-size neighbourhood; the communication network is the hypergraph where the sets $V_k$ and $V_i$ constitute the hyperedges. We present a local approximation algorithm which achieves an approximation ratio arbitrarily close to the theoretical lower bound presented in prior work.<|reference_end|>
arxiv
@article{floréen2008local, title={Local approximation algorithms for a class of 0/1 max-min linear programs}, author={Patrik Flor'een, Marja Hassinen, Petteri Kaski, Jukka Suomela}, journal={arXiv preprint arXiv:0806.0282}, year={2008}, archivePrefix={arXiv}, eprint={0806.0282}, primaryClass={cs.DC} }
floréen2008local
arxiv-3915
0806.0283
Model of information diffusion
<|reference_start|>Model of information diffusion: The system of cellular automata, which expresses the process of dissemination and publication of the news among separate information resources, has been described. A bell-shaped dependence of news diffusion on internet-sources (web-sites) coheres well with a real behavior of thematic data flows, and at local time spans - with noted models, e.g., exponential and logistic ones.<|reference_end|>
arxiv
@article{lande2008model, title={Model of information diffusion}, author={D.V. Lande}, journal={arXiv preprint arXiv:0806.0283}, year={2008}, archivePrefix={arXiv}, eprint={0806.0283}, primaryClass={cs.IT math.IT} }
lande2008model
arxiv-3916
0806.0311
On the Probability of the Existence of Fixed-Size Components in Random Geometric Graphs
<|reference_start|>On the Probability of the Existence of Fixed-Size Components in Random Geometric Graphs: In this work we give precise asymptotic expressions on the probability of the existence of fixed-size components at the threshold of connectivity for random geometric graphs.<|reference_end|>
arxiv
@article{diaz2008on, title={On the Probability of the Existence of Fixed-Size Components in Random Geometric Graphs}, author={J. Diaz, D. Mitsche, X. Perez}, journal={arXiv preprint arXiv:0806.0311}, year={2008}, archivePrefix={arXiv}, eprint={0806.0311}, primaryClass={cs.DM} }
diaz2008on
arxiv-3917
0806.0314
GuiLiner: A Configurable and Extensible Graphical User Interface for Scientific Analysis and Simulation Software
<|reference_start|>GuiLiner: A Configurable and Extensible Graphical User Interface for Scientific Analysis and Simulation Software: The computer programs most users interact with daily are driven by a graphical user interface (GUI). However, many scientific applications are used with a command line interface (CLI) for the ease of development and increased flexibility this mode provides. Scientific application developers would benefit from being able to provide a GUI easily for their CLI programs, thus retaining the advantages of both modes of interaction. GuiLiner is a generic, extensible and flexible front-end designed to ``host'' a wide variety of data analysis or simulation programs. Scientific application developers who produce a correctly formatted XML file describing their program's options and some of its documentation can immediately use GuiLiner to produce a carefully implemented GUI for their analysis or simulation programs.<|reference_end|>
arxiv
@article{manoukis2008guiliner:, title={GuiLiner: A Configurable and Extensible Graphical User Interface for Scientific Analysis and Simulation Software}, author={N. C. Manoukis and E. C. Anderson}, journal={arXiv preprint arXiv:0806.0314}, year={2008}, archivePrefix={arXiv}, eprint={0806.0314}, primaryClass={cs.HC cs.SE} }
manoukis2008guiliner:
arxiv-3918
0806.0341
Succinct Greedy Graph Drawing in the Hyperbolic Plane
<|reference_start|>Succinct Greedy Graph Drawing in the Hyperbolic Plane: We describe an efficient method for drawing any n-vertex simple graph G in the hyperbolic plane. Our algorithm produces greedy drawings, which support greedy geometric routing, so that a message M between any pair of vertices may be routed geometrically, simply by having each vertex that receives M pass it along to any neighbor that is closer in the hyperbolic metric to the message's eventual destination. More importantly, for networking applications, our algorithm produces succinct drawings, in that each of the vertex positions in one of our embeddings can be represented using O(log n) bits and the calculation of which neighbor to send a message to may be performed efficiently using these representations. These properties are useful, for example, for routing in sensor networks, where storage and bandwidth are limited.<|reference_end|>
arxiv
@article{eppstein2008succinct, title={Succinct Greedy Graph Drawing in the Hyperbolic Plane}, author={David Eppstein and Michael T. Goodrich}, journal={arXiv preprint arXiv:0806.0341}, year={2008}, archivePrefix={arXiv}, eprint={0806.0341}, primaryClass={cs.CG} }
eppstein2008succinct
arxiv-3919
0806.0398
Metric Structures and Probabilistic Computation
<|reference_start|>Metric Structures and Probabilistic Computation: Continuous first-order logic is used to apply model-theoretic analysis to analytic structures (e.g. Hilbert spaces, Banach spaces, probability spaces, etc.). Classical computable model theory is used to examine the algorithmic structure of mathematical objects that can be described in classical first-order logic. The present paper shows that probabilistic computation (sometimes called randomized computation) can play an analogous role for structures described in continuous first-order logic. The main result of this paper is an effective completeness theorem, showing that every decidable continuous first-order theory has a probabilistically decidable model. Later sections give examples of the application of this framework to various classes of structures, and to some problems of computational complexity theory.<|reference_end|>
arxiv
@article{calvert2008metric, title={Metric Structures and Probabilistic Computation}, author={Wesley Calvert}, journal={arXiv preprint arXiv:0806.0398}, year={2008}, archivePrefix={arXiv}, eprint={0806.0398}, primaryClass={math.LO cs.LO math.FA} }
calvert2008metric
arxiv-3920
0806.0473
Kinematic Analysis of the vertebra of an eel like robot
<|reference_start|>Kinematic Analysis of the vertebra of an eel like robot: The kinematic analysis of a spherical wrist with parallel architecture is the object of this article. This study is part of a larger French project, which aims to design and to build an eel like robot to imitate the eel swimming. To implement direct and inverse kinematics on the control law of the prototype, we need to evaluate the workspace without any collisions between the different bodies. The tilt and torsion parameters are used to represent the workspace.<|reference_end|>
arxiv
@article{chablat2008kinematic, title={Kinematic Analysis of the vertebra of an eel like robot}, author={Damien Chablat (IRCCyN)}, journal={Dans ASME Design Engineering Technical Conferences - 32nd Annual Mechanisms and Robotics Conference (MR), New-York : \'Etats-Unis d'Am\'erique (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0806.0473}, primaryClass={cs.RO physics.class-ph} }
chablat2008kinematic
arxiv-3921
0806.0478
Subresultants in Recursive Polynomial Remainder Sequence
<|reference_start|>Subresultants in Recursive Polynomial Remainder Sequence: We introduce concepts of "recursive polynomial remainder sequence (PRS)" and "recursive subresultant," and investigate their properties. In calculating PRS, if there exists the GCD (greatest common divisor) of initial polynomials, we calculate "recursively" with new PRS for the GCD and its derivative, until a constant is derived. We call such a PRS a recursive PRS. We define recursive subresultants to be determinants representing the coefficients in recursive PRS by coefficients of initial polynomials. Finally, we discuss usage of recursive subresultants in approximate algebraic computation, which motivates the present work.<|reference_end|>
arxiv
@article{terui2008subresultants, title={Subresultants in Recursive Polynomial Remainder Sequence}, author={Akira Terui}, journal={Proceedings of The 6th International Workshop on Computer Algebra in Scientific Computing: CASC 2003, Institute for Informatics, Technische Universitat Munchen, Garching, Germanay, 2003, 363-375}, year={2008}, archivePrefix={arXiv}, eprint={0806.0478}, primaryClass={math.AC cs.SC} }
terui2008subresultants
arxiv-3922
0806.0488
Recursive Polynomial Remainder Sequence and the Nested Subresultants
<|reference_start|>Recursive Polynomial Remainder Sequence and the Nested Subresultants: We give two new expressions of subresultants, nested subresultant and reduced nested subresultant, for the recursive polynomial remainder sequence (PRS) which has been introduced by the author. The reduced nested subresultant reduces the size of the subresultant matrix drastically compared with the recursive subresultant proposed by the authors before, hence it is much more useful for investigation of the recursive PRS. Finally, we discuss usage of the reduced nested subresultant in approximate algebraic computation, which motivates the present work.<|reference_end|>
arxiv
@article{terui2008recursive, title={Recursive Polynomial Remainder Sequence and the Nested Subresultants}, author={Akira Terui}, journal={Computer Algebra in Scientific Computing (Proc. CASC 2005), Lecture Notes in Computer Science 3718, Springer, 2005, 445-456}, year={2008}, doi={10.1007/11555964_38}, archivePrefix={arXiv}, eprint={0806.0488}, primaryClass={math.AC cs.SC} }
terui2008recursive
arxiv-3923
0806.0495
Recursive Polynomial Remainder Sequence and its Subresultants
<|reference_start|>Recursive Polynomial Remainder Sequence and its Subresultants: We introduce concepts of "recursive polynomial remainder sequence (PRS)" and "recursive subresultant," along with investigation of their properties. A recursive PRS is defined as, if there exists the GCD (greatest common divisor) of initial polynomials, a sequence of PRSs calculated "recursively" for the GCD and its derivative until a constant is derived, and recursive subresultants are defined by determinants representing the coefficients in recursive PRS as functions of coefficients of initial polynomials. We give three different constructions of subresultant matrices for recursive subresultants; while the first one is built-up just with previously defined matrices thus the size of the matrix increases fast as the recursion deepens, the last one reduces the size of the matrix drastically by the Gaussian elimination on the second one which has a "nested" expression, i.e. a Sylvester matrix whose elements are themselves determinants.<|reference_end|>
arxiv
@article{terui2008recursive, title={Recursive Polynomial Remainder Sequence and its Subresultants}, author={Akira Terui}, journal={Journal of Algebra, Vol. 320, No. 2, pp. 633-659, 2008}, year={2008}, doi={10.1016/j.jalgebra.2007.12.023}, archivePrefix={arXiv}, eprint={0806.0495}, primaryClass={math.AC cs.SC} }
terui2008recursive
arxiv-3924
0806.0519
Cluster Development and Knowledge Exchange in Supply Chain
<|reference_start|>Cluster Development and Knowledge Exchange in Supply Chain: Industry cluster and supply chain are in focus of every countries which rely on knowledge-based economy. Both focus on improving the competitiveness of firm in the industry in the different aspect. This paper tries to illustrate how the industry cluster can increase the supply chain performance. Then, the proposed methodology concentrates on the collaboration and knowledge exchange in supply chain. For improving the capability of the proposed methodology, information technology is applied to facilitate the communication and the exchange of knowledge between the actors of the supply chain within the cluster. The supply chain of French stool producer was used as a case study to validate the methodology and to demonstrate the result of the study.<|reference_end|>
arxiv
@article{sureephong2008cluster, title={Cluster Development and Knowledge Exchange in Supply Chain}, author={Pradorn Sureephong (LIESP, CAMT), Nopasit Chakpitak (CAMT), Laurent Buzon (LIESP), Abdelaziz Bouras (LIESP)}, journal={The Proceeding of International conference on Software Knowledge Information Management and Applications (SKIMA 2008), Katmandu : N\'epal (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0806.0519}, primaryClass={cs.CY} }
sureephong2008cluster
arxiv-3925
0806.0526
An Ontology-based Knowledge Management System for Industry Clusters
<|reference_start|>An Ontology-based Knowledge Management System for Industry Clusters: Knowledge-based economy forces companies in the nation to group together as a cluster in order to maintain their competitiveness in the world market. The cluster development relies on two key success factors which are knowledge sharing and collaboration between the actors in the cluster. Thus, our study tries to propose knowledge management system to support knowledge management activities within the cluster. To achieve the objectives of this study, ontology takes a very important role in knowledge management process in various ways; such as building reusable and faster knowledge-bases, better way for representing the knowledge explicitly. However, creating and representing ontology create difficulties to organization due to the ambiguity and unstructured of source of knowledge. Therefore, the objectives of this paper are to propose the methodology to create and represent ontology for the organization development by using knowledge engineering approach. The handicraft cluster in Thailand is used as a case study to illustrate our proposed methodology.<|reference_end|>
arxiv
@article{sureephong2008an, title={An Ontology-based Knowledge Management System for Industry Clusters}, author={Pradorn Sureephong (LIESP, CAMT), Nopasit Chakpitak (CAMT), Yacine Ouzrout (LIESP), Abdelaziz Bouras (LIESP)}, journal={Dans Proceeding of International Conference on Advanced Design and Manufacture - International Conference on Advanced Design and Manufacture (ICADAM 2008), Sanya : Chine (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0806.0526}, primaryClass={cs.AI} }
sureephong2008an
arxiv-3926
0806.0535
Sincere-Strategy Preference-Based Approval Voting Fully Resists Constructive Control and Broadly Resists Destructive Control
<|reference_start|>Sincere-Strategy Preference-Based Approval Voting Fully Resists Constructive Control and Broadly Resists Destructive Control: We study sincere-strategy preference-based approval voting (SP-AV), a system proposed by Brams and Sanver [Electoral Studies, 25(2):287-305, 2006], and here adjusted so as to coerce admissibility of the votes (rather than excluding inadmissible votes a priori), with respect to procedural control. In such control scenarios, an external agent seeks to change the outcome of an election via actions such as adding/deleting/partitioning either candidates or voters. SP-AV combines the voters' preference rankings with their approvals of candidates, where in elections with at least two candidates the voters' approval strategies are adjusted--if needed--to approve of their most-preferred candidate and to disapprove of their least-preferred candidate. This rule coerces admissibility of the votes even in the presence of control actions, and hybridizes, in effect, approval with pluralitiy voting. We prove that this system is computationally resistant (i.e., the corresponding control problems are NP-hard) to 19 out of 22 types of constructive and destructive control. Thus, SP-AV has more resistances to control than is currently known for any other natural voting system with a polynomial-time winner problem. In particular, SP-AV is (after Copeland voting, see Faliszewski et al. [AAIM-2008, Springer LNCS 5034, pp. 165-176, 2008]) the second natural voting system with an easy winner-determination procedure that is known to have full resistance to constructive control, and unlike Copeland voting it in addition displays broad resistance to destructive control.<|reference_end|>
arxiv
@article{erdelyi2008sincere-strategy, title={Sincere-Strategy Preference-Based Approval Voting Fully Resists Constructive Control and Broadly Resists Destructive Control}, author={Gabor Erdelyi, Markus Nowak, and Joerg Rothe}, journal={arXiv preprint arXiv:0806.0535}, year={2008}, archivePrefix={arXiv}, eprint={0806.0535}, primaryClass={cs.GT cs.CC cs.MA} }
erdelyi2008sincere-strategy
arxiv-3927
0806.0557
An efficient and provably secure arbitrated quantum signature scheme
<|reference_start|>An efficient and provably secure arbitrated quantum signature scheme: In this paper, an efficient arbitrated quantum signature scheme is proposed by combining quantum cryptographic techniques and some ideas in classical cryptography. In the presented scheme, the signatory and the receiver can share a long-term secret key with the arbitrator by utilizing the key together with a random number. While in previous quantum signature schemes, the key shared between the signatory and the arbitrator or between the receiver and the arbitrator could be used only once, and thus each time when a signatory needs to sign, the signatory and the receiver have to obtain a new key shared with the arbitrator through a quantum key distribution protocol. Detailed theoretical analysis shows that the proposed scheme is efficient and provably secure.<|reference_end|>
arxiv
@article{li2008an, title={An efficient and provably secure arbitrated quantum signature scheme}, author={Qin Li, Chengqing Li, Chunhui Wu, Dongyang Long, Changji Wang}, journal={arXiv preprint arXiv:0806.0557}, year={2008}, archivePrefix={arXiv}, eprint={0806.0557}, primaryClass={quant-ph cs.CR} }
li2008an
arxiv-3928
0806.0562
Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes
<|reference_start|>Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes: This paper deals with the problem of universal lossless coding on a countable infinite alphabet. It focuses on some classes of sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes with exponent $\alpha$. The minimax redundancy of exponentially decreasing envelope classes is proved to be equivalent to $\frac{1}{4 \alpha \log e} \log^2 n$. Then a coding strategy is proposed, with a Bayes redundancy equivalent to the maximin redundancy. At last, an adaptive algorithm is provided, whose redundancy is equivalent to the minimax redundancy<|reference_end|>
arxiv
@article{bontemps2008universal, title={Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes}, author={Dominique Bontemps (LM-Orsay)}, journal={IEEE Transactions on Information Theory 57, 3 (2011) 1466 - 1478}, year={2008}, doi={10.1109/TIT.2010.2103831}, archivePrefix={arXiv}, eprint={0806.0562}, primaryClass={cs.IT math.IT} }
bontemps2008universal
arxiv-3929
0806.0576
Steganographic Routing in Multi Agent System Environment
<|reference_start|>Steganographic Routing in Multi Agent System Environment: In this paper we present an idea of trusted communication platform for Multi-Agent Systems (MAS) called TrustMAS. Based on analysis of routing protocols suitable for MAS we have designed a new proactive hidden routing. Proposed steg-agents discovery procedure, as well as further routes updates and hidden communication, are cryptographically independent. Steganographic exchange can cover heterogeneous and geographically outlying environments using available cross-layer covert channels. Finally we have specified rules that agents have to follow to benefit the TrustMAS distributed router platform.<|reference_end|>
arxiv
@article{szczypiorski2008steganographic, title={Steganographic Routing in Multi Agent System Environment}, author={Krzysztof Szczypiorski, Igor Margasinski, Wojciech Mazurczyk}, journal={"Secured Information Systems" of Journal of Information Assurance and Security (JIAS), Dynamic Publishers Inc., Atlanta, GA 30362, USA, Volume 2, Issue 3, September 2007, pp. 235-243, ISSN 1554-1010}, year={2008}, archivePrefix={arXiv}, eprint={0806.0576}, primaryClass={cs.CR cs.MA} }
szczypiorski2008steganographic
arxiv-3930
0806.0579
Multirate Synchronous Sampling of Sparse Multiband Signals
<|reference_start|>Multirate Synchronous Sampling of Sparse Multiband Signals: Recent advances in optical systems make them ideal for undersampling multiband signals that have high bandwidths. In this paper we propose a new scheme for reconstructing multiband sparse signals using a small number of sampling channels. The scheme, which we call synchronous multirate sampling (SMRS), entails gathering samples synchronously at few different rates whose sum is significantly lower than the Nyquist sampling rate. The signals are reconstructed by solving a system of linear equations. We have demonstrated an accurate and robust reconstruction of signals using a small number of sampling channels that operate at relatively high rates. Sampling at higher rates increases the signal to noise ratio in samples. The SMRS scheme enables a significant reduction in the number of channels required when the sampling rate increases. We have demonstrated, using only three sampling channels, an accurate sampling and reconstruction of 4 real signals (8 bands). The matrices that are used to reconstruct the signals in the SMRS scheme also have low condition numbers. This indicates that the SMRS scheme is robust to noise in signals. The success of the SMRS scheme relies on the assumption that the sampled signals are sparse. As a result most of the sampled spectrum may be unaliased in at least one of the sampling channels. This is in contrast to multicoset sampling schemes in which an alias in one channel is equivalent to an alias in all channels. We have demonstrated that the SMRS scheme obtains similar performance using 3 sampling channels and a total sampling rate 8 times the Landau rate to an implementation of a multicoset sampling scheme that uses 6 sampling channels with a total sampling rate of 13 times the Landau rate.<|reference_end|>
arxiv
@article{fleyer2008multirate, title={Multirate Synchronous Sampling of Sparse Multiband Signals}, author={Michael Fleyer, Amir Rosenthal, Alex Linden, and Moshe Horowitz}, journal={arXiv preprint arXiv:0806.0579}, year={2008}, archivePrefix={arXiv}, eprint={0806.0579}, primaryClass={cs.IT math.IT} }
fleyer2008multirate
arxiv-3931
0806.0604
Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement matrices
<|reference_start|>Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement matrices: We study the information-theoretic limits of exactly recovering the support of a sparse signal using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations $n$, the ambient signal dimension $p$, and the signal sparsity $k$ are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields sharp characterizations of when the optimal decoder can recover a signal for various scalings of the sparsity $k$ and sample size $n$, including the important special case of linear sparsity ($k = \Theta(p)$) using a linear scaling of observations ($n = \Theta(p)$). Our second contribution is to prove necessary conditions on the number of observations $n$ required for asymptotically reliable recovery using a class of $\gamma$-sparsified measurement matrices, where the measurement sparsity $\gamma(n, p, k) \in (0,1]$ corresponds to the fraction of non-zero entries per row. Our analysis allows general scaling of the quadruplet $(n, p, k, \gamma)$, and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.<|reference_end|>
arxiv
@article{wang2008information-theoretic, title={Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement matrices}, author={Wei Wang, Martin J. Wainwright, Kannan Ramchandran}, journal={arXiv preprint arXiv:0806.0604}, year={2008}, archivePrefix={arXiv}, eprint={0806.0604}, primaryClass={math.ST cs.IT math.IT stat.TH} }
wang2008information-theoretic
arxiv-3932
0806.0676
On Peak versus Average Interference Power Constraints for Protecting Primary Users in Cognitive Radio Networks
<|reference_start|>On Peak versus Average Interference Power Constraints for Protecting Primary Users in Cognitive Radio Networks: This paper considers spectrum sharing for wireless communication between a cognitive radio (CR) link and a primary radio (PR) link. It is assumed that the CR protects the PR transmission by applying the so-called interference-temperature constraint, whereby the CR is allowed to transmit regardless of the PR's on/off status provided that the resultant interference power level at the PR receiver is kept below some predefined threshold. For the fading PR and CR channels, the interference-power constraint at the PR receiver is usually one of the following two types: One is to regulate the average interference power (AIP) over all the fading states, while the other is to limit the peak interference power (PIP) at each fading state. From the CR's perspective, given the same average and peak power threshold, the AIP constraint is more favorable than the PIP counterpart because of its more flexibility for dynamically allocating transmit powers over the fading states. On the contrary, from the perspective of protecting the PR, the more restrictive PIP constraint appears at a first glance to be a better option than the AIP. Some surprisingly, this paper shows that in terms of various forms of capacity limits achievable for the PR fading channel, e.g., the ergodic and outage capacities, the AIP constraint is also superior over the PIP. This result is based upon an interesting interference diversity phenomenon, i.e., randomized interference powers over the fading states in the AIP case are more advantageous over deterministic ones in the PIP case for minimizing the resultant PR capacity losses. Therefore, the AIP constraint results in larger fading channel capacities than the PIP for both the CR and PR transmissions.<|reference_end|>
arxiv
@article{zhang2008on, title={On Peak versus Average Interference Power Constraints for Protecting Primary Users in Cognitive Radio Networks}, author={Rui Zhang}, journal={arXiv preprint arXiv:0806.0676}, year={2008}, archivePrefix={arXiv}, eprint={0806.0676}, primaryClass={cs.IT math.IT} }
zhang2008on
arxiv-3933
0806.0689
Directional Cross Diamond Search Algorithm for Fast Block Motion Estimation
<|reference_start|>Directional Cross Diamond Search Algorithm for Fast Block Motion Estimation: In block-matching motion estimation (BMME), the search patterns have a significant impact on the algorithm's performance, both the search speed and the search quality. The search pattern should be designed to fit the motion vector probability (MVP) distribution characteristics of the real-world sequences. In this paper, we build a directional model of MVP distribution to describe the directional-center-biased characteristic of the MVP distribution and the directional characteristics of the conditional MVP distribution more exactly based on the detailed statistical data of motion vectors of eighteen popular sequences. Three directional search patterns are firstly designed by utilizing the directional characteristics and they are the smallest search patterns among the popular ones. A new algorithm is proposed using the horizontal cross search pattern as the initial step and the horizontal/vertical diamond search pattern as the subsequent step for the fast BMME, which is called the directional cross diamond search (DCDS) algorithm. The DCDS algorithm can obtain the motion vector with fewer search points than CDS, DS or HEXBS while maintaining the similar or even better search quality. The gain on speedup of DCDS over CDS or DS can be up to 54.9%. The simulation results show that DCDS is efficient, effective and robust, and it can always give the faster search speed on different sequences than other fast block-matching algorithm in common use.<|reference_end|>
arxiv
@article{jia2008directional, title={Directional Cross Diamond Search Algorithm for Fast Block Motion Estimation}, author={Hongjun Jia, Li Zhang}, journal={arXiv preprint arXiv:0806.0689}, year={2008}, archivePrefix={arXiv}, eprint={0806.0689}, primaryClass={cs.CV} }
jia2008directional
arxiv-3934
0806.0740
Modeling And Simulation Of Prolate Dual-Spin Satellite Dynamics In An Inclined Elliptical Orbit: Case Study Of Palapa B2R Satellite
<|reference_start|>Modeling And Simulation Of Prolate Dual-Spin Satellite Dynamics In An Inclined Elliptical Orbit: Case Study Of Palapa B2R Satellite: In response to the interest to re-use Palapa B2R satellite nearing its End of Life (EOL) time, an idea to incline the satellite orbit in order to cover a new region has emerged in the recent years. As a prolate dual-spin vehicle, Palapa B2R has to be stabilized against its internal energy dissipation effect. This work is focused on analyzing the dynamics of the reusable satellite in its inclined orbit. The study discusses in particular the stability of the prolate dual-spin satellite under the effect of perturbed field of gravitation due to the inclination of its elliptical orbit. Palapa B2R physical data was substituted into the dual-spin's equation of motion. The coefficient of zonal harmonics J2 was induced into the gravity-gradient moment term that affects the satellite attitude. The satellite's motion and attitude were then simulated in the perturbed gravitational field by J2, with the variation of orbit's eccentricity and inclination. The analysis of the satellite dynamics and its stability was conducted for designing a control system for the vehicle in its new inclined orbit.<|reference_end|>
arxiv
@article{muliadi2008modeling, title={Modeling And Simulation Of Prolate Dual-Spin Satellite Dynamics In An Inclined Elliptical Orbit: Case Study Of Palapa B2R Satellite}, author={J. Muliadi, S.D. Jenie, A. Budiyono}, journal={Symposium on Aerospace Science and Technology, Jakarta, 2005}, year={2008}, archivePrefix={arXiv}, eprint={0806.0740}, primaryClass={cs.RO} }
muliadi2008modeling
arxiv-3935
0806.0743
Onboard Multivariable Controller Design for a Small Scale Helicopter Using Coefficient Diagram Method
<|reference_start|>Onboard Multivariable Controller Design for a Small Scale Helicopter Using Coefficient Diagram Method: A mini scale helicopter exhibits not only increased sensitivity to control inputs and disturbances, but also higher bandwidth of its dynamics. These properties make model helicopters, as a flying robot, more difficult to control. The dynamics model accuracy will determine the performance of the designed controller. It is attractive in this regards to have a controller that can accommodate the unmodeled dynamics or parameter changes and perform well in such situations. Coefficient Diagram Method (CDM) is chosen as the candidate to synthesize such a controller due to its simplicity and convenience in demonstrating integrated performance measures including equivalent time constant, stability indices and robustness. In this study, CDM is implemented for a design of multivariable controller for a small scale helicopter during hover and cruise flight. In the synthesis of MIMO CDM, good design common sense based on hands-on experience is necessary. The low level controller algorithm is designed as part of hybrid supervisory control architecture to be implemented on an onboard computer system. Its feasibility and performance are evaluated based on its robustness, desired time domain system responses and compliance to hard-real time requirements.<|reference_end|>
arxiv
@article{budiyono2008onboard, title={Onboard Multivariable Controller Design for a Small Scale Helicopter Using Coefficient Diagram Method}, author={A. Budiyono}, journal={Proceedings of the ICEST, Seoul, Korea, May 2005}, year={2008}, archivePrefix={arXiv}, eprint={0806.0743}, primaryClass={cs.RO} }
budiyono2008onboard
arxiv-3936
0806.0784
Collaborative model of interaction and Unmanned Vehicle Systems' interface
<|reference_start|>Collaborative model of interaction and Unmanned Vehicle Systems' interface: The interface for the next generation of Unmanned Vehicle Systems should be an interface with multi-modal displays and input controls. Then, the role of the interface will not be restricted to be a support of the interactions between the ground operator and vehicles. Interface must take part in the interaction management too. In this paper, we show that recent works in pragmatics and philosophy provide a suitable theoretical framework for the next generation of UV System's interface. We concentrate on two main aspects of the collaborative model of interaction based on acceptance: multi-strategy approach for communicative act generation and interpretation and communicative alignment.<|reference_end|>
arxiv
@article{saget2008collaborative, title={Collaborative model of interaction and Unmanned Vehicle Systems' interface}, author={Sylvie Saget, Francois Legras, Gilles Coppin}, journal={Dans HCP workshop on "Supervisory Control in Critical Systems Management" - 3rd International Conference on Human Centered Processes (HCP-2008), Delft : Pays-Bas (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0806.0784}, primaryClass={cs.AI cs.HC cs.MA} }
saget2008collaborative
arxiv-3937
0806.0837
Upper and Lower Bounds on Black-Box Steganography
<|reference_start|>Upper and Lower Bounds on Black-Box Steganography: We study the limitations of steganography when the sender is not using any properties of the underlying channel beyond its entropy and the ability to sample from it. On the negative side, we show that the number of samples the sender must obtain from the channel is exponential in the rate of the stegosystem. On the positive side, we present the first secret-key stegosystem that essentially matches this lower bound regardless of the entropy of the underlying channel. Furthermore, for high-entropy channels, we present the first secret-key stegosystem that matches this lower bound statelessly (i.e., without requiring synchronized state between sender and receiver).<|reference_end|>
arxiv
@article{dedić2008upper, title={Upper and Lower Bounds on Black-Box Steganography}, author={Nenad Dedi'c, Gene Itkis, Leonid Reyzin, Scott Russell}, journal={arXiv preprint arXiv:0806.0837}, year={2008}, doi={10.1007/s00145-008-9020-3}, archivePrefix={arXiv}, eprint={0806.0837}, primaryClass={cs.CR cs.CC cs.IT math.IT} }
dedić2008upper
arxiv-3938
0806.0838
Performance Analysis of Multiple Antenna Multi-User Detection
<|reference_start|>Performance Analysis of Multiple Antenna Multi-User Detection: We derive the diversity order of some multiple antenna multi-user cancellation and detection schemes. The common property of these detection methods is the usage of Alamouti and quasi-orthogonal space-time block codes. For detecting $J$ users each having $N$ transmit antennas, these schemes require only $J$ antennas at the receiver. Our analysis shows that when having $M$ receive antennas, the array-processing schemes provide the diversity order of $N(M-J+1)$. In addition, our results prove that regardless of the number of users or receive antennas, when using maximum-likelihood decoding we get the full transmit and receive diversities, i.e. $NM$, similar to the no-interference scenario.<|reference_end|>
arxiv
@article{kazemitabar2008performance, title={Performance Analysis of Multiple Antenna Multi-User Detection}, author={Javad Kazemitabar and Hamid Jafarkhani}, journal={arXiv preprint arXiv:0806.0838}, year={2008}, archivePrefix={arXiv}, eprint={0806.0838}, primaryClass={cs.IT math.IT} }
kazemitabar2008performance
arxiv-3939
0806.0840
A Dynamic Programming Framework for Combinatorial Optimization Problems on Graphs with Bounded Pathwidth
<|reference_start|>A Dynamic Programming Framework for Combinatorial Optimization Problems on Graphs with Bounded Pathwidth: In this paper we present an algorithmic framework for solving a class of combinatorial optimization problems on graphs with bounded pathwidth. The problems are NP-hard in general, but solvable in linear time on this type of graphs. The problems are relevant for assessing network reliability and improving the network's performance and fault tolerance. The main technique considered in this paper is dynamic programming.<|reference_end|>
arxiv
@article{andreica2008a, title={A Dynamic Programming Framework for Combinatorial Optimization Problems on Graphs with Bounded Pathwidth}, author={Mugurel Ionut Andreica}, journal={arXiv preprint arXiv:0806.0840}, year={2008}, archivePrefix={arXiv}, eprint={0806.0840}, primaryClass={cs.DS cs.DM} }
andreica2008a
arxiv-3940
0806.0860
On the Security of Liaw et al's Scheme
<|reference_start|>On the Security of Liaw et al's Scheme: Recently, Liaw et al. proposed a remote user authentication scheme using smartcards. They claimed a number of features of their scheme, e.g. a dictionary of verification tables is not required to authenticate users; users can choose their password freely; mutual authentication is provided between the user and the remote system; the communication cost and the computational cost are very low; users can update their password after the registration phase; a session key agreed by the user and the remote system is generated in every session; and the nonce-based scheme which does not require a timestamp (to solve the serious time synchronization problem) etc. In this paper We show that Liaw et al.'s scheme does not stand with various security requirements and is completely insecure. Keywords: Authentication, Smartcards, Remote system, Attack.<|reference_end|>
arxiv
@article{awasthi2008on, title={On the Security of Liaw et al.'s Scheme}, author={Amit K Awasthi}, journal={arXiv preprint arXiv:0806.0860}, year={2008}, archivePrefix={arXiv}, eprint={0806.0860}, primaryClass={cs.CR} }
awasthi2008on
arxiv-3941
0806.0870
The Euler-Poincare theory of Metamorphosis
<|reference_start|>The Euler-Poincare theory of Metamorphosis: In the pattern matching approach to imaging science, the process of ``metamorphosis'' is template matching with dynamical templates. Here, we recast the metamorphosis equations of into the Euler-Poincare variational framework of and show that the metamorphosis equations contain the equations for a perfect complex fluid \cite{Ho2002}. This result connects the ideas underlying the process of metamorphosis in image matching to the physical concept of order parameter in the theory of complex fluids. After developing the general theory, we reinterpret various examples, including point set, image and density metamorphosis. We finally discuss the issue of matching measures with metamorphosis, for which we provide existence theorems for the initial and boundary value problems.<|reference_end|>
arxiv
@article{holm2008the, title={The Euler-Poincare theory of Metamorphosis}, author={Darryl D. Holm, Alain Trouve and Laurent Younes}, journal={arXiv preprint arXiv:0806.0870}, year={2008}, archivePrefix={arXiv}, eprint={0806.0870}, primaryClass={cs.CV nlin.CD} }
holm2008the
arxiv-3942
0806.0874
Towards sustainable transport: wireless detection of passenger trips on public transport buses
<|reference_start|>Towards sustainable transport: wireless detection of passenger trips on public transport buses: An important problem in creating efficient public transport is obtaining data about the set of trips that passengers make, usually referred to as an Origin/Destination (OD) matrix. Obtaining this data is problematic and expensive in general, especially in the case of buses because on-board ticketing systems do not record where and when passengers get off a bus. In this paper we describe a novel and inexpensive system that uses off-the-shelf Bluetooth hardware to accurately record passenger journeys. Here we show how our system can be used to derive passenger OD matrices, and additionally we show how our data can be used to further improve public transport services.<|reference_end|>
arxiv
@article{kostakos2008towards, title={Towards sustainable transport: wireless detection of passenger trips on public transport buses}, author={Vassilis Kostakos}, journal={Personal and Ubiquitous Computing, 17(8):1807-1816, 2013}, year={2008}, doi={10.1007/s00779-013-0652-4}, archivePrefix={arXiv}, eprint={0806.0874}, primaryClass={cs.CY} }
kostakos2008towards
arxiv-3943
0806.0899
A Nonparametric Approach to 3D Shape Analysis from Digital Camera Images - I in Memory of WP Dayawansa
<|reference_start|>A Nonparametric Approach to 3D Shape Analysis from Digital Camera Images - I in Memory of WP Dayawansa: In this article, for the first time, one develops a nonparametric methodology for an analysis of shapes of configurations of landmarks on real 3D objects from regular camera photographs, thus making 3D shape analysis very accessible. A fundamental result in computer vision by Faugeras (1992), Hartley, Gupta and Chang (1992) is that generically, a finite 3D configuration of points can be retrieved up to a projective transformation, from corresponding configurations in a pair of camera images. Consequently, the projective shape of a 3D configuration can be retrieved from two of its planar views. Given the inherent registration errors, the 3D projective shape can be estimated from a sample of photos of the scene containing that configuration. Projective shapes are here regarded as points on projective shape manifolds. Using large sample and nonparametric bootstrap methodology for extrinsic means on manifolds, one gives confidence regions and tests for the mean projective shape of a 3D configuration from its 2D camera images.<|reference_end|>
arxiv
@article{patrangenaru2008a, title={A Nonparametric Approach to 3D Shape Analysis from Digital Camera Images - I. in Memory of W.P. Dayawansa}, author={V. Patrangenaru, X. Liu, S. Sugathadasa}, journal={arXiv preprint arXiv:0806.0899}, year={2008}, archivePrefix={arXiv}, eprint={0806.0899}, primaryClass={stat.ME cs.CV math.ST stat.TH} }
patrangenaru2008a
arxiv-3944
0806.0903
On the Capacity of Gaussian Relay Channels
<|reference_start|>On the Capacity of Gaussian Relay Channels: This paper has been withdrawn due to that the same conclusion has been proposed before.<|reference_end|>
arxiv
@article{huang2008on, title={On the Capacity of Gaussian Relay Channels}, author={Shao-Lun Huang, Ming-Yang Chen, Kwang-Cheng Chen, John M. Cioffi}, journal={arXiv preprint arXiv:0806.0903}, year={2008}, archivePrefix={arXiv}, eprint={0806.0903}, primaryClass={cs.IT math.IT} }
huang2008on
arxiv-3945
0806.0905
Channel Capacity Limits of Cognitive Radio in Asymmetric Fading Environments
<|reference_start|>Channel Capacity Limits of Cognitive Radio in Asymmetric Fading Environments: Cognitive radio technology is an innovative radio design concept which aims to increase spectrum utilization by exploiting unused spectrum in dynamically changing environments. By extending previous results, we investigate the capacity gains achievable with this dynamic spectrum approach in asymmetric fading channels. More specifically, we allow the secondary-to-primary and secondary-to-secondary user channels to undergo Rayleigh or Rician fading, with arbitrary link power. In order to compute the capacity, we derive the distributions of ratios of Rayleigh and Rician variables. Compared to the symmetric fading scenario, our results indicate several interesting features of the capacity behaviour under both average and peak received power constraints. Finally, the impact of multiple primary users on the capacity under asymmetric fading has also been studied.<|reference_end|>
arxiv
@article{suraweera2008channel, title={Channel Capacity Limits of Cognitive Radio in Asymmetric Fading Environments}, author={Himal A. Suraweera, Jason Gao, Peter J. Smith, Mansoor Shafi and Michael Faulkner}, journal={arXiv preprint arXiv:0806.0905}, year={2008}, doi={10.1109/ICC.2008.760}, archivePrefix={arXiv}, eprint={0806.0905}, primaryClass={cs.IT math.IT} }
suraweera2008channel
arxiv-3946
0806.0909
Outage and Local Throughput and Capacity of Random Wireless Networks
<|reference_start|>Outage and Local Throughput and Capacity of Random Wireless Networks: Outage probabilities and single-hop throughput are two important performance metrics that have been evaluated for certain specific types of wireless networks. However, there is a lack of comprehensive results for larger classes of networks, and there is no systematic approach that permits the convenient comparison of the performance of networks with different geometries and levels of randomness. The uncertainty cube is introduced to categorize the uncertainty present in a network. The three axes of the cube represent the three main potential sources of uncertainty in interference-limited networks: the node distribution, the channel gains (fading), and the channel access (set of transmitting nodes). For the performance analysis, a new parameter, the so-called {\em spatial contention}, is defined. It measures the slope of the outage probability in an ALOHA network as a function of the transmit probability $p$ at $p=0$. Outage is defined as the event that the signal-to-interference ratio (SIR) is below a certain threshold in a given time slot. It is shown that the spatial contention is sufficient to characterize outage and throughput in large classes of wireless networks, corresponding to different positions on the uncertainty cube. Existing results are placed in this framework, and new ones are derived. Further, interpreting the outage probability as the SIR distribution, the ergodic capacity of unit-distance links is determined and compared to the throughput achievable for fixed (yet optimized) transmission rates.<|reference_end|>
arxiv
@article{haenggi2008outage, title={Outage and Local Throughput and Capacity of Random Wireless Networks}, author={Martin Haenggi}, journal={arXiv preprint arXiv:0806.0909}, year={2008}, doi={10.1109/TWC.2009.090105}, archivePrefix={arXiv}, eprint={0806.0909}, primaryClass={cs.IT cs.NI math.IT} }
haenggi2008outage
arxiv-3947
0806.0920
Drawing (Complete) Binary Tanglegrams: Hardness, Approximation, Fixed-Parameter Tractability
<|reference_start|>Drawing (Complete) Binary Tanglegrams: Hardness, Approximation, Fixed-Parameter Tractability: A \emph{binary tanglegram} is a drawing of a pair of rooted binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. For applications, for example, in phylogenetics, it is essential that both trees are drawn without edge crossings and that the inter-tree edges have as few crossings as possible. It is known that finding a tanglegram with the minimum number of crossings is NP-hard and that the problem is fixed-parameter tractable with respect to that number. We prove that under the Unique Games Conjecture there is no constant-factor approximation for binary trees. We show that the problem is NP-hard even if both trees are complete binary trees. For this case we give an $O(n^3)$-time 2-approximation and a new, simple fixed-parameter algorithm. We show that the maximization version of the dual problem for binary trees can be reduced to a version of MaxCut for which the algorithm of Goemans and Williamson yields a 0.878-approximation.<|reference_end|>
arxiv
@article{buchin2008drawing, title={Drawing (Complete) Binary Tanglegrams: Hardness, Approximation, Fixed-Parameter Tractability}, author={Kevin Buchin, Maike Buchin, Jaroslaw Byrka, Martin N"ollenburg, Yoshio Okamoto, Rodrigo I. Silveira, Alexander Wolff}, journal={arXiv preprint arXiv:0806.0920}, year={2008}, archivePrefix={arXiv}, eprint={0806.0920}, primaryClass={cs.CG cs.CC} }
buchin2008drawing
arxiv-3948
0806.0921
The Mixing Time of Glauber Dynamics for Colouring Regular Trees
<|reference_start|>The Mixing Time of Glauber Dynamics for Colouring Regular Trees: We consider Metropolis Glauber dynamics for sampling proper $q$-colourings of the $n$-vertex complete $b$-ary tree when $3\leq q\leq b/2\ln(b)$. We give both upper and lower bounds on the mixing time. For fixed $q$ and $b$, our upper bound is $n^{O(b/\log b)}$ and our lower bound is $n^{\Omega(b/q \log(b))}$, where the constants implicit in the $O()$ and $\Omega()$ notation do not depend upon $n$, $q$ or $b$.<|reference_end|>
arxiv
@article{goldberg2008the, title={The Mixing Time of Glauber Dynamics for Colouring Regular Trees}, author={Leslie Ann Goldberg and Mark Jerrum and Marek Karpinski}, journal={arXiv preprint arXiv:0806.0921}, year={2008}, archivePrefix={arXiv}, eprint={0806.0921}, primaryClass={cs.CC cs.DM} }
goldberg2008the
arxiv-3949
0806.0928
Drawing Binary Tanglegrams: An Experimental Evaluation
<|reference_start|>Drawing Binary Tanglegrams: An Experimental Evaluation: A binary tanglegram is a pair <S,T> of binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. For applications, for example in phylogenetics or software engineering, it is required that the individual trees are drawn crossing-free. A natural optimization problem, denoted tanglegram layout problem, is thus to minimize the number of crossings between inter-tree edges. The tanglegram layout problem is NP-hard and is currently considered both in application domains and theory. In this paper we present an experimental comparison of a recursive algorithm of Buchin et al., our variant of their algorithm, the algorithm hierarchy sort of Holten and van Wijk, and an integer quadratic program that yields optimal solutions.<|reference_end|>
arxiv
@article{nöllenburg2008drawing, title={Drawing Binary Tanglegrams: An Experimental Evaluation}, author={Martin N"ollenburg, Danny Holten, Markus V"olker, Alexander Wolff}, journal={Proceedings of the 11th Workshop on Algorithm Engineering and Experiments (ALENEX'09), pages 106-119. SIAM, April 2009}, year={2008}, archivePrefix={arXiv}, eprint={0806.0928}, primaryClass={cs.DS cs.CG} }
nöllenburg2008drawing
arxiv-3950
0806.0936
On convergence-sensitive bisimulation and the embedding of CCS in timed CCS
<|reference_start|>On convergence-sensitive bisimulation and the embedding of CCS in timed CCS: We propose a notion of convergence-sensitive bisimulation that is built just over the notions of (internal) reduction and of (static) context. In the framework of timed CCS, we characterise this notion of `contextual' bisimulation via the usual labelled transition system. We also remark that it provides a suitable semantic framework for a fully abstract embedding of untimed processes into timed ones. Finally, we show that the notion can be refined to include sensitivity to divergence.<|reference_end|>
arxiv
@article{amadio2008on, title={On convergence-sensitive bisimulation and the embedding of CCS in timed CCS}, author={Roberto Amadio (PPS)}, journal={Proceedings of the 15th Workshop on Expressiveness in Concurrency (EXPRESS 2008), Canada (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0806.0936}, primaryClass={cs.LO} }
amadio2008on
arxiv-3951
0806.0974
Universality of citation distributions: towards an objective measure of scientific impact
<|reference_start|>Universality of citation distributions: towards an objective measure of scientific impact: We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited $c$ times has large variations between different disciplines, but all distributions are rescaled on a universal curve when the relative indicator $c_f=c/c_0$ is considered, where $c_0$ is the average number of citations per article for the discipline. In addition we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of $c_f$ as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields.<|reference_end|>
arxiv
@article{radicchi2008universality, title={Universality of citation distributions: towards an objective measure of scientific impact}, author={Filippo Radicchi, Santo Fortunato, Claudio Castellano}, journal={Proc. Natl. Acad. Sci. USA 105, 17268-17272 (2008)}, year={2008}, doi={10.1073/pnas.0806977105}, archivePrefix={arXiv}, eprint={0806.0974}, primaryClass={physics.soc-ph cond-mat.stat-mech cs.DL physics.data-an} }
radicchi2008universality
arxiv-3952
0806.0983
A Comparison of Performance Measures for Online Algorithms
<|reference_start|>A Comparison of Performance Measures for Online Algorithms: This paper provides a systematic study of several proposed measures for online algorithms in the context of a specific problem, namely, the two server problem on three colinear points. Even though the problem is simple, it encapsulates a core challenge in online algorithms which is to balance greediness and adaptability. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis, and determine how these measures compare the Greedy Algorithm, Double Coverage, and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the best of the three algorithms. Under the other measures, Double Coverage and Lazy Double Coverage are better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Only Bijective Analysis and Relative Worst Order Analysis indicate that Lazy Double Coverage is better than Double Coverage. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.<|reference_end|>
arxiv
@article{boyar2008a, title={A Comparison of Performance Measures for Online Algorithms}, author={Joan Boyar, Sandy Irani, Kim S. Larsen}, journal={arXiv preprint arXiv:0806.0983}, year={2008}, archivePrefix={arXiv}, eprint={0806.0983}, primaryClass={cs.DS} }
boyar2008a
arxiv-3953
0806.1006
The VO-Neural project: recent developments and some applications
<|reference_start|>The VO-Neural project: recent developments and some applications: VO-Neural is the natural evolution of the Astroneural project which was started in 1994 with the aim to implement a suite of neural tools for data mining in astronomical massive data sets. At a difference with its ancestor, which was implemented under Matlab, VO-Neural is written in C++, object oriented, and it is specifically tailored to work in distributed computing architectures. We discuss the current status of implementation of VO-Neural, present an application to the classification of Active Galactic Nuclei, and outline the ongoing work to improve the functionalities of the package.<|reference_end|>
arxiv
@article{brescia2008the, title={The VO-Neural project: recent developments and some applications}, author={M. Brescia, S. Cavuoti, G. d'Angelo, R. D'Abrusco, N. Deniskina, M. Garofalo, O. Laurino, G. Longo, A. Nocella, B. Skordovski}, journal={arXiv preprint arXiv:0806.1006}, year={2008}, archivePrefix={arXiv}, eprint={0806.1006}, primaryClass={astro-ph cs.CE} }
brescia2008the
arxiv-3954
0806.1034
Analysis of a procedure for inserting steganographic data into VoIP calls
<|reference_start|>Analysis of a procedure for inserting steganographic data into VoIP calls: The paper concerns performance analysis of a steganographic method, dedicated primarily for VoIP, which was recently filed for patenting under the name LACK. The performance of the method depends on the procedure of inserting covert data into the stream of audio packets. After a brief presentation of the LACK method, the paper focuses on analysis of the dependence of the insertion procedure on the probability distribution of VoIP call duration.<|reference_end|>
arxiv
@article{mazurczyk2008analysis, title={Analysis of a procedure for inserting steganographic data into VoIP calls}, author={Wojciech Mazurczyk and Jozef Lubacz}, journal={arXiv preprint arXiv:0806.1034}, year={2008}, archivePrefix={arXiv}, eprint={0806.1034}, primaryClass={cs.CR} }
mazurczyk2008analysis
arxiv-3955
0806.1041
3-connected Planar Graph Isomorphism is in Log-space
<|reference_start|>3-connected Planar Graph Isomorphism is in Log-space: We show that the isomorphism of 3-connected planar graphs can be decided in deterministic log-space. This improves the previously known bound UL$\cap$coUL of Thierauf and Wagner.<|reference_end|>
arxiv
@article{datta20083-connected, title={3-connected Planar Graph Isomorphism is in Log-space}, author={Samir Datta, Nutan Limaye, Prajakta Nimbhorkar}, journal={arXiv preprint arXiv:0806.1041}, year={2008}, archivePrefix={arXiv}, eprint={0806.1041}, primaryClass={cs.CC} }
datta20083-connected
arxiv-3956
0806.1062
Capacity of Block-Memoryless Channels with Causal Channel Side Information
<|reference_start|>Capacity of Block-Memoryless Channels with Causal Channel Side Information: The capacity of a time-varying block-memoryless channel in which the transmitter and the receiver have access to (possibly different) noisy causal channel side information (CSI) is obtained. It is shown that the capacity formula obtained in this correspondence reduces to the capacity formula reported in \cite{Gold07} for the special case where the transmitter CSI is a deterministic function of the receiver CSI.<|reference_end|>
arxiv
@article{farmanbar2008capacity, title={Capacity of Block-Memoryless Channels with Causal Channel Side Information}, author={Hamid Farmanbar and Amir K. Khandani}, journal={arXiv preprint arXiv:0806.1062}, year={2008}, archivePrefix={arXiv}, eprint={0806.1062}, primaryClass={cs.IT math.IT} }
farmanbar2008capacity
arxiv-3957
0806.1071
Histograms and Wavelets on Probabilistic Data
<|reference_start|>Histograms and Wavelets on Probabilistic Data: There is a growing realization that uncertain information is a first-class citizen in modern database management. As such, we need techniques to correctly and efficiently process uncertain data in database systems. In particular, data reduction techniques that can produce concise, accurate synopses of large probabilistic relations are crucial. Similar to their deterministic relation counterparts, such compact probabilistic data synopses can form the foundation for human understanding and interactive data exploration, probabilistic query planning and optimization, and fast approximate query processing in probabilistic database systems. In this paper, we introduce definitions and algorithms for building histogram- and wavelet-based synopses on probabilistic data. The core problem is to choose a set of histogram bucket boundaries or wavelet coefficients to optimize the accuracy of the approximate representation of a collection of probabilistic tuples under a given error metric. For a variety of different error metrics, we devise efficient algorithms that construct optimal or near optimal B-term histogram and wavelet synopses. This requires careful analysis of the structure of the probability distributions, and novel extensions of known dynamic-programming-based techniques for the deterministic domain. Our experiments show that this approach clearly outperforms simple ideas, such as building summaries for samples drawn from the data distribution, while taking equal or less time.<|reference_end|>
arxiv
@article{cormode2008histograms, title={Histograms and Wavelets on Probabilistic Data}, author={Graham Cormode and Minos Garofalakis}, journal={arXiv preprint arXiv:0806.1071}, year={2008}, archivePrefix={arXiv}, eprint={0806.1071}, primaryClass={cs.DB} }
cormode2008histograms
arxiv-3958
0806.1089
Fair and Efficient TCP Access in the IEEE 80211 Infrastructure Basic Service Set
<|reference_start|>Fair and Efficient TCP Access in the IEEE 80211 Infrastructure Basic Service Set: When the stations in an IEEE 802.11 infrastructure Basic Service Set (BSS) employ Transmission Control Protocol (TCP) in the transport layer, this exacerbates per-flow unfair access which is a direct result of uplink/downlink bandwidth asymmetry in the BSS. We propose a novel and simple analytical model to approximately calculate the per-flow TCP congestion window limit that provides fair and efficient TCP access in a heterogeneous wired-wireless scenario. The proposed analysis is unique in that it considers the effects of varying number of uplink and downlink TCP flows, differing Round Trip Times (RTTs) among TCP connections, and the use of delayed TCP Acknowledgment (ACK) mechanism. Motivated by the findings of this analysis, we design a link layer access control block to be employed only at the Access Point (AP) in order to resolve the unfair access problem. The novel and simple idea of the proposed link layer access control block is employing a congestion control and filtering algorithm on TCP ACK packets of uplink flows, thereby prioritizing the access of TCP data packets of downlink flows at the AP. Via simulations, we show that short- and long-term fair access can be provisioned with the introduction of the proposed link layer access control block to the protocol stack of the AP while improving channel utilization and access delay.<|reference_end|>
arxiv
@article{keceli2008fair, title={Fair and Efficient TCP Access in the IEEE 802.11 Infrastructure Basic Service Set}, author={Feyza Keceli, Inanc Inan, Ender Ayanoglu}, journal={arXiv preprint arXiv:0806.1089}, year={2008}, archivePrefix={arXiv}, eprint={0806.1089}, primaryClass={cs.NI} }
keceli2008fair
arxiv-3959
0806.1093
Fair Access Provisioning through Contention Parameter Adaptation in the IEEE 80211e Infrastructure Basic Service Set
<|reference_start|>Fair Access Provisioning through Contention Parameter Adaptation in the IEEE 80211e Infrastructure Basic Service Set: We present the station-based unfair access problem among the uplink and the downlink stations in the IEEE 802.11e infrastructure Basic Service Set (BSS) when the default settings of the Enhanced Distributed Channel Access (EDCA) parameters are used. We discuss how the transport layer protocol characteristics alleviate the unfairness problem. We design a simple, practical, and standard-compliant framework to be employed at the Access Point (AP) for fair and efficient access provisioning. A dynamic measurement-based EDCA parameter adaptation block lies in the core of this framework. The proposed framework is unique in the sense that it considers the characteristic differences of Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows and the coexistence of stations with varying bandwidth or Quality-of-Service (QoS) requirements. Via simulations, we show that our solution provides short- and long-term fair access for all stations in the uplink and downlink employing TCP and UDP flows with non-uniform packet rates in a wired-wireless heterogeneous network. In the meantime, the QoS requirements of coexisting real-time flows are also maintained.<|reference_end|>
arxiv
@article{keceli2008fair, title={Fair Access Provisioning through Contention Parameter Adaptation in the IEEE 802.11e Infrastructure Basic Service Set}, author={Feyza Keceli, Inanc Inan, Ender Ayanoglu}, journal={arXiv preprint arXiv:0806.1093}, year={2008}, archivePrefix={arXiv}, eprint={0806.1093}, primaryClass={cs.NI} }
keceli2008fair
arxiv-3960
0806.1100
Design Patterns for Complex Event Processing
<|reference_start|>Design Patterns for Complex Event Processing: Currently engineering efficient and successful event-driven applications based on the emerging Complex Event Processing (CEP) technology, is a laborious trial and error process. The proposed CEP design pattern approach should support CEP engineers in their design decisions to build robust and efficient CEP solutions with well understood tradeoffs and should enable an interdisciplinary and efficient communication process about successful CEP solutions in different application domains.<|reference_end|>
arxiv
@article{paschke2008design, title={Design Patterns for Complex Event Processing}, author={Adrian Paschke}, journal={arXiv preprint arXiv:0806.1100}, year={2008}, archivePrefix={arXiv}, eprint={0806.1100}, primaryClass={cs.SE} }
paschke2008design
arxiv-3961
0806.1103
On the Approximability of Comparing Genomes with Duplicates
<|reference_start|>On the Approximability of Comparing Genomes with Duplicates: A central problem in comparative genomics consists in computing a (dis-)similarity measure between two genomes, e.g. in order to construct a phylogeny. All the existing measures are defined on genomes without duplicates. However, we know that genes can be duplicated within the same genome. One possible approach to overcome this difficulty is to establish a one-to-one correspondence (i.e. a matching) between genes of both genomes, where the correspondence is chosen in order to optimize the studied measure. In this paper, we are interested in three measures (number of breakpoints, number of common intervals and number of conserved intervals) and three models of matching (exemplar, intermediate and maximum matching models). We prove that, for each model and each measure M, computing a matching between two genomes that optimizes M is APX-hard. We also study the complexity of the following problem: is there an exemplarization (resp. an intermediate/maximum matching) that induces no breakpoint? We prove the problem to be NP-Complete in the exemplar model for a new class of instances, and we show that the problem is in P in the maximum matching model. We also focus on a fourth measure: the number of adjacencies, for which we give several approximation algorithms in the maximum matching model, in the case where genomes contain the same number of duplications of each gene.<|reference_end|>
arxiv
@article{angibaud2008on, title={On the Approximability of Comparing Genomes with Duplicates}, author={S'ebastien Angibaud (LINA), Guillaume Fertin (LINA), Irena Rusu (LINA), Annelyse Thevenin (LRI), St'ephane Vialette (IGM)}, journal={arXiv preprint arXiv:0806.1103}, year={2008}, archivePrefix={arXiv}, eprint={0806.1103}, primaryClass={q-bio.QM cs.CC} }
angibaud2008on
arxiv-3962
0806.1139
Significant Diagnostic Counterexamples in Probabilistic Model Checking
<|reference_start|>Significant Diagnostic Counterexamples in Probabilistic Model Checking: This paper presents a novel technique for counterexample generation in probabilistic model checking of Markov Chains and Markov Decision Processes. (Finite) paths in counterexamples are grouped together in witnesses that are likely to provide similar debugging information to the user. We list five properties that witnesses should satisfy in order to be useful as debugging aid: similarity, accuracy, originality, significance, and finiteness. Our witnesses contain paths that behave similar outside strongly connected components. This papers shows how to compute these witnesses by reducing the problem of generating counterexamples for general properties over Markov Decision Processes, in several steps, to the easy problem of generating counterexamples for reachability properties over acyclic Markov Chains.<|reference_end|>
arxiv
@article{andres2008significant, title={Significant Diagnostic Counterexamples in Probabilistic Model Checking}, author={Miguel E. Andres, Pedro D'Argenio, Peter van Rossum}, journal={arXiv preprint arXiv:0806.1139}, year={2008}, archivePrefix={arXiv}, eprint={0806.1139}, primaryClass={cs.LO cs.PF} }
andres2008significant
arxiv-3963
0806.1144
GRID-Launcher v10
<|reference_start|>GRID-Launcher v10: GRID-launcher-1.0 was built within the VO-Tech framework, as a software interface between the UK-ASTROGRID and a generic GRID infrastructures in order to allow any ASTROGRID user to launch on the GRID computing intensive tasks from the ASTROGRID Workbench or Desktop. Even though of general application, so far the Grid-Launcher has been tested on a few selected softwares (VONeural-MLP, VONeural-SVM, Sextractor and SWARP) and on the SCOPE-GRID.<|reference_end|>
arxiv
@article{deniskina2008grid-launcher, title={GRID-Launcher v.1.0}, author={N. Deniskina, M. Brescia, S. Cavuoti, G. d'Angelo, O. Laurino, G. Longo}, journal={arXiv preprint arXiv:0806.1144}, year={2008}, archivePrefix={arXiv}, eprint={0806.1144}, primaryClass={astro-ph cs.CE} }
deniskina2008grid-launcher
arxiv-3964
0806.1148
Unsatisfiable CNF Formulas need many Conflicts
<|reference_start|>Unsatisfiable CNF Formulas need many Conflicts: A pair of clauses in a CNF formula constitutes a conflict if there is a variable that occurs positively in one clause and negatively in the other. A CNF formula without any conflicts is satisfiable. The Lovasz Local Lemma implies that a k-CNF formula is satisfiable if each clause conflicts with at most 2^k/e-1 clauses. It does not, however, give any good bound on how many conflicts an unsatisfiable formula has globally. We show here that every unsatisfiable k-CNF formula requires 2.69^k conflicts and there exist unsatisfiable k-CNF formulas with 3.51^k conflicts.<|reference_end|>
arxiv
@article{scheder2008unsatisfiable, title={Unsatisfiable CNF Formulas need many Conflicts}, author={Dominik Scheder and Philipp Zumstein}, journal={arXiv preprint arXiv:0806.1148}, year={2008}, archivePrefix={arXiv}, eprint={0806.1148}, primaryClass={cs.DM} }
scheder2008unsatisfiable
arxiv-3965
0806.1156
Utilisation des grammaires probabilistes dans les t\^aches de segmentation et d'annotation prosodique
<|reference_start|>Utilisation des grammaires probabilistes dans les t\^aches de segmentation et d'annotation prosodique: Nous pr\'esentons dans cette contribution une approche \`a la fois symbolique et probabiliste permettant d'extraire l'information sur la segmentation du signal de parole \`a partir d'information prosodique. Nous utilisons pour ce faire des grammaires probabilistes poss\'edant une structure hi\'erarchique minimale. La phase de construction des grammaires ainsi que leur pouvoir de pr\'ediction sont \'evalu\'es qualitativement ainsi que quantitativement. ----- Methodologically oriented, the present work sketches an approach for prosodic information retrieval and speech segmentation, based on both symbolic and probabilistic information. We have recourse to probabilistic grammars, within which we implement a minimal hierarchical structure. Both the stages of probabilistic grammar building and its testing in prediction are explored and quantitatively and qualitatively evaluated.<|reference_end|>
arxiv
@article{nesterenko2008utilisation, title={Utilisation des grammaires probabilistes dans les t\^aches de segmentation et d'annotation prosodique}, author={Irina Nesterenko (LPL), St'ephane Rauzy (LPL)}, journal={Journ\'ees d'Etudes sur la Parole, Avignon : France (2008)}, year={2008}, number={3267}, archivePrefix={arXiv}, eprint={0806.1156}, primaryClass={cs.LG} }
nesterenko2008utilisation
arxiv-3966
0806.1199
Belief Propagation and Beyond for Particle Tracking
<|reference_start|>Belief Propagation and Beyond for Particle Tracking: We describe a novel approach to statistical learning from particles tracked while moving in a random environment. The problem consists in inferring properties of the environment from recorded snapshots. We consider here the case of a fluid seeded with identical passive particles that diffuse and are advected by a flow. Our approach rests on efficient algorithms to estimate the weighted number of possible matchings among particles in two consecutive snapshots, the partition function of the underlying graphical model. The partition function is then maximized over the model parameters, namely diffusivity and velocity gradient. A Belief Propagation (BP) scheme is the backbone of our algorithm, providing accurate results for the flow parameters we want to learn. The BP estimate is additionally improved by incorporating Loop Series (LS) contributions. For the weighted matching problem, LS is compactly expressed as a Cauchy integral, accurately estimated by a saddle point approximation. Numerical experiments show that the quality of our improved BP algorithm is comparable to the one of a fully polynomial randomized approximation scheme, based on the Markov Chain Monte Carlo (MCMC) method, while the BP-based scheme is substantially faster than the MCMC scheme.<|reference_end|>
arxiv
@article{chertkov2008belief, title={Belief Propagation and Beyond for Particle Tracking}, author={Michael Chertkov, Lukas Kroc, Massimo Vergassola}, journal={arXiv preprint arXiv:0806.1199}, year={2008}, number={LA-UR-08-3645}, archivePrefix={arXiv}, eprint={0806.1199}, primaryClass={cs.IT cond-mat.stat-mech cs.AI cs.LG math.IT physics.flu-dyn} }
chertkov2008belief
arxiv-3967
0806.1215
Performance of LDPC Codes Under Faulty Iterative Decoding
<|reference_start|>Performance of LDPC Codes Under Faulty Iterative Decoding: Departing from traditional communication theory where decoding algorithms are assumed to perform without error, a system where noise perturbs both computational devices and communication channels is considered here. This paper studies limits in processing noisy signals with noisy circuits by investigating the effect of noise on standard iterative decoders for low-density parity-check codes. Concentration of decoding performance around its average is shown to hold when noise is introduced into message-passing and local computation. Density evolution equations for simple faulty iterative decoders are derived. In one model, computing nonlinear estimation thresholds shows that performance degrades smoothly as decoder noise increases, but arbitrarily small probability of error is not achievable. Probability of error may be driven to zero in another system model; the decoding threshold again decreases smoothly with decoder noise. As an application of the methods developed, an achievability result for reliable memory systems constructed from unreliable components is provided.<|reference_end|>
arxiv
@article{varshney2008performance, title={Performance of LDPC Codes Under Faulty Iterative Decoding}, author={Lav R. Varshney}, journal={arXiv preprint arXiv:0806.1215}, year={2008}, archivePrefix={arXiv}, eprint={0806.1215}, primaryClass={cs.IT math.IT} }
varshney2008performance
arxiv-3968
0806.1231
Improving Classical Authentication with Quantum Communication
<|reference_start|>Improving Classical Authentication with Quantum Communication: We propose a quantum-enhanced protocol to authenticate classical messages, with improved security with respect to the classical scheme introduced by Brassard in 1983. In that protocol, the shared key is the seed of a pseudo-random generator (PRG) and a hash function is used to create the authentication tag of a public message. We show that a quantum encoding of secret bits offers more security than the classical XOR function introduced by Brassard. Furthermore, we establish the relationship between the bias of a PRG and the amount of information about the key that the attacker can retrieve from a block of authenticated messages. Finally, we prove that quantum resources can improve both the secrecy of the key generated by the PRG and the secrecy of the tag obtained with a hidden hash function.<|reference_end|>
arxiv
@article{assis2008improving, title={Improving Classical Authentication with Quantum Communication}, author={F. M. Assis, P. Mateus, Y. Omar}, journal={arXiv preprint arXiv:0806.1231}, year={2008}, archivePrefix={arXiv}, eprint={0806.1231}, primaryClass={cs.IT cs.CR math.IT} }
assis2008improving
arxiv-3969
0806.1246
ICT, Community Memory and Technological Appropriation
<|reference_start|>ICT, Community Memory and Technological Appropriation: The core mission of universities and higher education institutions is to make public the results of their work and to preserve the collective memory of the institution. This includes the effective use of information and communication technologies (ICT) to systematically compile academic and research achievements as well as disseminate effectively this accumulated knowledge to society at large. Current efforts in Latin America and Venezuela in particular, are limited but provide some valuable insights to pave the road to this important goal. The institutional repository of Universidad de Los Andes (ULA) in Venezuela (www.saber.ula.ve) is such an example of ICT usage to store, manage and disseminate digital material produced by our University. In this paper we elaborate on the overall process of promoting a culture of content creation, publishing and preservation within ULA.<|reference_end|>
arxiv
@article{torrens2008ict,, title={ICT, Community Memory and Technological Appropriation}, author={Rodrigo Torrens, Luis A. Nunez and Raisa Urribarri}, journal={arXiv preprint arXiv:0806.1246}, year={2008}, archivePrefix={arXiv}, eprint={0806.1246}, primaryClass={cs.DL cs.CY} }
torrens2008ict,
arxiv-3970
0806.1256
Understanding individual human mobility patterns
<|reference_start|>Understanding individual human mobility patterns: Despite their importance for urban planning, traffic forecasting, and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited thanks to the lack of tools to monitor the time resolved location of individuals. Here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six month period. We find that in contrast with the random trajectories predicted by the prevailing Levy flight and random walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time independent characteristic length scale and a significant probability to return to a few highly frequented locations. After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse into a single spatial probability distribution, indicating that despite the diversity of their travel history, humans follow simple reproducible patterns. This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning and agent based modeling.<|reference_end|>
arxiv
@article{gonzalez2008understanding, title={Understanding individual human mobility patterns}, author={M.C. Gonzalez, C.A. Hidalgo, and A.-L. Barabasi}, journal={Nature 453, 479-482 (2008)}, year={2008}, doi={10.1038/nature06958}, archivePrefix={arXiv}, eprint={0806.1256}, primaryClass={physics.soc-ph cond-mat.stat-mech cs.CY physics.bio-ph} }
gonzalez2008understanding
arxiv-3971
0806.1271
Scheduling Sensors by Tiling Lattices
<|reference_start|>Scheduling Sensors by Tiling Lattices: Suppose that wirelessly communicating sensors are placed in a regular fashion on the points of a lattice. Common communication protocols allow the sensors to broadcast messages at arbitrary times, which can lead to problems should two sensors broadcast at the same time. It is shown that one can exploit a tiling of the lattice to derive a deterministic periodic schedule for the broadcast communication of sensors that is guaranteed to be collision-free. The proposed schedule is shown to be optimal in the number of time slots.<|reference_end|>
arxiv
@article{klappenecker2008scheduling, title={Scheduling Sensors by Tiling Lattices}, author={Andreas Klappenecker, Hyunyoung Lee, Jennifer L. Welch (Texas A&M University)}, journal={arXiv preprint arXiv:0806.1271}, year={2008}, archivePrefix={arXiv}, eprint={0806.1271}, primaryClass={cs.NI} }
klappenecker2008scheduling
arxiv-3972
0806.1280
The Role of Artificial Intelligence Technologies in Crisis Response
<|reference_start|>The Role of Artificial Intelligence Technologies in Crisis Response: Crisis response poses many of the most difficult information technology in crisis management. It requires information and communication-intensive efforts, utilized for reducing uncertainty, calculating and comparing costs and benefits, and managing resources in a fashion beyond those regularly available to handle routine problems. In this paper, we explore the benefits of artificial intelligence technologies in crisis response. This paper discusses the role of artificial intelligence technologies; namely, robotics, ontology and semantic web, and multi-agent systems in crisis response.<|reference_end|>
arxiv
@article{khalil2008the, title={The Role of Artificial Intelligence Technologies in Crisis Response}, author={Khaled M. Khalil, M. Abdel-Aziz, Taymour T. Nazmy and Abdel-Badeeh M. Salem}, journal={arXiv preprint arXiv:0806.1280}, year={2008}, archivePrefix={arXiv}, eprint={0806.1280}, primaryClass={cs.AI} }
khalil2008the
arxiv-3973
0806.1281
Extracting Programs from Constructive HOL Proofs via IZF Set-Theoretic<br> Semantics
<|reference_start|>Extracting Programs from Constructive HOL Proofs via IZF Set-Theoretic<br> Semantics: Church's Higher Order Logic is a basis for influential proof assistants -- HOL and PVS. Church's logic has a simple set-theoretic semantics, making it trustworthy and extensible. We factor HOL into a constructive core plus axioms of excluded middle and choice. We similarly factor standard set theory, ZFC, into a constructive core, IZF, and axioms of excluded middle and choice. Then we provide the standard set-theoretic semantics in such a way that the constructive core of HOL is mapped into IZF. We use the disjunction, numerical existence and term existence properties of IZF to provide a program extraction capability from proofs in the constructive core. We can implement the disjunction and numerical existence properties in two different ways: one using Rathjen's realizability for IZF and the other using a new direct weak normalization result for IZF by Moczydlowski. The latter can also be used for the term existence property.<|reference_end|>
arxiv
@article{constable2008extracting, title={Extracting Programs from Constructive HOL Proofs via IZF Set-Theoretic<br> Semantics}, author={Robert Constable and Wojciech Moczydlowski}, journal={Logical Methods in Computer Science, Volume 4, Issue 3 (September 9, 2008) lmcs:1140}, year={2008}, doi={10.2168/LMCS-4(3:5)2008}, archivePrefix={arXiv}, eprint={0806.1281}, primaryClass={cs.LO} }
constable2008extracting
arxiv-3974
0806.1284
The Separation of Duty with Privilege Calculus
<|reference_start|>The Separation of Duty with Privilege Calculus: This paper presents Privilege Calculus (PC) as a new approach of knowledge representation for Separation of Duty (SD) in the view of process and intents to improve the reconfigurability and traceability of SD. PC presumes that the structure of SD should be reduced to the structure of privilege and then the regulation of system should be analyzed with the help of forms of privilege.<|reference_end|>
arxiv
@article{lv2008the, title={The Separation of Duty with Privilege Calculus}, author={Chenggong Lv, Jun Wang, Lu Liu, and Weijia You}, journal={arXiv preprint arXiv:0806.1284}, year={2008}, archivePrefix={arXiv}, eprint={0806.1284}, primaryClass={cs.CR cs.LO} }
lv2008the
arxiv-3975
0806.1316
The end of Sleeping Beauty's nightmare
<|reference_start|>The end of Sleeping Beauty's nightmare: The way a rational agent changes her belief in certain propositions/hypotheses in the light of new evidence lies at the heart of Bayesian inference. The basic natural assumption, as summarized in van Fraassen's Reflection Principle ([1984]), would be that in the absence of new evidence the belief should not change. Yet, there are examples that are claimed to violate this assumption. The apparent paradox presented by such examples, if not settled, would demonstrate the inconsistency and/or incompleteness of the Bayesian approach and without eliminating this inconsistency, the approach cannot be regarded as scientific. The Sleeping Beauty Problem is just such an example. The existing attempts to solve the problem fall into three categories. The first two share the view that new evidence is absent, but differ about the conclusion of whether Sleeping Beauty should change her belief or not, and why. The third category is characterized by the view that, after all, new evidence (although hidden from the initial view) is involved. My solution is radically different and does not fall in either of these categories. I deflate the paradox by arguing that the two different degrees of belief presented in the Sleeping Beauty Problem are in fact beliefs in two different propositions, i.e. there is no need to explain the (un)change of belief.<|reference_end|>
arxiv
@article{groisman2008the, title={The end of Sleeping Beauty's nightmare}, author={Berry Groisman}, journal={arXiv preprint arXiv:0806.1316}, year={2008}, archivePrefix={arXiv}, eprint={0806.1316}, primaryClass={cs.AI math.PR} }
groisman2008the
arxiv-3976
0806.1340
Steiner trees and spanning trees in six-pin soap films
<|reference_start|>Steiner trees and spanning trees in six-pin soap films: We have studied the Steiner tree problem using six-pin soap films in detail. We extend the existing method of experimental realisation of Steiner trees in $n$-terminal problem through soap films to observe new non-minimal Steiner trees. We also produced spanning tree configurations for the first time by our method. Experimentally, by varying the pin diameter, we have achieved these new stable soap film configurations. A new algorithm is presented for creating these Steiner trees theoretically. Exact lengths of these Steiner tree configurations are calculated using a geometrical method. An exact two-parameter empirical formula is proposed for estimating the lengths of these soap film configurations in six-pin soap film problem.<|reference_end|>
arxiv
@article{dutta2008steiner, title={Steiner trees and spanning trees in six-pin soap films}, author={Prasun Dutta, S. Pratik Khastgir and Anushree Roy}, journal={arXiv preprint arXiv:0806.1340}, year={2008}, archivePrefix={arXiv}, eprint={0806.1340}, primaryClass={cs.CG cond-mat.other physics.class-ph} }
dutta2008steiner
arxiv-3977
0806.1343
Temporized Equilibria
<|reference_start|>Temporized Equilibria: This paper has been withdrawn by the author due to a crucial error in the submission action.<|reference_end|>
arxiv
@article{alberti2008temporized, title={Temporized Equilibria}, author={Riccardo Alberti}, journal={arXiv preprint arXiv:0806.1343}, year={2008}, archivePrefix={arXiv}, eprint={0806.1343}, primaryClass={cs.GT cs.AI} }
alberti2008temporized
arxiv-3978
0806.1355
From a set of parts to an indivisible whole Part III: Holistic space of multi-object relations
<|reference_start|>From a set of parts to an indivisible whole Part III: Holistic space of multi-object relations: The previously described methodology for hierarchical grouping of objects through iterative averaging has been used for simulation of cooperative interactions between objects of a system with the purpose of investigation of the conformational organization of the system. Interactions between objects were analyzed within the space of an isotropic field of one of the objects (drifter). Such an isotropic field of an individual object can be viewed as a prototype of computer ego. It allows visualization of a holistic space of multi-object relations (HSMOR) which has a complex structure depending on the number of objects, their mutual arrangement in space, and the type of metric used for assessment of (dis)similarities between the objects. In the course of computer simulation of cooperative interactions between the objects, only those points of the space were registered which corresponded to transitions in hierarchical grouping. Such points appeared to aggregate into complex spatial structures determining a unique internal organization of a respective HSMOR. We describe some of the peculiarities of such structures, referred to by us as attractor membranes, and discuss their properties. We also demonstrate the peculiarities of the changing of intergroup similarities which occurs when a drifter infinitely moves away from the fixed objects.<|reference_end|>
arxiv
@article{andreev2008from, title={From a set of parts to an indivisible whole. Part III: Holistic space of multi-object relations}, author={Leonid Andreev}, journal={arXiv preprint arXiv:0806.1355}, year={2008}, archivePrefix={arXiv}, eprint={0806.1355}, primaryClass={cs.OH} }
andreev2008from
arxiv-3979
0806.1361
VPOET: Using a Distributed Collaborative Platform for Semantic Web Applications
<|reference_start|>VPOET: Using a Distributed Collaborative Platform for Semantic Web Applications: This paper describes a distributed collaborative wiki-based platform that has been designed to facilitate the development of Semantic Web applications. The applications designed using this platform are able to build semantic data through the cooperation of different developers and to exploit that semantic data. The paper shows a practical case study on the application VPOET, and how an application based on Google Gadgets has been designed to test VPOET and let human users exploit the semantic data created. This practical example can be used to show how different Semantic Web technologies can be integrated into a particular Web application, and how the knowledge can be cooperatively improved.<|reference_end|>
arxiv
@article{rico2008vpoet:, title={VPOET: Using a Distributed Collaborative Platform for Semantic Web Applications}, author={Mariano Rico, David Camacho, and Oscar Corcho}, journal={arXiv preprint arXiv:0806.1361}, year={2008}, archivePrefix={arXiv}, eprint={0806.1361}, primaryClass={cs.SE cs.NI} }
rico2008vpoet:
arxiv-3980
0806.1372
Robust Cognitive Beamforming With Partial Channel State Information
<|reference_start|>Robust Cognitive Beamforming With Partial Channel State Information: This paper considers a spectrum sharing based cognitive radio (CR) communication system, which consists of a secondary user (SU) having multiple transmit antennas and a single receive antenna and a primary user (PU) having a single receive antenna. The channel state information (CSI) on the link of the SU is assumed to be perfectly known at the SU transmitter (SU-Tx). However, due to loose cooperation between the SU and the PU, only partial CSI of the link between the SU-Tx and the PU is available at the SU-Tx. With the partial CSI and a prescribed transmit power constraint, our design objective is to determine the transmit signal covariance matrix that maximizes the rate of the SU while keeping the interference power to the PU below a threshold for all the possible channel realization within an uncertainty set. This problem, termed the robust cognitive beamforming problem, can be naturally formulated as a semi-infinite programming (SIP) problem with infinitely many constraints. This problem is first transformed into the second order cone programming (SOCP) problem and then solved via a standard interior point algorithm. Then, an analytical solution with much reduced complexity is developed from a geometric perspective. It is shown that both algorithms obtain the same optimal solution. Simulation examples are presented to validate the effectiveness of the proposed algorithms.<|reference_end|>
arxiv
@article{zhang2008robust, title={Robust Cognitive Beamforming With Partial Channel State Information}, author={Lan Zhang, Ying-Chang Liang, Yan Xin, H. Vincent Poor}, journal={arXiv preprint arXiv:0806.1372}, year={2008}, archivePrefix={arXiv}, eprint={0806.1372}, primaryClass={cs.IT math.IT} }
zhang2008robust
arxiv-3981
0806.1377
An Identity Based Strong Bi-Designated Verifier (t, n) Threshold Proxy Signature Scheme
<|reference_start|>An Identity Based Strong Bi-Designated Verifier (t, n) Threshold Proxy Signature Scheme: Proxy signature schemes have been invented to delegate signing rights. The paper proposes a new concept of Identify Based Strong Bi-Designated Verifier threshold proxy signature (ID-SBDVTPS) schemes. Such scheme enables an original signer to delegate the signature authority to a group of 'n' proxy signers with the condition that 't' or more proxy signers can cooperatively sign messages on behalf of the original signer and the signatures can only be verified by any two designated verifiers and that they cannot convince anyone else of this fact.<|reference_end|>
arxiv
@article{lal2008an, title={An Identity Based Strong Bi-Designated Verifier (t, n) Threshold Proxy Signature Scheme}, author={Sunder Lal and Vandani Verma}, journal={arXiv preprint arXiv:0806.1377}, year={2008}, archivePrefix={arXiv}, eprint={0806.1377}, primaryClass={cs.CR} }
lal2008an
arxiv-3982
0806.1381
Feedback Scheduling: An Event-Driven Paradigm
<|reference_start|>Feedback Scheduling: An Event-Driven Paradigm: Embedded computing systems today increasingly feature resource constraints and workload variability, which lead to uncertainty in resource availability. This raises great challenges to software design and programming in multitasking environments. In this paper, the emerging methodology of feedback scheduling is introduced to address these challenges. As a closed-loop approach to resource management, feedback scheduling promises to enhance the flexibility and resource efficiency of various software programs through dynamically distributing available resources among concurrent tasks based on feedback information about the actual usage of the resources. With emphasis on the behavioral design of feedback schedulers, we describe a general framework of feedback scheduling in the context of real-time control applications. A simple yet illustrative feedback scheduling algorithm is given. From a programming perspective, we describe how to modify the implementation of control tasks to facilitate the application of feedback scheduling. An event-driven paradigm that combines time-triggered and event-triggered approaches is proposed for programming of the feedback scheduler. Simulation results argue that the proposed event-driven paradigm yields better performance than time-triggered paradigm in dynamic environments where the workload varies irregularly and unpredictably.<|reference_end|>
arxiv
@article{xia2008feedback, title={Feedback Scheduling: An Event-Driven Paradigm}, author={Feng Xia, Guosong Tian, Youxian Sun}, journal={ACM SIGPLAN Notices, vol.42, no.12, pp. 7-14, Dec. 2007}, year={2008}, archivePrefix={arXiv}, eprint={0806.1381}, primaryClass={cs.OS} }
xia2008feedback
arxiv-3983
0806.1385
Control-Scheduling Codesign: A Perspective on Integrating Control and Computing
<|reference_start|>Control-Scheduling Codesign: A Perspective on Integrating Control and Computing: Despite rapid evolution, embedded computing systems increasingly feature resource constraints and workload uncertainties. To achieve much better system performance in unpredictable environments than traditional design approaches, a novel methodology, control-scheduling codesign, is emerging in the context of integrating feedback control and real-time computing. The aim of this work is to provide a better understanding of this emerging methodology and to spark new interests and developments in both the control and computer science communities. The state of the art of control-scheduling codesign is captured. Relevant research efforts in the literature are discussed under two categories, i.e., control of computing systems and codesign for control systems. Critical open research issues on integrating control and computing are also outlined.<|reference_end|>
arxiv
@article{xia2008control-scheduling, title={Control-Scheduling Codesign: A Perspective on Integrating Control and Computing}, author={Feng Xia, Youxian Sun}, journal={Dynamics of Continuous, Discrete and Impulsive Systems - Series B, vol. 13, no. S1, pp. 1352-1358, 2006}, year={2008}, archivePrefix={arXiv}, eprint={0806.1385}, primaryClass={cs.OH} }
xia2008control-scheduling
arxiv-3984
0806.1397
The Improvement of the Bound on Hash Family
<|reference_start|>The Improvement of the Bound on Hash Family: In this paper, we study the bound on three kinds of hash family using the Singleton bound. To $\epsilon-U(N; n, m)$ hash family, in the caes of $n>m^2>1$ and $1\geq\epsilon\geq \epsilon_1(n, m)$, we get that the new bound is better. To $\epsilon-\bigtriangleup U(N; n, m)$ hash family, in the case of $n>m>1$ and $1\geq\epsilon\geq\epsilon_3(n,m)$, the new bound is better. To $\epsilon-SU(N; n, m)$ hash family, in the case of $n>2^m>2$ and $1\geq\epsilon\geq \epsilon_4(n, m)$, we get that the new bound is better.<|reference_end|>
arxiv
@article{ming2008the, title={The Improvement of the Bound on Hash Family}, author={Xianmin Ming, Jiansheng Yang}, journal={arXiv preprint arXiv:0806.1397}, year={2008}, archivePrefix={arXiv}, eprint={0806.1397}, primaryClass={cs.IT math.IT} }
ming2008the
arxiv-3985
0806.1413
Topological Complexity of Context-Free omega-Languages: A Survey
<|reference_start|>Topological Complexity of Context-Free omega-Languages: A Survey: We survey recent results on the topological complexity of context-free omega-languages which form the second level of the Chomsky hierarchy of languages of infinite words. In particular, we consider the Borel hierarchy and the Wadge hierarchy of non-deterministic or deterministic context-free omega-languages. We study also decision problems, the links with the notions of ambiguity and of degrees of ambiguity, and the special case of omega-powers.<|reference_end|>
arxiv
@article{finkel2008topological, title={Topological Complexity of Context-Free omega-Languages: A Survey}, author={Olivier Finkel (ELM, IMJ, LIP)}, journal={arXiv preprint arXiv:0806.1413}, year={2008}, number={LIP Research Report RR 2008-17}, archivePrefix={arXiv}, eprint={0806.1413}, primaryClass={cs.LO cs.CC math.LO} }
finkel2008topological
arxiv-3986
0806.1416
Highway Hull Revisited
<|reference_start|>Highway Hull Revisited: A highway H is a line in the plane on which one can travel at a greater speed than in the remaining plane. One can choose to enter and exit H at any point. The highway time distance between a pair of points is the minimum time required to move from one point to the other, with optional use of H. The highway hull HH(S,H) of a point set S is the minimal set containing S as well as the shortest paths between all pairs of points in HH(S,H), using the highway time distance. We provide a Theta(n log n) worst-case time algorithm to find the highway hull under the L_1 metric, as well as an O(n log^2 n) time algorithm for the L_2 metric which improves the best known result of O(n^2). We also define and construct the useful region of the plane: the region that a highway must intersect in order that the shortest path between at least one pair of points uses the highway.<|reference_end|>
arxiv
@article{aloupis2008highway, title={Highway Hull Revisited}, author={Greg Aloupis and Jean Cardinal and Sebastien Collette and Ferran Hurtado and Stefan Langerman and Joseph O'Rourke and Belen Palop}, journal={arXiv preprint arXiv:0806.1416}, year={2008}, archivePrefix={arXiv}, eprint={0806.1416}, primaryClass={cs.CG} }
aloupis2008highway
arxiv-3987
0806.1438
On Mean Distance and Girth
<|reference_start|>On Mean Distance and Girth: We bound the mean distance in a connected graph which is not a tree in function of its order $n$ and its girth $g$. On one hand, we show that mean distance is at most $\frac{n+1}{3}-\frac{g(g^2-4)}{12n(n-1)}$ if $g$ is even and at most $\frac{n+1}{3}-\frac{g(g^2-1)}{12n(n-1)}$ if $g$ is odd. On the other hand, we prove that mean distance is at least $\frac{ng}{4(n-1)}$ unless $G$ is an odd cycle.<|reference_end|>
arxiv
@article{bekkai2008on, title={On Mean Distance and Girth}, author={Siham Bekkai and Mekkia Kouider}, journal={arXiv preprint arXiv:0806.1438}, year={2008}, archivePrefix={arXiv}, eprint={0806.1438}, primaryClass={cs.DM} }
bekkai2008on
arxiv-3988
0806.1439
Dynamic Network of Concepts from Web-Publications
<|reference_start|>Dynamic Network of Concepts from Web-Publications: The network, the nodes of which are concepts (people's names, companies' names, etc.), extracted from web-publications, is considered. A working algorithm of extracting such concepts is presented. Edges of the network under consideration refer to the reference frequency which depends on the fact how many times the concepts, which correspond to the nodes, are mentioned in the same documents. Web-documents being published within a period of time together form an information flow, which defines the dynamics of the network studied. The phenomenon of its structure stability, when the number of web-publications, constituting its formation bases, increases, is discussed<|reference_end|>
arxiv
@article{lande2008dynamic, title={Dynamic Network of Concepts from Web-Publications}, author={D. V. Lande, A. A. Snarskii}, journal={arXiv preprint arXiv:0806.1439}, year={2008}, archivePrefix={arXiv}, eprint={0806.1439}, primaryClass={cs.IT math.IT} }
lande2008dynamic
arxiv-3989
0806.1446
Fast Wavelet-Based Visual Classification
<|reference_start|>Fast Wavelet-Based Visual Classification: We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rate on object recognition, texture and satellite image classification, language identification and sound classification.<|reference_end|>
arxiv
@article{yu2008fast, title={Fast Wavelet-Based Visual Classification}, author={Guoshen Yu and Jean-Jacques Slotine}, journal={arXiv preprint arXiv:0806.1446}, year={2008}, archivePrefix={arXiv}, eprint={0806.1446}, primaryClass={cs.CV} }
yu2008fast
arxiv-3990
0806.1494
Posets and Permutations in the Duplication-Loss Model: Minimal Permutations with d Descents
<|reference_start|>Posets and Permutations in the Duplication-Loss Model: Minimal Permutations with d Descents: In this paper, we are interested in the combinatorial analysis of the whole genome duplication - random loss model of genome rearrangement initiated in a paper of Chaudhuri, Chen, Mihaescu, and Rao in SODA 2006 and continued by Bouvel and Rossin in 2007. In this model, genomes composed of n genes are modeled by permutations of the set of integers [1..n], that can evolve through duplication-loss steps. It was previously shown that the class of permutations obtained in this model after a given number p of steps is a class of pattern-avoiding permutations of finite basis. The excluded patterns were described as the minimal permutations with d=2^p descents, minimal being intended in the sense of the pattern-involvement relation on permutations. Here, we give a local and simpler characterization of the set B_d of minimal permutations with d descents. We also provide a more detailed analysis - characterization, bijection and enumeration - of two particular subsets of B_d, namely the patterns in B_d of size d+2 and 2d.<|reference_end|>
arxiv
@article{bouvel2008posets, title={Posets and Permutations in the Duplication-Loss Model: Minimal Permutations with d Descents}, author={Mathilde Bouvel (LIAFA), Elisa Pergola (DSI)}, journal={arXiv preprint arXiv:0806.1494}, year={2008}, archivePrefix={arXiv}, eprint={0806.1494}, primaryClass={math.CO cs.DM} }
bouvel2008posets
arxiv-3991
0806.1543
On the Superdistribution of Digital Goods
<|reference_start|>On the Superdistribution of Digital Goods: Business models involving buyers of digital goods in the distribution process are called superdistribution schemes. We review the state-of-the art of research and application of superdistribution and propose systematic approach to market mechanisms using super-distribution and technical system architectures supporting it. The limiting conditions on such markets are of economic, legal, technical, and psychological nature.<|reference_end|>
arxiv
@article{schmidt2008on, title={On the Superdistribution of Digital Goods}, author={Andreas U. Schmidt}, journal={arXiv preprint arXiv:0806.1543}, year={2008}, archivePrefix={arXiv}, eprint={0806.1543}, primaryClass={cs.MM cs.CR cs.CY} }
schmidt2008on
arxiv-3992
0806.1549
Bits through ARQs
<|reference_start|>Bits through ARQs: A fundamental problem in dynamic frequency reuse is that the cognitive radio is ignorant of the amount of interference it inflicts on the primary license holder. A model for such a situation is proposed and analyzed. The primary sends packets across an erasure channel and employs simple ACK/NAK feedback (ARQs) to retransmit erased packets. Furthermore, its erasure probabilities are influenced by the cognitive radio's activity. While the cognitive radio does not know these interference characteristics, it can eavesdrop on the primary's ARQs. The model leads to strategies in which the cognitive radio adaptively adjusts its input based on the primary's ARQs thereby guaranteeing the primary exceeds a target packet rate. A relatively simple strategy whereby the cognitive radio transmits only when the primary's empirical packet rate exceeds a threshold is shown to have interesting universal properties in the sense that for unknown time-varying interference characteristics, the primary is guaranteed to meet its target rate. Furthermore, a more intricate version of this strategy is shown to be capacity-achieving for the cognitive radio when the interference characteristics are time-invariant.<|reference_end|>
arxiv
@article{eswaran2008bits, title={Bits through ARQs}, author={Krishnan Eswaran, Michael Gastpar, Kannan Ramchandran}, journal={arXiv preprint arXiv:0806.1549}, year={2008}, archivePrefix={arXiv}, eprint={0806.1549}, primaryClass={cs.IT math.IT} }
eswaran2008bits
arxiv-3993
0806.1565
Competitive Design of Multiuser MIMO Systems based on Game Theory: A Unified View
<|reference_start|>Competitive Design of Multiuser MIMO Systems based on Game Theory: A Unified View: This paper considers the noncooperative maximization of mutual information in the Gaussian interference channel in a fully distributed fashion via game theory. This problem has been studied in a number of papers during the past decade for the case of frequency-selective channels. A variety of conditions guaranteeing the uniqueness of the Nash Equilibrium (NE) and convergence of many different distributed algorithms have been derived. In this paper we provide a unified view of the state-of-the-art results, showing that most of the techniques proposed in the literature to study the game, even though apparently different, can be unified using our recent interpretation of the waterfilling operator as a projection onto a proper polyhedral set. Based on this interpretation, we then provide a mathematical framework, useful to derive a unified set of sufficient conditions guaranteeing the uniqueness of the NE and the global convergence of waterfilling based asynchronous distributed algorithms. The proposed mathematical framework is also instrumental to study the extension of the game to the more general MIMO case, for which only few results are available in the current literature. The resulting algorithm is, similarly to the frequency-selective case, an iterative asynchronous MIMO waterfilling algorithm. The proof of convergence hinges again on the interpretation of the MIMO waterfilling as a matrix projection, which is the natural generalization of our results obtained for the waterfilling mapping in the frequency-selective case.<|reference_end|>
arxiv
@article{scutari2008competitive, title={Competitive Design of Multiuser MIMO Systems based on Game Theory: A Unified View}, author={Gesualdo Scutari, Daniel P. Palomar, Sergio Barbarossa}, journal={arXiv preprint arXiv:0806.1565}, year={2008}, doi={10.1109/JSAC.2008.080907}, archivePrefix={arXiv}, eprint={0806.1565}, primaryClass={cs.IT cs.GT math.IT math.OC} }
scutari2008competitive
arxiv-3994
0806.1567
Flexible Time-Triggered Sampling in Smart Sensor-Based Wireless Control Systems
<|reference_start|>Flexible Time-Triggered Sampling in Smart Sensor-Based Wireless Control Systems: Wireless control systems (WCSs) often have to operate in dynamic environments where the network traffic load may vary unpredictably over time. The sampling in sensors is conventionally time triggered with fixed periods. In this context, only worse-than-possible quality of control (QoC) can be achieved when the network is underloaded, while overloaded conditions may significantly degrade the QoC, even causing system instability. This is particularly true when the bandwidth of the wireless network is limited and shared by multiple control loops. To address these problems, a flexible time-triggered sampling scheme is presented in this work. Smart sensors are used to facilitate dynamic adjustment of sampling periods, which enhances the flexibility and resource efficiency of the system based on time-triggered sampling. Feedback control technology is exploited for adapting sampling periods in a periodic manner. The deadline miss ratio in each control loop is maintained at/around a desired level, regardless of workload variations. Simulation results show that the proposed sampling scheme is able to deal with dynamic and unpredictable variations in network traffic load. Compared to conventional time-triggered sampling, it leads to much better QoC in WCSs operating in dynamic environments.<|reference_end|>
arxiv
@article{xia2008flexible, title={Flexible Time-Triggered Sampling in Smart Sensor-Based Wireless Control Systems}, author={Feng Xia, Wenhong Zhao}, journal={Sensors, vol.7, no.11, pp. 2548-2564, 2007}, year={2008}, archivePrefix={arXiv}, eprint={0806.1567}, primaryClass={cs.NI} }
xia2008flexible
arxiv-3995
0806.1569
Wireless Sensor/Actuator Network Design for Mobile Control Applications
<|reference_start|>Wireless Sensor/Actuator Network Design for Mobile Control Applications: Wireless sensor/actuator networks (WSANs) are emerging as a new generation of sensor networks. Serving as the backbone of control applications, WSANs will enable an unprecedented degree of distributed and mobile control. However, the unreliability of wireless communications and the real-time requirements of control applications raise great challenges for WSAN design. With emphasis on the reliability issue, this paper presents an application-level design methodology for WSANs in mobile control applications. The solution is generic in that it is independent of the underlying platforms, environment, control system models, and controller design. To capture the link quality characteristics in terms of packet loss rate, experiments are conducted on a real WSAN system. From the experimental observations, a simple yet efficient method is proposed to deal with unpredictable packet loss on actuator nodes. Trace-based simulations give promising results, which demonstrate the effectiveness of the proposed approach.<|reference_end|>
arxiv
@article{xia2008wireless, title={Wireless Sensor/Actuator Network Design for Mobile Control Applications}, author={Feng Xia, Yu-Chu Tian, Yanjun Li, Youxian Sun}, journal={Sensors, vol.7, no.10, pp.2157-2173, 2007}, year={2008}, archivePrefix={arXiv}, eprint={0806.1569}, primaryClass={cs.NI} }
xia2008wireless
arxiv-3996
0806.1577
Co-ordinate Interleaved Distributed Space-Time Coding for Two-Antenna-Relays Networks
<|reference_start|>Co-ordinate Interleaved Distributed Space-Time Coding for Two-Antenna-Relays Networks: Distributed space time coding for wireless relay networks when the source, the destination and the relays have multiple antennas have been studied by Jing and Hassibi. In this set-up, the transmit and the receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are colocated. In this paper, a wireless relay network with single antenna at the source and the destination and two antennas at each of the R relays is considered. A new class of distributed space time block codes called Co-ordinate Interleaved Distributed Space-Time Codes (CIDSTC) are introduced where, in the first phase, the source transmits a T-length complex vector to all the relays and in the second phase, at each relay, the in-phase and quadrature component vectors of the received complex vectors at the two antennas are interleaved and processed before forwarding them to the destination. Compared to the scheme proposed by Jing-Hassibi, for $T \geq 4R$, while providing the same asymptotic diversity order of 2R, CIDSTC scheme is shown to provide asymptotic coding gain with the cost of negligible increase in the processing complexity at the relays. However, for moderate and large values of P, CIDSTC scheme is shown to provide more diversity than that of the scheme proposed by Jing-Hassibi. CIDSTCs are shown to be fully diverse provided the information symbols take value from an appropriate multi-dimensional signal set.<|reference_end|>
arxiv
@article{harshan2008co-ordinate, title={Co-ordinate Interleaved Distributed Space-Time Coding for Two-Antenna-Relays Networks}, author={J. Harshan, B. Sundar Rajan}, journal={arXiv preprint arXiv:0806.1577}, year={2008}, doi={10.1109/TWC.2009.071303}, archivePrefix={arXiv}, eprint={0806.1577}, primaryClass={cs.IT math.IT} }
harshan2008co-ordinate
arxiv-3997
0806.1602
The injectivity of the global function of a cellular automaton in the hyperbolic plane is undecidable
<|reference_start|>The injectivity of the global function of a cellular automaton in the hyperbolic plane is undecidable: In this paper, we look at the following question. We consider cellular automata in the hyperbolic plane and we consider the global function defined on all possible configurations. Is the injectivity of this function undecidable? The problem was answered positively in the case of the Euclidean plane by Jarkko Kari, in 1994. In the present paper, we show that the answer is also positive for the hyperbolic plane: the problem is undecidable.<|reference_end|>
arxiv
@article{maurice2008the, title={The injectivity of the global function of a cellular automaton in the hyperbolic plane is undecidable}, author={Margenstern Maurice}, journal={The Injectivity of the Global Function of a Cellular Automaton in the Hyperbolic Plane is Undecidable, Fundamenta Informaticae, 94(1), (2009), 63-99}, year={2008}, archivePrefix={arXiv}, eprint={0806.1602}, primaryClass={cs.CG cs.LO} }
maurice2008the
arxiv-3998
0806.1610
SPAM over Internet Telephony and how to deal with it
<|reference_start|>SPAM over Internet Telephony and how to deal with it: In our modern society telephony has developed to an omnipresent service. People are available at anytime and anywhere. Furthermore the Internet has emerged to an important communication medium. These facts and the raising availability of broadband internet access has led to the fusion of these two services. Voice over IP or short VoIP is the keyword, that describes this combination. The advantages of VoIP in comparison to classic telephony are location independence, simplification of transport networks, ability to establish multimedia communications and the low costs. Nevertheless one can easily see, that combining two technologies, always brings up new challenges and problems that have to be solved. It is undeniable that one of the most annoying facet of the Internet nowadays is email spam. According to different sources email spam is considered to be 80 to 90 percent of the email traffic produced. The threat of so called voice spam or Spam over Internet Telephony (SPIT) is even more fatal, for the annoyance and disturbance factor is much higher. As instance an email that hits the inbox at 4 p.m. is useless but will not disturb the user much. In contrast a ringing phone at 4 p.m. will lead to a much higher disturbance. From the providers point of view both email spam and voice spam produce unwanted traffic and loss of trust of customers into the service. In order to mitigate this threat different approaches from different parties have been developed. This paper focuses on state of the art anti voice spam solutions, analyses them and reveals their weak points. In the end a SPIT producing benchmark tool will be introduced, that attacks the presented anti voice spam solutions. With this tool it is possible for an administrator of a VoIP network to test how vulnerable his system is.<|reference_end|>
arxiv
@article{schmidt2008spam, title={SPAM over Internet Telephony and how to deal with it}, author={Andreas U. Schmidt, Nicolai Kuntze, Rachid El Khayari}, journal={arXiv preprint arXiv:0806.1610}, year={2008}, archivePrefix={arXiv}, eprint={0806.1610}, primaryClass={cs.CR cs.HC} }
schmidt2008spam
arxiv-3999
0806.1636
Data-Complexity of the Two-Variable Fragment with Counting Quantifiers
<|reference_start|>Data-Complexity of the Two-Variable Fragment with Counting Quantifiers: The data-complexity of both satisfiability and finite satisfiability for the two-variable fragment with counting is NP-complete; the data-complexity of both query-answering and finite query-answering for the two-variable guarded fragment with counting is co-NP-complete.<|reference_end|>
arxiv
@article{pratt-hartmann2008data-complexity, title={Data-Complexity of the Two-Variable Fragment with Counting Quantifiers}, author={Ian Pratt-Hartmann}, journal={Information and Computation, 207(8), 2009, pp. 867--888}, year={2008}, doi={10.1016/j.ic.2009.02.004}, archivePrefix={arXiv}, eprint={0806.1636}, primaryClass={cs.LO cs.AI cs.CC} }
pratt-hartmann2008data-complexity
arxiv-4000
0806.1640
Toward a combination rule to deal with partial conflict and specificity in belief functions theory
<|reference_start|>Toward a combination rule to deal with partial conflict and specificity in belief functions theory: We present and discuss a mixed conjunctive and disjunctive rule, a generalization of conflict repartition rules, and a combination of these two rules. In the belief functions theory one of the major problem is the conflict repartition enlightened by the famous Zadeh's example. To date, many combination rules have been proposed in order to solve a solution to this problem. Moreover, it can be important to consider the specificity of the responses of the experts. Since few year some unification rules are proposed. We have shown in our previous works the interest of the proportional conflict redistribution rule. We propose here a mixed combination rule following the proportional conflict redistribution rule modified by a discounting procedure. This rule generalizes many combination rules.<|reference_end|>
arxiv
@article{martin2008toward, title={Toward a combination rule to deal with partial conflict and specificity in belief functions theory}, author={Arnaud Martin (E3I2), Christophe Osswald (E3I2)}, journal={arXiv preprint arXiv:0806.1640}, year={2008}, archivePrefix={arXiv}, eprint={0806.1640}, primaryClass={cs.AI} }
martin2008toward