corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses: 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100)
---|---|---|---|---|---|---
arxiv-672201 | cs/0410017 | Automated Pattern Detection--An Algorithm for Constructing Optimally Synchronizing Multi-Regular Language Filters | <|reference_start|>Automated Pattern Detection--An Algorithm for Constructing Optimally Synchronizing Multi-Regular Language Filters: In the computational-mechanics structural analysis of one-dimensional cellular automata the following automata-theoretic analogue of the \emph{change-point problem} from time series analysis arises: \emph{Given a string $\sigma$ and a collection $\{\mc{D}_i\}$ of finite automata, identify the regions of $\sigma$ that belong to each $\mc{D}_i$ and, in particular, the boundaries separating them.} We present two methods for solving this \emph{multi-regular language filtering problem}. The first, although providing the ideal solution, requires a stack, has a worst-case compute time that grows quadratically in $\sigma$'s length and conditions its output at any point on arbitrarily long windows of future input. The second method is to algorithmically construct a transducer that approximates the first algorithm. In contrast to the stack-based algorithm, however, the transducer requires only a finite amount of memory, runs in linear time, and gives immediate output for each letter read; it is, moreover, the best possible finite-state approximation with these three features.<|reference_end|> | arxiv | @article{mctague2004automated,
title={Automated Pattern Detection--An Algorithm for Constructing Optimally
Synchronizing Multi-Regular Language Filters},
author={Carl S. McTague and James P. Crutchfield},
journal={arXiv preprint arXiv:cs/0410017},
year={2004},
number={Santa Fe Institute 04-09-027},
archivePrefix={arXiv},
eprint={cs/0410017},
primaryClass={cs.CV cond-mat.stat-mech cs.CL cs.DS cs.IR cs.LG nlin.AO nlin.CG nlin.PS physics.comp-ph q-bio.GN}
} | mctague2004automated |
arxiv-672202 | cs/0410018 | Utilitarian resource assignment | <|reference_start|>Utilitarian resource assignment: This paper studies a resource allocation problem introduced by Koutsoupias and Papadimitriou. The scenario is modelled as a multiple-player game in which each player selects one of a finite number of known resources. The cost to the player is the total weight of all players who choose that resource, multiplied by the ``delay'' of that resource. Recent papers have studied the Nash equilibria and social optima of this game in terms of the $L_\infty$ cost metric, in which the social cost is taken to be the maximum cost to any player. We study the $L_1$ variant of this game, in which the social cost is taken to be the sum of the costs to the individual players, rather than the maximum of these costs. We give bounds on the size of the coordination ratio, which is the ratio between the social cost incurred by selfish behavior and the optimal social cost; we also study the algorithmic problem of finding optimal (lowest-cost) assignments and Nash Equilibria. Additionally, we obtain bounds on the ratio between alternative Nash equilibria for some special cases of the problem.<|reference_end|> | arxiv | @article{berenbrink2004utilitarian,
title={Utilitarian resource assignment},
author={Petra Berenbrink, Leslie Ann Goldberg, Paul Goldberg, Russell Martin},
journal={arXiv preprint arXiv:cs/0410018},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410018},
primaryClass={cs.GT math.GM}
} | berenbrink2004utilitarian |
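
Illustrative aside (not part of the dataset record above): the cost model in this abstract is concrete enough for a small worked example. The sketch below computes each player's cost (the total weight assigned to its chosen resource times that resource's delay) and the L1 social cost (their sum). The function name and all weights, delays, and the assignment are invented for illustration only.

```python
# Minimal sketch of the L1 social-cost computation described in the abstract.
def social_cost_L1(weights, delays, assignment):
    """weights[i]: weight of player i; delays[r]: delay of resource r;
    assignment[i]: resource chosen by player i. Returns (player costs, L1 social cost)."""
    load = {}
    for i, r in enumerate(assignment):          # total weight placed on each resource
        load[r] = load.get(r, 0.0) + weights[i]
    player_costs = [load[assignment[i]] * delays[assignment[i]]
                    for i in range(len(weights))]
    return player_costs, sum(player_costs)

# Example with invented data: 3 players, 2 resources.
costs, total = social_cost_L1(weights=[2.0, 1.0, 3.0],
                              delays=[1.0, 0.5],
                              assignment=[0, 0, 1])
print(costs, total)   # [3.0, 3.0, 1.5] and 7.5
```
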
arxiv-672203 | cs/0410019 | Finite-Length Scaling and Finite-Length Shift for Low-Density Parity-Check Codes | <|reference_start|>Finite-Length Scaling and Finite-Length Shift for Low-Density Parity-Check Codes: Consider communication over the binary erasure channel (BEC) using random low-density parity-check codes with finite blocklength n from `standard' ensembles. We show that large error events are conveniently described within a scaling theory, and explain how to estimate their effect heuristically. Among other quantities, we consider the finite-length threshold e(n), defined by requiring a block error probability P_B = 1/2. For ensembles with minimum variable degree larger than two, the following expression is argued to hold: e(n) = e - e_1 n^{-2/3} + \Theta(n^{-1}), with a calculable shift parameter e_1 > 0.<|reference_end|> | arxiv | @article{amraoui2004finite-length,
title={Finite-Length Scaling and Finite-Length Shift for Low-Density
Parity-Check Codes},
author={Abdelaziz Amraoui, Andrea Montanari, Tom Richardson and Rudiger
Urbanke},
journal={arXiv preprint arXiv:cs/0410019},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410019},
primaryClass={cs.IT cond-mat.dis-nn math.IT}
} | amraoui2004finite-length |
arxiv-672204 | cs/0410020 | Adaptive Cluster Expansion (ACE): A Hierarchical Bayesian Network | <|reference_start|>Adaptive Cluster Expansion (ACE): A Hierarchical Bayesian Network: Using the maximum entropy method, we derive the "adaptive cluster expansion" (ACE), which can be trained to estimate probability density functions in high dimensional spaces. The main advantage of ACE over other Bayesian networks is its ability to capture high order statistics after short training times, which it achieves by making use of a hierarchical vector quantisation of the input data. We derive a scheme for representing the state of an ACE network as a "probability image", which allows us to identify statistically anomalous regions in an otherwise statistically homogeneous image, for instance. Finally, we present some probability images that we obtained after training ACE on some Brodatz texture images - these demonstrate the ability of ACE to detect subtle textural anomalies.<|reference_end|> | arxiv | @article{luttrell2004adaptive,
title={Adaptive Cluster Expansion (ACE): A Hierarchical Bayesian Network},
author={Stephen Luttrell},
journal={arXiv preprint arXiv:cs/0410020},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410020},
primaryClass={cs.NE cs.CV}
} | luttrell2004adaptive |
arxiv-672205 | cs/0410021 | Complexity Results in Graph Reconstruction | <|reference_start|>Complexity Results in Graph Reconstruction: We investigate the relative complexity of the graph isomorphism problem (GI) and problems related to the reconstruction of a graph from its vertex-deleted or edge-deleted subgraphs (in particular, deck checking (DC) and legitimate deck (LD) problems). We show that these problems are closely related for all amounts $c \geq 1$ of deletion: 1) $GI \equiv^{l}_{iso} VDC_{c}$, $GI \equiv^{l}_{iso} EDC_{c}$, $GI \leq^{l}_{m} LVD_c$, and $GI \equiv^{p}_{iso} LED_c$. 2) For all $k \geq 2$, $GI \equiv^{p}_{iso} k-VDC_c$ and $GI \equiv^{p}_{iso} k-EDC_c$. 3) For all $k \geq 2$, $GI \leq^{l}_{m} k-LVD_c$. 4)$GI \equiv^{p}_{iso} 2-LVC_c$. 5) For all $k \geq 2$, $GI \equiv^{p}_{iso} k-LED_c$. For many of these results, even the $c = 1$ case was not previously known. Similar to the definition of reconstruction numbers $vrn_{\exists}(G)$ [HP85] and $ern_{\exists}(G)$ (see page 120 of [LS03]), we introduce two new graph parameters, $vrn_{\forall}(G)$ and $ern_{\forall}(G)$, and give an example of a family $\{G_n\}_{n \geq 4}$ of graphs on $n$ vertices for which $vrn_{\exists}(G_n) < vrn_{\forall}(G_n)$. For every $k \geq 2$ and $n \geq 1$, we show that there exists a collection of $k$ graphs on $(2^{k-1}+1)n+k$ vertices with $2^{n}$ 1-vertex-preimages, i.e., one has families of graph collections whose number of 1-vertex-preimages is huge relative to the size of the graphs involved.<|reference_end|> | arxiv | @article{hemaspaandra2004complexity,
title={Complexity Results in Graph Reconstruction},
author={Edith Hemaspaandra, Lane A. Hemaspaandra, Stanislaw P. Radziszowski,
Rahul Tripathi},
journal={arXiv preprint arXiv:cs/0410021},
year={2004},
number={URCS-TR-2004-852},
archivePrefix={arXiv},
eprint={cs/0410021},
primaryClass={cs.CC cs.DM}
} | hemaspaandra2004complexity |
arxiv-672206 | cs/0410022 | RRL: A Rich Representation Language for the Description of Agent Behaviour in NECA | <|reference_start|>RRL: A Rich Representation Language for the Description of Agent Behaviour in NECA: In this paper, we describe the Rich Representation Language (RRL) which is used in the NECA system. The NECA system generates interactions between two or more animated characters. The RRL is an XML compliant framework for representing the information that is exchanged at the interfaces between the various NECA system modules. The full XML Schemas for the RRL are available at http://www.ai.univie.ac.at/NECA/RRL<|reference_end|> | arxiv | @article{piwek2004rrl:,
title={RRL: A Rich Representation Language for the Description of Agent
Behaviour in NECA},
author={P. Piwek, B. Krenn, M. Schroeder, M. Grice, S. Baumann and H. Pirker},
journal={In Proceedings of the AAMAS-02 Workshop ``Embodied conversational
agents - let's specify and evaluate them!'', July 16 2002, Bologna, Italy.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410022},
primaryClass={cs.MM cs.MA}
} | piwek2004rrl: |
arxiv-672207 | cs/0410023 | All Superlinear Inverse Schemes are coNP-Hard | <|reference_start|>All Superlinear Inverse Schemes are coNP-Hard: How hard is it to invert NP-problems? We show that all superlinearly certified inverses of NP problems are coNP-hard. To do so, we develop a novel proof technique that builds diagonalizations against certificates directly into a circuit.<|reference_end|> | arxiv | @article{hemaspaandra2004all,
title={All Superlinear Inverse Schemes are coNP-Hard},
author={Edith Hemaspaandra, Lane A. Hemaspaandra, Harald Hempel},
journal={arXiv preprint arXiv:cs/0410023},
year={2004},
number={URCS-TR-2004-841},
archivePrefix={arXiv},
eprint={cs/0410023},
primaryClass={cs.CC cs.CR}
} | hemaspaandra2004all |
arxiv-672208 | cs/0410024 | The Key Authority - Secure Key Management in Hierarchical Public Key Infrastructures | <|reference_start|>The Key Authority - Secure Key Management in Hierarchical Public Key Infrastructures: We model a private key's life cycle as a finite state machine. The states are the key's phases of life and the transition functions describe tasks to be done with the key. Based on this we define and describe the key authority, a trust center module, which potentiates the easy enforcement of secure management of private keys in hierarchical public key infrastructures. This is done by assembling all trust center tasks concerning the crucial handling of private keys within one centralized module. As this module resides under full control of the trust center's carrier it can easily be protected by well-known organizational and technical measures.<|reference_end|> | arxiv | @article{wiesmaier2004the,
title={The Key Authority - Secure Key Management in Hierarchical Public Key
Infrastructures},
author={A. Wiesmaier (1), M. Lippert (1), V. Karatsiolis (1) ((1) TU
Darmstadt)},
journal={in Proceedings of the International Conference on Security and
Management. CSREA Press, June 2004, pp. 89-93},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410024},
primaryClass={cs.CR}
} | wiesmaier2004the |
arxiv-672209 | cs/0410025 | Outflanking and securely using the PIN/TAN-System | <|reference_start|>Outflanking and securely using the PIN/TAN-System: The PIN/TAN-system is an authentication and authorization scheme used in e-business. Like other similar schemes it is successfully attacked by criminals. After briefly classifying the various kinds of attacks, we carry out malicious code attacks on real World Wide Web transaction systems. In doing so we find that it is really easy to outflank these systems. This is even supported by the users' behavior. We give a few simple behavior rules to improve this situation. But their impact is limited. Also, the providers support the attacks by having implementation flaws in their installations. Finally we show that the PIN/TAN-system is not suitable for use in highly secure applications.<|reference_end|> | arxiv | @article{wiesmaier2004outflanking,
title={Outflanking and securely using the PIN/TAN-System},
author={A. Wiesmaier, M. Fischer, M. Lippert, J. Buchmann},
journal={Proceedings of the 2005 International Conference on Security and
Management (SAM'05); June 2005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410025},
primaryClass={cs.CR}
} | wiesmaier2004outflanking |
arxiv-672210 | cs/0410026 | Lattice QCD Data and Metadata Archives at Fermilab and the International Lattice Data Grid | <|reference_start|>Lattice QCD Data and Metadata Archives at Fermilab and the International Lattice Data Grid: The lattice gauge theory community produces large volumes of data. Because the data produced by completed computations form the basis for future work, the maintenance of archives of existing data and metadata describing the provenance, generation parameters, and derived characteristics of that data is essential not only as a reference, but also as a basis for future work. Development of these archives according to uniform standards, both in the data and metadata formats provided and in the software interfaces to the component services, could greatly simplify collaborations between institutions and enable the dissemination of meaningful results. This paper describes the progress made in the development of a set of such archives at the Fermilab lattice QCD facility. We are coordinating the development of the interfaces to these facilities and the formats of the data and metadata they provide with the efforts of the international lattice data grid (ILDG) metadata and middleware working groups, whose goals are to develop standard formats for lattice QCD data and metadata and a uniform interface to archive facilities that store them. Services under development include those commonly associated with data grids: a service registry, a metadata database, a replica catalog, and an interface to a mass storage system. All services provide GSI-authenticated web service interfaces following modern standards, including WSDL and SOAP, and accept and provide data and metadata following recent XML-based formats proposed by the ILDG metadata working group.<|reference_end|> | arxiv | @article{neilsen2004lattice,
title={Lattice QCD Data and Metadata Archives at Fermilab and the International
Lattice Data Grid},
author={Eric H. Neilsen Jr, James Simone},
journal={arXiv preprint arXiv:cs/0410026},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410026},
primaryClass={cs.DC hep-lat}
} | neilsen2004lattice |
arxiv-672211 | cs/0410027 | Detecting User Engagement in Everyday Conversations | <|reference_start|>Detecting User Engagement in Everyday Conversations: This paper presents a novel application of speech emotion recognition: estimation of the level of conversational engagement between users of a voice communication system. We begin by using machine learning techniques, such as the support vector machine (SVM), to classify users' emotions as expressed in individual utterances. However, this alone fails to model the temporal and interactive aspects of conversational engagement. We therefore propose the use of a multilevel structure based on coupled hidden Markov models (HMM) to estimate engagement levels in continuous natural speech. The first level is comprised of SVM-based classifiers that recognize emotional states, which could be (e.g.) discrete emotion types or arousal/valence levels. A high-level HMM then uses these emotional states as input, estimating users' engagement in conversation by decoding the internal states of the HMM. We report experimental results obtained by applying our algorithms to the LDC Emotional Prosody and CallFriend speech corpora.<|reference_end|> | arxiv | @article{yu2004detecting,
title={Detecting User Engagement in Everyday Conversations},
author={Chen Yu, Paul M. Aoki, Allison Woodruff},
journal={Proc. 8th Int'l Conf. on Spoken Language Processing (ICSLP) (Vol.
2), Jeju Island, Republic of Korea, Oct. 2004, 1329-1332. ISCA.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410027},
primaryClass={cs.SD cs.CL cs.HC}
} | yu2004detecting |
arxiv-672212 | cs/0410028 | Life Above Threshold: From List Decoding to Area Theorem and MSE | <|reference_start|>Life Above Threshold: From List Decoding to Area Theorem and MSE: We consider communication over memoryless channels using low-density parity-check code ensembles above the iterative (belief propagation) threshold. What is the computational complexity of decoding (i.e., of reconstructing all the typical input codewords for a given channel output) in this regime? We define an algorithm accomplishing this task and analyze its typical performance. The behavior of the new algorithm can be expressed in purely information-theoretical terms. Its analysis provides an alternative proof of the area theorem for the binary erasure channel. Finally, we explain how the area theorem is generalized to arbitrary memoryless channels. We note that the recently discovered relation between mutual information and minimal square error is an instance of the area theorem in the setting of Gaussian channels.<|reference_end|> | arxiv | @article{measson2004life,
title={Life Above Threshold: From List Decoding to Area Theorem and MSE},
author={Cyril Measson, Andrea Montanari, Tom Richardson, Rudiger Urbanke},
journal={arXiv preprint arXiv:cs/0410028},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410028},
primaryClass={cs.IT cond-mat.dis-nn math.IT}
} | measson2004life |
arxiv-672213 | cs/0410029 | Nondeterministic Linear Logic | <|reference_start|>Nondeterministic Linear Logic: In this paper, we introduce Linear Logic with a nondeterministic facility, which has a self-dual additive connective. In the system the proof net technology is available in a natural way. The important point is that nondeterminism in the system is expressed by the process of normalization, not by proof search. Moreover we can incorporate the system into Light Linear Logic and Elementary Linear Logic developed by J.-Y.Girard recently: Nondeterministic Light Linear Logic and Nondeterministic Elementary Linear Logic are defined in a very natural way.<|reference_end|> | arxiv | @article{matsuoka2004nondeterministic,
title={Nondeterministic Linear Logic},
author={Satoshi Matsuoka},
journal={IPSJ SIGNotes PROgramming No.12, 1996},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410029},
primaryClass={cs.LO}
} | matsuoka2004nondeterministic |
arxiv-672214 | cs/0410030 | Weak Typed Boehm Theorem on IMLL | <|reference_start|>Weak Typed Boehm Theorem on IMLL: In the Boehm theorem workshop on Crete island, Zoran Petric called Statman's ``Typical Ambiguity theorem'' typed Boehm theorem. Moreover, he gave a new proof of the theorem based on set-theoretical models of the simply typed lambda calculus. In this paper, we study the linear version of the typed Boehm theorem on a fragment of Intuitionistic Linear Logic. We show that in the multiplicative fragment of intuitionistic linear logic without the multiplicative unit 1 (for short IMLL) weak typed Boehm theorem holds. The system IMLL exactly corresponds to the linear lambda calculus without exponentials, additives and logical constants. The system IMLL also exactly corresponds to the free symmetric monoidal closed category without the unit object. As far as we know, our separation result is the first one with regard to these systems in a purely syntactical manner.<|reference_end|> | arxiv | @article{matsuoka2004weak,
title={Weak Typed Boehm Theorem on IMLL},
author={Satoshi Matsuoka},
journal={arXiv preprint arXiv:cs/0410030},
year={2004},
doi={10.1016/j.apal.2006.06.001},
archivePrefix={arXiv},
eprint={cs/0410030},
primaryClass={cs.LO}
} | matsuoka2004weak |
arxiv-672215 | cs/0410031 | Similarity-Based Supervisory Control of Discrete Event Systems | <|reference_start|>Similarity-Based Supervisory Control of Discrete Event Systems: Due to the appearance of uncontrollable events in discrete event systems, one may wish to replace the behavior leading to the uncontrollability of a pre-specified language by some quite similar one. To capture this similarity, we introduce a metric into traditional supervisory control theory and generalize the original concept of controllability to $\ld$-controllability, where $\ld$ indicates the similarity degree of two languages. A necessary and sufficient condition for a language to be $\ld$-controllable is provided. We then examine some properties of $\ld$-controllable languages and present an approach to optimizing a realization.<|reference_end|> | arxiv | @article{cao2004similarity-based,
title={Similarity-Based Supervisory Control of Discrete Event Systems},
author={Yongzhi Cao and Mingsheng Ying},
journal={A short version has been published in the IEEE Transactions on
Automatic Control, 51(2), pp. 325-330, February 2006.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410031},
primaryClass={cs.DM}
} | cao2004similarity-based |
arxiv-672216 | cs/0410032 | The state complexity of L^2 and L^k | <|reference_start|>The state complexity of L^2 and L^k: We show that if M is a DFA with n states over an arbitrary alphabet and L = L(M), then the worst-case state complexity of L^2 is n*2^n - 2^{n-1}. If, however, M is a DFA over a unary alphabet, then the worst-case state complexity of L^k is kn-k+1 for all k >= 2.<|reference_end|> | arxiv | @article{rampersad2004the,
title={The state complexity of L^2 and L^k},
author={Narad Rampersad},
journal={arXiv preprint arXiv:cs/0410032},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410032},
primaryClass={cs.CC cs.FL}
} | rampersad2004the |
arxiv-672217 | cs/0410033 | An In-Depth Look at Information Fusion Rules & the Unification of Fusion Theories | <|reference_start|>An In-Depth Look at Information Fusion Rules & the Unification of Fusion Theories: This paper may look like a glossary of the fusion rules, and we also introduce new ones, presenting their formulas and examples: Conjunctive, Disjunctive, Exclusive Disjunctive, Mixed Conjunctive-Disjunctive rules, Conditional rule, Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, Dezert-Smarandache classical and hybrid rules, Murphy's average rule, Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as particular cases: Inagaki's parameterized rule, Weighting Average Operator, minC (M. Daniel), and the newly introduced Proportional Conflict Redistribution rules (Smarandache-Dezert), among which PCR5 is the most exact way of redistributing the conflicting mass to non-empty sets following the path of the conjunctive rule], Zhang's Center Combination rule, Convolutive x-Averaging, Consensus Operator (Josang), Cautious Rule (Smets), α-junctions rules (Smets), etc., and three new T-norm & T-conorm rules adjusted from fuzzy and neutrosophic sets to information fusion (Tchamova-Smarandache). Introducing the degree of union and the degree of inclusion with respect to the cardinal of sets (not from the fuzzy-set point of view), besides that of intersection, many fusion rules can be improved. There are corner cases where each rule might have difficulties working or may not get an expected result.<|reference_end|> | arxiv | @article{smarandache2004an,
title={An In-Depth Look at Information Fusion Rules & the Unification of Fusion
Theories},
author={Florentin Smarandache},
journal={Partially published in Review of the Air Force Academy (The
Scientific Informative Review), Brasov, No. 2, pp. 31-40, 2006.},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410033},
primaryClass={cs.AI}
} | smarandache2004an |
arxiv-672218 | cs/0410034 | P-time Completeness of Light Linear Logic and its Nondeterministic Extension | <|reference_start|>P-time Completeness of Light Linear Logic and its Nondeterministic Extension: In CSL'99 Roversi pointed out that the Turing machine encoding of Girard's seminal paper "Light Linear Logic" has a flaw. Moreover, he presented a working version of the encoding in Light Affine Logic, but not in Light Linear Logic. In this paper we present a working version of the encoding in Light Linear Logic. The idea of the encoding is based on a remark in Girard's tutorial paper on Linear Logic. The encoding is also an example which shows the usefulness of additive connectives. Moreover, we also consider a nondeterministic extension of Light Linear Logic. We show that the extended system is NP-complete in the same sense in which Light Linear Logic is P-complete.<|reference_end|> | arxiv | @article{matsuoka2004p-time,
title={P-time Completeness of Light Linear Logic and its Nondeterministic
Extension},
author={Satoshi Matsuoka},
journal={arXiv preprint arXiv:cs/0410034},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410034},
primaryClass={cs.LO}
} | matsuoka2004p-time |
arxiv-672219 | cs/0410035 | Overhead-Free Computation, DCFLs, and CFLs | <|reference_start|>Overhead-Free Computation, DCFLs, and CFLs: We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits' own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such machines, we show that all context-free languages can be accepted nondeterministically in polynomial time with absolutely no space overhead, and that all deterministic context-free languages can be accepted deterministically in polynomial time with absolutely no space overhead.<|reference_end|> | arxiv | @article{hemaspaandra2004overhead-free,
title={Overhead-Free Computation, DCFLs, and CFLs},
author={Lane A. Hemaspaandra, Proshanto Mukherji, and Till Tantau},
journal={arXiv preprint arXiv:cs/0410035},
year={2004},
number={URCS-TR-2004-844},
archivePrefix={arXiv},
eprint={cs/0410035},
primaryClass={cs.CC}
} | hemaspaandra2004overhead-free |
arxiv-672220 | cs/0410036 | Self-Organised Factorial Encoding of a Toroidal Manifold | <|reference_start|>Self-Organised Factorial Encoding of a Toroidal Manifold: It is shown analytically how a neural network can be used optimally to encode input data that is derived from a toroidal manifold. The case of a 2-layer network is considered, where the output is assumed to be a set of discrete neural firing events. The network objective function measures the average Euclidean error that occurs when the network attempts to reconstruct its input from its output. This optimisation problem is solved analytically for a toroidal input manifold, and two types of solution are obtained: a joint encoder in which the network acts as a soft vector quantiser, and a factorial encoder in which the network acts as a pair of soft vector quantisers (one for each of the circular subspaces of the torus). The factorial encoder is favoured for small network sizes when the number of observed firing events is large. Such self-organised factorial encoding may be used to restrict the size of network that is required to perform a given encoding task, and will decompose an input manifold into its constituent submanifolds.<|reference_end|> | arxiv | @article{luttrell2004self-organised,
title={Self-Organised Factorial Encoding of a Toroidal Manifold},
author={Stephen Luttrell},
journal={arXiv preprint arXiv:cs/0410036},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410036},
primaryClass={cs.LG cs.CV}
} | luttrell2004self-organised |
arxiv-672221 | cs/0410037 | Hardware-Oriented Group Solutions for Hard Problems | <|reference_start|>Hardware-Oriented Group Solutions for Hard Problems: Group and individual solutions are considered for hard problems such as the satisfiability problem. A time-space trade-off in a structured active memory provides a means to achieve lower time complexity for solutions of these problems.<|reference_end|> | arxiv | @article{burgin2004hardware-oriented,
title={Hardware-Oriented Group Solutions for Hard Problems},
author={Mark Burgin},
journal={arXiv preprint arXiv:cs/0410037},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410037},
primaryClass={cs.CC}
} | burgin2004hardware-oriented |
arxiv-672222 | cs/0410038 | Frequent Knot Discovery | <|reference_start|>Frequent Knot Discovery: We explore the possibility of applying the framework of frequent pattern mining to a class of continuous objects appearing in nature, namely knots. We introduce the frequent knot mining problem and present a solution. The key observation is that a database consisting of knots can be transformed into a transactional database. This observation is based on the Prime Decomposition Theorem of knots.<|reference_end|> | arxiv | @article{geerts2004frequent,
title={Frequent Knot Discovery},
author={Floris Geerts},
journal={arXiv preprint arXiv:cs/0410038},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410038},
primaryClass={cs.DB}
} | geerts2004frequent |
arxiv-672223 | cs/0410039 | Generating All Maximal Induced Subgraphs for Hereditary, Connected-Hereditary and Rooted-Hereditary Properties | <|reference_start|>Generating All Maximal Induced Subgraphs for Hereditary, Connected-Hereditary and Rooted-Hereditary Properties: The problem of computing all maximal induced subgraphs of a graph G that have a graph property P, also called the maximal P-subgraphs problem, is considered. This problem is studied for hereditary, connected-hereditary and rooted-hereditary graph properties. The maximal P-subgraphs problem is reduced to restricted versions of this problem by providing algorithms that solve the general problem, assuming that an algorithm for a restricted version is given. The complexity of the algorithms is analyzed in terms of total polynomial time, incremental polynomial time and the complexity class P-enumerable. The general results presented allow simple proofs that the maximal P-subgraphs problem can be solved efficiently (in terms of the input and output) for many different properties.<|reference_end|> | arxiv | @article{cohen2004generating,
title={Generating All Maximal Induced Subgraphs for Hereditary,
Connected-Hereditary and Rooted-Hereditary Properties},
author={Sara Cohen and Yehoshua Sagiv},
journal={arXiv preprint arXiv:cs/0410039},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410039},
primaryClass={cs.DS cs.DM}
} | cohen2004generating |
arxiv-672224 | cs/0410040 | Two Methods for Decreasing the Computational Complexity of the MIMO ML Decoder | <|reference_start|>Two Methods for Decreasing the Computational Complexity of the MIMO ML Decoder: We propose use of QR factorization with sort and Dijkstra's algorithm for decreasing the computational complexity of the sphere decoder that is used for ML detection of signals on the multi-antenna fading channel. QR factorization with sort decreases the complexity of searching part of the decoder with small increase in the complexity required for preprocessing part of the decoder. Dijkstra's algorithm decreases the complexity of searching part of the decoder with increase in the storage complexity. The computer simulation demonstrates that the complexity of the decoder is reduced by the proposed methods significantly.<|reference_end|> | arxiv | @article{fukatani2004two,
title={Two Methods for Decreasing the Computational Complexity of the MIMO ML
Decoder},
author={Takayuki Fukatani, Ryutaroh Matsumoto, Tomohiko Uyematsu},
journal={IEICE Trans. Fundamentals, vol. E87-A, no. 10, pp. 2571-2576, Oct.
2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410040},
primaryClass={cs.IT math.IT}
} | fukatani2004two |
arxiv-672225 | cs/0410041 | Maximum Mutual Information of Space-Time Block Codes with Symbolwise Decodability | <|reference_start|>Maximum Mutual Information of Space-Time Block Codes with Symbolwise Decodability: In this paper, we analyze the performance of space-time block codes which enable symbolwise maximum likelihood decoding. We derive an upper bound of maximum mutual information (MMI) on space-time block codes that enable symbolwise maximum likelihood decoding for a frequency non-selective quasi-static fading channel. MMI is an upper bound on how much one can send information with vanishing error probability by using the target code.<|reference_end|> | arxiv | @article{tanaka2004maximum,
title={Maximum Mutual Information of Space-Time Block Codes with Symbolwise
Decodability},
author={Kenji Tanaka, Ryutaroh Matsumoto, Tomohiko Uyematsu},
journal={arXiv preprint arXiv:cs/0410041},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410041},
primaryClass={cs.IT math.IT}
} | tanaka2004maximum |
arxiv-672226 | cs/0410042 | Neural Architectures for Robot Intelligence | <|reference_start|>Neural Architectures for Robot Intelligence: We argue that the direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our lab in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.<|reference_end|> | arxiv | @article{ritter2004neural,
title={Neural Architectures for Robot Intelligence},
author={H. Ritter, J.J. Steil, C. Noelker, F. Roethling, P.C. McGuire},
journal={Reviews in the Neurosciences, vol. 14, no. 1-2, pp. 121-143 (2003)},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410042},
primaryClass={cs.RO cs.CV cs.HC cs.LG cs.NE q-bio.NC}
} | ritter2004neural |
arxiv-672227 | cs/0410043 | Strategy in Ulam's Game and Tree Code Give Error-Resistant Protocols | <|reference_start|>Strategy in Ulam's Game and Tree Code Give Error-Resistant Protocols: We present a new approach to the construction of protocols which are proof against communication errors. The construction is based on a generalization of the well-known Ulam's game. We show an equivalence between winning strategies in this game and robust protocols for multi-party computation. We do not give a complete theory; rather, we want to describe a fresh new idea. We use a tree code defined by Schulman. The tree code is the most important part of the interactive version of Shannon's Coding Theorem proved by Schulman. He uses a probabilistic argument for the existence of a tree code without giving any effective construction. We show another proof yielding a randomized construction which, in contrast to his proof, almost surely gives a good code. Moreover, our construction uses a much smaller alphabet.<|reference_end|> | arxiv | @article{peczarski2004strategy,
title={Strategy in Ulam's Game and Tree Code Give Error-Resistant Protocols},
author={Marcin Peczarski},
journal={arXiv preprint arXiv:cs/0410043},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410043},
primaryClass={cs.DC cs.IT math.IT}
} | peczarski2004strategy |
arxiv-672228 | cs/0410044 | An Example of Clifford Algebras Calculations with GiNaC | <|reference_start|>An Example of Clifford Algebras Calculations with GiNaC: This example of Clifford algebras calculations uses GiNaC (http://www.ginac.de/) library, which includes a support for generic Clifford algebra starting from version~1.3.0. Both symbolic and numeric calculation are possible and can be blended with other functions of GiNaC. This calculations was made for the paper math.CV/0410399. Described features of GiNaC are already available at PyGiNaC (http://sourceforge.net/projects/pyginac/) and due to course should propagate into other software like GNU Octave (http://www.octave.org/), gTybalt (http://www.fis.unipr.it/~stefanw/gtybalt.html), which use GiNaC library as their back-end.<|reference_end|> | arxiv | @article{kisil2004an,
title={An Example of Clifford Algebras Calculations with GiNaC},
author={Vladimir V. Kisil},
journal={Advances in Applied Clifford Algebras, 15(2005), no. 2, pp.
239-269},
year={2004},
number={LEEDS-MATH-PURE-2004-18},
archivePrefix={arXiv},
eprint={cs/0410044},
primaryClass={cs.MS cs.CG cs.GR cs.SC}
} | kisil2004an |
arxiv-672229 | cs/0410045 | Analysis of and workarounds for element reversal for a finite element-based algorithm for warping triangular and tetrahedral meshes | <|reference_start|>Analysis of and workarounds for element reversal for a finite element-based algorithm for warping triangular and tetrahedral meshes: We consider an algorithm called FEMWARP for warping triangular and tetrahedral finite element meshes that computes the warping using the finite element method itself. The algorithm takes as input a two- or three-dimensional domain defined by a boundary mesh (segments in one dimension or triangles in two dimensions) that has a volume mesh (triangles in two dimensions or tetrahedra in three dimensions) in its interior. It also takes as input a prescribed movement of the boundary mesh. It computes as output updated positions of the vertices of the volume mesh. The first step of the algorithm is to determine from the initial mesh a set of local weights for each interior vertex that describes each interior vertex in terms of the positions of its neighbors. These weights are computed using a finite element stiffness matrix. After a boundary transformation is applied, a linear system of equations based upon the weights is solved to determine the final positions of the interior vertices. The FEMWARP algorithm has been considered in the previous literature (e.g., in a 2001 paper by Baker). FEMWARP has been successful in computing deformed meshes for certain applications. However, sometimes FEMWARP reverses elements; this is our main concern in this paper. We analyze the causes for this undesirable behavior and propose several techniques to make the method more robust against reversals. The most successful of the proposed methods includes combining FEMWARP with an optimization-based untangler.<|reference_end|> | arxiv | @article{shontz2004analysis,
title={Analysis of and workarounds for element reversal for a finite
element-based algorithm for warping triangular and tetrahedral meshes},
author={Suzanne M. Shontz, Stephen A. Vavasis},
journal={BIT, Numerical Mathematics, Vol. 50, Issue 4, 2010, p. 863-884},
year={2004},
doi={10.1007/s10543-010-0283-3},
archivePrefix={arXiv},
eprint={cs/0410045},
primaryClass={cs.NA}
} | shontz2004analysis |
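
Illustrative aside (not part of the dataset record above, and not the authors' code): a minimal sketch of the warping idea the FEMWARP abstract describes, in which each interior vertex is written as a weighted combination of its neighbors and a linear system is solved after the boundary moves. For simplicity it uses uniform graph-Laplacian weights in place of the finite element stiffness weights that FEMWARP actually uses, so it only illustrates the structure of the method; the function name and the example mesh are invented.

```python
# Sketch: solve A_II x_I = -A_IB x_B coordinate-wise for new interior positions,
# where A is a uniform graph Laplacian standing in for the FEM stiffness matrix.
import numpy as np

def warp(n_vertices, edges, boundary_idx, boundary_new_xy):
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:                      # assemble a uniform graph Laplacian
        A[i, i] += 1.0; A[j, j] += 1.0
        A[i, j] -= 1.0; A[j, i] -= 1.0
    boundary = np.array(boundary_idx)
    interior = np.array([v for v in range(n_vertices) if v not in set(boundary_idx)])
    x_B = np.asarray(boundary_new_xy, dtype=float)  # row k = new position of boundary_idx[k]
    x_I = np.linalg.solve(A[np.ix_(interior, interior)],
                          -A[np.ix_(interior, boundary)] @ x_B)
    new_xy = np.zeros((n_vertices, x_B.shape[1]))
    new_xy[boundary], new_xy[interior] = x_B, x_I
    return new_xy

# Tiny example: square with boundary vertices 0-3 and one interior vertex 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
print(warp(5, edges, [0, 1, 2, 3], [[0, 0], [2, 0], [2, 1], [0, 1]]))
# vertex 4 lands at [1.0, 0.5], the centroid of the stretched square
```
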
arxiv-672230 | cs/0410046 | A Note on Scheduling Equal-Length Jobs to Maximize Throughput | <|reference_start|>A Note on Scheduling Equal-Length Jobs to Maximize Throughput: We study the problem of scheduling equal-length jobs with release times and deadlines, where the objective is to maximize the number of completed jobs. Preemptions are not allowed. In Graham's notation, the problem is described as 1|r_j;p_j=p|\sum U_j. We give the following results: (1) We show that the often cited algorithm by Carlier from 1981 is not correct. (2) We give an algorithm for this problem with running time O(n^5).<|reference_end|> | arxiv | @article{chrobak2004a,
title={A Note on Scheduling Equal-Length Jobs to Maximize Throughput},
author={Marek Chrobak, Christoph Durr, Wojciech Jawor, Lukasz Kowalik, Maciej
Kurowski},
journal={arXiv preprint arXiv:cs/0410046},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410046},
primaryClass={cs.DS}
} | chrobak2004a |
arxiv-672231 | cs/0410047 | Simple Distributed Weighted Matchings | <|reference_start|>Simple Distributed Weighted Matchings: Wattenhofer [WW04] derive a complicated distributed algorithm to compute a weighted matching of an arbitrary weighted graph, that is at most a factor 5 away from the maximum weighted matching of that graph. We show that a variant of the obvious sequential greedy algorithm [Pre99], that computes a weighted matching at most a factor 2 away from the maximum, is easily distributed. This yields the best known distributed approximation algorithm for this problem so far.<|reference_end|> | arxiv | @article{hoepman2004simple,
title={Simple Distributed Weighted Matchings},
author={Jaap-Henk Hoepman},
journal={arXiv preprint arXiv:cs/0410047},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410047},
primaryClass={cs.DC cs.DM}
} | hoepman2004simple |
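
Illustrative aside (not part of the dataset record above): the sequential greedy heuristic this abstract builds on is easy to state, so a small sketch may help. It repeatedly takes the heaviest remaining edge whose endpoints are both still free, which yields a matching of weight at least half the maximum. Note this is the classic global-greedy variant, not the distributed algorithm of the paper, and the graph below is invented.

```python
# Sketch of the sequential greedy 1/2-approximation for maximum weight matching.
def greedy_matching(edges):
    """edges: iterable of (u, v, weight). Returns a list of matched (u, v, weight)."""
    matched_vertices = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched_vertices and v not in matched_vertices:
            matching.append((u, v, w))
            matched_vertices.update((u, v))
    return matching

# Example on a small weighted graph with invented weights.
print(greedy_matching([("a", "b", 3.0), ("b", "c", 5.0), ("c", "d", 4.0), ("a", "d", 1.0)]))
# -> [('b', 'c', 5.0), ('a', 'd', 1.0)]  total 6.0; the optimum {ab, cd} has weight 7.0
```
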
arxiv-672232 | cs/0410048 | Worst-Case Optimal Tree Layout in External Memory | <|reference_start|>Worst-Case Optimal Tree Layout in External Memory: Consider laying out a fixed-topology tree of N nodes into external memory with block size B so as to minimize the worst-case number of block memory transfers required to traverse a path from the root to a node of depth D. We prove that the optimal number of memory transfers is $$ \cases{ \displaystyle \Theta\left( {D \over \lg (1{+}B)} \right) & when $D = O(\lg N)$, \cr \displaystyle \Theta\left( {\lg N \over \lg \left(1{+}{B \lg N \over D}\right)} \right) & when $D = \Omega(\lg N)$ and $D = O(B \lg N)$, \cr \displaystyle \Theta\left( {D \over B} \right) & when $D = \Omega(B \lg N)$. } $$<|reference_end|> | arxiv | @article{demaine2004worst-case,
title={Worst-Case Optimal Tree Layout in External Memory},
author={Erik D. Demaine and John Iacono and Stefan Langerman},
journal={arXiv preprint arXiv:cs/0410048},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410048},
primaryClass={cs.DS}
} | demaine2004worst-case |
arxiv-672233 | cs/0410049 | Intransitivity and Vagueness | <|reference_start|>Intransitivity and Vagueness: There are many examples in the literature that suggest that indistinguishability is intransitive, despite the fact that the indistinguishability relation is typically taken to be an equivalence relation (and thus transitive). It is shown that if the uncertainty perception and the question of when an agent reports that two things are indistinguishable are both carefully modeled, the problems disappear, and indistinguishability can indeed be taken to be an equivalence relation. Moreover, this model also suggests a logic of vagueness that seems to solve many of the problems related to vagueness discussed in the philosophical literature. In particular, it is shown here how the logic can handle the sorites paradox.<|reference_end|> | arxiv | @article{halpern2004intransitivity,
title={Intransitivity and Vagueness},
author={Joseph Y. Halpern},
journal={arXiv preprint arXiv:cs/0410049},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410049},
primaryClass={cs.AI}
} | halpern2004intransitivity |
arxiv-672234 | cs/0410050 | Sleeping Beauty Reconsidered: Conditioning and Reflection in Asynchronous Systems | <|reference_start|>Sleeping Beauty Reconsidered: Conditioning and Reflection in Asynchronous Systems: A careful analysis of conditioning in the Sleeping Beauty problem is done, using the formal model for reasoning about knowledge and probability developed by Halpern and Tuttle. While the Sleeping Beauty problem has been viewed as revealing problems with conditioning in the presence of imperfect recall, the analysis done here reveals that the problems are not so much due to imperfect recall as to asynchrony. The implications of this analysis for van Fraassen's Reflection Principle and Savage's Sure-Thing Principle are considered.<|reference_end|> | arxiv | @article{halpern2004sleeping,
title={Sleeping Beauty Reconsidered: Conditioning and Reflection in
Asynchronous Systems},
author={Joseph Y. Halpern},
journal={arXiv preprint arXiv:cs/0410050},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410050},
primaryClass={cs.AI}
} | halpern2004sleeping |
arxiv-672235 | cs/0410051 | Turing Machine with Faults, Failures and Recovery | <|reference_start|>Turing Machine with Faults, Failures and Recovery: A Turing machine with faults, failures and recovery (TMF) is described. TMF is a (weakly) non-deterministic Turing machine consisting of five semi-infinite tapes (Master Tape, Synchro Tape, Backup Tape, Backup Synchro Tape, User Tape) and four controlling components (Program, Daemon, Apparatus, User). The computational process consists of three phases (Program Phase, Failure Phase, Repair Phase). A C++ simulator of a Turing machine with faults, failures and recovery has been developed.<|reference_end|> | arxiv | @article{vinokur2004turing,
title={Turing Machine with Faults, Failures and Recovery},
author={Alex Vinokur},
journal={arXiv preprint arXiv:cs/0410051},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410051},
primaryClass={cs.LO}
} | vinokur2004turing |
arxiv-672236 | cs/0410052 | A 2-chain can interlock with a k-chain | <|reference_start|>A 2-chain can interlock with a k-chain: One of the open problems posed in [3] is: what is the minimal number k such that an open, flexible k-chain can interlock with a flexible 2-chain? In this paper, we establish the assumption behind this problem, that there is indeed some k that achieves interlocking. We prove that a flexible 2-chain can interlock with a flexible, open 16-chain.<|reference_end|> | arxiv | @article{glass2004a,
title={A 2-chain can interlock with a k-chain},
author={Julie Glass, Stefan Langerman, Joseph O'Rourke, Jack Snoeyink,
Jianyuan K. Zhong},
journal={arXiv preprint arXiv:cs/0410052},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410052},
primaryClass={cs.CG cs.DM}
} | glass2004a |
arxiv-672237 | cs/0410053 | An Extended Generalized Disjunctive Paraconsistent Data Model for Disjunctive Information | <|reference_start|>An Extended Generalized Disjunctive Paraconsistent Data Model for Disjunctive Information: This paper presents an extension of the generalized disjunctive paraconsistent relational data model in which pure disjunctive positive and negative information as well as mixed disjunctive positive and negative information can be represented explicitly and manipulated. We consider explicit mixed disjunctive information in the context of disjunctive databases as there is an interesting interplay between these two types of information. The extended generalized disjunctive paraconsistent relation is introduced as the main structure in this model. The relational algebra is appropriately generalized to work on extended generalized disjunctive paraconsistent relations, and the correctness of the generalized operators is established.<|reference_end|> | arxiv | @article{wang2004an,
title={An Extended Generalized Disjunctive Paraconsistent Data Model for
Disjunctive Information},
author={Haibin Wang, Hao Tian, Rajshekhar Sunderraman},
journal={arXiv preprint arXiv:cs/0410053},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410053},
primaryClass={cs.DB}
} | wang2004an |
arxiv-672238 | cs/0410054 | Paraconsistent Intuitionistic Fuzzy Relational Data Model | <|reference_start|>Paraconsistent Intuitionistic Fuzzy Relational Data Model: In this paper, we present a generalization of the relational data model based on paraconsistent intuitionistic fuzzy sets. Our data model is capable of manipulating incomplete as well as inconsistent information. Fuzzy relations or intuitionistic fuzzy relations can only handle incomplete information. Associated with each relation are two membership functions: one is the truth-membership function $T$, which keeps track of the extent to which we believe the tuple is in the relation; the other is the false-membership function $F$, which keeps track of the extent to which we believe that it is not in the relation. A paraconsistent intuitionistic fuzzy relation is inconsistent if there exists one tuple $a$ such that $T(a) + F(a) > 1$. In order to handle inconsistent situations, we propose an operator called split to transform inconsistent paraconsistent intuitionistic fuzzy relations into pseudo-consistent paraconsistent intuitionistic fuzzy relations, perform the set-theoretic and relation-theoretic operations on them, and finally use another operator called combine to transform the result back to a paraconsistent intuitionistic fuzzy relation. For this model, we define algebraic operators that are generalisations of the usual operators such as union, selection, and join on fuzzy relations. Our data model can underlie any database and knowledge-base management system that deals with incomplete and inconsistent information.<|reference_end|> | arxiv | @article{sunderraman2004paraconsistent,
title={Paraconsistent Intuitionistic Fuzzy Relational Data Model},
author={Rajshekhar Sunderraman, Haibin Wang},
journal={arXiv preprint arXiv:cs/0410054},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410054},
primaryClass={cs.DB}
} | sunderraman2004paraconsistent |
arxiv-672239 | cs/0410055 | Mathematical knowledge management is needed | <|reference_start|>Mathematical knowledge management is needed: In this lecture I discuss some aspects of MKM, Mathematical Knowledge Management, with particular emphasis on information storage and information retrieval.<|reference_end|> | arxiv | @article{hazewinkel2004mathematical,
title={Mathematical knowledge management is needed},
author={Michiel Hazewinkel},
journal={arXiv preprint arXiv:cs/0410055},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410055},
primaryClass={cs.IR}
} | hazewinkel2004mathematical |
arxiv-672240 | cs/0410056 | Interval Neutrosophic Logics: Theory and Applications | <|reference_start|>Interval Neutrosophic Logics: Theory and Applications: In this paper, we present interval neutrosophic logics, which generalize fuzzy logic, paraconsistent logic, intuitionistic fuzzy logic and many other non-classical and non-standard logics. We give the formal definitions of the interval neutrosophic propositional calculus and the interval neutrosophic predicate calculus. Then we give one application of interval neutrosophic logics to approximate reasoning.<|reference_end|> | arxiv | @article{wang2004interval,
title={Interval Neutrosophic Logics: Theory and Applications},
author={Haibin Wang, Florentin Smarandache, Yanqing Zhang, Rajshekhar
Sunderraman},
journal={arXiv preprint arXiv:cs/0410056},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410056},
primaryClass={cs.LO}
} | wang2004interval |
arxiv-672241 | cs/0410057 | Generalized Counters and Reversal Complexity | <|reference_start|>Generalized Counters and Reversal Complexity: We generalize the definition of a counter and counter reversal complexity and investigate the power of generalized deterministic counter automata in terms of language recognition.<|reference_end|> | arxiv | @article{rao2004generalized,
title={Generalized Counters and Reversal Complexity},
author={M. V. Panduranga Rao},
journal={pp.318-326, Proceedings of TAMC 2006, Beijing, China, Springer
LNCS, 3959},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410057},
primaryClass={cs.CC}
} | rao2004generalized |
arxiv-672242 | cs/0410058 | Robust Dialogue Understanding in HERALD | <|reference_start|>Robust Dialogue Understanding in HERALD: We tackle the problem of robust dialogue processing from the perspective of language engineering. We propose an agent-oriented architecture that allows us a flexible way of composing robust processors. Our approach is based on Shoham's Agent Oriented Programming (AOP) paradigm. We will show how the AOP agent model can be enriched with special features and components that allow us to deal with classical problems of dialogue understanding.<|reference_end|> | arxiv | @article{pallotta2004robust,
title={Robust Dialogue Understanding in HERALD},
author={Vincenzo Pallotta, Afzal Ballim},
journal={Proceedings of RANLP 2001 - EuroConference on Recent Advances in
Natural Language Processing, September 5-7, 2001, Tzigov-Chark, Bulgaria},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410058},
primaryClass={cs.CL cs.AI cs.HC cs.MA cs.SE}
} | pallotta2004robust |
arxiv-672243 | cs/0410059 | A knowledge-based approach to semi-automatic annotation of multimedia documents via user adaptation | <|reference_start|>A knowledge-based approach to semi-automatic annotation of multimedia documents via user adaptation: Current approaches to the annotation process focus on annotation schemas, languages for annotation, or are very application driven. In this paper it is proposed that a more flexible architecture for annotation requires a knowledge component to allow for flexible search and navigation of the annotated material. In particular, it is claimed that a general approach must take into account the needs, competencies, and goals of the producers, annotators, and consumers of the annotated material. We propose that a user-model based approach is, therefore, necessary.<|reference_end|> | arxiv | @article{ballim2004a,
title={A knowledge-based approach to semi-automatic annotation of multimedia
documents via user adaptation},
author={Afzal Ballim, Nastaran Fatemi, Hatem Ghorbel, Vincenzo Pallotta},
journal={First EAGLES/ISLE Workshop on Meta-Descriptions and Annotation
Schemas for Multimodal/Multimedia Language Resources (LREC 2000
Pre-Conference Workshop), 29-30 May 2000, Athens, Greece},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410059},
primaryClass={cs.DL cs.CL cs.IR}
} | ballim2004a |
arxiv-672244 | cs/0410060 | Semantic filtering by inference on domain knowledge in spoken dialogue systems | <|reference_start|>Semantic filtering by inference on domain knowledge in spoken dialogue systems: General natural dialogue processing requires large amounts of domain knowledge as well as linguistic knowledge in order to ensure acceptable coverage and understanding. There are several ways of integrating lexical resources (e.g. dictionaries, thesauri) and knowledge bases or ontologies at different levels of dialogue processing. We concentrate in this paper on how to exploit domain knowledge for filtering interpretation hypotheses generated by a robust semantic parser. We use domain knowledge to semantically constrain the hypothesis space. Moreover, adding an inference mechanism allows us to complete the interpretation when information is not explicitly available. Further, we discuss briefly how this can be generalized towards a predictive natural interactive system.<|reference_end|> | arxiv | @article{ballim2004semantic,
title={Semantic filtering by inference on domain knowledge in spoken dialogue
systems},
author={Afzal Ballim, Vincenzo Pallotta},
journal={Proceedings of the LREC 2000 Workshop "From spoken dialogue to
full natural interactive dialogue. Theory, empirical analysis and
evaluation", May 29th, 2000 Athen, Greece},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410060},
primaryClass={cs.CL cs.AI cs.HC cs.IR}
} | ballim2004semantic |
arxiv-672245 | cs/0410061 | An argumentative annotation schema for meeting discussions | <|reference_start|>An argumentative annotation schema for meeting discussions: In this article, we are interested in the annotation of transcriptions of human-human dialogue taken from meeting records. We first propose a meeting content model where conversational acts are interpreted with respect to their argumentative force and their role in building the argumentative structure of the meeting discussion. Argumentation in dialogue describes the way participants take part in the discussion and argue their standpoints. Then, we propose an annotation scheme based on such an argumentative dialogue model as well as the evaluation of its adequacy. The obtained higher-level semantic annotations are exploited in the conceptual indexing of the information contained in meeting discussions.<|reference_end|> | arxiv | @article{pallotta2004an,
title={An argumentative annotation schema for meeting discussions},
author={Vincenzo Pallotta, Hatem Ghorbel, Patrick Ruch, Giovanni Coray},
journal={Procedings of the LREC 2004 international conference, 26-28 May
2004, Lisbon, Portugal. Pages 1003-1006},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410061},
primaryClass={cs.CL cs.DL cs.IR}
} | pallotta2004an |
arxiv-672246 | cs/0410062 | Automatic Keyword Extraction from Spoken Text A Comparison of two Lexical Resources: the EDR and WordNet | <|reference_start|>Automatic Keyword Extraction from Spoken Text A Comparison of two Lexical Resources: the EDR and WordNet: Lexical resources such as WordNet and the EDR electronic dictionary have been used in several NLP tasks. Probably, partly due to the fact that the EDR is not freely available, WordNet has been used far more often than the EDR. We have used both resources on the same task in order to make a comparison possible. The task is automatic assignment of keywords to multi-party dialogue episodes (i.e. thematically coherent stretches of spoken text). We show that the use of lexical resources in such a task results in slightly higher performances than the use of a purely statistically based method.<|reference_end|> | arxiv | @article{van der plas2004automatic,
title={Automatic Keyword Extraction from Spoken Text. A Comparison of two
Lexical Resources: the EDR and WordNet},
author={Lonneke van der Plas, Vincenzo Pallotta, Martin Rajman, Hatem Ghorbel},
journal={Procedings of the LREC 2004 international conference, 26-28 May
2004, Lisbon, Portugal. Pages 2205-2208},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410062},
primaryClass={cs.CL cs.DL cs.IR}
} | van der plas2004automatic |
arxiv-672247 | cs/0410063 | INSPIRE: Evaluation of a Smart-Home System for Infotainment Management and Device Control | <|reference_start|>INSPIRE: Evaluation of a Smart-Home System for Infotainment Management and Device Control: This paper gives an overview of the assessment and evaluation methods which have been used to determine the quality of the INSPIRE smart home system. The system allows different home appliances to be controlled via speech, and consists of speech and speaker recognition, speech understanding, dialogue management, and speech output components. The performance of these components is first assessed individually, and then the entire system is evaluated in an interaction experiment with test users. Initial results of the assessment and evaluation are given, in particular with respect to the transmission channel impact on speech and speaker recognition, and the assessment of speech output for different system metaphors.<|reference_end|> | arxiv | @article{moeller2004inspire:,
title={INSPIRE: Evaluation of a Smart-Home System for Infotainment Management
and Device Control},
author={Sebastian Moeller, Jan Krebber, Alexander Raake, Paula Smeele, Martin
Rajman, Mirek Melichar, Vincenzo Pallotta, Gianna Tsakou, Basilis Kladis,
Anestis Vovos, Jettie Hoonhout, Dietmar Schuchardt, Nikos Fakotakis, Todor
Ganchev, Ilyas Potamitis},
journal={Procedings of the LREC 2004 international conference, 26-28 May
2004, Lisbon, Portugal. Pages 1603-1606},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410063},
primaryClass={cs.HC cs.CL}
} | moeller2004inspire: |
arxiv-672248 | cs/0410064 | Intelligent Computer Numerical Control unit for machine tools | <|reference_start|>Intelligent Computer Numerical Control unit for machine tools: The paper describes a new CNC control unit for machining centres with learning ability and automatic, intelligent generation of NC programs on the basis of a neural network, which is built into the CNC unit as a special device. The device generates the NC part programs intelligently and completely automatically, solely on the basis of a 2D, 2.5D or 3D computer model of a prismatic part. Intervention of the operator is not needed. The neural network for milling, drilling, reaming, threading and similar operations has learned to generate NC programs in the learning module, which is a part of an intelligent CAD/CAM system.<|reference_end|> | arxiv | @article{balic2004intelligent,
title={Intelligent Computer Numerical Control unit for machine tools},
author={J. Balic},
journal={Neural-Network-Based Numerical Control for Milling Machine,
Journal of Intelligent and Robotic System, Volume 40, Issue 4, Aug 2004;
Pages: 343-358 (extended version)},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410064},
primaryClass={cs.CE}
} | balic2004intelligent |
arxiv-672249 | cs/0410065 | A Categorical View on Algebraic Lattices in Formal Concept Analysis | <|reference_start|>A Categorical View on Algebraic Lattices in Formal Concept Analysis: Formal concept analysis has grown from a new branch of the mathematical field of lattice theory to a widely recognized tool in Computer Science and elsewhere. In order to fully benefit from this theory, we believe that it can be enriched with notions such as approximation by computation or representability. The latter are commonly studied in denotational semantics and domain theory and captured most prominently by the notion of algebraicity, e.g. of lattices. In this paper, we explore the notion of algebraicity in formal concept analysis from a category-theoretical perspective. To this end, we build on the notion of approximable concept with a suitable category and show that the latter is equivalent to the category of algebraic lattices. At the same time, the paper provides a relatively comprehensive account of the representation theory of algebraic lattices in the framework of Stone duality, relating well-known structures such as Scott information systems with further formalisms from logic, topology, domains and lattice theory.<|reference_end|> | arxiv | @article{hitzler2004a,
title={A Categorical View on Algebraic Lattices in Formal Concept Analysis},
  author={Pascal Hitzler, Markus Kr\"otzsch, Guo-Qiang Zhang},
journal={Fundam. Inform. 74:2-3 (2006) 301-328},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410065},
primaryClass={cs.LO}
} | hitzler2004a |
arxiv-672250 | cs/0410066 | Fast Query Processing by Distributing an Index over CPU Caches | <|reference_start|>Fast Query Processing by Distributing an Index over CPU Caches: Data intensive applications on clusters often require requests quickly be sent to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the responsible node for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup; one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster. We assume that the structure will fit inside the aggregation of the CPU caches of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree.<|reference_end|> | arxiv | @article{ma2004fast,
title={Fast Query Processing by Distributing an Index over CPU Caches},
author={Xiaoqin Ma and Gene Cooperman},
journal={arXiv preprint arXiv:cs/0410066},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410066},
primaryClass={cs.DC cs.PF}
} | ma2004fast |
arxiv-672251 | cs/0410067 | Computational Unification: a Vision for Connecting Researchers | <|reference_start|>Computational Unification: a Vision for Connecting Researchers: The extent to which the benefits of science can be fully realized depends critically upon the quality of the connection between researchers themselves and between researchers and members of the public. We believe that it is now possible to improve these connections on a community-wide and even world-wide basis through the use of an appropriate information management system. In this paper we explore the concepts and challenges, and propose an architecture for the implementation of such a system. "One of the greatest visions for science is a computational unification in which every researcher can interact with all other researchers through use of their own research system." "These features not only enable research collaboration on a scale never previously envisaged, they also enable sharing and dissemination of scientific knowledge to the public at large with a sophistication unparalleled in history."<|reference_end|> | arxiv | @article{troy2004computational,
title={Computational Unification: a Vision for Connecting Researchers},
author={Richard M. Troy III},
journal={arXiv preprint arXiv:cs/0410067},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410067},
primaryClass={cs.DC cs.CY}
} | troy2004computational |
arxiv-672252 | cs/0410068 | Analyzing and Improving Performance of a Class of Anomaly-based Intrusion Detectors | <|reference_start|>Analyzing and Improving Performance of a Class of Anomaly-based Intrusion Detectors: Anomaly-based intrusion detection (AID) techniques are useful for detecting novel intrusions into computing resources. One of the most successful AID detectors proposed to date is stide, which is based on analysis of system call sequences. In this paper, we present a detailed formal framework to analyze, understand and improve the performance of stide and similar AID techniques. Several important properties of stide-like detectors are established through formal proofs, and validated by carefully conducted experiments using test datasets. Finally, the framework is utilized to design two applications to improve the cost and performance of stide-like detectors which are based on sequence analysis. The first application reduces the cost of developing AID detectors by identifying the critical sections in the training dataset, and the second application identifies the intrusion context in the intrusive dataset, that helps to fine-tune the detectors. Such fine-tuning in turn helps to improve detection rate and reduce false alarm rate, thereby increasing the effectiveness and efficiency of the intrusion detectors.<|reference_end|> | arxiv | @article{li2004analyzing,
title={Analyzing and Improving Performance of a Class of Anomaly-based
Intrusion Detectors},
author={Zhuowei Li and Amitabha Das},
journal={arXiv preprint arXiv:cs/0410068},
year={2004},
number={cais-tr-2004-001},
archivePrefix={arXiv},
eprint={cs/0410068},
primaryClass={cs.CR cs.AI}
} | li2004analyzing |
arxiv-672253 | cs/0410069 | Selfish peering and routing in the Internet | <|reference_start|>Selfish peering and routing in the Internet: The Internet is a loose amalgamation of independent service providers acting in their own self-interest. We examine the implications of this economic reality on peering relationships. Specifically, we consider how the incentives of the providers might determine where they choose to interconnect with each other. We consider a game where two selfish network providers must establish peering points between their respective network graphs, given knowledge of traffic conditions and a nearest-exit routing policy for out-going traffic, as well as costs based on congestion and peering connectivity. We focus on the pairwise stability equilibrium concept and use a stochastic procedure to solve for the stochastically pairwise stable configurations. Stochastically stable networks are selected for their robustness to deviations in strategy and are therefore posited as the more likely networks to emerge in a dynamic setting. We note a paucity of stochastically stable peering configurations under asymmetric conditions, particularly to unequal interdomain traffic flow, with adverse effects on system-wide efficiency. Under bilateral flow conditions, we find that as the cost associated with the establishment of peering links approaches zero, the variance in the number of peering links of stochastically pairwise stable equilibria increases dramatically.<|reference_end|> | arxiv | @article{corbo2004selfish,
title={Selfish peering and routing in the Internet},
author={Jacomo Corbo and Thomas Petermann},
journal={arXiv preprint arXiv:cs/0410069},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410069},
primaryClass={cs.GT cond-mat.stat-mech cs.NI}
} | corbo2004selfish |
arxiv-672254 | cs/0410070 | Using image partitions in 4th Dimension | <|reference_start|>Using image partitions in 4th Dimension: I have plotted an image by using mathematical functions in the Database "4th Dimension". I'm going to show an alternative method to: detect which sector has been clicked; highlight it and combine it with other sectors already highlighted; store the graph information in an efficient way; load and splat image layers to reconstruct the stored graph.<|reference_end|> | arxiv | @article{gasparri2004using,
title={Using image partitions in 4th Dimension},
author={Giovanni Gasparri},
journal={arXiv preprint arXiv:cs/0410070},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410070},
primaryClass={cs.DB}
} | gasparri2004using |
arxiv-672255 | cs/0410071 | The Cyborg Astrobiologist: First Field Experience | <|reference_start|>The Cyborg Astrobiologist: First Field Experience: We present results from the first geological field tests of the `Cyborg Astrobiologist', which is a wearable computer and video camcorder system that we are using to test and train a computer-vision system towards having some of the autonomous decision-making capabilities of a field-geologist and field-astrobiologist. The Cyborg Astrobiologist platform has thus far been used for testing and development of these algorithms and systems: robotic acquisition of quasi-mosaics of images, real-time image segmentation, and real-time determination of interesting points in the image mosaics. The hardware and software systems function reliably, and the computer-vision algorithms are adequate for the first field tests. In addition to the proof-of-concept aspect of these field tests, the main result of these field tests is the enumeration of those issues that we can improve in the future, including: first, detection and accounting for shadows caused by 3D jagged edges in the outcrop; second, reincorporation of more sophisticated texture-analysis algorithms into the system; third, creation of hardware and software capabilities to control the camera's zoom lens in an intelligent manner; and fourth, development of algorithms for interpretation of complex geological scenery. Nonetheless, despite these technical inadequacies, this Cyborg Astrobiologist system, consisting of a camera-equipped wearable-computer and its computer-vision algorithms, has demonstrated its ability of finding genuinely interesting points in real-time in the geological scenery, and then gathering more information about these interest points in an automated manner.<|reference_end|> | arxiv | @article{mcguire2004the,
title={The Cyborg Astrobiologist: First Field Experience},
author={Patrick C. McGuire, Jens Ormo, Enrique Diaz-Martinez, Jose Antonio
Rodriguez-Manfredi, Javier Gomez-Elvira, Helge Ritter, Markus Oesker, Joerg
Ontrup, (Centro de Astrobiologia, Madrid; University of Bielefeld)},
journal={Int.J.Astrobiol. 3 (2004) 189-207},
year={2004},
doi={10.1017/S147355040500220X},
archivePrefix={arXiv},
eprint={cs/0410071},
primaryClass={cs.CV astro-ph cs.AI cs.CE cs.HC cs.RO cs.SE q-bio.NC}
} | mcguire2004the |
arxiv-672256 | cs/0410072 | Temporal logic with predicate abstraction | <|reference_start|>Temporal logic with predicate abstraction: A predicate linear temporal logic LTL_{\lambda,=} without quantifiers but with predicate abstraction mechanism and equality is considered. The models of LTL_{\lambda,=} can be naturally seen as the systems of pebbles (flexible constants) moving over the elements of some (possibly infinite) domain. This allows to use LTL_{\lambda,=} for the specification of dynamic systems using some resources, such as processes using memory locations, mobile agents occupying some sites, etc. On the other hand we show that LTL_{\lambda,=} is not recursively axiomatizable and, therefore, fully automated verification of LTL_{\lambda,=} specifications is not, in general, possible.<|reference_end|> | arxiv | @article{lisitsa2004temporal,
title={Temporal logic with predicate abstraction},
author={Alexei Lisitsa, Igor Potapov},
journal={arXiv preprint arXiv:cs/0410072},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410072},
primaryClass={cs.LO cs.CL}
} | lisitsa2004temporal |
arxiv-672257 | cs/0410073 | On Spatial Conjunction as Second-Order Logic | <|reference_start|>On Spatial Conjunction as Second-Order Logic: Spatial conjunction is a powerful construct for reasoning about dynamically allocated data structures, as well as concurrent, distributed and mobile computation. While researchers have identified many uses of spatial conjunction, its precise expressive power compared to traditional logical constructs was not previously known. In this paper we establish the expressive power of spatial conjunction. We construct an embedding from first-order logic with spatial conjunction into second-order logic, and more surprisingly, an embedding from full second order logic into first-order logic with spatial conjunction. These embeddings show that the satisfiability of formulas in first-order logic with spatial conjunction is equivalent to the satisfiability of formulas in second-order logic. These results explain the great expressive power of spatial conjunction and can be used to show that adding unrestricted spatial conjunction to a decidable logic leads to an undecidable logic. As one example, we show that adding unrestricted spatial conjunction to two-variable logic leads to undecidability. On the side of decidability, the embedding into second-order logic immediately implies the decidability of first-order logic with a form of spatial conjunction over trees. The embedding into spatial conjunction also has useful consequences: because a restricted form of spatial conjunction in two-variable logic preserves decidability, we obtain that a correspondingly restricted form of second-order quantification in two-variable logic is decidable. The resulting language generalizes the first-order theory of boolean algebra over sets and is useful in reasoning about the contents of data structures in object-oriented languages.<|reference_end|> | arxiv | @article{kuncak2004on,
title={On Spatial Conjunction as Second-Order Logic},
author={Viktor Kuncak, Martin Rinard},
journal={arXiv preprint arXiv:cs/0410073},
year={2004},
number={MIT CSAIL 970},
archivePrefix={arXiv},
eprint={cs/0410073},
primaryClass={cs.LO cs.PL cs.SE}
} | kuncak2004on |
arxiv-672258 | cs/0410074 | ReCord: A Distributed Hash Table with Recursive Structure | <|reference_start|>ReCord: A Distributed Hash Table with Recursive Structure: We propose a simple distributed hash table called ReCord, which is a generalized version of Randomized-Chord and offers improved tradeoffs in performance and topology maintenance over existing P2P systems. ReCord is scalable and can be easily implemented as an overlay network, and offers a good tradeoff between the node degree and query latency. For instance, an $n$-node ReCord with $O(\log n)$ node degree has an expected latency of $\Theta(\log n)$ hops. Alternatively, it can also offer $\Theta(\frac{\log n}{\log \log n})$ hops latency at a higher cost of $O(\frac{\log^2 n}{\log \log n})$ node degree. Meanwhile, simulations of the dynamic behaviors of ReCord are studied.<|reference_end|> | arxiv | @article{zeng2004record:,
title={ReCord: A Distributed Hash Table with Recursive Structure},
author={Jianyang Zeng, Wen-Jing Hsu},
journal={arXiv preprint arXiv:cs/0410074},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410074},
primaryClass={cs.DC}
} | zeng2004record: |
arxiv-672259 | cs/0410075 | Some first thoughts on the stability of the asynchronous systems | <|reference_start|>Some first thoughts on the stability of the asynchronous systems: The (non-initialized, non-deterministic) asynchronous systems (in the input-output sense) are multi-valued functions from m-dimensional signals to sets of n-dimensional signals, the concept being inspired by the modeling of the asynchronous circuits. Our purpose is to state the problem of their stability.<|reference_end|> | arxiv | @article{vlad2004some,
title={Some first thoughts on the stability of the asynchronous systems},
author={Serban E. Vlad},
journal={The 12-th Conference on Applied and Industrial Mathematics CAIM
2004, University of Pitesti, October 15-17, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0410075},
primaryClass={cs.GL}
} | vlad2004some |
arxiv-672260 | cs/0411001 | Synchronization from a Categorical Perspective | <|reference_start|>Synchronization from a Categorical Perspective: We introduce a notion of synchronization for higher-dimensional automata, based on coskeletons of cubical sets. Categorification transports this notion to the setting of categorical transition systems. We apply the results to study the semantics of an imperative programming language with message-passing.<|reference_end|> | arxiv | @article{worytkiewicz2004synchronization,
title={Synchronization from a Categorical Perspective},
author={Krzysztof Worytkiewicz},
journal={arXiv preprint arXiv:cs/0411001},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411001},
primaryClass={cs.PL cs.DM}
} | worytkiewicz2004synchronization |
arxiv-672261 | cs/0411002 | Fibonacci-Like Polynomials Produced by m-ary Huffman Codes for Absolutely Ordered Sequences | <|reference_start|>Fibonacci-Like Polynomials Produced by m-ary Huffman Codes for Absolutely Ordered Sequences: Fibonacci-like polynomials produced by m-ary Huffman codes for absolutely ordered sequences have been described.<|reference_end|> | arxiv | @article{vinokur2004fibonacci-like,
title={Fibonacci-Like Polynomials Produced by m-ary Huffman Codes for
Absolutely Ordered Sequences},
author={Alex Vinokur},
journal={arXiv preprint arXiv:cs/0411002},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411002},
primaryClass={cs.DM math.NT}
} | vinokur2004fibonacci-like |
arxiv-672262 | cs/0411003 | Applications of LDPC Codes to the Wiretap Channel | <|reference_start|>Applications of LDPC Codes to the Wiretap Channel: With the advent of quantum key distribution (QKD) systems, perfect (i.e. information-theoretic) security can now be achieved for distribution of a cryptographic key. QKD systems and similar protocols use classical error-correcting codes for both error correction (for the honest parties to correct errors) and privacy amplification (to make an eavesdropper fully ignorant). From a coding perspective, a good model that corresponds to such a setting is the wire tap channel introduced by Wyner in 1975. In this paper, we study fundamental limits and coding methods for wire tap channels. We provide an alternative view of the proof for secrecy capacity of wire tap channels and show how capacity achieving codes can be used to achieve the secrecy capacity for any wiretap channel. We also consider binary erasure channel and binary symmetric channel special cases for the wiretap channel and propose specific practical codes. In some cases our designs achieve the secrecy capacity and in others the codes provide security at rates below secrecy capacity. For the special case of a noiseless main channel and binary erasure channel, we consider encoder and decoder design for codes achieving secrecy on the wiretap channel; we show that it is possible to construct linear-time decodable secrecy codes based on LDPC codes that achieve secrecy.<|reference_end|> | arxiv | @article{thangaraj2004applications,
title={Applications of LDPC Codes to the Wiretap Channel},
author={Andrew Thangaraj, Souvik Dihidar, A. R. Calderbank, Steven McLaughlin,
Jean-Marc Merolla},
journal={arXiv preprint arXiv:cs/0411003},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411003},
primaryClass={cs.IT cs.CR math.IT}
} | thangaraj2004applications |
arxiv-672263 | cs/0411004 | Computational Aspects of a Numerical Model for Combustion Flow | <|reference_start|>Computational Aspects of a Numerical Model for Combustion Flow: A computational method for the numerical resolution of a PDE system, based on a Finite Differences scheme integrated by interpolations of partial results, and an estimate of the error of its solution with respect to the normal FD solution.<|reference_end|> | arxiv | @article{argentini2004computational,
title={Computational Aspects of a Numerical Model for Combustion Flow},
author={Gianluca Argentini},
journal={arXiv preprint arXiv:cs/0411004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411004},
primaryClass={cs.NA physics.comp-ph}
} | argentini2004computational |
arxiv-672264 | cs/0411005 | A Directed Threshold - Signature Scheme | <|reference_start|>A Directed Threshold - Signature Scheme: A directed signature is the solution to such problems where the signed message contains information sensitive to the signature receiver. In many applications of directed signatures, the signer is generally a single person. But when the message is on behalf of an organization, a valid sensitive message may require the approval of several people. Threshold signature schemes are used to solve these problems. This paper presents a threshold directed signature scheme.<|reference_end|> | arxiv | @article{lal2004a,
title={A Directed Threshold - Signature Scheme},
author={Sunder Lal and Manoj Kumar},
journal={arXiv preprint arXiv:cs/0411005},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411005},
primaryClass={cs.CR}
} | lal2004a |
arxiv-672265 | cs/0411006 | Capacity Achieving Code Constructions for Two Classes of (d,k) Constraints | <|reference_start|>Capacity Achieving Code Constructions for Two Classes of (d,k) Constraints: In this paper, we present two low complexity algorithms that achieve capacity for the noiseless (d,k) constrained channel when k=2d+1, or when k-d+1 is not prime. The first algorithm, called symbol sliding, is a generalized version of the bit flipping algorithm introduced by Aviran et al. [1]. In addition to achieving capacity for (d,2d+1) constraints, it comes close to capacity in other cases. The second algorithm is based on interleaving, and is a generalized version of the bit stuffing algorithm introduced by Bender and Wolf [2]. This method uses fewer than k-d biased bit streams to achieve capacity for (d,k) constraints with k-d+1 not prime. In particular, the encoder for (d,d+2^m-1) constraints, 1\le m<\infty, requires only m biased bit streams.<|reference_end|> | arxiv | @article{sankarasubramaniam2004capacity,
title={Capacity Achieving Code Constructions for Two Classes of (d,k)
Constraints},
author={Yogesh Sankarasubramaniam, Steven W. McLaughlin},
journal={arXiv preprint arXiv:cs/0411006},
year={2004},
doi={10.1109/TIT.2006.876224},
archivePrefix={arXiv},
eprint={cs/0411006},
primaryClass={cs.IT math.IT}
} | sankarasubramaniam2004capacity |
arxiv-672266 | cs/0411007 | Basic properties for sand automata | <|reference_start|>Basic properties for sand automata: We prove several results about the relations between injectivity and surjectivity for sand automata. Moreover, we begin the exploration of the dynamical behavior of sand automata proving that the property of nilpotency is undecidable. We believe that the proof technique used for this last result might reveal useful for many other results in this context.<|reference_end|> | arxiv | @article{cervelle2004basic,
title={Basic properties for sand automata},
author={Julien Cervelle (IGM), Enrico Formenti (I3S), Benoit Masson (I3S)},
journal={arXiv preprint arXiv:cs/0411007},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411007},
primaryClass={cs.CC}
} | cervelle2004basic |
arxiv-672267 | cs/0411008 | Intuitionistic computability logic | <|reference_start|>Intuitionistic computability logic: Computability logic (CL) is a systematic formal theory of computational tasks and resources, which, in a sense, can be seen as a semantics-based alternative to (the syntactically introduced) linear logic. With its expressive and flexible language, where formulas represent computational problems and "truth" is understood as algorithmic solvability, CL potentially offers a comprehensive logical basis for constructive applied theories and computing systems inherently requiring constructive and computationally meaningful underlying logics. Among the best known constructivistic logics is Heyting's intuitionistic calculus INT, whose language can be seen as a special fragment of that of CL. The constructivistic philosophy of INT, however, has never really found an intuitively convincing and mathematically strict semantical justification. CL has good claims to provide such a justification and hence a materialization of Kolmogorov's known thesis "INT = logic of problems". The present paper contains a soundness proof for INT with respect to the CL semantics. A comprehensive online source on CL is available at http://www.cis.upenn.edu/~giorgi/cl.html<|reference_end|> | arxiv | @article{japaridze2004intuitionistic,
title={Intuitionistic computability logic},
author={Giorgi Japaridze},
journal={Acta Cybernetica 18 (2007), pp. 77-113},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411008},
primaryClass={cs.LO cs.AI math.LO}
} | japaridze2004intuitionistic |
arxiv-672268 | cs/0411009 | The equations of the ideal latches | <|reference_start|>The equations of the ideal latches: The latches are simple circuits with feedback from the digital electrical engineering. We have included in our work the C element of Muller, the RS latch, the clocked RS latch, the D latch and also circuits containing two interconnected latches: the edge triggered RS flip-flop, the D flip-flop, the JK flip-flop, the T flip-flop. The purpose of this study is to model with equations the previous circuits, considered to be ideal, i.e. non-inertial. The technique of analysis is the pseudoboolean differential calculus.<|reference_end|> | arxiv | @article{vlad2004the,
title={The equations of the ideal latches},
author={Serban E. Vlad},
journal={The 12-th Conference on Applied and Industrial Mathematics CAIM
2004, University of Pitesti, October 15-17, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411009},
primaryClass={cs.GL}
} | vlad2004the |
arxiv-672269 | cs/0411010 | A Trace Logic for Local Security Properties | <|reference_start|>A Trace Logic for Local Security Properties: We propose a new simple \emph{trace} logic that can be used to specify \emph{local security properties}, i.e. security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties, and integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well studied protocol TMN.<|reference_end|> | arxiv | @article{corin2004a,
title={A Trace Logic for Local Security Properties},
author={Ricardo Corin, Antonio Durante, Sandro Etalle, Pieter Hartel},
journal={arXiv preprint arXiv:cs/0411010},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411010},
primaryClass={cs.CR}
} | corin2004a |
arxiv-672270 | cs/0411011 | Capacity Analysis for Continuous Alphabet Channels with Side Information, Part I: A General Framework | <|reference_start|>Capacity Analysis for Continuous Alphabet Channels with Side Information, Part I: A General Framework: Capacity analysis for channels with side information at the receiver has been an active area of interest. This problem is well investigated for the case of finite alphabet channels. However, the results are not easily generalizable to the case of continuous alphabet channels due to analytic difficulties inherent with continuous alphabets. In the first part of this two-part paper, we address an analytical framework for capacity analysis of continuous alphabet channels with side information at the receiver. For this purpose, we establish novel necessary and sufficient conditions for weak* continuity and strict concavity of the mutual information. These conditions are used in investigating the existence and uniqueness of the capacity-achieving measures. Furthermore, we derive necessary and sufficient conditions that characterize the capacity value and the capacity-achieving measure for continuous alphabet channels with side information at the receiver.<|reference_end|> | arxiv | @article{fozunbal2004capacity,
title={Capacity Analysis for Continuous Alphabet Channels with Side
Information, Part I: A General Framework},
author={Majid Fozunbal, Steven W. McLaughlin, and Ronald W. Schafer},
journal={arXiv preprint arXiv:cs/0411011},
year={2004},
doi={10.1109/TIT.2005.853322},
archivePrefix={arXiv},
eprint={cs/0411011},
primaryClass={cs.IT math.IT}
} | fozunbal2004capacity |
arxiv-672271 | cs/0411012 | Capacity Analysis for Continuous Alphabet Channels with Side Information, Part II: MIMO Channels | <|reference_start|>Capacity Analysis for Continuous Alphabet Channels with Side Information, Part II: MIMO Channels: In this part, we consider the capacity analysis for wireless mobile systems with multiple antenna architectures. We apply the results of the first part to a commonly known baseband, discrete-time multiple antenna system where both the transmitter and receiver know the channel's statistical law. We analyze the capacity for additive white Gaussian noise (AWGN) channels, fading channels with full channel state information (CSI) at the receiver, fading channels with no CSI, and fading channels with partial CSI at the receiver. For each type of channels, we study the capacity value as well as issues such as the existence, uniqueness, and characterization of the capacity-achieving measures for different types of moment constraints. The results are applicable to both Rayleigh and Rician fading channels in the presence of arbitrary line-of-sight and correlation profiles.<|reference_end|> | arxiv | @article{fozunbal2004capacity,
title={Capacity Analysis for Continuous Alphabet Channels with Side
Information, Part II: MIMO Channels},
author={Majid Fozunbal, Steven W. McLaughlin, and Ronald W. Schafer},
journal={arXiv preprint arXiv:cs/0411012},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411012},
primaryClass={cs.IT math.IT}
} | fozunbal2004capacity |
arxiv-672272 | cs/0411013 | Efficient Algorithms for Large-Scale Topology Discovery | <|reference_start|>Efficient Algorithms for Large-Scale Topology Discovery: There is a growing interest in discovery of internet topology at the interface level. A new generation of highly distributed measurement systems is currently being deployed. Unfortunately, the research community has not examined the problem of how to perform such measurements efficiently and in a network-friendly manner. In this paper we make two contributions toward that end. First, we show that standard topology discovery methods (e.g., skitter) are quite inefficient, repeatedly probing the same interfaces. This is a concern, because when scaled up, such methods will generate so much traffic that they will begin to resemble DDoS attacks. We measure two kinds of redundancy in probing (intra- and inter-monitor) and show that both kinds are important. We show that straightforward approaches to addressing these two kinds of redundancy must take opposite tacks, and are thus fundamentally in conflict. Our second contribution is to propose and evaluate Doubletree, an algorithm that reduces both types of redundancy simultaneously on routers and end systems. The key ideas are to exploit the tree-like structure of routes to and from a single point in order to guide when to stop probing, and to probe each path by starting near its midpoint. Our results show that Doubletree can reduce both types of measurement load on the network dramatically, while permitting discovery of nearly the same set of nodes and links. We then show how to enable efficient communication between monitors through the use of Bloom filters.<|reference_end|> | arxiv | @article{donnet2004efficient,
title={Efficient Algorithms for Large-Scale Topology Discovery},
author={Benoit Donnet, Philippe Raoult, Timur Friedman, Mark Crovella},
journal={arXiv preprint arXiv:cs/0411013},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411013},
primaryClass={cs.NI}
} | donnet2004efficient |
arxiv-672273 | cs/0411014 | Rate Distortion and Denoising of Individual Data Using Kolmogorov complexity | <|reference_start|>Rate Distortion and Denoising of Individual Data Using Kolmogorov complexity: We examine the structure of families of distortion balls from the perspective of Kolmogorov complexity. Special attention is paid to the canonical rate-distortion function of a source word which returns the minimal Kolmogorov complexity of all distortion balls containing that word subject to a bound on their cardinality. This canonical rate-distortion function is related to the more standard algorithmic rate-distortion function for the given distortion measure. Examples are given of list distortion, Hamming distortion, and Euclidean distortion. The algorithmic rate-distortion function can behave differently from Shannon's rate-distortion function. To this end, we show that the canonical rate-distortion function can and does assume a wide class of shapes (unlike Shannon's); we relate low algorithmic mutual information to low Kolmogorov complexity (and consequently suggest that certain aspects of the mutual information formulation of Shannon's rate-distortion function behave differently than would an analogous formulation using algorithmic mutual information); we explore the notion that low Kolmogorov complexity distortion balls containing a given word capture the interesting properties of that word (which is hard to formalize in Shannon's theory) and this suggests an approach to denoising; and, finally, we show that the different behavior of the rate-distortion curves of individual source words to some extent disappears after averaging over the source words.<|reference_end|> | arxiv | @article{vereshchagin2004rate,
title={Rate Distortion and Denoising of Individual Data Using Kolmogorov
complexity},
author={Nikolai K. Vereshchagin (Moscow State Univ.), Paul M.B. Vitanyi (CWI
and University of Amsterdam)},
journal={arXiv preprint arXiv:cs/0411014},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411014},
primaryClass={cs.IT math.IT}
} | vereshchagin2004rate |
arxiv-672274 | cs/0411015 | Bounded Input Bounded Predefined Control Bounded Output | <|reference_start|>Bounded Input Bounded Predefined Control Bounded Output: The paper is an attempt to generalize a methodology similar to the bounded-input bounded-output method currently widely used for system stability studies. The previously presented methodology allows decomposition of the input space into bounded subspaces and the definition of a bounding surface for each subspace. It also defines a corresponding predefined control, which maps any point of a bounded input into a desired bounded output subspace. This methodology was improved by providing a mechanism for quickly defining a bounding surface. This paper presents an enhanced bounded-input bounded-predefined-control bounded-output approach, which adds adaptability to the control and allows a controlled system to be transferred along a suboptimal trajectory.<|reference_end|> | arxiv | @article{flikop2004bounded,
title={Bounded Input Bounded Predefined Control Bounded Output},
author={Ziny Flikop},
journal={arXiv preprint arXiv:cs/0411015},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411015},
primaryClass={cs.AI}
} | flikop2004bounded |
arxiv-672275 | cs/0411016 | Intelligent search strategies based on adaptive Constraint Handling Rules | <|reference_start|>Intelligent search strategies based on adaptive Constraint Handling Rules: The most advanced implementation of adaptive constraint processing with Constraint Handling Rules (CHR) allows the application of intelligent search strategies to solve Constraint Satisfaction Problems (CSP). This presentation compares an improved version of conflict-directed backjumping and two variants of dynamic backtracking with respect to chronological backtracking on some of the AIM instances which are a benchmark set of random 3-SAT problems. A CHR implementation of a Boolean constraint solver combined with these different search strategies in Java is thus being compared with a CHR implementation of the same Boolean constraint solver combined with chronological backtracking in SICStus Prolog. This comparison shows that the addition of ``intelligence'' to the search process may reduce the number of search steps dramatically. Furthermore, the runtime of their Java implementations is in most cases faster than the implementations of chronological backtracking. More specifically, conflict-directed backjumping is even faster than the SICStus Prolog implementation of chronological backtracking, although our Java implementation of CHR lacks the optimisations made in the SICStus Prolog system. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|> | arxiv | @article{wolf2004intelligent,
title={Intelligent search strategies based on adaptive Constraint Handling
Rules},
author={Armin Wolf},
journal={arXiv preprint arXiv:cs/0411016},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411016},
primaryClass={cs.AI cs.PL}
} | wolf2004intelligent |
arxiv-672276 | cs/0411017 | Analysis of 802.11b MAC: A QoS, Fairness, and Performance Perspective | <|reference_start|>Analysis of 802.11b MAC: A QoS, Fairness, and Performance Perspective: Wireless LANs have achieved a tremendous amount of growth in recent years. Among various wireless LAN technologies, the IEEE 802.11b based wireless LAN technology can be cited as the most prominent technology today. Despite being widely deployed, 802.11b cannot be termed as a well matured technology. Although 802.11b is adequate for basic connectivity and packet switching, it is evident that there is ample scope for its improvement in areas like quality of service, fairness, performance, security, etc. In this survey report, we identify and argue that the Medium Access Controller for 802.11b networks is the prime area for these improvements. To enunciate our claims we highlight some of the quality of service, fairness, and performance issues related to 802.11b MAC. We also describe and analyze some of the current research aimed at addressing these issues. We then propose a novel scheme called the Intelligent Collision Avoidance, seeking to enhance the MAC to address some of the performance issues in 802.11b and similar networks.<|reference_end|> | arxiv | @article{sharma2004analysis,
title={Analysis of 802.11b MAC: A QoS, Fairness, and Performance Perspective},
author={Srikant Sharma},
journal={arXiv preprint arXiv:cs/0411017},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411017},
primaryClass={cs.NI}
} | sharma2004analysis |
arxiv-672277 | cs/0411018 | Artificial Intelligence and Systems Theory: Applied to Cooperative Robots | <|reference_start|>Artificial Intelligence and Systems Theory: Applied to Cooperative Robots: This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem, where the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static (walls, stopped robots) and dynamic (moving robots) obstacles. Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination, including cooperative reinforcement learning in cooperative and adversarial environments, and behavior-based architectures for real time task execution of cooperating robot teams.<|reference_end|> | arxiv | @article{lima2004artificial,
title={Artificial Intelligence and Systems Theory: Applied to Cooperative
Robots},
author={Pedro U. Lima & Luis M. M. Custodio},
journal={International Journal of Advanced Robotic Systems, ISSN 1729-8806,
Volume 1, Number 3 September 2004, pp.141-148},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411018},
primaryClass={cs.RO cs.AI}
} | lima2004artificial |
arxiv-672278 | cs/0411019 | Programmable Ethernet Switches and Their Applications | <|reference_start|>Programmable Ethernet Switches and Their Applications: Modern Ethernet switches support many advanced features beyond route learning and packet forwarding such as VLAN tagging, IGMP snooping, rate limiting, and status monitoring, which can be controlled through a programmatic interface. Traditionally, these features are mostly used to statically configure a network. This paper proposes to apply them as dynamic control mechanisms to maximize physical network link resources, to minimize failure recovery time, to enforce QoS requirements, and to support link-layer multicast without broadcasting. With these advanced programmable control mechanisms, standard Ethernet switches can be used as effective building blocks for metropolitan-area Ethernet networks (MEN), storage-area networks (SAN), and computation cluster interconnects. We demonstrate the usefulness of this new level of control over Ethernet switches with a MEN architecture that features multi-fold throughput gains and sub-second failure recovery time.<|reference_end|> | arxiv | @article{sharma2004programmable,
title={Programmable Ethernet Switches and Their Applications},
author={Srikant Sharma, Tzi-cker Chiueh},
journal={arXiv preprint arXiv:cs/0411019},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411019},
primaryClass={cs.NI cs.AR cs.PF}
} | sharma2004programmable |
arxiv-672279 | cs/0411020 | Dynamic Modelling and Adaptive Traction Control for Mobile Robots | <|reference_start|>Dynamic Modelling and Adaptive Traction Control for Mobile Robots: Mobile robots have received a great deal of research attention in recent years. A significant amount of research has been published on many aspects related to mobile robots. Most of the research is devoted to the design and development of control techniques for robot motion and path planning. A large number of researchers have used kinematic models to develop motion control strategies for mobile robots. Their argument and assumption are that these models are valid if the robot has low speed, low acceleration and a light load. However, dynamic modelling of mobile robots is very important as they are designed to travel at higher speeds and perform heavy duty work. This paper presents and discusses a new approach to develop a dynamic model and control strategy for a wheeled mobile robot, which is modelled as a rigid body that rolls on two wheels and a castor. The motion control strategy consists of two levels. The first level deals with the dynamics of the system and is denoted as the low-level controller. The second level is developed to take care of path planning and trajectory generation.<|reference_end|> | arxiv | @article{albagul2004dynamic,
title={Dynamic Modelling and Adaptive Traction Control for Mobile Robots},
author={A. Albagul and Wahyudi},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 3, September 2004, pp.149-154},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411020},
primaryClass={cs.RO}
} | albagul2004dynamic |
arxiv-672280 | cs/0411021 | Coevolution Based Adaptive Monte Carlo Localization (CEAMCL) | <|reference_start|>Coevolution Based Adaptive Monte Carlo Localization (CEAMCL): An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that the multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. And the sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators in evolutionary computation, intra-species evolution can drive the samples to move towards the regions where the desired posterior density is large. So a small number of samples can represent the desired density well enough to achieve precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.<|reference_end|> | arxiv | @article{ronghua2004coevolution,
title={Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)},
author={Luo Ronghua & Hong Bingrong},
journal={International Journal of Advanced Robotic Systems, Volume 1,
Number 3, September 2004, pp. 183-190},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411021},
primaryClass={cs.RO}
} | ronghua2004coevolution |
arxiv-672281 | cs/0411022 | Topological Navigation of Simulated Robots using Occupancy Grid | <|reference_start|>Topological Navigation of Simulated Robots using Occupancy Grid: Formerly I presented a metric navigation method in the Webots mobile robot simulator. The navigating Khepera-like robot builds an occupancy grid of the environment and explores the square-shaped room around with a value iteration algorithm. Now I created a topological navigation procedure based on the occupancy grid process. The extension by a skeletonization algorithm results in a graph of important places and the connecting routes among them. I also show the significant time profit gained during the process.<|reference_end|> | arxiv | @article{szabo2004topological,
title={Topological Navigation of Simulated Robots using Occupancy Grid},
author={Richard Szabo},
journal={International Journal of Advanced Robotic Systems, ISSN 1729-8806,
Volume 1, Number 3 (2004), pp.201-206},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411022},
primaryClass={cs.RO cs.AI}
} | szabo2004topological |
arxiv-672282 | cs/0411023 | Design and Implementation of a General Decision-making Model in RoboCup Simulation | <|reference_start|>Design and Implementation of a General Decision-making Model in RoboCup Simulation: The study of the collaboration, coordination and negotiation among different agents in a multi-agent system (MAS) has always been one of the most challenging yet popular topics in the research of distributed artificial intelligence. In this paper, we will suggest for RoboCup simulation, a typical MAS, a general decision-making model, rather than define a different algorithm for each tactic (e.g. ball handling, pass, shoot and interception, etc.) in soccer games as most RoboCup simulation teams did. The general decision-making model is based on two critical factors in soccer games: the vertical distance to the goal line and the visual angle for the goalpost. We have used these two parameters to formalize the defensive and offensive decisions in RoboCup simulation, and the results mentioned above have been applied in NOVAURO (originally named UJDB), a RoboCup simulation team of Jiangsu University, whose decision-making model, compared with that of Tsinghua University, the world champion team in 2001, is a universal model and easier to implement.<|reference_end|> | arxiv | @article{wang2004design,
title={Design and Implementation of a General Decision-making Model in RoboCup
Simulation},
author={Changda Wang, Xianyi Chen, Xibin Zhao & Shiguang Ju},
journal={International Journal of Advanced Robotic Systems, ISSN 1729-8806,
Volume 1, Number 3 (2004), pp.207-112},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411023},
primaryClass={cs.RO}
} | wang2004design |
arxiv-672283 | cs/0411024 | Space Robotics Part 2: Space-based Manipulators | <|reference_start|>Space Robotics Part 2: Space-based Manipulators: In this second of three short papers, I introduce some of the basic concepts of space robotics with an emphasis on some specific challenging areas of research that are peculiar to the application of robotics to space infrastructure development. The style of these short papers is pedagogical and the concepts in this paper are developed from fundamental manipulator robotics. This second paper considers the application of space manipulators to on-orbit servicing (OOS), an application which has considerable commercial application. I provide some background to the notion of robotic on-orbit servicing and explore how manipulator control algorithms may be modified to accommodate space manipulators which operate in the micro-gravity of space.<|reference_end|> | arxiv | @article{ellery2004space,
title={Space Robotics Part 2: Space-based Manipulators},
author={Alex Ellery},
journal={International Journal of Advanced Robotic Systems, ISSN 1729-8806,
Volume 1, Number 3 (2004), pp.213-216},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411024},
primaryClass={cs.RO}
} | ellery2004space |
arxiv-672284 | cs/0411025 | Bionic Humans Using EAP as Artificial Muscles Reality and Challenges | <|reference_start|>Bionic Humans Using EAP as Artificial Muscles Reality and Challenges: For many years, the idea of a human with bionic muscles immediately conjures up science fiction images of a TV series superhuman character that was implanted with bionic muscles and portrayed with strength and speed far superior to any normal human. As fantastic as this idea may seem, recent developments in electroactive polymers (EAP) may one day make such bionics possible. Polymers that exhibit large displacement in response to stimulation that is other than electrical signal were known for many years. Initially, EAP received relatively little attention due to their limited actuation capability. However, in the recent years, the view of the EAP materials has changed due to the introduction of effective new materials that significantly surpassed the capability of the widely used piezoelectric polymer, PVDF. As this technology continues to evolve, novel mechanisms that are biologically inspired are expected to emerge. EAP materials can potentially provide actuation with lifelike response and more flexible configurations. While further improvements in performance and robustness are still needed, there already have been several reported successes. In recognition of the need for cooperation in this multidisciplinary field, the author initiated and organized a series of international forums that are leading to a growing number of research and development projects and to great advances in the field. In 1999, he challenged the worldwide science and engineering community of EAP experts to develop a robotic arm that is actuated by artificial muscles to win a wrestling match against a human opponent. In this paper, the field of EAP as artificial muscles will be reviewed covering the state of the art, the challenges and the vision for the progress in future years.<|reference_end|> | arxiv | @article{bar-cohen2004bionic,
title={Bionic Humans Using EAP as Artificial Muscles Reality and Challenges},
author={Yoseph Bar-Cohen},
journal={International Journal of Advanced Robotic Systems, ISSN 1729-8806,
Volume 1, Number 3 (2004), pp.217-222},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411025},
primaryClass={cs.RO cs.AI}
} | bar-cohen2004bionic |
arxiv-672285 | cs/0411026 | A Search Relevancy Tuning Method Using Expert Results Content Evaluation | <|reference_start|>A Search Relevancy Tuning Method Using Expert Results Content Evaluation: The article presents an online relevancy tuning method using explicit user feedback. The author developed and tested a method of word-weight modification based on search result evaluation by the user. The user decides whether the result is useful or not after inspecting the full result content. The experiment proved that the constantly accumulated base of word weights leads to better search quality in a specified data domain. The author also suggested future improvements of the method.<|reference_end|> | arxiv | @article{tylevich2004a,
title={A Search Relevancy Tuning Method Using Expert Results Content Evaluation},
author={Boris Mark Tylevich (Moscow Institute of Physics and Technology,
Moscow, Russia)},
journal={arXiv preprint arXiv:cs/0411026},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411026},
primaryClass={cs.IR}
} | tylevich2004a |
arxiv-672286 | cs/0411027 | Extremal Properties of Three Dimensional Sensor Networks with Applications | <|reference_start|>Extremal Properties of Three Dimensional Sensor Networks with Applications: In this paper, we analyze various critical transmitting/sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions: For a given property of the network, there is a critical threshold, corresponding to the minimum amount of the communication effort or power expenditure by individual nodes, above (resp. below) which the property exists with high (resp. a low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity/coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors and their transmitting/sensing ranges. More specifically, we consider the following problems: Assume that $n$ nodes, each capable of sensing events within a radius of $r$, are randomly and uniformly distributed in a 3-dimensional region $\mathcal{R}$ of volume $V$, how large must the sensing range be to ensure a given degree of coverage of the region to monitor? For a given transmission range, what is the minimum (resp. maximum) degree of the network? What is then the typical hop-diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks.<|reference_end|> | arxiv | @article{ravelomanana2004extremal,
title={Extremal Properties of Three Dimensional Sensor Networks with
Applications},
author={Vlady Ravelomanana (LIPN)},
journal={IEEE Transactions on Mobile Computing Vol 3 (2004) pages 246--257},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411027},
primaryClass={cs.DS cs.DC cs.DM}
} | ravelomanana2004extremal |
arxiv-672287 | cs/0411028 | A machine-independent port of the SR language run-time system to the NetBSD operating system | <|reference_start|>A machine-independent port of the SR language run-time system to the NetBSD operating system: SR (synchronizing resources) is a PASCAL-style language enhanced with constructs for concurrent programming, developed at the University of Arizona in the late 1980s. MPD (presented in Gregory Andrews' book about Foundations of Multithreaded, Parallel, and Distributed Programming) is its successor, providing the same language primitives with a different syntax. The run-time system (in theory, identical) of both languages provides the illusion of a multiprocessor machine on a single single- or multi-CPU Unix-like system or a (local area) network of Unix-like machines. Chair V of the Computer Science Department of the University of Bonn is operating a laboratory for a practical course in parallel programming consisting of computing nodes running NetBSD/arm, normally used via PVM, MPI, etc. We are considering offering SR and MPD for this, too. As the original language distributions are only targeted at a few commercial Unix systems, some porting effort is needed, outlined in the SR porting guide. The integrated POSIX threads support of NetBSD-2.0 should allow us to use library primitives provided for NetBSD's pthread system to implement the primitives needed by the SR run-time system, thus implementing 13 target CPUs at once and automatically making use of SMP on VAX, Alpha, PowerPC, Sparc, 32-bit Intel and 64-bit AMD CPUs. This paper describes work in progress.<|reference_end|> | arxiv | @article{souvatzis2004a,
title={A machine-independent port of the SR language run-time system to the
NetBSD operating system},
author={Ignatios Souvatzis},
journal={Juergen Egeling (Ed.): Proceedings of the 3rd European BSD
Conference, Karlsruhe, Germany 2004, p.181},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411028},
primaryClass={cs.DC cs.PL}
} | souvatzis2004a |
arxiv-672288 | cs/0411029 | Modules and Logic Programming | <|reference_start|>Modules and Logic Programming: We study conditions for a concurrent construction of proof-nets in the framework developed by Andreoli in recent papers. We define specific correctness criteria for that purpose. We first study closed modules (i.e. validity of the execution of a logic program), then extend the criterion to open modules (i.e. validity during the execution) distinguishing criteria for acyclicity and connectability in order to allow incremental verification.<|reference_end|> | arxiv | @article{fouquere2004modules,
title={Modules and Logic Programming},
author={Christophe Fouquere and Virgile Mogbil},
journal={arXiv preprint arXiv:cs/0411029},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411029},
primaryClass={cs.LO}
} | fouquere2004modules |
arxiv-672289 | cs/0411030 | Security of public key cryptosystems based on Chebyshev Polynomials | <|reference_start|>Security of public key cryptosystems based on Chebyshev Polynomials: Chebyshev polynomials have been recently proposed for designing public-key systems. Indeed, they enjoy some nice chaotic properties, which seem to be suitable for use in Cryptography. Moreover, they satisfy a semi-group property, which makes it possible to implement a trapdoor mechanism. In this paper we study a public key cryptosystem based on such polynomials, which provides both encryption and digital signature. The cryptosystem works on real numbers and is quite efficient. Unfortunately, our analysis shows that it is not secure. We describe an attack which permits recovering the corresponding plaintext from a given ciphertext. The same attack can be applied to produce forgeries if the cryptosystem is used for signing messages. Then, we point out that other primitives, a Diffie-Hellman-like key agreement scheme and an authentication scheme, designed along the same lines as the cryptosystem, are also not secure due to the aforementioned attack. We close the paper by discussing the issues and the possibilities of constructing public key cryptosystems on real numbers.<|reference_end|> | arxiv | @article{bergamo2004security,
title={Security of public key cryptosystems based on Chebyshev Polynomials},
author={Pina Bergamo, Paolo D'Arco, Alfredo De Santis, and Ljupco Kocarev},
journal={arXiv preprint arXiv:cs/0411030},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411030},
primaryClass={cs.CR}
} | bergamo2004security |
arxiv-672290 | cs/0411031 | Complexity of the Two-Variable Fragment with (Binary-Coded) Counting Quantifiers | <|reference_start|>Complexity of the Two-Variable Fragment with (Binary-Coded) Counting Quantifiers: We show that the satisfiability and finite satisfiability problems for the two-variable fragment of first-order logic with counting quantifiers are both in NEXPTIME, even when counting quantifiers are coded succinctly.<|reference_end|> | arxiv | @article{pratt-hartmann2004complexity,
title={Complexity of the Two-Variable Fragment with (Binary-Coded) Counting
Quantifiers},
author={Ian Pratt-Hartmann},
journal={Journal of Logic, Language and Information, 14(3), 2005, pp.
369--395},
year={2004},
doi={10.1007/s10849-005-5791-1},
archivePrefix={arXiv},
eprint={cs/0411031},
primaryClass={cs.LO}
} | pratt-hartmann2004complexity |
arxiv-672291 | cs/0411032 | Logic Column 10: Specifying Confidentiality | <|reference_start|>Logic Column 10: Specifying Confidentiality: This article illustrates the use of a logical specification language to capture various forms of confidentiality properties used in the security literature.<|reference_end|> | arxiv | @article{pucella2004logic,
title={Logic Column 10: Specifying Confidentiality},
author={Riccardo Pucella},
journal={SIGACT News, 35(4), pp. 72-83, 2004},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411032},
primaryClass={cs.LO}
} | pucella2004logic |
arxiv-672292 | cs/0411033 | On Invariance and Convergence in Time Complexity theory | <|reference_start|>On Invariance and Convergence in Time Complexity theory: This article introduces three invariance principles under which P is different from NP. In the second part a theorem of convergence is proven. This theorem states that for any language L there exists an infinite sequence of languages from O(n) that converges to L.<|reference_end|> | arxiv | @article{moscu2004on,
title={On Invariance and Convergence in Time Complexity theory},
author={Mircea Alexandru Popescu Moscu},
journal={arXiv preprint arXiv:cs/0411033},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411033},
primaryClass={cs.CC}
} | moscu2004on |
arxiv-672293 | cs/0411034 | Generating Conditional Probabilities for Bayesian Networks: Easing the Knowledge Acquisition Problem | <|reference_start|>Generating Conditional Probabilities for Bayesian Networks: Easing the Knowledge Acquisition Problem: The number of probability distributions required to populate a conditional probability table (CPT) in a Bayesian network grows exponentially with the number of parent-nodes associated with that table. If the table is to be populated through knowledge elicited from a domain expert then the sheer magnitude of the task forms a considerable cognitive barrier. In this paper we devise an algorithm to populate the CPT while easing the extent of knowledge acquisition. The input to the algorithm consists of a set of weights that quantify the relative strengths of the influences of the parent-nodes on the child-node, and a set of probability distributions, the number of which grows only linearly with the number of associated parent-nodes. These are elicited from the domain expert. The set of probabilities is obtained by taking into consideration the heuristics that experts use while arriving at probabilistic estimations. The algorithm is used to populate the CPT by computing appropriate weighted sums of the elicited distributions. We invoke the methods of information geometry to demonstrate how these weighted sums capture the expert's judgemental strategy.<|reference_end|> | arxiv | @article{das2004generating,
title={Generating Conditional Probabilities for Bayesian Networks: Easing the
Knowledge Acquisition Problem},
author={Balaram Das},
journal={arXiv preprint arXiv:cs/0411034},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411034},
primaryClass={cs.AI}
} | das2004generating |
arxiv-672294 | cs/0411035 | A FP-Tree Based Approach for Mining All Strongly Correlated Pairs without Candidate Generation | <|reference_start|>A FP-Tree Based Approach for Mining All Strongly Correlated Pairs without Candidate Generation: Given a user-specified minimum correlation threshold and a transaction database, the problem of mining all-strong correlated pairs is to find all item pairs with Pearson's correlation coefficients above the threshold. Despite the use of an upper-bound-based pruning technique in the Taper algorithm [1], candidate pair generation and testing is still costly when the numbers of items and transactions are very large. To avoid the costly testing of a large number of candidate pairs, in this paper we propose an efficient algorithm, called Tcp, based on the well-known FP-tree data structure, for mining the complete set of all-strong correlated item pairs. Our experimental results on both synthetic and real-world datasets show that Tcp's performance is significantly better than that of the previously developed Taper algorithm over practical ranges of correlation threshold specifications.<|reference_end|> | arxiv | @article{he2004a,
title={A FP-Tree Based Approach for Mining All Strongly Correlated Pairs
without Candidate Generation},
author={Zengyou He, Xiaofei Xu, Shengchun Deng},
journal={arXiv preprint arXiv:cs/0411035},
year={2004},
number={TR-04-06},
archivePrefix={arXiv},
eprint={cs/0411035},
primaryClass={cs.DB cs.AI}
} | he2004a |
arxiv-672295 | cs/0411036 | Feedback Capacity of the First-Order Moving Average Gaussian Channel | <|reference_start|>Feedback Capacity of the First-Order Moving Average Gaussian Channel: The feedback capacity of the stationary Gaussian additive noise channel has been open, except for the case where the noise is white. Here we find the feedback capacity of the stationary first-order moving average additive Gaussian noise channel in closed form. Specifically, the channel is given by $Y_i = X_i + Z_i,$ $i = 1, 2, ...,$ where the input $\{X_i\}$ satisfies a power constraint and the noise $\{Z_i\}$ is a first-order moving average Gaussian process defined by $Z_i = \alpha U_{i-1} + U_i,$ $|\alpha| \le 1,$ with white Gaussian innovations $U_i,$ $i = 0,1,....$ We show that the feedback capacity of this channel is $-\log x_0,$ where $x_0$ is the unique positive root of the equation $ \rho x^2 = (1-x^2) (1 - |\alpha|x)^2,$ and $\rho$ is the ratio of the average input power per transmission to the variance of the noise innovation $U_i$. The optimal coding scheme parallels the simple linear signalling scheme by Schalkwijk and Kailath for the additive white Gaussian noise channel -- the transmitter sends a real-valued information-bearing signal at the beginning of communication and subsequently refines the receiver's error by processing the feedback noise signal through a linear stationary first-order autoregressive filter. The resulting error probability of the maximum likelihood decoding decays doubly-exponentially in the duration of the communication. This feedback capacity of the first-order moving average Gaussian channel is very similar in form to the best known achievable rate for the first-order \emph{autoregressive} Gaussian noise channel studied by Butman, Wolfowitz, and Tiernan, although the optimality of the latter is yet to be established.<|reference_end|> | arxiv | @article{kim2004feedback,
title={Feedback Capacity of the First-Order Moving Average Gaussian Channel},
author={Young-Han Kim},
journal={arXiv preprint arXiv:cs/0411036},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411036},
primaryClass={cs.IT math.IT}
} | kim2004feedback |
arxiv-672296 | cs/0411037 | A Note on Bulk Quantum Turing Machine | <|reference_start|>A Note on Bulk Quantum Turing Machine: Recently, among experiments for the realization of quantum computers, NMR quantum computers have achieved the most impressive success. There is a model of NMR quantum computation, namely Atsumi and Nishino's bulk quantum Turing Machine. It relies, however, on an assumption that is unnatural from the standpoint of quantum mechanics. We then define a more natural and quantum mechanically realizable modified bulk quantum Turing Machine, and show its computational ability by comparing its complexity classes with their quantum Turing Machine counterparts.<|reference_end|> | arxiv | @article{matsui2004a,
title={A Note on Bulk Quantum Turing Machine},
author={Tetsushi Matsui},
journal={arXiv preprint arXiv:cs/0411037},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411037},
primaryClass={cs.CC}
} | matsui2004a |
arxiv-672297 | cs/0411038 | Impact of IT on Higher education Through Continuing Education | <|reference_start|>Impact of IT on Higher education Through Continuing Education: Information Technology is emerging as the technology of the 21st century. The paradigm shift from an industrial society to an information society has already become a reality. It is high time to integrate IT into all facets of education -- whether at the secondary level or in reskilling those already employed. This paper discusses various issues in incorporating IT at different levels of education, and the need for a task force to counter the so-called slowdown and recession in the IT industry. It also discusses the opportunities for aspiring IT professionals, and the importance of reskilling, as a continuing education programme, in making people aware of the changing trends in IT.<|reference_end|> | arxiv | @article{shajeemohan2004impact,
title={Impact of IT on Higher education Through Continuing Education},
author={B.S. Shajeemohan},
journal={arXiv preprint arXiv:cs/0411038},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411038},
primaryClass={cs.CY}
} | shajeemohan2004impact |
arxiv-672298 | cs/0411039 | Using Wireless Sensor Networks to Narrow the Gap between Low-Level Information and Context-Awareness | <|reference_start|>Using Wireless Sensor Networks to Narrow the Gap between Low-Level Information and Context-Awareness: Wireless sensor networks are finally becoming a reality. In this paper, we present a scalable architecture for using wireless sensor networks in combination with wireless Ethernet networks to provide a complete end-to-end solution to narrow the gap between low-level information and context-awareness. We developed and implemented a complete proximity detector in order to give a wearable computer, such as a PDA, location context. Since location is only one element of context-awareness, we pursued utilizing photo sensors and temperature sensors in learning as much as possible about the environment. We used the TinyOS RF Motes as our test bed WSN (Wireless Sensor Network), 802.11-compatible hardware as our wireless Ethernet network, and conventional PCs and wired 802.3 networks to build the upper levels of the architecture.<|reference_end|> | arxiv | @article{raicu2004using,
title={Using Wireless Sensor Networks to Narrow the Gap between Low-Level
Information and Context-Awareness},
author={Ioan Raicu, Owen Richter, Loren Schwiebert, Sherali Zeadally},
journal={arXiv preprint arXiv:cs/0411039},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411039},
primaryClass={cs.NI}
} | raicu2004using |
arxiv-672299 | cs/0411040 | Efficient Even Distribution of Power Consumption in Wireless Sensor Networks | <|reference_start|>Efficient Even Distribution of Power Consumption in Wireless Sensor Networks: One of the limitations of wireless sensor nodes is their inherently limited energy resource. Besides maximizing the lifetime of the sensor node, it is preferable to distribute the energy dissipated throughout the wireless sensor network in order to minimize maintenance and maximize overall system performance. We investigate a new routing algorithm that uses diffusion in order to achieve relatively even power dissipation throughout a wireless sensor network by making good local decisions. We leverage concepts from peer-to-peer networks, in which the system is completely decentralized and all nodes in the network are equal peers. Our algorithm utilizes the node load, power levels, and spatial information in order to make the optimal routing decision. Our preliminary experimental results show that the proposed algorithm meets its goals.<|reference_end|> | arxiv | @article{raicu2004efficient,
title={Efficient Even Distribution of Power Consumption in Wireless Sensor
Networks},
author={Ioan Raicu},
journal={arXiv preprint arXiv:cs/0411040},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411040},
primaryClass={cs.NI}
} | raicu2004efficient |
arxiv-672300 | cs/0411041 | Content Based Image Retrieval with Mobile Agents and Steganography | <|reference_start|>Content Based Image Retrieval with Mobile Agents and Steganography: In this paper we present an image retrieval system based on Gabor texture features, steganography, and mobile agents. By employing the information hiding technique, the image attributes can be hidden in an image without degrading the image quality. Thus the image retrieval process becomes simple. Java-based mobile agents manage the query phase of the system. Based on the simulation results, the proposed system not only hides the attributes efficiently but also provides other advantages: (1) fast transmission of the retrieved image to the receiver, and (2) simplified searching.<|reference_end|> | arxiv | @article{thampi2004content,
title={Content Based Image Retrieval with Mobile Agents and Steganography},
author={Sabu .M Thampi, K. Chandra Sekaran},
journal={arXiv preprint arXiv:cs/0411041},
year={2004},
archivePrefix={arXiv},
eprint={cs/0411041},
primaryClass={cs.CR}
} | thampi2004content |