corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-672801 | cs/0504038 | On Approximating Restricted Cycle Covers | <|reference_start|>On Approximating Restricted Cycle Covers: A cycle cover of a graph is a set of cycles such that every vertex is part of exactly one cycle. An L-cycle cover is a cycle cover in which the length of every cycle is in the set L. The weight of a cycle cover of an edge-weighted graph is the sum of the weights of its edges. We come close to settling the complexity and approximability of computing L-cycle covers. On the one hand, we show that for almost all L, computing L-cycle covers of maximum weight in directed and undirected graphs is APX-hard and NP-hard. Most of our hardness results hold even if the edge weights are restricted to zero and one. On the other hand, we show that the problem of computing L-cycle covers of maximum weight can be approximated within a factor of 2 for undirected graphs and within a factor of 8/3 in the case of directed graphs. This holds for arbitrary sets L.<|reference_end|> | arxiv | @article{manthey2005on,
title={On Approximating Restricted Cycle Covers},
author={Bodo Manthey},
journal={arXiv preprint arXiv:cs/0504038},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504038},
primaryClass={cs.CC cs.DM}
} | manthey2005on |
arxiv-672802 | cs/0504039 | TeXmacs-maxima interface | <|reference_start|>TeXmacs-maxima interface: This tutorial presents features of the new and improved TeXmacs-maxima interface. It is designed for running maxima-5.9.2 from TeXmacs-1.0.5 (or later).<|reference_end|> | arxiv | @article{grozin2005texmacs-maxima,
title={TeXmacs-maxima interface},
author={A. G. Grozin},
journal={arXiv preprint arXiv:cs/0504039},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504039},
primaryClass={cs.SC cs.MS hep-ph}
} | grozin2005texmacs-maxima |
arxiv-672803 | cs/0504040 | DTN Routing in a Mobility Pattern Space | <|reference_start|>DTN Routing in a Mobility Pattern Space: Routing in Delay Tolerant Networks (DTNs) benefits considerably if one can take advantage of knowledge concerning node mobility. The main contribution of this paper is the definition of a generic routing scheme for DTNs using a high-dimensional Euclidean space constructed upon nodes' mobility patterns. For example, nodes are represented as points having as coordinates their probability of being found in each possible location. We present simulation results indicating that such a scheme can be beneficial in a scenario inspired by studies done on real mobility traces. This work should open the way to further use of the virtual space formalism in DTN routing.<|reference_end|> | arxiv | @article{leguay2005dtn,
title={DTN Routing in a Mobility Pattern Space},
author={Jeremie Leguay, Timur Friedman and Vania Conan},
journal={arXiv preprint arXiv:cs/0504040},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504040},
primaryClass={cs.NI}
} | leguay2005dtn |
arxiv-672804 | cs/0504041 | Learning Polynomial Networks for Classification of Clinical Electroencephalograms | <|reference_start|>Learning Polynomial Networks for Classification of Clinical Electroencephalograms: We describe a polynomial network technique developed for learning to classify clinical electroencephalograms (EEGs) presented by noisy features. Using an evolutionary strategy implemented within the Group Method of Data Handling, we learn classification models which are comprehensively described by sets of short-term polynomials. The polynomial models were learnt to classify the EEGs recorded from Alzheimer and healthy patients and to recognize EEG artifacts. Comparing the performance of our technique with that of some machine learning methods, we conclude that our technique can learn well-suited polynomial models which experts find easy to understand.<|reference_end|> | arxiv | @article{schetinin2005learning,
title={Learning Polynomial Networks for Classification of Clinical
Electroencephalograms},
author={Vitaly Schetinin and Joachim Schult},
journal={J Soft Computing 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504041},
primaryClass={cs.AI cs.NE}
} | schetinin2005learning |
arxiv-672805 | cs/0504042 | The Bayesian Decision Tree Technique with a Sweeping Strategy | <|reference_start|>The Bayesian Decision Tree Technique with a Sweeping Strategy: The uncertainty of classification outcomes is of crucial importance for many safety critical applications including, for example, medical diagnostics. In such applications the uncertainty of classification can be reliably estimated within a Bayesian model averaging technique that allows the use of prior information. Decision Tree (DT) classification models used within such a technique give experts additional information by making this classification scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology of stochastic sampling makes the Bayesian DT technique feasible to perform. However, in practice, the MCMC technique may become stuck in a particular DT which is far away from a region with a maximal posterior. Sampling such DTs causes bias in the posterior estimates, and as a result the evaluation of classification uncertainty may be incorrect. In a particular case, the negative effect of such sampling may be reduced by giving additional prior information on the shape of DTs. In this paper we describe a new approach based on sweeping the DTs without additional priors on the favorite shape of DTs. The performances of Bayesian DT techniques with the standard and sweeping strategies are compared on synthetic data as well as on real datasets. Quantitatively evaluating the uncertainty in terms of entropy of class posterior probabilities, we found that the sweeping strategy is superior to the standard strategy.<|reference_end|> | arxiv | @article{schetinin2005the,
title={The Bayesian Decision Tree Technique with a Sweeping Strategy},
author={V. Schetinin, J.E. Fieldsend, D. Partridge, W.J. Krzanowski, R.M.
Everson, T.C. Bailey, A. Hernandez},
journal={arXiv preprint arXiv:cs/0504042},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504042},
primaryClass={cs.AI cs.LG}
} | schetinin2005the |
arxiv-672806 | cs/0504043 | Experimental Comparison of Classification Uncertainty for Randomised and Bayesian Decision Tree Ensembles | <|reference_start|>Experimental Comparison of Classification Uncertainty for Randomised and Bayesian Decision Tree Ensembles: In this paper we experimentally compare the classification uncertainty of the randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique with a restarting strategy on a synthetic dataset as well as on some datasets commonly used in the machine learning community. For quantitative evaluation of classification uncertainty, we use an Uncertainty Envelope dealing with the class posterior distribution and a given confidence probability. Counting the classifier outcomes, this technique produces feasible evaluations of the classification uncertainty. Using this technique in our experiments, we found that the Bayesian DT technique is superior to the randomised DT ensemble technique.<|reference_end|> | arxiv | @article{schetinin2005experimental,
title={Experimental Comparison of Classification Uncertainty for Randomised and
Bayesian Decision Tree Ensembles},
author={V. Schetinin, D. Partridge, W.J. Krzanowski, R.M. Everson, J.E.
Fieldsend, T.C. Bailey, and A. Hernandez},
journal={arXiv preprint arXiv:cs/0504043},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504043},
primaryClass={cs.AI cs.LG}
} | schetinin2005experimental |
arxiv-672807 | cs/0504044 | JClarens: A Java Framework for Developing and Deploying Web Services for Grid Computing | <|reference_start|>JClarens: A Java Framework for Developing and Deploying Web Services for Grid Computing: High Energy Physics (HEP) and other scientific communities have adopted Service Oriented Architectures (SOA) as part of a larger Grid computing effort. This effort involves the integration of many legacy applications and programming libraries into a SOA framework. The Grid Analysis Environment (GAE) is such a service oriented architecture based on the Clarens Grid Services Framework and is being developed as part of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at the European Laboratory for Particle Physics (CERN). Clarens provides a set of authorization, access control, and discovery services, as well as XMLRPC and SOAP access to all deployed services. Two implementations of the Clarens Web Services Framework (Python and Java) offer integration possibilities for a wide range of programming languages. This paper describes the Java implementation of the Clarens Web Services Framework called JClarens, and several web services of interest to the scientific and Grid community that have been deployed using JClarens.<|reference_end|> | arxiv | @article{thomas2005jclarens:,
title={JClarens: A Java Framework for Developing and Deploying Web Services for
Grid Computing},
author={Michael Thomas, Conrad Steenberg, Frank van Lingen, Harvey Newman,
Julian Bunn, Arshad Ali, Richard McClatchey, Ashiq Anjum, Tahir Azim, Waqas
ur Rehman, Faisal Khan, Jang Uk In},
journal={arXiv preprint arXiv:cs/0504044},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504044},
primaryClass={cs.DC}
} | thomas2005jclarens: |
arxiv-672808 | cs/0504045 | Analyzing Worms and Network Traffic using Compression | <|reference_start|>Analyzing Worms and Network Traffic using Compression: Internet worms have become a widespread threat to system and network operations. In order to fight them more efficiently, it is necessary to analyze newly discovered worms and attack patterns. This paper shows how techniques based on Kolmogorov Complexity can help in the analysis of internet worms and network traffic. Using compression, different species of worms can be clustered by type. This allows us to determine whether an unknown worm binary could in fact be a later version of an existing worm in an extremely simple, automated, manner. This may become a useful tool in the initial analysis of malicious binaries. Furthermore, compression can also be useful to distinguish different types of network traffic and can thus help to detect traffic anomalies: Certain anomalies may be detected by looking at the compressibility of a network session alone. We furthermore show how to use compression to detect malicious network sessions that are very similar to known intrusion attempts. This technique could become a useful tool to detect new variations of an attack and thus help to prevent IDS evasion. We provide two new plugins for Snort which demonstrate both approaches.<|reference_end|> | arxiv | @article{wehner2005analyzing,
title={Analyzing Worms and Network Traffic using Compression},
author={Stephanie Wehner (CWI, Amsterdam)},
journal={Journal of Computer Security, Vol 15, Number 3, 303-320, 2007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504045},
primaryClass={cs.CR}
} | wehner2005analyzing |
arxiv-672809 | cs/0504046 | On the Entropy Rate of Pattern Processes | <|reference_start|>On the Entropy Rate of Pattern Processes: We study the entropy rate of pattern sequences of stochastic processes, and its relationship to the entropy rate of the original process. We give a complete characterization of this relationship for i.i.d. processes over arbitrary alphabets, stationary ergodic processes over discrete alphabets, and a broad family of stationary ergodic processes over uncountable alphabets. For cases where the entropy rate of the pattern process is infinite, we characterize the possible growth rate of the block entropy.<|reference_end|> | arxiv | @article{gemelos2005on,
title={On the Entropy Rate of Pattern Processes},
author={George M. Gemelos and Tsachy Weissman},
journal={arXiv preprint arXiv:cs/0504046},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504046},
primaryClass={cs.IT math.IT}
} | gemelos2005on |
arxiv-672810 | cs/0504047 | Pushdown dimension | <|reference_start|>Pushdown dimension: This paper develops the theory of pushdown dimension and explores its relationship with finite-state dimension. Pushdown dimension is trivially bounded above by finite-state dimension for all sequences, since a pushdown gambler can simulate any finite-state gambler. We show that for every rational 0 < d < 1, there exists a sequence with finite-state dimension d whose pushdown dimension is at most d/2. This establishes a quantitative analogue of the well-known fact that pushdown automata decide strictly more languages than finite automata.<|reference_end|> | arxiv | @article{doty2005pushdown,
title={Pushdown dimension},
author={David Doty, Jared Nichols},
journal={arXiv preprint arXiv:cs/0504047},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504047},
primaryClass={cs.IT cs.CC math.IT}
} | doty2005pushdown |
arxiv-672811 | cs/0504048 | Oracles Are Subtle But Not Malicious | <|reference_start|>Oracles Are Subtle But Not Malicious: Theoretical computer scientists have been debating the role of oracles since the 1970's. This paper illustrates both that oracles can give us nontrivial insights about the barrier problems in circuit complexity, and that they need not prevent us from trying to solve those problems. First, we give an oracle relative to which PP has linear-sized circuits, by proving a new lower bound for perceptrons and low- degree threshold polynomials. This oracle settles a longstanding open question, and generalizes earlier results due to Beigel and to Buhrman, Fortnow, and Thierauf. More importantly, it implies the first nonrelativizing separation of "traditional" complexity classes, as opposed to interactive proof classes such as MIP and MA-EXP. For Vinodchandran showed, by a nonrelativizing argument, that PP does not have circuits of size n^k for any fixed k. We present an alternative proof of this fact, which shows that PP does not even have quantum circuits of size n^k with quantum advice. To our knowledge, this is the first nontrivial lower bound on quantum circuit size. Second, we study a beautiful algorithm of Bshouty et al. for learning Boolean circuits in ZPP^NP. We show that the NP queries in this algorithm cannot be parallelized by any relativizing technique, by giving an oracle relative to which ZPP^||NP and even BPP^||NP have linear-size circuits. On the other hand, we also show that the NP queries could be parallelized if P=NP. Thus, classes such as ZPP^||NP inhabit a "twilight zone," where we need to distinguish between relativizing and black-box techniques. Our results on this subject have implications for computational learning theory as well as for the circuit minimization problem.<|reference_end|> | arxiv | @article{aaronson2005oracles,
title={Oracles Are Subtle But Not Malicious},
author={Scott Aaronson},
journal={arXiv preprint arXiv:cs/0504048},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504048},
primaryClass={cs.CC quant-ph}
} | aaronson2005oracles |
arxiv-672812 | cs/0504049 | Bounds on the Entropy of Patterns of IID Sequences | <|reference_start|>Bounds on the Entropy of Patterns of IID Sequences: Bounds on the entropy of patterns of sequences generated by independently identically distributed (i.i.d.) sources are derived. A pattern is a sequence of indices that contains all consecutive integer indices in increasing order of first occurrence. If the alphabet of a source that generated a sequence is unknown, the inevitable cost of coding the unknown alphabet symbols can be exploited to create the pattern of the sequence. This pattern can in turn be compressed by itself. The bounds derived here are functions of the i.i.d. source entropy, alphabet size, and letter probabilities. It is shown that for large alphabets, the pattern entropy must decrease from the i.i.d. one. The decrease is in many cases more significant than the universal coding redundancy bounds derived in prior works. The pattern entropy is confined between two bounds that depend on the arrangement of the letter probabilities in the probability space. For very large alphabets whose size may be greater than the coded pattern length, all low probability letters are packed into one symbol. The pattern entropy is upper and lower bounded in terms of the i.i.d. entropy of the new packed alphabet. Correction terms, which are usually negligible, are provided for both upper and lower bounds.<|reference_end|> | arxiv | @article{shamir2005bounds,
title={Bounds on the Entropy of Patterns of I.I.D. Sequences},
author={Gil I. Shamir},
journal={arXiv preprint arXiv:cs/0504049},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504049},
primaryClass={cs.IT math.IT}
} | shamir2005bounds |
arxiv-672813 | cs/0504050 | Mapping Fusion and Synchronized Hyperedge Replacement into Logic Programming | <|reference_start|>Mapping Fusion and Synchronized Hyperedge Replacement into Logic Programming: In this paper we compare three different formalisms that can be used in the area of models for distributed, concurrent and mobile systems. In particular we analyze the relationships between a process calculus, the Fusion Calculus, graph transformations in the Synchronized Hyperedge Replacement with Hoare synchronization (HSHR) approach and logic programming. We present a translation from Fusion Calculus into HSHR (whereas Fusion Calculus uses Milner synchronization) and prove a correspondence between the reduction semantics of Fusion Calculus and HSHR transitions. We also present a mapping from HSHR into a transactional version of logic programming and prove that there is a full correspondence between the two formalisms. The resulting mapping from Fusion Calculus to logic programming is interesting since it shows the tight analogies between the two formalisms, in particular for handling name generation and mobility. The intermediate step in terms of HSHR is convenient since graph transformations allow for multiple, remote synchronizations, as required by Fusion Calculus semantics.<|reference_end|> | arxiv | @article{lanese2005mapping,
title={Mapping Fusion and Synchronized Hyperedge Replacement into Logic
Programming},
author={Ivan Lanese and Ugo Montanari},
journal={arXiv preprint arXiv:cs/0504050},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504050},
primaryClass={cs.LO cs.PL}
} | lanese2005mapping |
arxiv-672814 | cs/0504051 | A Scalable Stream-Oriented Framework for Cluster Applications | <|reference_start|>A Scalable Stream-Oriented Framework for Cluster Applications: This paper presents a stream-oriented architecture for structuring cluster applications. Clusters that run applications based on this architecture can scale to tens of thousands of nodes with significantly less performance loss and fewer reliability problems. Our architecture exploits the stream nature of the data flow and reduces congestion through load balancing, hides latency behind data pushes and transparently handles node failures. In our ongoing work, we are developing an implementation for this architecture and we are able to run simple data mining applications on a cluster simulator.<|reference_end|> | arxiv | @article{argyros2005a,
title={A Scalable Stream-Oriented Framework for Cluster Applications},
author={Tassos S. Argyros and David R. Cheriton},
journal={arXiv preprint arXiv:cs/0504051},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504051},
primaryClass={cs.DC cs.DB cs.NI cs.OS cs.PL}
} | argyros2005a |
arxiv-672815 | cs/0504052 | Learning Multi-Class Neural-Network Models from Electroencephalograms | <|reference_start|>Learning Multi-Class Neural-Network Models from Electroencephalograms: We describe a new algorithm for learning multi-class neural-network models from large-scale clinical electroencephalograms (EEGs). This algorithm trains hidden neurons separately to classify all the pairs of classes. To find best pairwise classifiers, our algorithm searches for input variables which are relevant to the classification problem. Despite patient variability and heavily overlapping classes, a 16-class model learnt from EEGs of 65 sleeping newborns correctly classified 80.8% of the training and 80.1% of the testing examples. Additionally, the neural-network model provides a probabilistic interpretation of decisions.<|reference_end|> | arxiv | @article{schetinin2005learning,
title={Learning Multi-Class Neural-Network Models from Electroencephalograms},
author={Vitaly Schetinin, Joachim Schult, Burkhart Scheidt, and Valery
Kuriakin},
journal={arXiv preprint arXiv:cs/0504052},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504052},
primaryClass={cs.NE cs.LG}
} | schetinin2005learning |
arxiv-672816 | cs/0504053 | A Neural-Network Technique for Recognition of Filaments in Solar Images | <|reference_start|>A Neural-Network Technique for Recognition of Filaments in Solar Images: We describe a new neural-network technique developed for the automated recognition of solar filaments visible in the hydrogen H-alpha line full disk spectroheliograms. This technique allows neural networks to learn from a few manually labelled image fragments to recognize single filaments depicted on a local background. The trained network is able to recognize filaments depicted on backgrounds with variations in brightness caused by atmospheric distortions. Despite the difference in backgrounds, in our experiments the neural network properly recognized filaments in the testing image fragments. Using a parabolic activation function, we extend this technique to recognize multiple solar filaments which may appear in one fragment.<|reference_end|> | arxiv | @article{zharkova2005a,
title={A Neural-Network Technique for Recognition of Filaments in Solar Images},
author={V.V.Zharkova and V.Schetinin},
journal={arXiv preprint arXiv:cs/0504053},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504053},
primaryClass={cs.NE}
} | zharkova2005a |
arxiv-672817 | cs/0504054 | Learning from Web: Review of Approaches | <|reference_start|>Learning from Web: Review of Approaches: Knowledge discovery is defined as the non-trivial extraction of implicit, previously unknown and potentially useful information from given data. Knowledge extraction from web documents deals with unstructured, free-format documents whose number is enormous and rapidly growing. Artificial neural networks are well suited to solving the problem of knowledge discovery from web documents because trained networks are able to classify, more accurately and easily, the learning and testing examples that represent the text mining domain. However, neural networks that consist of a large number of weighted connections and activation units often generate incomprehensible and hard-to-understand models of text classification. This problem also concerns the most powerful recurrent neural networks, which employ feedback links from hidden or output units to their input units. Due to the feedback links, recurrent neural networks are able to take into account the context in a document. To be useful for data mining, self-organizing neural-network techniques of knowledge extraction have been explored and developed. Self-organization principles were used to create an adequate neural-network structure and to reduce the dimensionality of the features used to describe text documents. The use of these principles seems interesting because they are able to reduce neural-network redundancy and considerably facilitate the knowledge representation.<|reference_end|> | arxiv | @article{schetinin2005learning,
title={Learning from Web: Review of Approaches},
author={Vitaly Schetinin},
journal={arXiv preprint arXiv:cs/0504054},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504054},
primaryClass={cs.NE cs.LG}
} | schetinin2005learning |
arxiv-672818 | cs/0504055 | A Learning Algorithm for Evolving Cascade Neural Networks | <|reference_start|>A Learning Algorithm for Evolving Cascade Neural Networks: A new learning algorithm for Evolving Cascade Neural Networks (ECNNs) is described. An ECNN starts learning with one input node and then evolves by adding new inputs as well as new hidden neurons. The trained ECNN has a nearly minimal number of input and hidden neurons as well as connections. The algorithm was successfully applied to classify artifacts and normal segments in clinical electroencephalograms (EEGs). The EEG segments were visually labeled by an EEG-viewer. The trained ECNN correctly classified 96.69% of the testing segments. This is slightly better than a standard fully connected neural network.<|reference_end|> | arxiv | @article{schetinin2005a,
title={A Learning Algorithm for Evolving Cascade Neural Networks},
author={Vitaly Schetinin},
journal={Neural Processing Letter 17:21-31, 2003. Kluwer},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504055},
primaryClass={cs.NE cs.AI}
} | schetinin2005a |
arxiv-672819 | cs/0504056 | Self-Organizing Multilayered Neural Networks of Optimal Complexity | <|reference_start|>Self-Organizing Multilayered Neural Networks of Optimal Complexity: The principles of self-organizing neural networks of optimal complexity are considered under an unrepresentative learning set. A method of self-organizing multi-layered neural networks is offered and used to train logical neural networks, which were applied to medical diagnostics.<|reference_end|> | arxiv | @article{schetinin2005self-organizing,
title={Self-Organizing Multilayered Neural Networks of Optimal Complexity},
author={V. Schetinin},
journal={arXiv preprint arXiv:cs/0504056},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504056},
primaryClass={cs.NE cs.AI}
} | schetinin2005self-organizing |
arxiv-672820 | cs/0504057 | Diagnostic Rule Extraction Using Neural Networks | <|reference_start|>Diagnostic Rule Extraction Using Neural Networks: The neural networks were trained on the incomplete sets that a doctor could collect. The trained neural networks correctly classified all the presented instances. The number of intervals entered for encoding the quantitative variables is equal to two. The number of features, as well as the number of neurons and layers in the trained neural networks, was minimal. The trained neural networks are adequately represented as a set of logical formulas that are more comprehensible and easy to understand. These formulas act as syndrome-complexes, which may be easily tabulated and represented as a diagnostic table of the kind that doctors usually use. The decision rules provide evaluations of their confidence, in which a doctor is interested. Clinical research has shown that the diagnostic decisions produced by the symbolic rules coincided with the doctor's conclusions.<|reference_end|> | arxiv | @article{schetinin2005diagnostic,
title={Diagnostic Rule Extraction Using Neural Networks},
author={Vitaly Schetinin and Anatoly Brazhnikov},
journal={arXiv preprint arXiv:cs/0504057},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504057},
primaryClass={cs.NE cs.AI}
} | schetinin2005diagnostic |
arxiv-672821 | cs/0504058 | Polynomial Neural Networks Learnt to Classify EEG Signals | <|reference_start|>Polynomial Neural Networks Learnt to Classify EEG Signals: A neural network based technique is presented, which is able to successfully extract polynomial classification rules from labeled electroencephalogram (EEG) signals. To represent the classification rules in an analytical form, we use the polynomial neural networks trained by a modified Group Method of Data Handling (GMDH). The classification rules were extracted from clinical EEG data that were recorded from an Alzheimer patient and from sudden death risk patients. The third dataset consists of EEG recordings that include normal and artifact segments. These EEG data were visually identified by medical experts. The extracted polynomial rules, verified on the testing EEG data, correctly classify 72% of the risk group patients and 96.5% of the segments. These rules perform slightly better than standard feedforward neural networks.<|reference_end|> | arxiv | @article{schetinin2005polynomial,
title={Polynomial Neural Networks Learnt to Classify EEG Signals},
author={Vitaly Schetinin},
journal={arXiv preprint arXiv:cs/0504058},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504058},
primaryClass={cs.NE cs.AI}
} | schetinin2005polynomial |
arxiv-672822 | cs/0504059 | A Neural Network Decision Tree for Learning Concepts from EEG Data | <|reference_start|>A Neural Network Decision Tree for Learning Concepts from EEG Data: To learn multi-class concepts from electroencephalogram (EEG) data, we developed a neural network decision tree (DT) that performs linear tests, together with a new training algorithm. We found that known methods fail to induce classification models when the data are presented by features, some of which are irrelevant, and the classes are heavily overlapped. To train the DT, our algorithm exploits a bottom-up search for the features that provide the best classification accuracy of the linear tests. We applied the developed algorithm to induce the DT from a large EEG dataset consisting of 65 patients belonging to 16 age groups. In these recordings each EEG segment was represented by 72 calculated features. The DT correctly classified 80.8% of the training and 80.1% of the testing examples. Correspondingly, it correctly classified 89.2% and 87.7% of the EEG recordings.<|reference_end|> | arxiv | @article{schetinin2005a,
title={A Neural Network Decision Tree for Learning Concepts from EEG Data},
author={Vitaly Schetinin},
journal={arXiv preprint arXiv:cs/0504059},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504059},
primaryClass={cs.NE cs.AI}
} | schetinin2005a |
arxiv-672823 | cs/0504060 | Universal Minimax Discrete Denoising under Channel Uncertainty | <|reference_start|>Universal Minimax Discrete Denoising under Channel Uncertainty: The goal of a denoising algorithm is to recover a signal from its noise-corrupted observations. Perfect recovery is seldom possible and performance is measured under a given single-letter fidelity criterion. For discrete signals corrupted by a known discrete memoryless channel, the DUDE was recently shown to perform this task asymptotically optimally, without knowledge of the statistical properties of the source. In the present work we address the scenario where, in addition to the lack of knowledge of the source statistics, there is also uncertainty in the channel characteristics. We propose a family of discrete denoisers and establish their asymptotic optimality under a minimax performance criterion which we argue is appropriate for this setting. As we show elsewhere, the proposed schemes can also be implemented computationally efficiently.<|reference_end|> | arxiv | @article{gemelos2005universal,
title={Universal Minimax Discrete Denoising under Channel Uncertainty},
author={George Gemelos, Styrmir Sigurjonsson and Tsachy Weissman},
journal={arXiv preprint arXiv:cs/0504060},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504060},
primaryClass={cs.IT math.IT}
} | gemelos2005universal |
arxiv-672824 | cs/0504061 | Summarization from Medical Documents: A Survey | <|reference_start|>Summarization from Medical Documents: A Survey: Objective: The aim of this paper is to survey the recent work in medical document summarization. Background: During the last decade, document summarization received increasing attention from the AI research community. More recently it attracted the interest of the medical research community as well, due to the enormous growth of information that is available to the physicians and researchers in medicine, through the large and growing number of published journals, conference proceedings, medical sites and portals on the World Wide Web, electronic medical records, etc. Methodology: This survey first gives a general background on document summarization, presenting the factors that summarization depends upon, discussing evaluation issues and describing briefly the various types of summarization techniques. It then examines the characteristics of the medical domain through the different types of medical documents. Finally, it presents and discusses the summarization techniques used so far in the medical domain, referring to the corresponding systems and their characteristics. Discussion and conclusions: The paper thoroughly discusses the promising paths for future research in medical document summarization. It mainly focuses on the issue of scaling to large collections of documents in various languages and from different media, on personalization issues, on portability to new sub-domains, and on the integration of summarization technology in practical applications.<|reference_end|> | arxiv | @article{afantenos2005summarization,
title={Summarization from Medical Documents: A Survey},
author={Stergos D. Afantenos, Vangelis Karkaletsis, Panagiotis Stamatopoulos},
journal={Artificial Intelligence in Medicine, Volume 33, Issue 2, February
2005, Pages 157-177},
year={2005},
doi={10.1016/j.artmed.2004.07.017},
archivePrefix={arXiv},
eprint={cs/0504061},
primaryClass={cs.CL cs.IR}
} | afantenos2005summarization |
arxiv-672825 | cs/0504062 | Conditional Hardness for Approximate Coloring | <|reference_start|>Conditional Hardness for Approximate Coloring: We study the coloring problem: Given a graph G, decide whether $c(G) \leq q$ or $c(G) \geq Q$, where c(G) is the chromatic number of G. We derive conditional hardness for this problem for any constant $3 \leq q < Q$. For $q \geq 4$, our result is based on Khot's 2-to-1 conjecture [Khot'02]. For $q=3$, we base our hardness result on a certain `fish shaped' variant of his conjecture. We also prove that the problem of almost coloring is hard for any constant $\epsilon>0$, assuming Khot's Unique Games conjecture. This is the problem of deciding, for a given graph, between the case where one can 3-color all but an $\epsilon$ fraction of the vertices without monochromatic edges, and the case where the graph contains no independent set of relative size at least $\epsilon$. Our result is based on bounding various generalized noise-stability quantities using the invariance principle of Mossel et al. [MOO'05].<|reference_end|> | arxiv | @article{dinur2005conditional,
title={Conditional Hardness for Approximate Coloring},
author={Irit Dinur and Elchanan Mossel and Oded Regev},
journal={arXiv preprint arXiv:cs/0504062},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504062},
primaryClass={cs.CC math.PR}
} | dinur2005conditional |
arxiv-672826 | cs/0504063 | Selection in Scale-Free Small World | <|reference_start|>Selection in Scale-Free Small World: In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits a scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better new information/all submitted documents ratio. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web.<|reference_end|> | arxiv | @article{palotai2005selection,
title={Selection in Scale-Free Small World},
author={Zs. Palotai, Cs. Farkas, A. Lorincz},
journal={arXiv preprint arXiv:cs/0504063},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504063},
primaryClass={cs.LG cs.IR}
} | palotai2005selection |
arxiv-672827 | cs/0504064 | Neural-Network Techniques for Visual Mining Clinical Electroencephalograms | <|reference_start|>Neural-Network Techniques for Visual Mining Clinical Electroencephalograms: In this chapter we describe new neural-network techniques developed for visual mining clinical electroencephalograms (EEGs), the weak electrical potentials evoked by brain activity. These techniques exploit fruitful ideas of the Group Method of Data Handling (GMDH). Section 2 briefly describes the standard neural-network techniques which are able to learn well-suited classification models from data presented by relevant features. Section 3 introduces an evolving cascade neural network technique which adds new input nodes as well as new neurons to the network while the training error decreases. This algorithm is applied to recognize artifacts in the clinical EEGs. Section 4 presents the GMDH-type polynomial networks learnt from data. We applied this technique to distinguish the EEGs recorded from an Alzheimer and a healthy patient as well as to recognize EEG artifacts. Section 5 describes the new neural-network technique developed to induce multi-class concepts from data. We used this technique for inducing a 16-class concept from the large-scale clinical EEG data. Finally, we discuss the prospects of applying the neural-network techniques to clinical EEGs.<|reference_end|> | arxiv | @article{schetinin2005neural-network,
title={Neural-Network Techniques for Visual Mining Clinical
Electroencephalograms},
author={Vitaly Schetinin, Joachim Schult and Anatoly Brazhnikov},
journal={arXiv preprint arXiv:cs/0504064},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504064},
primaryClass={cs.AI}
} | schetinin2005neural-network |
arxiv-672828 | cs/0504065 | Estimating Classification Uncertainty of Bayesian Decision Tree Technique on Financial Data | <|reference_start|>Estimating Classification Uncertainty of Bayesian Decision Tree Technique on Financial Data: Bayesian averaging over classification models allows the uncertainty of classification outcomes to be evaluated, which is of crucial importance for making reliable decisions in applications such as finance, in which risks have to be estimated. The uncertainty of classification is determined by a trade-off between the amount of data available for training, the diversity of a classifier ensemble and the required performance. The interpretability of classification models can also give useful information for experts responsible for making reliable classifications. For this reason Decision Trees (DTs) seem to be attractive classification models. The required diversity of the DT ensemble can be achieved by using Bayesian model averaging over all possible DTs. In practice, the Bayesian approach can be implemented on the basis of a Markov Chain Monte Carlo (MCMC) technique of random sampling from the posterior distribution. For sampling large DTs, the MCMC method is extended by the Reversible Jump technique, which allows DTs to be induced under given priors. For the case when prior information on the DT size is unavailable, the sweeping technique, which defines the prior implicitly, reveals better performance. Within this chapter we explore the classification uncertainty of the Bayesian MCMC techniques on some datasets from the StatLog Repository and real financial data. The classification uncertainty is compared within an Uncertainty Envelope technique dealing with the class posterior distribution and a given confidence probability. This technique provides realistic estimates of the classification uncertainty which can be easily interpreted in statistical terms with the aim of risk evaluation.<|reference_end|> | arxiv | @article{schetinin2005estimating,
title={Estimating Classification Uncertainty of Bayesian Decision Tree
Technique on Financial Data},
author={Vitaly Schetinin, Jonathan E. Fieldsend, Derek Partridge, Wojtek J.
Krzanowski, Richard M. Everson, Trevor C. Bailey and Adolfo Hernandez},
journal={arXiv preprint arXiv:cs/0504065},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504065},
primaryClass={cs.AI}
} | schetinin2005estimating |
arxiv-672829 | cs/0504066 | Comparison of the Bayesian and Randomised Decision Tree Ensembles within an Uncertainty Envelope Technique | <|reference_start|>Comparison of the Bayesian and Randomised Decision Tree Ensembles within an Uncertainty Envelope Technique: Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of classification outcomes that is of crucial importance for safety critical applications. The uncertainty of classification is determined by a trade-off between the amount of data available for training, the classifier diversity and the required performance. The interpretability of MCSs can also give useful information for experts responsible for making reliable classifications. For this reason Decision Trees (DTs) seem to be attractive classification models for experts. The required diversity of MCSs exploiting such classification models can be achieved by using two techniques, the Bayesian model averaging and the randomised DT ensemble. Both techniques have revealed promising results when applied to real-world problems. In this paper we experimentally compare the classification uncertainty of the Bayesian model averaging with a restarting strategy and the randomised DT ensemble on a synthetic dataset and some domain problems commonly used in the machine learning community. To make the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo technique. The classification uncertainty is evaluated within an Uncertainty Envelope technique dealing with the class posterior distribution and a given confidence probability. Exploring a full posterior distribution, this technique produces realistic estimates which can be easily interpreted in statistical terms. In our experiments we found out that the Bayesian DTs are superior to the randomised DT ensembles within the Uncertainty Envelope technique.<|reference_end|> | arxiv | @article{schetinin2005comparison,
title={Comparison of the Bayesian and Randomised Decision Tree Ensembles within
an Uncertainty Envelope Technique},
author={Vitaly Schetinin, Jonathan E. Fieldsend, Derek Partridge, Wojtek J.
Krzanowski, Richard M. Everson, Trevor C. Bailey, and Adolfo Hernandez},
journal={Journal of Mathematical Modelling and Algorithms, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504066},
primaryClass={cs.AI}
} | schetinin2005comparison |
arxiv-672830 | cs/0504067 | An Evolving Cascade Neural Network Technique for Cleaning Sleep Electroencephalograms | <|reference_start|>An Evolving Cascade Neural Network Technique for Cleaning Sleep Electroencephalograms: Evolving Cascade Neural Networks (ECNNs) and a new training algorithm capable of selecting informative features are described. The ECNN initially learns with one input node and then evolves by adding new inputs as well as new hidden neurons. The resultant ECNN has a near minimal number of hidden neurons and inputs. The algorithm is successfully used for training ECNN to recognise artefacts in sleep electroencephalograms (EEGs) which were visually labelled by EEG-viewers. In our experiments, the ECNN outperforms the standard neural-network as well as evolutionary techniques.<|reference_end|> | arxiv | @article{schetinin2005an,
title={An Evolving Cascade Neural Network Technique for Cleaning Sleep
Electroencephalograms},
author={Vitaly Schetinin},
journal={Natural Computing Application, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504067},
primaryClass={cs.NE cs.AI}
} | schetinin2005an |
arxiv-672831 | cs/0504068 | Self-Organization of the Neuron Collective of Optimal Complexity | <|reference_start|>Self-Organization of the Neuron Collective of Optimal Complexity: The optimal complexity of neural networks is achieved when the self-organization principle is used to eliminate the contradictions that, in accordance with K. Godel's theorem on the incompleteness of systems based on axiomatics, exist in such systems. S. Beer's principle of the exterior addition, realized by A. Ivakhnenko's Heuristic Group Method of Data Handling, is used.<|reference_end|> | arxiv | @article{schetinin2005self-organization,
title={Self-Organization of the Neuron Collective of Optimal Complexity},
author={V. Schetinin and A. Kostunin},
journal={arXiv preprint arXiv:cs/0504068},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504068},
primaryClass={cs.NE cs.AI}
} | schetinin2005self-organization |
arxiv-672832 | cs/0504069 | A Neural-Network Technique to Learn Concepts from Electroencephalograms | <|reference_start|>A Neural-Network Technique to Learn Concepts from Electroencephalograms: A new technique, developed to learn multi-class concepts from clinical electroencephalograms, is presented. A desired concept is represented as a neuronal computational model consisting of the input, hidden, and output neurons. In this model the hidden neurons learn independently to classify the electroencephalogram segments presented by spectral and statistical features. This technique has been applied to the electroencephalogram data recorded from 65 sleeping healthy newborns in order to learn a brain maturation concept for newborns aged between 35 and 51 weeks. From these data, 39399 and 19670 segments were used for learning and testing the concept, respectively. As a result, the concept correctly classified 80.1% of the testing segments, or 87.7% of the 65 records.<|reference_end|> | arxiv | @article{schetinin2005a,
title={A Neural-Network Technique to Learn Concepts from Electroencephalograms},
author={Vitaly Schetinin and Joachim Schult},
journal={arXiv preprint arXiv:cs/0504069},
year={2005},
number={Theory in Biosciences, 2005},
archivePrefix={arXiv},
eprint={cs/0504069},
primaryClass={cs.NE cs.AI cs.LG}
} | schetinin2005a |
arxiv-672833 | cs/0504070 | The Combined Technique for Detection of Artifacts in Clinical Electroencephalograms of Sleeping Newborns | <|reference_start|>The Combined Technique for Detection of Artifacts in Clinical Electroencephalograms of Sleeping Newborns: In this paper we describe a new method combining the polynomial neural network and decision tree techniques in order to derive comprehensible classification rules from clinical electroencephalograms (EEGs) recorded from sleeping newborns. These EEGs are heavily corrupted by cardiac, eye movement, muscle and noise artifacts, and as a consequence some EEG features are irrelevant to classification problems. Combining the polynomial network and decision tree techniques, we discover comprehensible classification rules whilst also attempting to keep their classification error down. This technique is shown to outperform a number of commonly used machine learning techniques applied to automatically recognize artifacts in the sleep EEGs.<|reference_end|> | arxiv | @article{schetinin2005the,
title={The Combined Technique for Detection of Artifacts in Clinical
Electroencephalograms of Sleeping Newborns},
author={Vitaly Schetinin and Joachim Schult},
journal={arXiv preprint arXiv:cs/0504070},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504070},
primaryClass={cs.NE cs.AI cs.LG}
} | schetinin2005the |
arxiv-672834 | cs/0504071 | Proceedings of the Pacific Knowledge Acquisition Workshop 2004 | <|reference_start|>Proceedings of the Pacific Knowledge Acquisition Workshop 2004: Artificial intelligence (AI) research has evolved over the last few decades and knowledge acquisition research is at the core of AI research. PKAW-04 is one of three international knowledge acquisition workshops held in the Pacific Rim, Canada and Europe over the last two decades. PKAW-04 has a strong emphasis on incremental knowledge acquisition, machine learning, neural nets and active mining. The proceedings contain 19 papers that were selected by the program committee from among 24 submitted papers. All papers were peer reviewed by at least two reviewers. The papers in these proceedings cover the methods and tools, as well as the applications, related to developing expert systems or knowledge-based systems.<|reference_end|> | arxiv | @article{kang2005proceedings,
title={Proceedings of the Pacific Knowledge Acquisition Workshop 2004},
author={Byeong Ho Kang, Achim Hoffmann, Takahira Yamaguchi, Wai Kiang Yeap},
journal={arXiv preprint arXiv:cs/0504071},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504071},
primaryClass={cs.AI}
} | kang2005proceedings |
arxiv-672835 | cs/0504072 | Knowledge Representation Issues in Semantic Graphs for Relationship Detection | <|reference_start|>Knowledge Representation Issues in Semantic Graphs for Relationship Detection: An important task for Homeland Security is the prediction of threat vulnerabilities, such as through the detection of relationships between seemingly disjoint entities. A structure used for this task is a "semantic graph", also known as a "relational data graph" or an "attributed relational graph". These graphs encode relationships as "typed" links between a pair of "typed" nodes. Indeed, semantic graphs are very similar to semantic networks used in AI. The node and link types are related through an ontology graph (also known as a schema). Furthermore, each node has a set of attributes associated with it (e.g., "age" may be an attribute of a node of type "person"). Unfortunately, the selection of types and attributes for both nodes and links depends on human expertise and is somewhat subjective and even arbitrary. This subjectiveness introduces biases into any algorithm that operates on semantic graphs. Here, we raise some knowledge representation issues for semantic graphs and provide some possible solutions using recently developed ideas in the field of complex networks. In particular, we use the concept of transitivity to evaluate the relevance of individual links in the semantic graph for detecting relationships. We also propose new statistical measures for semantic graphs and illustrate these semantic measures on graphs constructed from movies and terrorism data.<|reference_end|> | arxiv | @article{barthelemy2005knowledge,
title={Knowledge Representation Issues in Semantic Graphs for Relationship
Detection},
author={Marc Barthelemy, Edmond Chow, and Tina Eliassi-Rad},
journal={Papers from the 2005 AAAI Spring Symposium, AAAI Press, 2005, pp.
91-98},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504072},
primaryClass={cs.AI physics.soc-ph}
} | barthelemy2005knowledge |
arxiv-672836 | cs/0504073 | Rendezvous Regions: A Scalable Architecture for Resource Discovery and Service Location in Large-Scale Mobile Networks | <|reference_start|>Rendezvous Regions: A Scalable Architecture for Resource Discovery and Service Location in Large-Scale Mobile Networks: In large-scale wireless networks such as mobile ad hoc and sensor networks, efficient and robust service discovery and data-access mechanisms are both essential and challenging. Rendezvous-based mechanisms provide a valuable solution for provisioning a wide range of services. In this paper, we describe Rendezvous Regions (RRs) - a novel scalable rendezvous-based architecture for wireless networks. RR is a general architecture proposed for service location and bootstrapping in ad hoc networks, in addition to data-centric storage, configuration, and task assignment in sensor networks. In RR the network topology is divided into geographical regions, where each region is responsible for a set of keys representing the services or data of interest. Each key is mapped to a region based on a hash-table-like mapping scheme. A few elected nodes inside each region are responsible for maintaining the mapped information. The service or data provider stores the information in the corresponding region and the seekers retrieve it from there. We run extensive detailed simulations, and high-level simulations and analysis, to investigate the design space, and study the architecture in various environments including node mobility and failures. We evaluate it against other approaches to identify its merits and limitations. The results show high success rate and low overhead even with dynamics. RR scales to large number of nodes and is highly robust and efficient to node failures. It is also robust to node mobility and location inaccuracy with a significant advantage over point-based rendezvous mechanisms.<|reference_end|> | arxiv | @article{seada2005rendezvous,
title={Rendezvous Regions: A Scalable Architecture for Resource Discovery and
Service Location in Large-Scale Mobile Networks},
author={Karim Seada, Ahmed Helmy},
journal={arXiv preprint arXiv:cs/0504073},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504073},
primaryClass={cs.NI}
} | seada2005rendezvous |
arxiv-672837 | cs/0504074 | Metalinguistic Information Extraction for Terminology | <|reference_start|>Metalinguistic Information Extraction for Terminology: This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MIDs), in order to help update changing glossaries, knowledge bases and ontologies, as well as to reflect the metastable dynamics of special-domain knowledge.<|reference_end|> | arxiv | @article{rodriguez2005metalinguistic,
title={Metalinguistic Information Extraction for Terminology},
author={Carlos Rodriguez},
journal={arXiv preprint arXiv:cs/0504074},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504074},
primaryClass={cs.CL cs.AI cs.IR}
} | rodriguez2005metalinguistic |
arxiv-672838 | cs/0504075 | Dichotomy for Voting Systems | <|reference_start|>Dichotomy for Voting Systems: Scoring protocols are a broad class of voting systems. Each is defined by a vector $(\alpha_1,\alpha_2,\ldots,\alpha_m)$, $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_m$, of integers such that each voter contributes $\alpha_1$ points to his/her first choice, $\alpha_2$ points to his/her second choice, and so on, and any candidate receiving the most points is a winner. What is it about scoring-protocol election systems that makes some have the desirable property of being NP-complete to manipulate, while others can be manipulated in polynomial time? We find the complete, dichotomizing answer: Diversity of dislike. Every scoring-protocol election system having two or more point values assigned to candidates other than the favorite--i.e., having $\|\{\alpha_i : 2 \leq i \leq m\}\| \geq 2$--is NP-complete to manipulate. Every other scoring-protocol election system can be manipulated in polynomial time. In effect, we show that--other than trivial systems (where all candidates always tie), plurality voting, and plurality voting's transparently disguised translations--\emph{every} scoring-protocol election system is NP-complete to manipulate.<|reference_end|> | arxiv | @article{hemaspaandra2005dichotomy,
title={Dichotomy for Voting Systems},
author={Edith Hemaspaandra and Lane A. Hemaspaandra},
journal={arXiv preprint arXiv:cs/0504075},
year={2005},
number={URCS-TR-2005-861},
archivePrefix={arXiv},
eprint={cs/0504075},
primaryClass={cs.GT cs.CC cs.MA}
} | hemaspaandra2005dichotomy |
arxiv-672839 | cs/0504076 | An Improved Remote User Authentication Scheme Using Smart Cards | <|reference_start|>An Improved Remote User Authentication Scheme Using Smart Cards: In 2000, Hwang and Li proposed a new remote user authentication scheme using smart cards. In the same year, Chan and Cheng pointed out that Hwang and Li’s scheme is not secure against the masquerade attack. Further, in 2003, Shen, Lin and Hwang pointed out a different type of attack on Hwang and Li’s scheme and presented a modified scheme to remove its security pitfalls. This paper presents an improved scheme which is secure against the Chan-Cheng attack and all the extended attacks.<|reference_end|> | arxiv | @article{kumar2005an,
title={An Improved Remote User Authentication Scheme Using Smart Cards},
author={Manoj Kumar},
journal={arXiv preprint arXiv:cs/0504076},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504076},
primaryClass={cs.CR}
} | kumar2005an |
arxiv-672840 | cs/0504077 | The Modified Scheme is still vulnerable to the parallel Session Attack | <|reference_start|>The Modified Scheme is still vulnerable to the parallel Session Attack: In 2002, Chien–Jan–Tseng introduced an efficient remote user authentication scheme using smart cards. Further, in 2004, W. C. Ku and S. M. Chen proposed an efficient remote user authentication scheme using smart cards to solve the security problems of Chien et al.’s scheme. Recently, Hsu and Yoon et al. pointed out the security weakness of Ku and Chen’s scheme. Furthermore, Yoon et al. modified the password change phase of Ku and Chen’s scheme and they also proposed a new efficient remote user authentication scheme using smart cards. This paper shows that the modified scheme of Yoon et al. is still vulnerable to the parallel session attack.<|reference_end|> | arxiv | @article{kumar2005the,
title={The Modified Scheme is still vulnerable to the parallel Session Attack},
author={Manoj Kumar},
journal={arXiv preprint arXiv:cs/0504077},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504077},
primaryClass={cs.CR}
} | kumar2005the |
arxiv-672841 | cs/0504078 | Adaptive Online Prediction by Following the Perturbed Leader | <|reference_start|>Adaptive Online Prediction by Following the Perturbed Leader: When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity/current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, for arbitrary weights there have been no results proven so far. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from Kalai & Vempala (2003) (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate and both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are new.<|reference_end|> | arxiv | @article{hutter2005adaptive,
title={Adaptive Online Prediction by Following the Perturbed Leader},
author={Marcus Hutter and Jan Poland},
journal={Journal of Machine Learning Research 6 (2005) 639--660},
year={2005},
number={IDSIA-10-05},
archivePrefix={arXiv},
eprint={cs/0504078},
primaryClass={cs.AI cs.LG}
} | hutter2005adaptive |
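A sketch of the FPL rule the abstract analyzes, with exponentially distributed perturbations as in Kalai & Vempala and an illustrative adaptive rate; the function name, the rate schedule, and the toy losses are our choices, and the paper's precise rate tuning is not reproduced:

```python
import math
import random

def fpl_choose(cum_loss, eta):
    """Follow the Perturbed Leader: play the expert that minimizes its
    cumulative loss minus an Exp(1) perturbation scaled by 1/eta."""
    return min(range(len(cum_loss)),
               key=lambda i: cum_loss[i] - random.expovariate(1.0) / eta)

# Toy run with a decreasing rate eta_t ~ 1/sqrt(t); the paper's point
# is that such adaptive tuning admits a clean loss-bound analysis.
cum_loss = [0.0, 0.0, 0.0]
for t in range(1, 1001):
    expert = fpl_choose(cum_loss, eta=1.0 / math.sqrt(t))
    step_losses = [0.3, 0.5, 0.7]        # stationary toy losses
    cum_loss = [c + l for c, l in zip(cum_loss, step_losses)]
print(cum_loss)                           # expert 0 is best in hindsight
```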
arxiv-672842 | cs/0504079 | Prediction of Large Alphabet Processes and Its Application to Adaptive Source Coding | <|reference_start|>Prediction of Large Alphabet Processes and Its Application to Adaptive Source Coding: The problem of predicting a sequence $x_1,x_2,...$ generated by a discrete source with unknown statistics is considered. Each letter $x_{t+1}$ is predicted using information on the word $x_1x_2... x_t$ only. In fact, this problem is a classical problem which has received much attention. Its history can be traced back to Laplace. We address the problem where each $x_i$ belongs to some large (or even infinite) alphabet. A method is presented for which the precision is greater than for known algorithms, where precision is estimated by the Kullback-Leibler divergence. The results can readily be translated to results about adaptive coding.<|reference_end|> | arxiv | @article{ryabko2005prediction,
title={Prediction of Large Alphabet Processes and Its Application to Adaptive
Source Coding},
author={Boris Ryabko, Jaakko Astola},
journal={arXiv preprint arXiv:cs/0504079},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504079},
primaryClass={cs.IT math.IT}
} | ryabko2005prediction |
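For orientation, the classical baseline that large-alphabet predictors refine is Laplace's rule of succession; a small sketch (names ours), not the paper's improved method:

```python
from collections import Counter

def laplace_predictor(history, alphabet):
    """Laplace's rule of succession: P(a | x_1..x_t) =
    (count(a) + 1) / (t + |alphabet|). Its Kullback-Leibler risk
    degrades as the alphabet grows, which is the regime the paper
    addresses."""
    counts = Counter(history)
    t, k = len(history), len(alphabet)
    return {a: (counts[a] + 1) / (t + k) for a in alphabet}

print(laplace_predictor("abracadabra", "abcdr"))
```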
arxiv-672843 | cs/0504080 | Performance of Gaussian Signalling in Non Coherent Rayleigh Fading Channels | <|reference_start|>Performance of Gaussian Signalling in Non Coherent Rayleigh Fading Channels: The mutual information of a discrete time memoryless Rayleigh fading channel is considered, where neither the transmitter nor the receiver has the knowledge of the channel state information except the fading statistics. We present the mutual information of this channel in closed form when the input distribution is complex Gaussian, and derive a lower bound in terms of the capacity of the corresponding non fading channel and the capacity when the perfect channel state information is known at the receiver.<|reference_end|> | arxiv | @article{perera2005performance,
title={Performance of Gaussian Signalling in Non Coherent Rayleigh Fading
Channels},
author={Rasika Perera, Tony Pollock, Thushara Abhayapala},
journal={arXiv preprint arXiv:cs/0504080},
year={2005},
number={CLN: 5-340},
archivePrefix={arXiv},
eprint={cs/0504080},
primaryClass={cs.IT math.IT}
} | perera2005performance |
arxiv-672844 | cs/0504081 | A Decomposition Approach to Multi-Vehicle Cooperative Control | <|reference_start|>A Decomposition Approach to Multi-Vehicle Cooperative Control: We present methods that generate cooperative strategies for multi-vehicle control problems using a decomposition approach. By introducing a set of tasks to be completed by the team of vehicles and a task execution method for each vehicle, we decomposed the problem into a combinatorial component and a continuous component. The continuous component of the problem is captured by task execution, and the combinatorial component is captured by task assignment. In this paper, we present a solver for task assignment that generates near-optimal assignments quickly and can be used in real-time applications. To motivate our methods, we apply them to an adversarial game between two teams of vehicles. One team is governed by simple rules and the other by our algorithms. In our study of this game we found phase transitions, showing that the task assignment problem is most difficult to solve when the capabilities of the adversaries are comparable. Finally, we implement our algorithms in a multi-level architecture with a variable replanning rate at each level to provide feedback on a dynamically changing and uncertain environment.<|reference_end|> | arxiv | @article{earl2005a,
title={A Decomposition Approach to Multi-Vehicle Cooperative Control},
author={Matthew Earl and Raffaello D'Andrea},
journal={M. G. Earl and R. D'Andrea, "A Decomposition Approach to
Multi-Vehicle Cooperative Control," Robotics and Autonomous Systems, Volume
55, Issue 4, pages 276-291, April 2007.},
year={2005},
doi={10.1016/j.robot.2006.11.002},
archivePrefix={arXiv},
eprint={cs/0504081},
primaryClass={cs.RO}
} | earl2005a |
arxiv-672845 | cs/0504082 | Coloring Artemis graphs | <|reference_start|>Coloring Artemis graphs: We consider the class A of graphs that contain no odd hole, no antihole, and no ``prism'' (a graph consisting of two disjoint triangles with three disjoint paths between them). We show that the coloring algorithm found by the second and fourth author can be implemented in time O(n^2m) for any graph in A with n vertices and m edges, thereby improving on the complexity proposed in the original paper.<|reference_end|> | arxiv | @article{lévêque2005coloring,
title={Coloring Artemis graphs},
author={Benjamin Lévêque (Leibniz - IMAG), Frédéric Maffray (Leibniz -
IMAG), Bruce Reed, Nicolas Trotignon (Leibniz - IMAG)},
journal={arXiv preprint arXiv:cs/0504082},
year={2005},
doi={10.1016/j.tcs.2009.02.012},
archivePrefix={arXiv},
eprint={cs/0504082},
primaryClass={cs.DM}
} | lévêque2005coloring |
arxiv-672846 | cs/0504083 | On the Unicity Distance of Stego Key | <|reference_start|>On the Unicity Distance of Stego Key: Steganography is about how to send secret messages covertly, and the purpose of steganalysis is not only to detect the existence of a hidden message but also to extract it. So far there have been many reliable detection methods for various steganographic algorithms, while there are few approaches that can extract the hidden information. In this paper, the difficulty of extracting hidden information, which is essentially a kind of privacy, is analyzed with an information-theoretic method in terms of the unicity distance of the steganographic key (abbreviated stego key). A lower bound for the unicity distance is obtained, which shows the relations between key rate, message rate, hiding capacity and difficulty of extraction. Furthermore, the extracting attack on steganography is viewed as a special kind of cryptanalysis, and an effective method for recovering the stego key of the popular LSB replacing steganography in spatial images is presented by combining the detection technique of steganalysis with the correlation attack of cryptanalysis. The analysis of this method and experimental results on the steganographic software ``Hide and Seek 4.1'' both accord with the information-theoretic conclusion.<|reference_end|> | arxiv | @article{weiming2005on,
title={On the Unicity Distance of Stego Key},
author={Zhang Weiming and Li Shiqu},
journal={arXiv preprint arXiv:cs/0504083},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504083},
primaryClass={cs.CR}
} | weiming2005on |
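A toy sketch of the LSB replacing embedding that the key-recovery attack targets, assuming a stego key that selects pixel positions; the pixel values and key-as-positions convention are our illustrative choices, and the attack itself (detection combined with a correlation attack) is not reproduced:

```python
def lsb_embed(pixels, bits, key_positions):
    """Write message bits into the least significant bits of the cover
    pixels selected by the stego key."""
    out = list(pixels)
    for pos, bit in zip(key_positions, bits):
        out[pos] = (out[pos] & ~1) | bit
    return out

cover = [142, 93, 201, 54, 77, 180]
stego = lsb_embed(cover, [1, 0, 1], key_positions=[0, 2, 4])
print(stego)   # [143, 93, 200, 54, 77, 180]
```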
arxiv-672847 | cs/0504084 | The Convergence of Digital-Libraries and the Peer-Review Process | <|reference_start|>The Convergence of Digital-Libraries and the Peer-Review Process: Pre-print repositories have seen a significant increase in use over the past fifteen years across multiple research domains. Researchers are beginning to develop applications capable of using these repositories to assist the scientific community above and beyond the pure dissemination of information. The contribution set forth by this paper emphasizes a deconstructed publication model in which the peer-review process is mediated by an OAI-PMH peer-review service. This peer-review service uses a social-network algorithm to determine potential reviewers for a submitted manuscript and for weighting the relative influence of each participating reviewer's evaluations. This paper also suggests a set of peer-review specific metadata tags that can accompany a pre-print's existing metadata record. The combination of these contributions provides a unique repository-centric peer-review model that fits within the widely deployed OAI-PMH framework.<|reference_end|> | arxiv | @article{rodriguez2005the,
title={The Convergence of Digital-Libraries and the Peer-Review Process},
author={Marko A. Rodriguez, Johan Bollen, Herbert Van de Sompel},
journal={Journal of Information Science, 32(2), pp. 149-159, 2006.},
year={2005},
doi={10.1177/0165551506062327},
number={http://jis.sagepub.com/cgi/content/abstract/32/2/149},
archivePrefix={arXiv},
eprint={cs/0504084},
primaryClass={cs.DL cs.CY}
} | rodriguez2005the |
arxiv-672848 | cs/0504085 | Capacity per Unit Energy of Fading Channels with a Peak Constraint | <|reference_start|>Capacity per Unit Energy of Fading Channels with a Peak Constraint: A discrete-time single-user scalar channel with temporally correlated Rayleigh fading is analyzed. There is no side information at the transmitter or the receiver. A simple expression is given for the capacity per unit energy, in the presence of a peak constraint. The simple formula of Verdu for capacity per unit cost is adapted to a channel with memory, and is used in the proof. In addition to bounding the capacity of a channel with correlated fading, the result gives some insight into the relationship between the correlation in the fading process and the channel capacity. The results are extended to a channel with side information, showing that the capacity per unit energy is one nat per Joule, independently of the peak power constraint. A continuous-time version of the model is also considered. The capacity per unit energy subject to a peak constraint (but no bandwidth constraint) is given by an expression similar to that for discrete time, and is evaluated for Gauss-Markov and Clarke fading channels.<|reference_end|> | arxiv | @article{sethuraman2005capacity,
title={Capacity per Unit Energy of Fading Channels with a Peak Constraint},
author={Vignesh Sethuraman and Bruce Hajek},
journal={arXiv preprint arXiv:cs/0504085},
year={2005},
doi={10.1109/TIT.2005.853329},
archivePrefix={arXiv},
eprint={cs/0504085},
primaryClass={cs.IT math.IT}
} | sethuraman2005capacity |
arxiv-672849 | cs/0504086 | Componentwise Least Squares Support Vector Machines | <|reference_start|>Componentwise Least Squares Support Vector Machines: This chapter describes componentwise Least Squares Support Vector Machines (LS-SVMs) for the estimation of additive models consisting of a sum of nonlinear components. The primal-dual derivations characterizing LS-SVMs for the estimation of the additive model result in a single set of linear equations with size growing in the number of data-points. The derivation is elaborated for the classification as well as the regression case. Furthermore, different techniques are proposed to discover structure in the data by looking for sparse components in the model based on dedicated regularization schemes on the one hand and fusion of the componentwise LS-SVMs training with a validation criterion on the other hand. (keywords: LS-SVMs, additive models, regularization, structure detection)<|reference_end|> | arxiv | @article{pelckmans2005componentwise,
title={Componentwise Least Squares Support Vector Machines},
author={Kristiaan Pelckmans, Ivan Goethals, Jos De Brabanter, Johan A.K.
Suykens, Bart De Moor},
journal={arXiv preprint arXiv:cs/0504086},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504086},
primaryClass={cs.LG cs.AI}
} | pelckmans2005componentwise |
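A hedged sketch of the single linear system behind LS-SVM regression, with an RBF kernel; the paper's componentwise variant instead assembles the kernel matrix as a sum of per-component kernels, and all names and hyperparameters here are ours:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM regression dual
    [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; y]
    with RBF kernel Omega[i,j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Omega = np.exp(-d2 / (2.0 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]               # bias b, dual weights alpha

X = np.random.randn(50, 2)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(50)
b, alpha = lssvm_train(X, y)
```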
arxiv-672850 | cs/0504087 | Improved direct sum theorem in classical communication complexity | <|reference_start|>Improved direct sum theorem in classical communication complexity: Withdrawn due to critical error.<|reference_end|> | arxiv | @article{jain2005improved,
title={Improved direct sum theorem in classical communication complexity},
author={Rahul Jain},
journal={arXiv preprint arXiv:cs/0504087},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504087},
primaryClass={cs.OH}
} | jain2005improved |
arxiv-672851 | cs/0504088 | Time, Space, and Energy in Reversible Computing | <|reference_start|>Time, Space, and Energy in Reversible Computing: We survey results of a quarter century of work on computation by reversible general-purpose computers (in this setting Turing machines), and general reversible simulation of irreversible computations, with respect to energy-, time- and space requirements.<|reference_end|> | arxiv | @article{vitanyi2005time,
title={Time, Space, and Energy in Reversible Computing},
author={Paul Vitanyi (CWI, University of Amsterdam, and National ICT
Australia)},
journal={arXiv preprint arXiv:cs/0504088},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504088},
primaryClass={cs.CC}
} | vitanyi2005time
arxiv-672852 | cs/0504089 | Universal Similarity | <|reference_start|>Universal Similarity: We survey a new area of parameter-free similarity distance measures useful in data-mining, pattern recognition, learning and automatic semantics extraction. Given a family of distances on a set of objects, a distance is universal up to a certain precision for that family if it minorizes every distance in the family between every two objects in the set, up to the stated precision (we do not require the universal distance to be an element of the family). We consider similarity distances for two types of objects: literal objects that as such contain all of their meaning, like genomes or books, and names for objects. The latter may have literal embodiments like the first type, but may also be abstract like ``red'' or ``christianity.'' For the first type we consider a family of computable distance measures corresponding to parameters expressing similarity according to particular features between pairs of literal objects. For the second type we consider similarity distances generated by web users corresponding to particular semantic relations between the (names for) the designated objects. For both families we give universal similarity distance measures, incorporating all particular distance measures in the family. In the first case the universal distance is based on compression and in the second case it is based on Google page counts related to search terms. In both cases experiments on a massive scale give evidence of the viability of the approaches.<|reference_end|> | arxiv | @article{vitanyi2005universal,
title={Universal Similarity},
author={Paul Vitanyi (CWI, University of Amsterdam, and National ICT
Australia)},
journal={arXiv preprint arXiv:cs/0504089},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504089},
primaryClass={cs.IR cs.AI cs.CL physics.data-an}
} | vitanyi2005universal |
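The compression-based universal distance for literal objects is, in practice, the normalized compression distance; a minimal sketch with zlib standing in for the real compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is a compressed length approximating Kolmogorov
    complexity."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"the quick brown fox" * 20, b"the quick brown dog" * 20))
```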
arxiv-672853 | cs/0504090 | Discrete Morse Theory for free chain complexes | <|reference_start|>Discrete Morse Theory for free chain complexes: We extend the combinatorial Morse complex construction to arbitrary free chain complexes, and give a short, self-contained, and elementary proof of the quasi-isomorphism between the original chain complex and its Morse complex. Even stronger, the main result states that, if $C_*$ is a free chain complex, and $\mathcal{M}$ an acyclic matching, then $C_*=C_*^{\mathcal{M}}\oplus T_*$, where $C_*^{\mathcal{M}}$ is the Morse complex generated by the critical elements, and $T_*$ is an acyclic complex.<|reference_end|> | arxiv | @article{kozlov2005discrete,
title={Discrete Morse Theory for free chain complexes},
author={Dmitry N. Kozlov},
journal={Comptes Rendus Mathematique, Acad. Sci. Paris, Ser. I 340 (2005),
pp. 867-872},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504090},
primaryClass={cs.DM math.RA}
} | kozlov2005discrete |
arxiv-672854 | cs/0504091 | A Probabilistic Upper Bound on Differential Entropy | <|reference_start|>A Probabilistic Upper Bound on Differential Entropy: A novel, non-trivial, probabilistic upper bound on the entropy of an unknown one-dimensional distribution, given the support of the distribution and a sample from that distribution, is presented. No knowledge beyond the support of the unknown distribution is required, nor is the distribution required to have a density. Previous distribution-free bounds on the cumulative distribution function of a random variable given a sample of that variable are used to construct the bound. A simple, fast, and intuitive algorithm for computing the entropy bound from a sample is provided.<|reference_end|> | arxiv | @article{destefano2005a,
title={A Probabilistic Upper Bound on Differential Entropy},
author={Joseph DeStefano and Erik Learned-Miller},
journal={arXiv preprint arXiv:cs/0504091},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504091},
primaryClass={cs.IT math.IT}
} | destefano2005a |
arxiv-672855 | cs/0504092 | On Optimality Condition of Complex Systems: Computational Evidence | <|reference_start|>On Optimality Condition of Complex Systems: Computational Evidence: A general condition determining the optimal performance of a complex system has not yet been found and the possibility of its existence is unknown. To contribute in this direction, an optimization algorithm as a complex system is presented. The performance of the algorithm for any problem is controlled as a convex function with a single optimum. To characterize the performance optimums, certain quantities of the algorithm and the problem are suggested and interpreted as their complexities. An optimality condition of the algorithm is computationally found: if the algorithm shows its best performance for a problem, then the complexity of the algorithm is in a linear relationship with the complexity of the problem. The optimality condition provides a new perspective to the subject by recognizing that the relationship between certain quantities of the complex system and the problem may determine the optimal performance.<|reference_end|> | arxiv | @article{korotkikh2005on,
title={On Optimality Condition of Complex Systems: Computational Evidence},
author={Victor Korotkikh, Galina Korotkikh and Darryl Bond},
journal={arXiv preprint arXiv:cs/0504092},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504092},
primaryClass={cs.CC cond-mat.dis-nn nlin.AO}
} | korotkikh2005on |
arxiv-672856 | cs/0504093 | A Multi-proxy Signature Scheme for Partial delegation with Warrant | <|reference_start|>A Multi-proxy Signature Scheme for Partial delegation with Warrant: In some cases, the original signer may delegate its signing power to a specified proxy group while ensuring individual accountability of each participant signer. The proxy signature scheme that achieves such a purpose is called a multi-proxy signature scheme, and the signature generated by the specified proxy group is called a multi-proxy signature for the original signer. Recently such a scheme was discussed by Lin et al. Lin's scheme is based on partial delegation by Mambo et al. In the present chapter we introduce a new multi-proxy signature scheme, which requires less computational overhead in comparison to Lin et al.'s and also fulfills the requirement of partial delegation with warrant simultaneously.<|reference_end|> | arxiv | @article{awasthi2005a,
title={A Multi-proxy Signature Scheme for Partial delegation with Warrant},
author={Amit K Awasthi and Sunder Lal},
journal={arXiv preprint arXiv:cs/0504093},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504093},
primaryClass={cs.CR}
} | awasthi2005a |
arxiv-672857 | cs/0504094 | A New Remote User Authentication Scheme Using Smart Cards with Check Digits | <|reference_start|>A New Remote User Authentication Scheme Using Smart Cards with Check Digits: Since 1981, when Lamport introduced the remote user authentication scheme using tables, plenty of schemes have been proposed, both with and without the use of tables. In 1993, Chang and Wu [5] introduced a remote password authentication scheme with smart cards. A number of remote authentication schemes with smart cards have been proposed since then. These schemes allow a valid user to log in to a remote server and access the services provided by the remote server. But still there is no scheme to authenticate a remote proxy user. In this paper we propose, for the first time, a protocol to authenticate a proxy user remotely using smart cards.<|reference_end|> | arxiv | @article{awasthi2005a,
title={A New Remote User Authentication Scheme Using Smart Cards with Check
Digits},
author={Amit K Awasthi},
journal={arXiv preprint arXiv:cs/0504094},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504094},
primaryClass={cs.CR}
} | awasthi2005a |
arxiv-672858 | cs/0504095 | An Efficient Scheme for Sensitive Message Transmission using Blind Signcryption | <|reference_start|>An Efficient Scheme for Sensitive Message Transmission using Blind Signcryption: Blind signature schemes enable a useful protocol that guarantees the anonymity of the participants, while signcryption offers authentication and confidentiality of messages at the same time and more efficiently. In this paper, we present a blind signcryption scheme that combines the functionality of blind signatures and signcryption. This blind signcryption is useful for applications that are based on anonymity, untraceability and unlinkability.<|reference_end|> | arxiv | @article{awasthi2005an,
title={An Efficient Scheme for Sensitive Message Transmission using Blind
Signcryption},
author={Amit K Awasthi and Sunder Lal},
journal={arXiv preprint arXiv:cs/0504095},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504095},
primaryClass={cs.CR}
} | awasthi2005an |
arxiv-672859 | cs/0504096 | P-Selectivity, Immunity, and the Power of One Bit | <|reference_start|>P-Selectivity, Immunity, and the Power of One Bit: We prove that P-sel, the class of all P-selective sets, is EXP-immune, but is not EXP/1-immune. That is, we prove that some infinite P-selective set has no infinite EXP-time subset, but we also prove that every infinite P-selective set has some infinite subset in EXP/1. Informally put, the immunity of P-sel is so fragile that it is pierced by a single bit of information. The above claims follow from broader results that we obtain about the immunity of the P-selective sets. In particular, we prove that for every recursive function f, P-sel is DTIME(f)-immune. Yet we also prove that P-sel is not \Pi_2^p/1-immune.<|reference_end|> | arxiv | @article{hemaspaandra2005p-selectivity,
title={P-Selectivity, Immunity, and the Power of One Bit},
author={Lane A. Hemaspaandra and Leen Torenvliet},
journal={arXiv preprint arXiv:cs/0504096},
year={2005},
number={URCS-TR-2005-864},
archivePrefix={arXiv},
eprint={cs/0504096},
primaryClass={cs.CC}
} | hemaspaandra2005p-selectivity
arxiv-672860 | cs/0504097 | ID-based Ring Signature and Proxy Ring Signature Schemes from Bilinear Pairings | <|reference_start|>ID-based Ring Signature and Proxy Ring Signature Schemes from Bilinear Pairings: In 2001, Rivest et al. first introduced the concept of ring signatures. A ring signature is a simplified group signature without any manager. It protects the anonymity of a signer. The first scheme proposed by Rivest et al. was based on the RSA cryptosystem and the certificate-based public key setting. The first ring signature scheme based on DLP was proposed by Abe, Ohkubo, and Suzuki. Their scheme is also based on the general certificate-based public key setting. In 2002, Zhang and Kim proposed a new ID-based ring signature scheme using pairings. Later Lin and Wu proposed a more efficient ID-based ring signature scheme. Both of these schemes have some computational inconsistencies. In this paper we propose a new ID-based ring signature scheme and a proxy ring signature scheme. Both schemes are more efficient than existing ones. These schemes also take care of the inconsistencies in the above two schemes.<|reference_end|> | arxiv | @article{awasthi2005id-based,
title={ID-based Ring Signature and Proxy Ring Signature Schemes from Bilinear
Pairings},
author={Amit K Awasthi and Sunder Lal},
journal={arXiv preprint arXiv:cs/0504097},
year={2005},
doi={10.13140/2.1.2549.1529},
archivePrefix={arXiv},
eprint={cs/0504097},
primaryClass={cs.CR}
} | awasthi2005id-based |
arxiv-672861 | cs/0504099 | The Capacity of Random Ad hoc Networks under a Realistic Link Layer Model | <|reference_start|>The Capacity of Random Ad hoc Networks under a Realistic Link Layer Model: The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only O(1/n) (instead of Theta(1/sqrt{n log{n}}) as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to O(1/(K_n sqrt{n log{n}})), where each cell gets a transmission opportunity at least once every K_n slots, and K_n tends to infinity as n tends to infinity.<|reference_end|> | arxiv | @article{mhatre2005the,
author={Vivek P. Mhatre, Catherine P. Rosenberg},
journal={arXiv preprint arXiv:cs/0504099},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504099},
primaryClass={cs.IT cs.NI math.IT}
} | mhatre2005the |
arxiv-672862 | cs/0504100 | A DNA Sequence Compression Algorithm Based on LUT and LZ77 | <|reference_start|>A DNA Sequence Compression Algorithm Based on LUT and LZ77: This article introduces a new DNA sequence compression algorithm based on a LUT (look-up table) and the LZ77 algorithm. Combining a LUT-based pre-coding routine with an LZ77 compression routine, this algorithm can approach a compression ratio of 1.9 bits/base and even lower. The biggest advantages of this algorithm are fast execution, small memory occupation and easy implementation.<|reference_end|> | arxiv | @article{bao2005a,
title={A DNA Sequence Compression Algorithm Based on LUT and LZ77},
author={Sheng Bao, Shi Chen, Zhiqiang Jing, Ran Ren},
journal={arXiv preprint arXiv:cs/0504100},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504100},
primaryClass={cs.IT math.IT}
} | bao2005a |
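A guess at the flavor of the two-stage pipeline, assuming the LUT stage packs each base into 2 bits (four bases per byte) before an LZ77-family pass; zlib stands in for the paper's own LZ77 routine, and the lookup table below is our assumption:

```python
import zlib

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}   # assumed 2-bit lookup table

def pack2bit(seq: str) -> bytes:
    """Pack an ACGT string four bases per byte (a decoder would also
    need the sequence length to undo the final partial byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

seq = "ACGTTTGACA" * 400
packed = zlib.compress(pack2bit(seq))
print(8.0 * len(packed) / len(seq), "bits/base")
```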
arxiv-672863 | cs/0504101 | Single-solution Random 3-SAT Instances | <|reference_start|>Single-solution Random 3-SAT Instances: We study a class of random 3-SAT instances having exactly one solution. The properties of this ensemble considerably differ from those of a random 3-SAT ensemble. It is numerically shown that the running time of several complete and stochastic local search algorithms monotonically increases as the clause density is decreased. Therefore, there is no easy-hard-easy pattern of hardness as for standard random 3-SAT ensemble. Furthermore, the running time for short single-solution formulas increases with the problem size much faster than for random 3-SAT formulas from the phase transition region.<|reference_end|> | arxiv | @article{znidaric2005single-solution,
title={Single-solution Random 3-SAT Instances},
author={Marko Znidaric},
journal={arXiv preprint arXiv:cs/0504101},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504101},
primaryClass={cs.AI cs.CC cs.DM}
} | znidaric2005single-solution |
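A brute-force sketch (ours) of the filter needed to build the single-solution ensemble: count satisfying assignments and keep formulas with exactly one. Exhaustive, so usable only for small n:

```python
from itertools import product

def count_solutions(clauses, n):
    """Count satisfying assignments of a CNF formula over n variables;
    clauses hold signed 1-based literals, e.g. (1, -2, 3) encodes
    (x1 or not x2 or x3)."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n))

clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1), (-2, -3, 1)]
print(count_solutions(clauses, 3))
```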
arxiv-672864 | cs/0504102 | Spectral Orbits and Peak-to-Average Power Ratio of Boolean Functions with respect to the I,H,N^n Transform | <|reference_start|>Spectral Orbits and Peak-to-Average Power Ratio of Boolean Functions with respect to the I,H,N^n Transform: We enumerate the inequivalent self-dual additive codes over GF(4) of blocklength n, thereby extending the sequence A090899 in The On-Line Encyclopedia of Integer Sequences from n = 9 to n = 12. These codes have a well-known interpretation as quantum codes. They can also be represented by graphs, where a simple graph operation generates the orbits of equivalent codes. We highlight the regularity and structure of some graphs that correspond to codes with high distance. The codes can also be interpreted as quadratic Boolean functions, where inequivalence takes on a spectral meaning. In this context we define PAR_IHN, peak-to-average power ratio with respect to the {I,H,N}^n transform set. We prove that PAR_IHN of a Boolean function is equivalent to the size of the maximum independent set over the associated orbit of graphs. Finally we propose a construction technique to generate Boolean functions with low PAR_IHN and algebraic degree higher than 2.<|reference_end|> | arxiv | @article{danielsen2005spectral,
title={Spectral Orbits and Peak-to-Average Power Ratio of Boolean Functions
with respect to the {I,H,N}^n Transform},
author={Lars Eirik Danielsen (1), Matthew G. Parker (1) ((1) University of
Bergen)},
journal={In Sequences and Their Applications -- SETA 2004, edited by T.
Helleseth, D. Sarwate, H.-Y. Song, and K. Yang, Lecture Notes in Comput.
Sci., volume 3486, pp. 373--388, Springer-Verlag, Berlin, May 2005.},
year={2005},
doi={10.1007/11423461_28},
archivePrefix={arXiv},
eprint={cs/0504102},
primaryClass={cs.IT math.IT}
} | danielsen2005spectral |
arxiv-672865 | cs/0504103 | Incremental Medians via Online Bidding | <|reference_start|>Incremental Medians via Online Bidding: In the k-median problem we are given sets of facilities and customers, and distances between them. For a given set F of facilities, the cost of serving a customer u is the minimum distance between u and a facility in F. The goal is to find a set F of k facilities that minimizes the sum, over all customers, of their service costs. Following Mettu and Plaxton, we study the incremental medians problem, where k is not known in advance, and the algorithm produces a nested sequence of facility sets where the kth set has size k. The algorithm is c-cost-competitive if the cost of each set is at most c times the cost of the optimum set of size k. We give improved incremental algorithms for the metric version: an 8-cost-competitive deterministic algorithm, a 2e ~ 5.44-cost-competitive randomized algorithm, a (24+epsilon)-cost-competitive, poly-time deterministic algorithm, and a (6e+epsilon ~ 16.31)-cost-competitive, poly-time randomized algorithm. The algorithm is s-size-competitive if the cost of the kth set is at most the minimum cost of any set of size k, and has size at most s k. The optimal size-competitive ratios for this problem are 4 (deterministic) and e (randomized). We present the first poly-time O(log m)-size-approximation algorithm for the offline problem and the first poly-time O(log m)-size-competitive algorithm for the incremental problem. Our proofs reduce incremental medians to the following online bidding problem: faced with an unknown threshold T, an algorithm submits "bids" until it submits a bid that is at least the threshold. It pays the sum of all its bids. We prove that folklore algorithms for online bidding are optimally competitive.<|reference_end|> | arxiv | @article{chrobak2005incremental,
title={Incremental Medians via Online Bidding},
author={Marek Chrobak and Claire Kenyon and John Noga and Neal E. Young},
journal={Algorithmica 50(4):455-478(2008)},
year={2005},
doi={10.1007/s00453-007-9005-x},
archivePrefix={arXiv},
eprint={cs/0504103},
primaryClass={cs.DS}
} | chrobak2005incremental |
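The folklore strategy behind the online bidding problem is doubling; a minimal sketch showing its total payment stays below 4T, matching the optimal deterministic ratio of 4 stated above:

```python
def doubling_bids(T: float) -> float:
    """Bid 1, 2, 4, ... until a bid reaches the unknown threshold T;
    return the sum of all bids, which is < 4T whenever T >= 1."""
    paid, bid = 0.0, 1.0
    while bid < T:
        paid += bid
        bid *= 2.0
    return paid + bid

for T in (1.5, 10.0, 1000.0):
    print(T, doubling_bids(T), doubling_bids(T) / T)
```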
arxiv-672866 | cs/0504104 | The reverse greedy algorithm for the metric k-median problem | <|reference_start|>The reverse greedy algorithm for the metric k-median problem: The Reverse Greedy algorithm (RGreedy) for the k-median problem works as follows. It starts by placing facilities on all nodes. At each step, it removes a facility to minimize the resulting total distance from the customers to the remaining facilities. It stops when k facilities remain. We prove that, if the distance function is metric, then the approximation ratio of RGreedy is between ?(log n/ log log n) and O(log n).<|reference_end|> | arxiv | @article{chrobak2005the,
title={The reverse greedy algorithm for the metric k-median problem},
author={Marek Chrobak and Claire Kenyon and Neal E. Young},
journal={Information Processing Letters 97:68-72(2006)},
year={2005},
doi={10.1016/j.ipl.2005.09.009},
archivePrefix={arXiv},
eprint={cs/0504104},
primaryClass={cs.DS}
} | chrobak2005the |
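A direct, unoptimized sketch of RGreedy exactly as described above (names and the toy metric are ours):

```python
def reverse_greedy(dist, k):
    """Open facilities on all n nodes, then repeatedly close the one
    whose removal increases the total service cost the least, until k
    facilities remain. dist is an n x n metric."""
    n = len(dist)
    open_f = set(range(n))

    def cost(F):
        return sum(min(dist[u][f] for f in F) for u in range(n))

    while len(open_f) > k:
        f = min(open_f, key=lambda g: cost(open_f - {g}))
        open_f.remove(f)
    return open_f

dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
print(reverse_greedy(dist, 1))   # {1}: the central node serves all
```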
arxiv-672867 | cs/0504105 | Wikis in Tuple Spaces | <|reference_start|>Wikis in Tuple Spaces: We consider storing the pages of a wiki in a tuple space and the effects this might have on the wiki experience. In particular, wiki pages are stored in tuples with a few identifying values such as title, author, revision date, content, etc. and pages are retrieved by sending the tuple space templates, such as one that gives the title but nothing else, leaving the tuple space to resolve to a single tuple. We use a tuple space wiki to avoid deadlocks, infinite loops, and wasted efforts when page edit contention arises and examine how a tuple space wiki changes the wiki experience.<|reference_end|> | arxiv | @article{worley2005wikis,
title={Wikis in Tuple Spaces},
author={G Gordon Worley III},
journal={arXiv preprint arXiv:cs/0504105},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504105},
primaryClass={cs.DC cs.MM}
} | worley2005wikis |
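A toy model (ours) of the template matching such a tuple-space wiki relies on: fetch a page by title, leaving author and revision as wildcards, and let revision numbers resolve edit contention:

```python
def match(template, tup):
    """Tuple-space matching: None fields are wildcards, all other
    fields must match exactly (a toy stand-in for rd/in operations)."""
    return len(template) == len(tup) and all(
        t is None or t == v for t, v in zip(template, tup))

space = [
    ("HomePage", "alice", 3, "welcome..."),
    ("HomePage", "bob", 4, "welcome, edited..."),
]
hits = [t for t in space if match(("HomePage", None, None, None), t)]
print(max(hits, key=lambda t: t[2]))     # latest revision wins
```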
arxiv-672868 | cs/0504106 | A Distributed Multimedia Communication System and its Applications to E-Learning | <|reference_start|>A Distributed Multimedia Communication System and its Applications to E-Learning: In this paper we report on a multimedia communication system including a VCoIP (Video Conferencing over IP) software with a distributed architecture and its applications for teaching scenarios. It is a simple, ready-to-use scheme for distributed presenting, recording and streaming multimedia content. We also introduce and investigate concepts and experiments for IPv6 user and session mobility, with a special focus on real-time video group communication.<|reference_end|> | arxiv | @article{cycon2005a,
title={A Distributed Multimedia Communication System and its Applications to
E-Learning},
author={Hans L. Cycon, Thomas C. Schmidt, Matthias Waehlisch, Mark Palkow and
Henrik Regensburg},
journal={IEEE International Symposium on Consumer Electronics, Sept. 1-3,
2004, Page(s):425 - 429},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504106},
primaryClass={cs.MM cs.NI}
} | cycon2005a |
arxiv-672869 | cs/0504107 | k-core decomposition: a tool for the visualization of large scale networks | <|reference_start|>k-core decomposition: a tool for the visualization of large scale networks: We use the k-core decomposition to visualize large scale complex networks in two dimensions. This decomposition, based on a recursive pruning of the least connected vertices, allows us to disentangle the hierarchical structure of networks by progressively focusing on their central cores. By using this strategy we develop a general visualization algorithm that can be used to compare the structural properties of various networks and highlight their hierarchical structure. The low computational complexity of the algorithm, O(n+e), where 'n' is the size of the network, and 'e' is the number of edges, makes it suitable for the visualization of very large sparse networks. We apply the proposed visualization tool to several real and synthetic graphs, showing its utility in finding specific structural fingerprints of computer-generated and real-world networks.<|reference_end|> | arxiv | @article{alvarez-hamelin2005k-core,
title={k-core decomposition: a tool for the visualization of large scale
networks},
author={José Ignacio Alvarez-Hamelin (LPTO), Luca Dall'Asta (LPTO), Alain
Barrat (LPTO), Alessandro Vespignani (LPTO)},
journal={Advances in Neural Information Processing Systems 18, Canada
(2006) 41},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504107},
primaryClass={cs.NI cs.GR}
} | alvarez-hamelin2005k-core |
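A compact sketch of the recursive pruning behind the k-core decomposition; this heap version runs in O((n+e) log n), while the bucket-queue variant attains the O(n+e) bound quoted above:

```python
import heapq

def core_numbers(adj):
    """k-core decomposition by recursive pruning: repeatedly delete a
    minimum-degree vertex; the core number of v is the largest k such
    that v survives pruning to minimum degree k."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, core, k = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                     # stale heap entry
        k = max(k, d)
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return core

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(core_numbers(adj))                 # {4: 1, 1: 2, 2: 2, 3: 2}
```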
arxiv-672870 | cs/0504108 | Cooperative Game Theory within Multi-Agent Systems for Systems Scheduling | <|reference_start|>Cooperative Game Theory within Multi-Agent Systems for Systems Scheduling: Research concerning organization and coordination within multi-agent systems continues to draw from a variety of architectures and methodologies. The work presented in this paper combines techniques from game theory and multi-agent systems to produce self-organizing, polymorphic, lightweight, embedded agents for systems scheduling within a large-scale real-time systems environment. Results show how this approach is used to experimentally produce optimum real-time scheduling through the emergent behavior of thousands of agents. These results are obtained using a SWARM simulation of systems scheduling within a High Energy Physics experiment consisting of 2500 digital signal processors.<|reference_end|> | arxiv | @article{messie2005cooperative,
title={Cooperative Game Theory within Multi-Agent Systems for Systems
Scheduling},
author={Derek Messie (Syracuse University) and Jae C. Oh (Syracuse University)},
journal={arXiv preprint arXiv:cs/0504108},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504108},
primaryClass={cs.AI cs.MA}
} | messie2005cooperative |
arxiv-672871 | cs/0504109 | Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems | <|reference_start|>Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems: This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an `expert system' that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.<|reference_end|> | arxiv | @article{messie2005prototype,
title={Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time
Systems},
author={Derek Messie (1), Mina Jung (1), Jae C. Oh (1), Shweta Shetty (2),
Steven Nordstrom (2), Michael Haney (3) ((1) Syracuse University, (2)
Vanderbilt University, (3) University of Illinois at Urbana-Champaign)},
journal={arXiv preprint arXiv:cs/0504109},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504109},
primaryClass={cs.SE}
} | messie2005prototype |
arxiv-672872 | cs/0504110 | Computing finite-dimensional bipartite quantum separability | <|reference_start|>Computing finite-dimensional bipartite quantum separability: Ever since entanglement was identified as a computational and cryptographic resource, effort has been made to find an efficient way to tell whether a given density matrix represents an unentangled, or separable, state. Essentially, this is the quantum separability problem. Chapters 1 to 3 motivate a new interior-point algorithm which, given the expected values of a subset of an orthogonal basis of observables of an otherwise unknown quantum state, searches for an entanglement witness in the span of the subset of observables. When all the expected values are known, the algorithm solves the separability problem. In Chapter 4, I give the motivation for the algorithm and show how it can be used in a particular physical scenario to detect entanglement (or decide separability) of an unknown quantum state using as few quantum resources as possible. I then explain the intuitive idea behind the algorithm and relate it to the standard algorithms of its kind. I end the chapter with a comparison of the complexities of the algorithms surveyed in Chapter 3. Finally, in Chapter 5, I present the details of the algorithm and discuss its performance relative to standard methods.<|reference_end|> | arxiv | @article{ioannou2005computing,
title={Computing finite-dimensional bipartite quantum separability},
author={Lawrence M. Ioannou},
journal={arXiv preprint arXiv:cs/0504110},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504110},
primaryClass={cs.DS quant-ph}
} | ioannou2005computing |
arxiv-672873 | cs/0504111 | Efficient and Robust Geocasting Protocols for Sensor Networks | <|reference_start|>Efficient and Robust Geocasting Protocols for Sensor Networks: Geocasting is the delivery of packets to nodes within a certain geographic area. For many applications in wireless ad hoc and sensor networks, geocasting is an important and frequent communication service. The challenging problem in geocasting is distributing the packets to all the nodes within the geocast region with high probability but with low overhead. According to our study we notice a clear tradeoff between the proportion of nodes in the geocast region that receive the packet and the overhead incurred by the geocast packet especially at low densities and irregular distributions. We present two novel protocols for geocasting that achieve high delivery rate and low overhead by utilizing the local location information of nodes to combine geographic routing mechanisms with region flooding. We show that the first protocol Geographic-Forwarding-Geocast (GFG) has close-to-minimum overhead in dense networks and that the second protocol Geographic-Forwarding-Perimeter-Geocast (GFPG) provides guaranteed delivery without global flooding or global network information even at low densities and with the existence of region gaps or obstacles. An adaptive version of the second protocol (GFPG*) has the desirable property of perfect delivery at all densities and close-to-minimum overhead at high densities. We evaluate our mechanisms and compare them using simulation to other proposed geocasting mechanisms. The results show the significant improvement in delivery rate (up to 63% higher delivery percentage in low density networks) and reduction in overhead (up to 80% reduction) achieved by our mechanisms. We hope for our protocols to become building block mechanisms for dependable sensor network architectures that require robust efficient geocast services.<|reference_end|> | arxiv | @article{seada2005efficient,
title={Efficient and Robust Geocasting Protocols for Sensor Networks},
author={Karim Seada, Ahmed Helmy},
journal={arXiv preprint arXiv:cs/0504111},
year={2005},
archivePrefix={arXiv},
eprint={cs/0504111},
primaryClass={cs.NI}
} | seada2005efficient |
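A sketch of the greedy geographic-forwarding step that GFG combines with flooding inside the geocast region; the coordinate scheme and the None-at-local-maximum convention (where GFPG's perimeter mode would take over) are our illustrative choices:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_next_hop(me, neighbors, region_center):
    """Forward to the neighbor closest to the region's center, or
    return None at a local maximum (no neighbor closer than we are)."""
    best = min(neighbors, key=lambda n: dist(n, region_center), default=None)
    if best is None or dist(best, region_center) >= dist(me, region_center):
        return None
    return best

print(greedy_next_hop((0, 0), [(1, 1), (2, 0)], (5, 0)))   # -> (2, 0)
```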
arxiv-672874 | cs/0505001 | Modelling investment in artificial stock markets: Analytical and Numerical Results | <|reference_start|>Modelling investment in artificial stock markets: Analytical and Numerical Results: In this article we study the behavior of a group of economic agents in the context of cooperative game theory, interacting according to rules based on the Potts Model with suitable modifications. Each agent can be thought of as belonging to a chain, where agents can only interact with their nearest neighbors (periodic boundary conditions are imposed). Each agent can invest an amount $\sigma_i = 0, \ldots, q-1$. Using the transfer matrix method we study analytically, among other things, the behavior of the investment as a function of a control parameter (denoted $\beta$) for the cases $q=2$ and $q=3$. For $q>3$, numerical evaluation of eigenvalues and high-precision numerical derivatives are used in order to assess this information.<|reference_end|> | arxiv | @article{da silva2005modelling,
title={Modelling investment in artificial stock markets: Analytical and
Numerical Results},
author={Roberto da Silva, Alexandre Tavares Baraviera, Silvio R. Dahmen},
journal={arXiv preprint arXiv:cs/0505001},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505001},
primaryClass={cs.CE}
} | da silva2005modelling |
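For readers new to the method, a transfer-matrix sketch for the plain 1-D q-state Potts chain with periodic boundaries; the paper's modified agent interaction is not reproduced, and the coupling J is our illustrative choice:

```python
import numpy as np

def potts_transfer_matrix(q, beta, J=1.0):
    """T[s, s'] = exp(beta * J * delta(s, s')); with periodic
    boundaries Z = Tr(T^N), and observables follow from the
    eigenvalues of T."""
    return np.exp(beta * J * np.eye(q))

q, beta = 3, 0.5
T = potts_transfer_matrix(q, beta)
lam_max = np.linalg.eigvalsh(T).max()    # equals exp(beta*J) - 1 + q
print(-np.log(lam_max) / beta)           # free energy per site, N -> inf
```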
arxiv-672875 | cs/0505002 | Tight Lower Bounds for Query Processing on Streaming and External Memory Data | <|reference_start|>Tight Lower Bounds for Query Processing on Streaming and External Memory Data: We study a clean machine model for external memory and stream processing. We show that the number of scans of the external data induces a strict hierarchy (as long as work space is sufficiently small, e.g., polylogarithmic in the size of the input). We also show that neither joins nor sorting are feasible if the product of the number $r(n)$ of scans of the external memory and the size $s(n)$ of the internal memory buffers is sufficiently small, e.g., of size $o(\sqrt[5]{n})$. We also establish tight bounds for the complexity of XPath evaluation and filtering.<|reference_end|> | arxiv | @article{grohe2005tight,
title={Tight Lower Bounds for Query Processing on Streaming and External Memory
Data},
author={Martin Grohe, Christoph Koch, Nicole Schweikardt},
journal={arXiv preprint arXiv:cs/0505002},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505002},
primaryClass={cs.DB cs.CC}
} | grohe2005tight |
arxiv-672876 | cs/0505003 | A New Kind of Hopfield Networks for Finding Global Optimum | <|reference_start|>A New Kind of Hopfield Networks for Finding Global Optimum: The Hopfield network has been applied to solve optimization problems for decades. However, it still has many limitations in accomplishing this task, most of them inherited from the optimization algorithms it implements. The computation of a Hopfield network, defined by a set of difference equations, can easily be trapped in one local optimum or another, and is sensitive to initial conditions, perturbations, and neuron update orders. It is not known how long the computation will take to converge, nor whether the final solution is a global optimum. In this paper, we present a Hopfield network with a new set of difference equations to fix those problems. The difference equations directly implement a new powerful optimization algorithm.<|reference_end|> | arxiv | @article{huang2005a,
title={A New Kind of Hopfield Networks for Finding Global Optimum},
author={Xiaofei Huang},
journal={arXiv preprint arXiv:cs/0505003},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505003},
primaryClass={cs.NE}
} | huang2005a |
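The baseline dynamics whose pitfalls the abstract lists, sketched for a binary network (the weights and toy instance are ours; the paper's new difference equations are not shown). The run itself can end in either fixed point depending on update order, exhibiting the sensitivity criticized above:

```python
import numpy as np

def hopfield_descent(W, b, x, sweeps=100):
    """Asynchronous updates minimizing E(x) = -x^T W x / 2 - b^T x
    over x in {0, 1}^n, for symmetric W with zero diagonal."""
    for _ in range(sweeps):
        changed = False
        for i in np.random.permutation(len(x)):
            new = 1 if W[i] @ x + b[i] > 0 else 0
            if new != x[i]:
                x[i], changed = new, True
        if not changed:
            break                        # a fixed point, possibly local
    return x

W = np.array([[0.0, 1.0], [1.0, 0.0]])   # mutual excitation
b = np.array([-0.5, -0.5])
print(hopfield_descent(W, b, np.array([1, 0])))   # [0 0] or [1 1]
```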
arxiv-672877 | cs/0505004 | Pluggable AOP: Designing Aspect Mechanisms for Third-party Composition | <|reference_start|>Pluggable AOP: Designing Aspect Mechanisms for Third-party Composition: Studies of Aspect-Oriented Programming (AOP) usually focus on a language in which a specific aspect extension is integrated with a base language. Languages specified in this manner have a fixed, non-extensible AOP functionality. In this paper we consider the more general case of integrating a base language with a set of domain specific third-party aspect extensions for that language. We present a general mixin-based method for implementing aspect extensions in such a way that multiple, independently developed, dynamic aspect extensions can be subject to third-party composition and work collaboratively.<|reference_end|> | arxiv | @article{kojarski2005pluggable,
title={Pluggable AOP: Designing Aspect Mechanisms for Third-party Composition},
author={Sergei Kojarski and David H. Lorenz},
journal={(new version) In Proceedings of the 20th Annual ACM SIGPLAN
Conference on Object Oriented Programming Systems Languages and Applications
(San Diego, CA, USA, October 16 - 20, 2005). OOPSLA '05. ACM Press, New York,
NY, 247-263.},
year={2005},
doi={10.1145/1094811.1094831},
archivePrefix={arXiv},
eprint={cs/0505004},
primaryClass={cs.SE cs.PL}
} | kojarski2005pluggable |
arxiv-672878 | cs/0505005 | Defragmenting the Module Layout of a Partially Reconfigurable Device | <|reference_start|>Defragmenting the Module Layout of a Partially Reconfigurable Device: Modern generations of field-programmable gate arrays (FPGAs) allow for partial reconfiguration. In an online context, where the sequence of modules to be loaded on the FPGA is unknown beforehand, repeated insertion and deletion of modules leads to progressive fragmentation of the available space, making defragmentation an important issue. We address this problem by proposing an online and an offline component for the defragmentation of the available space. We consider defragmenting the module layout on a reconfigurable device. This corresponds to solving a two-dimensional strip packing problem. Problems of this type are NP-hard in the strong sense, and previous algorithmic results are rather limited. Based on a graph-theoretic characterization of feasible packings, we develop a method that can solve two-dimensional defragmentation instances of practical size to optimality. Our approach is validated for a set of benchmark instances.<|reference_end|> | arxiv | @article{van der veen2005defragmenting,
title={Defragmenting the Module Layout of a Partially Reconfigurable Device},
author={Jan van der Veen and Sandor P. Fekete and Ali Ahmadinia and Christophe
Bobda and Frank Hannig and Juergen Teich},
journal={arXiv preprint arXiv:cs/0505005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505005},
primaryClass={cs.AR cs.DS}
} | van der veen2005defragmenting |
arxiv-672879 | cs/0505006 | Searching for image information content, its discovery, extraction, and representation | <|reference_start|>Searching for image information content, its discovery, extraction, and representation: Image information content is known to be a complicated and controversial problem. This paper posits a new image information content definition. Following the theory of Solomonoff-Kolmogorov-Chaitin complexity, we define image information content as a set of descriptions of image data structures. Three levels of such description can generally be distinguished: 1) the global level, where the coarse structure of the entire scene is initially outlined; 2) the intermediate level, where structures of separate, non-overlapping image regions usually associated with individual scene objects are delineated; and 3) the low-level description, where local image structures observed in a limited and restricted field of view are resolved. A technique for creating such image information content descriptors is developed. Its algorithm is presented and elucidated with some examples, which demonstrate the effectiveness of the proposed approach.<|reference_end|> | arxiv | @article{diamant2005searching,
title={Searching for image information content, its discovery, extraction, and
representation},
author={Emanuel Diamant},
journal={Journal of Electronic Imaging, vol. 14, issue 1, article 013016,
Jan-Mar 2005},
year={2005},
doi={10.1117/1.1867476},
archivePrefix={arXiv},
eprint={cs/0505006},
primaryClass={cs.CV}
} | diamant2005searching |
arxiv-672880 | cs/0505007 | Adaptive Codes: A New Class of Non-standard Variable-length Codes | <|reference_start|>Adaptive Codes: A New Class of Non-standard Variable-length Codes: We introduce a new class of non-standard variable-length codes, called adaptive codes. This class of codes associates a variable-length codeword to the symbol being encoded depending on the previous symbols in the input data string. An efficient algorithm for constructing adaptive codes of order one is presented. Then, we introduce a natural generalization of adaptive codes, called GA codes.<|reference_end|> | arxiv | @article{trinca2005adaptive,
title={Adaptive Codes: A New Class of Non-standard Variable-length Codes},
author={Dragos Trinca},
journal={arXiv preprint arXiv:cs/0505007},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505007},
primaryClass={cs.DS}
} | trinca2005adaptive |
arxiv-672881 | cs/0505008 | Data Mining on Crash Simulation Data | <|reference_start|>Data Mining on Crash Simulation Data: The work presented in this paper is part of the cooperative research project AUTO-OPT carried out by twelve partners from the automotive industries. One major work package concerns the application of data mining methods in the area of automotive design. Suitable methods for data preparation and data analysis are developed. The objective of the work is the re-use of data stored in the crash-simulation department at BMW in order to gain deeper insight into the interrelations between the geometric variations of the car during its design and its performance in crash testing. In this paper a method for data analysis of finite element models and results from crash simulation is proposed and application to recent data from the industrial partner BMW is demonstrated. All necessary steps from data pre-processing to re-integration into the working environment of the engineer are covered.<|reference_end|> | arxiv | @article{kuhlmann2005data,
title={Data Mining on Crash Simulation Data},
author={A. Kuhlmann, R.-M. Vetter, Ch. Luebbing, C.-A. Thole},
journal={Lecture Notes in Computer Science, Lecture Notes in Artificial
Intelligence, Proceedings Conference MLDM 2005, Leipzig/Germany, Springer
Verlag, LNAI 3587, ISBN: 3-540-26923-1, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505008},
primaryClass={cs.IR cs.CE}
} | kuhlmann2005data |
arxiv-672882 | cs/0505009 | Human being is a living random number generator | <|reference_start|>Human being is a living random number generator: The general wisdom is that mathematical operations are needed to generate numbers from numbers. It is pointed out that, without any mathematical operation, true random numbers can be generated by numbers through an algorithmic process. This implies that the human brain itself is a living true random number generator. The human brain can meet the enormous human demand for true random numbers.<|reference_end|> | arxiv | @article{mitra2005human,
title={Human being is a living random number generator},
author={Arindam Mitra},
journal={arXiv preprint arXiv:cs/0505009},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505009},
primaryClass={cs.DS}
} | mitra2005human |
arxiv-672883 | cs/0505010 | On the Wyner-Ziv problem for individual sequences | <|reference_start|>On the Wyner-Ziv problem for individual sequences: We consider a variation of the Wyner-Ziv problem pertaining to lossy compression of individual sequences using finite-state encoders and decoders. There are two main results in this paper. The first characterizes the relationship between the performance of the best $M$-state encoder-decoder pair to that of the best block code of size $\ell$ for every input sequence, and shows that the loss of the latter relative to the former (in terms of both rate and distortion) never exceeds the order of $(\log M)/\ell$, independently of the input sequence. Thus, in the limit of large $M$, the best rate-distortion performance of every infinite source sequence can be approached universally by a sequence of block codes (which are also implementable by finite-state machines). While this result assumes an asymptotic regime where the number of states is fixed, and only the length $n$ of the input sequence grows without bound, we then consider the case where the number of states $M=M_n$ is allowed to grow concurrently with $n$. Our second result is then about the critical growth rate of $M_n$ such that the rate-distortion performance of $M_n$-state encoder-decoder pairs can still be matched by a universal code. We show that this critical growth rate of $M_n$ is linear in $n$.<|reference_end|> | arxiv | @article{merhav2005on,
title={On the Wyner-Ziv problem for individual sequences},
author={Neri Merhav and Jacob Ziv},
journal={arXiv preprint arXiv:cs/0505010},
year={2005},
number={CCIT Report #517, Department of Electrical Engineering, Technion,
February 2005},
archivePrefix={arXiv},
eprint={cs/0505010},
primaryClass={cs.IT math.IT}
} | merhav2005on |
arxiv-672884 | cs/0505011 | SWiM: A Simple Window Mover | <|reference_start|>SWiM: A Simple Window Mover: As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display, it is not displayed on the map. Instead, users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users who were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use.<|reference_end|> | arxiv | @article{chang2005swim:,
title={SWiM: A Simple Window Mover},
author={Tony Chang, Damon Cook, Ramona Su},
journal={arXiv preprint arXiv:cs/0505011},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505011},
primaryClass={cs.HC}
} | chang2005swim: |
arxiv-672885 | cs/0505012 | On the Shannon cipher system with a capacity-limited key-distribution channel | <|reference_start|>On the Shannon cipher system with a capacity-limited key-distribution channel: We consider the Shannon cipher system in a setting where the secret key is delivered to the legitimate receiver via a channel with limited capacity. For this setting, we characterize the achievable region in the space of three figures of merit: the security (measured in terms of the equivocation), the compressibility of the cryptogram, and the distortion associated with the reconstruction of the plaintext source. Although lossy reconstruction of the plaintext does not rule out the option that the (noisy) decryption key would differ, to a certain extent, from the encryption key, we show, nevertheless, that the best strategy is to strive for a perfect match between the two keys, by applying reliable channel coding to the key bits, and to control the distortion solely via rate-distortion coding of the plaintext source before the encryption. In this sense, our result has a flavor similar to that of the classical source-channel separation theorem. Some variations and extensions of this model are discussed as well.<|reference_end|> | arxiv | @article{merhav2005on,
title={On the Shannon cipher system with a capacity-limited key-distribution
channel},
author={Neri Merhav},
journal={arXiv preprint arXiv:cs/0505012},
year={2005},
number={CCIT Report #530, Department of Electrical Engineering, Technion,
May 2005},
archivePrefix={arXiv},
eprint={cs/0505012},
primaryClass={cs.IT math.IT}
} | merhav2005on |
arxiv-672886 | cs/0505013 | Theories for TC0 and Other Small Complexity Classes | <|reference_start|>Theories for TC0 and Other Small Complexity Classes: We present a general method for introducing finitely axiomatizable "minimal" two-sorted theories for various subclasses of P (problems solvable in polynomial time). The two sorts are natural numbers and finite sets of natural numbers. The latter are essentially the finite binary strings, which provide a natural domain for defining the functions and sets in small complexity classes. We concentrate on the complexity class TC^0, whose problems are defined by uniform polynomial-size families of bounded-depth Boolean circuits with majority gates. We present an elegant theory VTC^0 in which the provably-total functions are those associated with TC^0, and then prove that VTC^0 is "isomorphic" to a different-looking single-sorted theory introduced by Johannsen and Pollet. The most technical part of the isomorphism proof is defining binary number multiplication in terms of a bit-counting function, and showing how to formalize the proofs of its algebraic properties.<|reference_end|> | arxiv | @article{nguyen2005theories,
title={Theories for TC0 and Other Small Complexity Classes},
author={Phuong Nguyen and Stephen Cook},
journal={Logical Methods in Computer Science, Volume 2, Issue 1 (March 7,
2006) lmcs:2257},
year={2005},
doi={10.2168/LMCS-2(1:3)2006},
archivePrefix={arXiv},
eprint={cs/0505013},
primaryClass={cs.LO cs.CC}
} | nguyen2005theories |
arxiv-672887 | cs/0505014 | Interval Neutrosophic Sets and Logic: Theory and Applications in Computing | <|reference_start|>Interval Neutrosophic Sets and Logic: Theory and Applications in Computing: This book presents the advancements and applications of neutrosophics. Chapter 1 first introduces interval neutrosophic sets, an instance of neutrosophic sets. In this chapter, the definition of interval neutrosophic sets and their set-theoretic operators is given, and various properties of interval neutrosophic sets are proved. Chapter 2 defines interval neutrosophic logic based on interval neutrosophic sets, including the syntax and semantics of first-order interval neutrosophic propositional logic and first-order interval neutrosophic predicate logic. Interval neutrosophic logic can reason about and model fuzzy, incomplete, and inconsistent information. In this chapter, we also design an interval neutrosophic inference system based on first-order interval neutrosophic predicate logic. The interval neutrosophic inference system can be applied to decision making. Chapter 3 gives one application of interval neutrosophic sets and logic in the field of relational databases. The neutrosophic data model is a generalization of the fuzzy data model and the paraconsistent data model. Here, we generalize various set-theoretic and relation-theoretic operations of the fuzzy data model to the neutrosophic data model. Chapter 4 gives another application of interval neutrosophic logic. A soft semantic Web Services agent framework is proposed to facilitate the registration and discovery of high-quality semantic Web Services agents. The intelligent inference engine module of the soft semantic Web Services agent is implemented using interval neutrosophic logic.<|reference_end|> | arxiv | @article{wang2005interval,
title={Interval Neutrosophic Sets and Logic: Theory and Applications in
Computing},
author={Haibin Wang, Florentin Smarandache, Yan-Qing Zhang, Rajshekhar
Sunderraman},
journal={arXiv preprint arXiv:cs/0505014},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505014},
primaryClass={cs.LO}
} | wang2005interval |
arxiv-672888 | cs/0505015 | Complex Mean and Variance of Linear Regression Model for High-Noised Systems by Kriging | <|reference_start|>Complex Mean and Variance of Linear Regression Model for High-Noised Systems by Kriging: The aim of the paper is to derive the complex-valued least-squares estimator for bias-noise mean and variance.<|reference_end|> | arxiv | @article{suslo2005complex,
title={Complex Mean and Variance of Linear Regression Model for High-Noised
Systems by Kriging},
author={Tomasz Suslo},
journal={arXiv preprint arXiv:cs/0505015},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505015},
primaryClass={cs.NA cs.DS}
} | suslo2005complex |
arxiv-672889 | cs/0505016 | Visual Character Recognition using Artificial Neural Networks | <|reference_start|>Visual Character Recognition using Artificial Neural Networks: The recognition of optical characters is known to be one of the earliest applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In this paper, a simplified neural approach to recognition of optical or visual characters is portrayed and discussed. The document is expected to serve as a resource for learners and amateur investigators in pattern recognition, neural networking and related disciplines.<|reference_end|> | arxiv | @article{araokar2005visual,
title={Visual Character Recognition using Artificial Neural Networks},
author={Shashank Araokar},
journal={arXiv preprint arXiv:cs/0505016},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505016},
primaryClass={cs.NE}
} | araokar2005visual |
arxiv-672890 | cs/0505017 | Point set stratification and Delaunay depth | <|reference_start|>Point set stratification and Delaunay depth: In the study of depth functions it is important to decide whether we want such a function to be sensitive to multimodality or not. In this paper we analyze the Delaunay depth function, which is sensitive to multimodality, and compare this depth with others, such as convex depth and location depth. We study the stratification that Delaunay depth induces in the point set (layers) and in the whole plane (levels), and we develop an algorithm for computing the Delaunay depth contours associated with a point set in the plane, with running time O(n log^2 n). The depth of a query point p with respect to a data set S in the plane is the depth of p in the union of S and p. When S and p are given in the input, the Delaunay depth can be computed in O(n log n), and we prove that this value is optimal.<|reference_end|> | arxiv | @article{abellanas2005point,
title={Point set stratification and Delaunay depth},
author={Manuel Abellanas, Mercè Claverol, and Ferran Hurtado},
journal={arXiv preprint arXiv:cs/0505017},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505017},
primaryClass={cs.CG}
} | abellanas2005point |
arxiv-672891 | cs/0505018 | Temporal and Spatial Data Mining with Second-Order Hidden Models | <|reference_start|>Temporal and Spatial Data Mining with Second-Order Hidden Models: In the frame of designing a knowledge discovery system, we have developed stochastic models based on high-order hidden Markov models. These models are capable of mapping sequences of data into a Markov chain in which the transitions between the states depend on the n previous states, according to the order of the model. We study the process of achieving information extraction from spatial and temporal data by means of an unsupervised classification. We therefore use a French national database related to the land use of a region, named Teruti, which describes the land use both in the spatial and the temporal domain. Land-use categories (wheat, corn, forest, ...) are logged every year on each site, regularly spaced in the region. They constitute a temporal sequence of images in which we look for spatial and temporal dependencies. The temporal segmentation of the data is done by means of a second-order Hidden Markov Model (HMM2) that appears to have very good capabilities to locate stationary segments, as shown in our previous work in speech recognition. The spatial classification is performed by defining a fractal scanning of the images with the help of a Hilbert-Peano curve that introduces a total order on the sites, preserving the relation of neighborhood between the sites. We show that the HMM2 performs a classification that is meaningful for the agronomists. Spatial and temporal classification may be achieved simultaneously by means of a two-level HMM2 that measures the a posteriori probability of mapping a temporal sequence of images onto a set of hidden classes.<|reference_end|> | arxiv | @article{mari2005temporal,
title={Temporal and Spatial Data Mining with Second-Order Hidden Models},
author={Jean-Francois Mari (INRIA Lorraine - LORIA), Florence Le Ber (CEVH)},
journal={arXiv preprint arXiv:cs/0505018},
year={2005},
doi={10.1007/s00500-005-0501-0},
archivePrefix={arXiv},
eprint={cs/0505018},
primaryClass={cs.AI}
} | mari2005temporal |
arxiv-672892 | cs/0505019 | Artificial Neural Networks and their Applications | <|reference_start|>Artificial Neural Networks and their Applications: An Artificial Neural Network is a functional imitation of a simplified model of biological neurons, and its goal is to construct useful computers for real-world problems. ANN applications have increased dramatically in the last few years, driven by both theoretical and practical advances in a wide variety of fields. A brief theory of ANNs is presented, potential application areas are identified, and future trends are discussed.<|reference_end|> | arxiv | @article{malik2005artificial,
title={Artificial Neural Networks and their Applications},
author={Nitin Malik},
journal={arXiv preprint arXiv:cs/0505019},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505019},
primaryClass={cs.NE}
} | malik2005artificial |
arxiv-672893 | cs/0505020 | Asymptotic Capacity Results for Non-Stationary Time-Variant Channels Using Subspace Projections | <|reference_start|>Asymptotic Capacity Results for Non-Stationary Time-Variant Channels Using Subspace Projections: In this paper we deal with a single-antenna discrete-time flat-fading channel. The fading process is assumed to be stationary for the duration of a single data block. From block to block the fading process is allowed to be non-stationary. The number of scatterers bounds the rank of the channel's covariance matrix. The signal-to-noise ratio (SNR), the user velocity, and the data block-length define the usable rank of the time-variant channel subspace. The usable channel subspace grows with the SNR. This growth in dimensionality must be taken into account for asymptotic capacity results in the high-SNR regime. Using results from the theory of time-concentrated and band-limited sequences we are able to define an SNR threshold below which the capacity grows logarithmically. Above this threshold the capacity grows double-logarithmically.<|reference_end|> | arxiv | @article{zemen2005asymptotic,
title={Asymptotic Capacity Results for Non-Stationary Time-Variant Channels
Using Subspace Projections},
author={Thomas Zemen (1), Stefan M. Moser (2) ((1) ftw. Forschungszentrum
Telekommunikation Wien, (2) Signal and Information Processing Laboratory ETH
Zurich)},
journal={arXiv preprint arXiv:cs/0505020},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505020},
primaryClass={cs.IT math.IT}
} | zemen2005asymptotic |
arxiv-672894 | cs/0505021 | Distant generalization by feedforward neural networks | <|reference_start|>Distant generalization by feedforward neural networks: This paper discusses the notion of generalization of training samples over long distances in the input space of a feedforward neural network. Such a generalization might occur in various ways that differ in how great the contribution of different training features should be. The structure of a neuron in a feedforward neural network is analyzed and it is concluded that the actual performance of the discussed generalization in such neural networks may be problematic -- while such neural networks might be capable of such distant generalization, a random and spurious generalization may occur as well. To illustrate the differences in generalization of the same function by different learning machines, results given by support vector machines are also presented.<|reference_end|> | arxiv | @article{rataj2005distant,
title={Distant generalization by feedforward neural networks},
author={Artur Rataj},
journal={arXiv preprint arXiv:cs/0505021},
year={2005},
number={IITiS-2005-04-1-2.00},
archivePrefix={arXiv},
eprint={cs/0505021},
primaryClass={cs.NE}
} | rataj2005distant |
arxiv-672895 | cs/0505022 | Collaborative Beamforming for Distributed Wireless Ad Hoc Sensor Networks | <|reference_start|>Collaborative Beamforming for Distributed Wireless Ad Hoc Sensor Networks: The performance of collaborative beamforming is analyzed using the theory of random arrays. The statistical average and distribution of the beampattern of randomly generated phased arrays are derived in the framework of wireless ad hoc sensor networks. Each sensor node is assumed to have a single isotropic antenna, and nodes in the cluster collaboratively transmit the signal such that the signal in the target direction is coherently added in the far-field region. It is shown that with N sensor nodes uniformly distributed over a disk, the directivity can approach N, provided that the nodes are located sparsely enough. The distribution of the maximum sidelobe peak is also studied. With the application to ad hoc networks in mind, two scenarios, closed-loop and open-loop, are considered. Associated with these scenarios, the effects of phase jitter and location estimation errors on the average beampattern are also analyzed.<|reference_end|> | arxiv | @article{ochiai2005collaborative,
title={Collaborative Beamforming for Distributed Wireless Ad Hoc Sensor
Networks},
author={Hideki Ochiai, Patrick Mitran, H. Vincent Poor, Vahid Tarokh},
journal={arXiv preprint arXiv:cs/0505022},
year={2005},
doi={10.1109/TSP.2005.857028},
archivePrefix={arXiv},
eprint={cs/0505022},
primaryClass={cs.IT cs.NI math.IT}
} | ochiai2005collaborative |
arxiv-672896 | cs/0505023 | State Space Computation and Analysis of Time Petri Nets | <|reference_start|>State Space Computation and Analysis of Time Petri Nets: The theory of Petri Nets provides a general framework to specify the behaviors of real-time reactive systems, and Time Petri Nets were introduced to take temporal specifications into account as well. We present in this paper a forward zone-based algorithm to compute the state space of a bounded Time Petri Net: the method is different from, and more efficient than, the classical State Class Graph. We prove the algorithm to be exact with respect to the reachability problem. Furthermore, we propose a translation of the computed state space into a Timed Automaton, proved to be timed bisimilar to the original Time Petri Net. As the method produces a single Timed Automaton, syntactic clock-reduction methods (Daws and Yovine, for instance) may be applied to produce an automaton with fewer clocks. Our method thus allows Time Petri Nets to be model-checked with efficient Timed Automata tools. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|> | arxiv | @article{gardey2005state,
title={State Space Computation and Analysis of Time Petri Nets},
author={Guillaume Gardey and Olivier H. Roux and Olivier F. Roux},
journal={arXiv preprint arXiv:cs/0505023},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505023},
primaryClass={cs.LO}
} | gardey2005state |
arxiv-672897 | cs/0505024 | Logic Column 12: Logical Verification and Equational Verification | <|reference_start|>Logic Column 12: Logical Verification and Equational Verification: This article examines two approaches to verification, one based on using a logic for expressing properties of a system, and one based on showing the system equivalent to a simpler system that obviously has whatever property is of interest. Using examples such as process calculi and regular programs, the relationship between these two approaches is explored.<|reference_end|> | arxiv | @article{pucella2005logic,
title={Logic Column 12: Logical Verification and Equational Verification},
author={Riccardo Pucella},
journal={SIGACT News, 36(2), pp. 77-88, 2005},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505024},
primaryClass={cs.LO}
} | pucella2005logic |
arxiv-672898 | cs/0505025 | Equivalence-Checking on Infinite-State Systems: Techniques and Results | <|reference_start|>Equivalence-Checking on Infinite-State Systems: Techniques and Results: The paper presents a selection of recently developed and/or used techniques for equivalence-checking on infinite-state systems, and an up-to-date overview of existing results (as of September 2004).<|reference_end|> | arxiv | @article{kucera2005equivalence-checking,
title={Equivalence-Checking on Infinite-State Systems: Techniques and Results},
author={Antonin Kucera and Petr Jancar},
journal={arXiv preprint arXiv:cs/0505025},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505025},
primaryClass={cs.LO}
} | kucera2005equivalence-checking |
arxiv-672899 | cs/0505026 | Automatic Verification of Timed Concurrent Constraint Programs | <|reference_start|>Automatic Verification of Timed Concurrent Constraint Programs: The language Timed Concurrent Constraint (tccp) is the timed extension of the Concurrent Constraint Programming (cc) paradigm that allows us to specify concurrent systems where timing is critical, for example reactive systems. Systems which may have an infinite number of states can be specified in tccp. Model checking is a technique able to automatically verify finite-state systems with a huge number of states. In recent years, several studies have investigated how to extend model checking techniques to systems with an infinite number of states. In this paper we propose an approach which exploits the computation model of tccp. Constraint-based computations allow us to define a methodology for applying a model checking algorithm to (a class of) infinite-state systems. We extend the classical LTL model checking algorithm to a specific logic defined for the verification of tccp and to the tccp Structure, which we define in this work for modeling the program behavior. We define a restriction on time in order to get a finite model and then develop some illustrative examples. To the best of our knowledge, this is the first approach that defines a model checking methodology for tccp.<|reference_end|> | arxiv | @article{falaschi2005automatic,
title={Automatic Verification of Timed Concurrent Constraint Programs},
author={Moreno Falaschi and Alicia Villanueva},
journal={arXiv preprint arXiv:cs/0505026},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505026},
primaryClass={cs.LO}
} | falaschi2005automatic |
arxiv-672900 | cs/0505027 | The Generic Multiple-Precision Floating-Point Addition With Exact Rounding (as in the MPFR Library) | <|reference_start|>The Generic Multiple-Precision Floating-Point Addition With Exact Rounding (as in the MPFR Library): We study the multiple-precision addition of two positive floating-point numbers in base 2, with exact rounding, as specified in the MPFR library, i.e. where each number has its own precision. We show how the best possible complexity (up to a constant factor that depends on the implementation) can be obtained.<|reference_end|> | arxiv | @article{lefèvre2005the,
title={The Generic Multiple-Precision Floating-Point Addition With Exact
Rounding (as in the MPFR Library)},
author={Vincent Lefèvre (INRIA Lorraine - LORIA)},
journal={arXiv preprint arXiv:cs/0505027},
year={2005},
archivePrefix={arXiv},
eprint={cs/0505027},
primaryClass={cs.DS}
} | lefèvre2005the |