Columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-1401
0710.2674
Linguistic Information Energy
<|reference_start|>Linguistic Information Energy: In this treatment a text is considered to be a series of word impulses which are read at a constant rate. The brain then assembles these units of information into higher units of meaning. A classical systems approach is used to model an initial part of this assembly process. The concepts of linguistic system response, information energy, and ordering energy are defined and analyzed. Finally, as a demonstration, information energy is used to estimate the publication dates of a series of texts and the similarity of a set of texts.<|reference_end|>
arxiv
@article{ford2007linguistic, title={Linguistic Information Energy}, author={James Ford}, journal={arXiv preprint arXiv:0710.2674}, year={2007}, archivePrefix={arXiv}, eprint={0710.2674}, primaryClass={cs.CL cs.IT math.IT} }
ford2007linguistic
arxiv-1402
0710.2705
Fingerprinting with Minimum Distance Decoding
<|reference_start|>Fingerprinting with Minimum Distance Decoding: This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary $t$ highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate $1-H(0.25)\approx 0.188$ is achievable using an ensemble of random linear codes. For $t\geq 3$, the existence of a non-resolvable collusion attack, with minimum distance decoding, for any non-zero rate is established. Inspired by our theoretical analysis, we then construct coding/decoding schemes for fingerprinting based on the celebrated Belief-Propagation framework. Using an explicit repeat-accumulate code, we obtain a vanishingly small probability of misidentification at rate 1/3 under averaging attack with t=2. For collusion attacks which satisfy the marking assumption, we use a more sophisticated accumulate repeat accumulate code to obtain a vanishingly small misidentification probability at rate 1/9 with t=2. These results represent a marked improvement over the best available designs in the literature.<|reference_end|>
arxiv
@article{lin2007fingerprinting, title={Fingerprinting with Minimum Distance Decoding}, author={Shih-Chun Lin, Mohammad Shahmohammadi, Hesham El Gamal}, journal={IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 4, NO. 1, MARCH 2009 pp.59-69}, year={2007}, doi={10.1109/TIFS.2008.2012201}, archivePrefix={arXiv}, eprint={0710.2705}, primaryClass={cs.IT cs.CR math.IT} }
lin2007fingerprinting
arxiv-1403
0710.2716
Cost and Effects of Pinning Control for Network Synchronization
<|reference_start|>Cost and Effects of Pinning Control for Network Synchronization: In this paper, the problem of pinning control for synchronization of complex dynamical networks is discussed. A cost function of the controlled network is defined by the feedback gain and the coupling strength of the network. An interesting result is that lower cost is achieved by the control scheme of pinning nodes with smaller degrees. Some rigorous mathematical analysis is presented for achieving lower cost in the synchronization of different star-shaped networks. Numerical simulations on some non-regular complex networks generated by the Barabasi-Albert model and various star-shaped networks are shown for verification and illustration.<|reference_end|>
arxiv
@article{li2007cost, title={Cost and Effects of Pinning Control for Network Synchronization}, author={Rong Li, Zhisheng Duan and Guanrong Chen}, journal={arXiv preprint arXiv:0710.2716}, year={2007}, archivePrefix={arXiv}, eprint={0710.2716}, primaryClass={cs.NI} }
li2007cost
arxiv-1404
0710.2732
Probabilistic communication complexity over the reals
<|reference_start|>Probabilistic communication complexity over the reals: Deterministic and probabilistic communication protocols are introduced in which parties can exchange the values of polynomials (rather than bits in the usual setting). A sharp lower bound of $2n$ is established on the communication complexity of recognizing the $2n$-dimensional orthant; on the other hand, the probabilistic communication complexity of recognizing it does not exceed 4. A polyhedron and a union of hyperplanes are constructed in $\mathbb{R}^{2n}$ for which a lower bound of $n/2$ on the probabilistic communication complexity of recognizing each is proved. As a consequence, this bound also holds for the EMPTINESS and the KNAPSACK problems.<|reference_end|>
arxiv
@article{grigoriev2007probabilistic, title={Probabilistic communication complexity over the reals}, author={Dima Grigoriev (IRMAR)}, journal={arXiv preprint arXiv:0710.2732}, year={2007}, number={07-60}, archivePrefix={arXiv}, eprint={0710.2732}, primaryClass={cs.CC} }
grigoriev2007probabilistic
arxiv-1405
0710.2736
L2 norm performance index of synchronization and optimal control synthesis of complex networks
<|reference_start|>L2 norm performance index of synchronization and optimal control synthesis of complex networks: In this paper, the synchronizability problem of dynamical networks is addressed, where better synchronizability means that the network synchronizes faster with lower overshoot. The L2 norm of the error vector e is taken as a performance index to measure this kind of synchronizability. For the equilibrium synchronization case, it is shown that there is a close relationship between the L2 norm of the error vector e and the H2 norm of the transfer function G of the linearized network about the equilibrium point. Consequently, the effect of the network coupling topology on the H2 norm of the transfer function G is analyzed. Finally, an optimal controller is designed, according to the so-called LQR problem in modern control theory, which can drive the whole network to its equilibrium point and meanwhile minimize the L2 norm of the output of the linearized network.<|reference_end|>
arxiv
@article{liu2007l2, title={L2 norm performance index of synchronization and optimal control synthesis of complex networks}, author={Chao Liu, Zhisheng Duan and Guanrong Chen and Lin Huang}, journal={arXiv preprint arXiv:0710.2736}, year={2007}, archivePrefix={arXiv}, eprint={0710.2736}, primaryClass={cs.NI} }
liu2007l2
arxiv-1406
0710.2782
Effective linkage learning using low-order statistics and clustering
<|reference_start|>Effective linkage learning using low-order statistics and clustering: The adoption of probabilistic models for the best individuals found so far is a powerful approach for evolutionary computation. Increasingly more complex models have been used by estimation of distribution algorithms (EDAs), which often results in better effectiveness in finding the global optima for hard optimization problems. Supervised and unsupervised learning of Bayesian networks are very effective options, since those models are able to capture interactions of high order among the variables of a problem. Diversity preservation, through niching techniques, has also been shown to be very important for allowing the identification of the problem structure as well as for keeping several global optima. Recently, clustering was evaluated as an effective niching technique for EDAs, but the performance of simpler low-order EDAs was not shown to be much improved by clustering, except for some simple multimodal problems. This work proposes and evaluates a combination operator guided by a measure from information theory which allows a clustered low-order EDA to effectively solve a comprehensive range of benchmark optimization problems.<|reference_end|>
arxiv
@article{emmendorfer2007effective, title={Effective linkage learning using low-order statistics and clustering}, author={Leonardo Emmendorfer and Aurora Pozo}, journal={arXiv preprint arXiv:0710.2782}, year={2007}, archivePrefix={arXiv}, eprint={0710.2782}, primaryClass={cs.NE cs.AI} }
emmendorfer2007effective
arxiv-1407
0710.2848
Consistency of trace norm minimization
<|reference_start|>Consistency of trace norm minimization: Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non adaptive version is not fulfilled.<|reference_end|>
arxiv
@article{bach2007consistency, title={Consistency of trace norm minimization}, author={Francis Bach (WILLOW Project - Inria/Ens)}, journal={arXiv preprint arXiv:0710.2848}, year={2007}, archivePrefix={arXiv}, eprint={0710.2848}, primaryClass={cs.LG} }
bach2007consistency
arxiv-1408
0710.2852
Generating models for temporal representations
<|reference_start|>Generating models for temporal representations: We discuss the use of model building for temporal representations. We chose Polish to illustrate our discussion because it has an interesting aspectual system, but the points we wish to make are not language specific. Rather, our goal is to develop theoretical and computational tools for temporal model building tasks in computational semantics. To this end, we present a first-order theory of time and events which is rich enough to capture interesting semantic distinctions, and an algorithm which takes minimal models for first-order theories and systematically attempts to ``perturb'' their temporal component to provide non-minimal, but semantically significant, models.<|reference_end|>
arxiv
@article{blackburn2007generating, title={Generating models for temporal representations}, author={Patrick Blackburn (INRIA Lorraine - LORIA), S'ebastien Hinderer (INRIA Lorraine - LORIA)}, journal={Dans Recent Advances in Natural Language Processing (2007) 69-75}, year={2007}, archivePrefix={arXiv}, eprint={0710.2852}, primaryClass={cs.CL} }
blackburn2007generating
arxiv-1409
0710.2887
Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems - Report on the Workshop ICOOOLPS'2006 at ECOOP'06
<|reference_start|>Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems - Report on the Workshop ICOOOLPS'2006 at ECOOP'06: ICOOOLPS'2006 was the first edition of the ECOOP-ICOOOLPS workshop. It was intended to bring researchers and practitioners from both academia and industry together, with a spirit of openness, to try to identify and begin to address the numerous and very varied issues of optimization. This succeeded, as can be seen from the papers, the attendance and the liveliness of the discussions that took place during and after the workshop, not to mention a few new cooperations or postdoctoral contracts. The 22 talented people from different groups who participated were unanimous in appreciating this first edition and recommending that ICOOOLPS be continued next year. A community is thus beginning to form, and should be reinforced by a second edition next year, with all the improvements this first edition brought to light.<|reference_end|>
arxiv
@article{ducournau2007implementation, title={Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems - Report on the Workshop ICOOOLPS'2006 at ECOOP'06}, author={Roland Ducournau (LIRMM), Etienne Gagnon, Chandra Krintz (RACE LAB), Philippe Mulet, Jan Vitek (S3L), Olivier Zendra (INRIA Lorraine - LORIA)}, journal={Object-Oriented Technology. ECOOP 2006 Workshop Reader - ECOOP 2006 Workshops, Nantes, France, July 3-7, 2006, Final Reports Springer Berlin / Heidelberg (Ed.) (2007) 1-14}, year={2007}, doi={10.1007/978-3-540-71774-4_1}, archivePrefix={arXiv}, eprint={0710.2887}, primaryClass={cs.PF cs.PL cs.SE} }
ducournau2007implementation
arxiv-1410
0710.2889
An efficient reduction of ranking to classification
<|reference_start|>An efficient reduction of ranking to classification: This paper describes an efficient reduction of the learning problem of ranking to binary classification. The reduction guarantees an average pairwise misranking regret of at most that of the binary classifier regret, improving a recent result of Balcan et al., which only guarantees a factor of 2. Moreover, our reduction applies to a broader class of ranking loss functions, admits a simpler proof, and the expected running time complexity of our algorithm in terms of the number of calls to a classifier or preference function is improved from $\Omega(n^2)$ to $O(n \log n)$. In addition, when the top $k$ ranked elements only are required ($k \ll n$), as in many applications in information extraction or search engines, the time complexity of our algorithm can be further reduced to $O(k \log k + n)$. Our reduction and algorithm are thus practical for realistic applications where the number of points to rank exceeds several thousand. Many of our results also extend beyond the bipartite case previously studied. Our reduction is a randomized one. To complement our result, we also derive lower bounds on any deterministic reduction from binary (preference) classification to ranking, implying that our use of a randomized reduction is essentially necessary for the guarantees we provide.<|reference_end|>
arxiv
@article{ailon2007an, title={An efficient reduction of ranking to classification}, author={Nir Ailon and Mehryar Mohri}, journal={arXiv preprint arXiv:0710.2889}, year={2007}, archivePrefix={arXiv}, eprint={0710.2889}, primaryClass={cs.LG cs.IR} }
ailon2007an
arxiv-1411
0710.2970
A generic attack to ciphers
<|reference_start|>A generic attack to ciphers: In this paper, we present a generic attack for ciphers, which is in essence a collision attack on the secret keys of ciphers.<|reference_end|>
arxiv
@article{li2007a, title={A generic attack to ciphers}, author={An-Ping Li}, journal={arXiv preprint arXiv:0710.2970}, year={2007}, archivePrefix={arXiv}, eprint={0710.2970}, primaryClass={cs.CR} }
li2007a
arxiv-1412
0710.2988
Using Description Logics for Recognising Textual Entailment
<|reference_start|>Using Description Logics for Recognising Textual Entailment: The aim of this paper is to show how we can handle the Recognising Textual Entailment (RTE) task by using Description Logics (DLs). To do this, we propose a representation of natural language semantics in DLs inspired by existing representations in first-order logic. But our most significant contribution is the definition of two novel inference tasks: A-Box saturation and subgraph detection which are crucial for our approach to RTE.<|reference_end|>
arxiv
@article{bedaride2007using, title={Using Description Logics for Recognising Textual Entailment}, author={Paul Bedaride (INRIA Lorraine - Loria)}, journal={Dans 19th European Summer School in Logic, Language and Information (2007) 11-21}, year={2007}, archivePrefix={arXiv}, eprint={0710.2988}, primaryClass={cs.CL} }
bedaride2007using
arxiv-1413
0710.3027
Classical Capacities of Averaged and Compound Quantum Channels
<|reference_start|>Classical Capacities of Averaged and Compound Quantum Channels: We determine the capacity of compound classical-quantum channels. As a consequence we obtain the capacity formula for the averaged classical-quantum channels. The capacity result for compound channels demonstrates, as in the classical setting, the existence of reliable universal classical-quantum codes in scenarios where the only a priori information about the channel used for the transmission of information is that it belongs to a given set of memoryless classical-quantum channels. Our approach is based on the universal classical approximation of the quantum relative entropy which in turn relies on the universal hypothesis testing results.<|reference_end|>
arxiv
@article{bjelakovic2007classical, title={Classical Capacities of Averaged and Compound Quantum Channels}, author={Igor Bjelakovic and Holger Boche}, journal={arXiv preprint arXiv:0710.3027}, year={2007}, archivePrefix={arXiv}, eprint={0710.3027}, primaryClass={quant-ph cs.IT math-ph math.IT math.MP} }
bjelakovic2007classical
arxiv-1414
0710.3170
Fast Intrinsic Mode Decomposition of Time Series Data with Sawtooth Transform
<|reference_start|>Fast Intrinsic Mode Decomposition of Time Series Data with Sawtooth Transform: An efficient method is introduced in this paper to find the intrinsic mode function (IMF) components of time series data. This method is faster and more predictable than the Empirical Mode Decomposition (EMD) method devised by the author of the Hilbert-Huang Transform (HHT). The approach is to transform the original data function into a piecewise linear sawtooth function (or triangle wave function), then directly construct the upper envelope by connecting the maxima and the lower envelope by connecting the minima with straight line segments in the sawtooth space; the IMF is calculated as the difference between the sawtooth function and the mean of the upper and lower envelopes. The results found in the sawtooth space are transformed back into the original data space as the required IMF and envelope mean. This decomposition method processes the data in one pass to obtain a unique IMF component without the time-consuming repetitive sifting process of the EMD method. An alternative decomposition method with sawtooth function expansion is also presented.<|reference_end|>
arxiv
@article{lu2007fast, title={Fast Intrinsic Mode Decomposition of Time Series Data with Sawtooth Transform}, author={Louis Yu Lu}, journal={arXiv preprint arXiv:0710.3170}, year={2007}, archivePrefix={arXiv}, eprint={0710.3170}, primaryClass={cs.NA} }
lu2007fast
arxiv-1415
0710.3178
Modeling Context, Collaboration, and Civilization in End-User Informatics
<|reference_start|>Modeling Context, Collaboration, and Civilization in End-User Informatics: End-user informatics applications are Internet data web management automation solutions. These are mass modeling and mass management collaborative communal consensus solutions. They are made and maintained by managerial, professional, technical and specialist end-users. In end-user informatics the end-users are always right. So it becomes necessary for information technology professionals to understand information and informatics from the end-user perspective. End-user informatics starts with the observation that practical prose is a mass consensus communal modeling technology. This high technology is the mechanistic modeling medium we all use every day in all of our practical pursuits. Practical information flows are the lifeblood of modern capitalist communities. But what exactly is practical information? It's ultimately physical information, but the physics is highly emergent rather than elementary. So practical reality is just physical reality in deep disguise. Practical prose is the medium that we all use to model the everyday and elite mechanics of practical reality. So this is the medium that end-user informatics must automate and animate.<|reference_end|>
arxiv
@article{maney2007modeling, title={Modeling Context, Collaboration, and Civilization in End-User Informatics}, author={George A. Maney}, journal={arXiv preprint arXiv:0710.3178}, year={2007}, archivePrefix={arXiv}, eprint={0710.3178}, primaryClass={cs.OH} }
maney2007modeling
arxiv-1416
0710.3185
Fuzzy Modeling of Electrical Impedance Tomography Image of the Lungs
<|reference_start|>Fuzzy Modeling of Electrical Impedance Tomography Image of the Lungs: Electrical Impedance Tomography (EIT) is a functional imaging method that is being developed for bedside use in critical care medicine. Aiming at improving the chest anatomical resolution of EIT images, we developed a fuzzy model based on EIT high temporal resolution and the functional information contained in the pulmonary perfusion and ventilation signals. EIT data from an experimental animal model were collected during normal ventilation and apnea while an injection of hypertonic saline was used as a reference. The fuzzy model was elaborated in three parts: a modeling of the heart, a pulmonary map from ventilation images, and a pulmonary map from perfusion images. Image segmentation was performed using a threshold method and a ventilation/perfusion map was generated. EIT images treated by the fuzzy model were compared with the hypertonic saline injection method and CT-scan images, presenting good results from both a qualitative (the image obtained by the model was very similar to that of the CT-scan) and a quantitative (the ROC curve provided an area equal to 0.93) point of view. Undoubtedly, these results represent an important step in the EIT images area, since they open the possibility of developing EIT-based bedside clinical methods, which are not available nowadays. These achievements could serve as the basis for developing an EIT diagnosis system for some life-threatening diseases commonly found in critical care medicine.<|reference_end|>
arxiv
@article{tanaka2007fuzzy, title={Fuzzy Modeling of Electrical Impedance Tomography Image of the Lungs}, author={Harki Tanaka, Neli Regina Siqueira Ortega, Mauricio Stanzione Galizia, Joao Batista Borges Sobrinho, and Marcelo Britto Passos Amato}, journal={arXiv preprint arXiv:0710.3185}, year={2007}, archivePrefix={arXiv}, eprint={0710.3185}, primaryClass={cs.AI cs.CV} }
tanaka2007fuzzy
arxiv-1417
0710.3246
Bloom maps
<|reference_start|>Bloom maps: We consider the problem of succinctly encoding a static map to support approximate queries. We derive upper and lower bounds on the space requirements in terms of the error rate and the entropy of the distribution of values over keys: our bounds differ by a factor log e. For the upper bound we introduce a novel data structure, the Bloom map, generalising the Bloom filter to this problem. The lower bound follows from an information theoretic argument.<|reference_end|>
arxiv
@article{talbot2007bloom, title={Bloom maps}, author={David Talbot and John Talbot}, journal={arXiv preprint arXiv:0710.3246}, year={2007}, archivePrefix={arXiv}, eprint={0710.3246}, primaryClass={cs.DS cs.IT math.IT} }
talbot2007bloom
arxiv-1418
0710.3279
Resource Allocation for Delay Differentiated Traffic in Multiuser OFDM Systems
<|reference_start|>Resource Allocation for Delay Differentiated Traffic in Multiuser OFDM Systems: Most existing work on adaptive allocation of subcarriers and power in multiuser orthogonal frequency division multiplexing (OFDM) systems has focused on homogeneous traffic consisting solely of either delay-constrained data (guaranteed service) or non-delay-constrained data (best-effort service). In this paper, we investigate the resource allocation problem in a heterogeneous multiuser OFDM system with both delay-constrained (DC) and non-delay-constrained (NDC) traffic. The objective is to maximize the sum-rate of all the users with NDC traffic while maintaining guaranteed rates for the users with DC traffic under a total transmit power constraint. Through our analysis we show that the optimal power allocation over subcarriers follows a multi-level water-filling principle; moreover, the valid candidates competing for each subcarrier include only one NDC user but all DC users. By converting this combinatorial problem with exponential complexity into a convex problem or showing that it can be solved in the dual domain, efficient iterative algorithms are proposed to find the optimal solutions. To further reduce the computational cost, a low-complexity suboptimal algorithm is also developed. Numerical studies are conducted to evaluate the performance of the proposed algorithms in terms of service outage probability, achievable transmission rate pairs for DC and NDC traffic, and multiuser diversity.<|reference_end|>
arxiv
@article{tao2007resource, title={Resource Allocation for Delay Differentiated Traffic in Multiuser OFDM Systems}, author={Meixia Tao, Ying-Chang Liang and Fan Zhang}, journal={arXiv preprint arXiv:0710.3279}, year={2007}, doi={10.1109/ICC.2006.255331}, archivePrefix={arXiv}, eprint={0710.3279}, primaryClass={cs.NI cs.IT math.IT} }
tao2007resource
arxiv-1419
0710.3283
Effects of Non-Identical Rayleigh Fading on Differential Unitary Space-Time Modulation
<|reference_start|>Effects of Non-Identical Rayleigh Fading on Differential Unitary Space-Time Modulation: This paper has been withdrawn by the author.<|reference_end|>
arxiv
@article{tao2007effects, title={Effects of Non-Identical Rayleigh Fading on Differential Unitary Space-Time Modulation}, author={Meixia Tao}, journal={arXiv preprint arXiv:0710.3283}, year={2007}, archivePrefix={arXiv}, eprint={0710.3283}, primaryClass={cs.PF cs.IT math.IT} }
tao2007effects
arxiv-1420
0710.3285
Nontraditional Scoring of C-tests
<|reference_start|>Nontraditional Scoring of C-tests: In C-tests the hypothesis of local independence of items is violated, which does not permit them to be considered real tests. It is suggested to determine the distances between separate C-test items (blanks) and to combine items into clusters. Weights, inversely proportional to the number of items in the corresponding clusters, are assigned to items. As a result, the C-test structure becomes similar to the structure of classical tests, without violation of the local independence hypothesis.<|reference_end|>
arxiv
@article{tamara2007nontraditional, title={Nontraditional Scoring of C-tests}, author={Tretjakova Tamara}, journal={arXiv preprint arXiv:0710.3285}, year={2007}, archivePrefix={arXiv}, eprint={0710.3285}, primaryClass={cs.CY cs.CL} }
tamara2007nontraditional
arxiv-1421
0710.3305
Automatic Methods for Analyzing Non-Repudiation Protocols with an Active Intruder
<|reference_start|>Automatic Methods for Analyzing Non-Repudiation Protocols with an Active Intruder: Non-repudiation protocols have an important role in many areas where secured transactions with proofs of participation are necessary. Formal methods are rigorous and error-free, so using them to verify such protocols is crucial. For this purpose, we show how to partially represent non-repudiation as a combination of authentications on the Fair Zhou-Gollmann protocol. After discussing its limits, we define a new method based on the handling of the knowledge of protocol participants. This method is very general and is of natural use, as it consists of adding simple annotations, as for authentication problems. The method is very easy to implement in tools able to handle participants' knowledge. We have implemented it in the AVISPA Tool and analyzed the optimistic Cederquist-Corin-Dashti protocol, discovering two unknown attacks. This extension of the AVISPA Tool for handling non-repudiation opens a highway to the specification of many other properties, without any more change in the tool itself.<|reference_end|>
arxiv
@article{klay2007automatic, title={Automatic Methods for Analyzing Non-Repudiation Protocols with an Active Intruder}, author={Francis Klay (FT R&D), Judson Santiago (DIMAP - UFRN), Laurent Vigneron (INRIA Lorraine - LORIA / LIFC)}, journal={arXiv preprint arXiv:0710.3305}, year={2007}, archivePrefix={arXiv}, eprint={0710.3305}, primaryClass={cs.LO cs.CR} }
klay2007automatic
arxiv-1422
0710.3332
Model and Program Repair via SAT Solving
<|reference_start|>Model and Program Repair via SAT Solving: We consider the following \emph{model repair problem}: given a finite Kripke structure $M$ and a specification formula $\eta$ in some modal or temporal logic, determine if $M$ contains a substructure $M'$ (with the same initial state) that satisfies $\eta$. Thus, $M$ can be ``repaired'' to satisfy the specification $\eta$ by deleting some transitions. We map an instance $(M, \eta)$ of model repair to a boolean formula $\repfor(M,\eta)$ such that $(M, \eta)$ has a solution iff $\repfor(M,\eta)$ is satisfiable. Furthermore, a satisfying assignment determines which transitions must be removed from $M$ to generate a model $M'$ of $\eta$. Thus, we can use any SAT solver to repair Kripke structures. Furthermore, using a complete SAT solver yields a complete algorithm: it always finds a repair if one exists. We extend our method to repair finite-state shared memory concurrent programs, to solve the discrete event supervisory control problem \cite{RW87,RW89}, to check for the existence of symmetric solutions \cite{ES93}, and to accommodate any boolean constraint on the existence of states and transitions in the repaired model. Finally, we show that model repair is NP-complete for CTL, and logics with polynomial model checking algorithms to which CTL can be reduced in polynomial time. A notable example of such a logic is Alternating-Time Temporal Logic (ATL).<|reference_end|>
arxiv
@article{attie2007model, title={Model and Program Repair via SAT Solving}, author={Paul C. Attie and Jad Saklawi}, journal={arXiv preprint arXiv:0710.3332}, year={2007}, archivePrefix={arXiv}, eprint={0710.3332}, primaryClass={cs.LO} }
attie2007model
arxiv-1423
0710.3375
On the Capacity of Interference Channels with One Cooperating Transmitter
<|reference_start|>On the Capacity of Interference Channels with One Cooperating Transmitter: Inner and outer bounds are established on the capacity region of two-sender, two-receiver interference channels where one transmitter knows both messages. The transmitter with extra knowledge is referred to as being cognitive. The inner bound is based on strategies that generalize prior work, and include rate-splitting, Gel'fand-Pinsker coding and cooperative transmission. A general outer bound is based on the Nair-El Gamal outer bound for broadcast channels. A simpler bound is presented for the case in which one of the decoders can decode both messages. The bounds are evaluated and compared for Gaussian channels.<|reference_end|>
arxiv
@article{maric2007on, title={On the Capacity of Interference Channels with One Cooperating Transmitter}, author={I. Maric, A. Goldsmith, G. Kramer, S. Shamai}, journal={arXiv preprint arXiv:0710.3375}, year={2007}, archivePrefix={arXiv}, eprint={0710.3375}, primaryClass={cs.IT math.IT} }
maric2007on
arxiv-1424
0710.3427
Error Correction Capability of Column-Weight-Three LDPC Codes
<|reference_start|>Error Correction Capability of Column-Weight-Three LDPC Codes: In this paper, we investigate the error correction capability of column-weight-three LDPC codes when decoded using the Gallager A algorithm. We prove that the necessary condition for a code to correct $k \geq 5$ errors is to avoid cycles of length up to $2k$ in its Tanner graph. As a consequence of this result, we show that given any $\alpha>0$, $\exists N$ such that $\forall n>N$, no code in the ensemble of column-weight-three codes can correct all $\alpha n$ or fewer errors. We extend these results to the bit flipping algorithm.<|reference_end|>
arxiv
@article{chilappagari2007error, title={Error Correction Capability of Column-Weight-Three LDPC Codes}, author={Shashi Kiran Chilappagari and Bane Vasic}, journal={arXiv preprint arXiv:0710.3427}, year={2007}, archivePrefix={arXiv}, eprint={0710.3427}, primaryClass={cs.IT math.IT} }
chilappagari2007error
arxiv-1425
0710.3439
Utility-Based Wireless Resource Allocation for Variable Rate Transmission
<|reference_start|>Utility-Based Wireless Resource Allocation for Variable Rate Transmission: For most wireless services with variable rate transmission, both average rate and rate oscillation are important performance metrics. The traditional performance criterion, utility of average transmission rate, boosts the average rate but also results in high rate oscillations. We introduce a utility function of instantaneous transmission rates. It is capable of facilitating the resource allocation with flexible combinations of average rate and rate oscillation. Based on the new utility, we consider the time and power allocation in a time-shared wireless network. Two adaptation policies are developed, namely, time sharing (TS) and joint time sharing and power control (JTPC). An extension to quantized time sharing with limited channel feedback (QTSL) for practical systems is also discussed. Simulation results show that by controlling the concavity of the utility function, a tradeoff between the average rate and rate oscillation can be easily made.<|reference_end|>
arxiv
@article{zhang2007utility-based, title={Utility-Based Wireless Resource Allocation for Variable Rate Transmission}, author={Xiaolu Zhang, Meixia Tao and Chun Sum Ng}, journal={arXiv preprint arXiv:0710.3439}, year={2007}, archivePrefix={arXiv}, eprint={0710.3439}, primaryClass={cs.NI} }
zhang2007utility-based
arxiv-1426
0710.3443
DPA on quasi delay insensitive asynchronous circuits: formalization and improvement
<|reference_start|>DPA on quasi delay insensitive asynchronous circuits: formalization and improvement: The purpose of this paper is to formally specify a flow devoted to the design of Differential Power Analysis (DPA) resistant QDI asynchronous circuits. The paper first proposes a formal modeling of the electrical signature of QDI asynchronous circuits. The DPA is then applied to the formal model in order to identify the source of leakage of this type of circuit. Finally, a complete design flow is specified to minimize the information leakage. The relevance and efficiency of the approach are demonstrated using the design of an AES crypto-processor.<|reference_end|>
arxiv
@article{bouesse2007dpa, title={DPA on quasi delay insensitive asynchronous circuits: formalization and improvement}, author={G.F. Bouesse (TIMA), M. Renaudin (TIMA), S. Dumont (TIMA), F. Germain}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, doi={10.1109/DATE.2005.124}, archivePrefix={arXiv}, eprint={0710.3443}, primaryClass={cs.AR} }
bouesse2007dpa
arxiv-1427
0710.3502
Using Synchronic and Diachronic Relations for Summarizing Multiple Documents Describing Evolving Events
<|reference_start|>Using Synchronic and Diachronic Relations for Summarizing Multiple Documents Describing Evolving Events: In this paper we present a fresh look at the problem of summarizing evolving events from multiple sources. After a discussion concerning the nature of evolving events we introduce a distinction between linearly and non-linearly evolving events. We then present a general methodology for the automatic creation of summaries from evolving events. At its heart lie the notions of Synchronic and Diachronic cross-document Relations (SDRs), whose aim is the identification of similarities and differences between sources, from a synchronic and diachronic perspective. SDRs do not connect documents or textual elements found therein, but structures one might call messages. Applying this methodology yields a set of messages and the relations (SDRs) connecting them, that is, a graph which we call a grid. We will show how such a grid can be considered as the starting point of a Natural Language Generation System. The methodology is evaluated in two case studies, one for linearly evolving events (descriptions of football matches) and another one for non-linearly evolving events (terrorist incidents involving hostages). In both cases we evaluate the results produced by our computational systems.<|reference_end|>
arxiv
@article{afantenos2007using, title={Using Synchronic and Diachronic Relations for Summarizing Multiple Documents Describing Evolving Events}, author={Stergos D. Afantenos, V. Karkaletsis, P. Stamatopoulos and C. Halatsis}, journal={arXiv preprint arXiv:0710.3502}, year={2007}, doi={10.1007/s10844-006-0025-9}, archivePrefix={arXiv}, eprint={0710.3502}, primaryClass={cs.CL cs.IR} }
afantenos2007using
arxiv-1428
0710.3519
P-matrix recognition is co-NP-complete
<|reference_start|>P-matrix recognition is co-NP-complete: This is a summary of the proof by G.E. Coxson that P-matrix recognition is co-NP-complete. The result follows by a reduction from the MAX CUT problem using results of S. Poljak and J. Rohn.<|reference_end|>
arxiv
@article{foniok2007p-matrix, title={P-matrix recognition is co-NP-complete}, author={Jan Foniok}, journal={arXiv preprint arXiv:0710.3519}, year={2007}, archivePrefix={arXiv}, eprint={0710.3519}, primaryClass={cs.CC} }
foniok2007p-matrix
arxiv-1429
0710.3535
JANUS: an FPGA-based System for High Performance Scientific Computing
<|reference_start|>JANUS: an FPGA-based System for High Performance Scientific Computing: This paper describes JANUS, a modular massively parallel and reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links. Processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular code structure, unconventional data manipulation instructions and not too large database size. We discuss the architecture of this configurable machine, and focus on its use on Monte Carlo simulations of statistical mechanics. On this class of applications JANUS achieves impressive performance: in some cases one JANUS processing element outperforms high-end PCs by a factor ~ 1000. We also discuss the role of JANUS in other classes of scientific applications.<|reference_end|>
arxiv
@article{belletti2007janus:, title={JANUS: an FPGA-based System for High Performance Scientific Computing}, author={F. Belletti, M. Cotallo, A. Cruz, L. A. Fern'andez, A. Gordillo, M. Guidetti, A. Maiorano, F. Mantovani, E. Marinari, V. Mart'in-Mayor, A. Mu~noz-Sudupe, D. Navarro, G. Parisi, S. P'erez-Gaviro, M. Rossi, J. J. Ruiz-Lorenzo, S. F. Schifano, D. Sciretti, A. Taranc'on, R. Tripiccione, J. L. Velasco}, journal={Computing in Science & Engineering 11 (2009 ) 48-58}, year={2007}, doi={10.1109/MCSE.2009.11}, archivePrefix={arXiv}, eprint={0710.3535}, primaryClass={cs.AR} }
belletti2007janus:
arxiv-1430
0710.3536
Common Beliefs and Public Announcements in Strategic Games with Arbitrary Strategy Sets
<|reference_start|>Common Beliefs and Public Announcements in Strategic Games with Arbitrary Strategy Sets: We provide an epistemic analysis of arbitrary strategic games based on possibility correspondences. We first establish a generic result that links true common beliefs (and, respectively, common knowledge) of players' rationality defined by means of `monotonic' properties, with the iterated elimination of strategies that do not satisfy these properties. It allows us to deduce the customary results concerned with true common beliefs of rationality and iterated elimination of strictly dominated strategies as simple corollaries. This approach relies on Tarski's Fixpoint Theorem. We also provide an axiomatic presentation of this generic result. This allows us to clarify the proof-theoretic principles assumed in players' reasoning. Finally, we provide an alternative characterization of the iterated elimination of strategies based on the concept of a public announcement. It applies to `global properties'. Both classes of properties include the notions of rationalizability and the iterated elimination of strictly dominated strategies.<|reference_end|>
arxiv
@article{apt2007common, title={Common Beliefs and Public Announcements in Strategic Games with Arbitrary Strategy Sets}, author={Krzysztof R. Apt and Jonathan A. Zvesper}, journal={arXiv preprint arXiv:0710.3536}, year={2007}, archivePrefix={arXiv}, eprint={0710.3536}, primaryClass={cs.GT} }
apt2007common
arxiv-1431
0710.3561
Stationary probability density of stochastic search processes in global optimization
<|reference_start|>Stationary probability density of stochastic search processes in global optimization: A method for the construction of approximate analytical expressions for the stationary marginal densities of general stochastic search processes is proposed. By the marginal densities, regions of the search space that with high probability contain the global optima can be readily defined. The density estimation procedure involves a controlled number of linear operations, with a computational cost per iteration that grows linearly with problem size.<|reference_end|>
arxiv
@article{berrones2007stationary, title={Stationary probability density of stochastic search processes in global optimization}, author={Arturo Berrones}, journal={J. Stat. Mech. (2008) P01013}, year={2007}, doi={10.1088/1742-5468/2008/01/P01013}, archivePrefix={arXiv}, eprint={0710.3561}, primaryClass={cs.AI cond-mat.stat-mech cs.NE} }
berrones2007stationary
arxiv-1432
0710.3603
On a Clique-Based Integer Programming Formulation of Vertex Colouring with Applications in Course Timetabling
<|reference_start|>On a Clique-Based Integer Programming Formulation of Vertex Colouring with Applications in Course Timetabling: Vertex colouring is a well-known problem in combinatorial optimisation, whose alternative integer programming formulations have recently attracted considerable attention. This paper briefly surveys seven known formulations of vertex colouring and introduces a formulation of vertex colouring using a suitable clique partition of the graph. This formulation is applicable in timetabling applications, where such a clique partition of the conflict graph is given implicitly. In contrast with some alternatives, the presented formulation can also be easily extended to accommodate complex performance indicators (``soft constraints'') imposed in a number of real-life course timetabling applications. Its performance depends on the quality of the clique partition, but encouraging empirical results for the Udine Course Timetabling problem are reported.<|reference_end|>
arxiv
@article{burke2007on, title={On a Clique-Based Integer Programming Formulation of Vertex Colouring with Applications in Course Timetabling}, author={Edmund K. Burke, Jakub Marecek, Andrew J. Parkes, and Hana Rudova}, journal={Annals of Operations Research (2010) 179(1), 105-130}, year={2007}, doi={10.1007/s10479-010-0716-z}, number={NOTTCS-TR-2007-10}, archivePrefix={arXiv}, eprint={0710.3603}, primaryClass={cs.DM cs.DS math.CO} }
burke2007on
arxiv-1433
0710.3621
Numerical removal of water-vapor effects from THz-TDS measurements
<|reference_start|>Numerical removal of water-vapor effects from THz-TDS measurements: One source of disturbance in a pulsed T-ray signal is attributed to ambient water vapor. Water molecules in the gas phase selectively absorb T-rays at discrete frequencies corresponding to their molecular rotational transitions. This results in prominent resonances spread over the T-ray spectrum, and in the time domain the T-ray signal is observed as fluctuations after the main pulse. These effects are generally undesired, since they may mask critical spectroscopic data. So, ambient water vapor is commonly removed from the T-ray path by using a closed chamber during the measurement. Yet, in some applications a closed chamber is not applicable. This situation, therefore, motivates the need for another method to reduce these unwanted artifacts. This paper presents a study on a computational means to address the problem. Initially, a complex frequency response of water vapor is modeled from a spectroscopic catalog. Using a deconvolution technique, together with fine tuning of the strength of each resonance, parts of the water-vapor response are removed from a measured T-ray signal, with minimal signal distortion.<|reference_end|>
arxiv
@article{withayachumnankul2007numerical, title={Numerical removal of water-vapor effects from THz-TDS measurements}, author={Withawat Withayachumnankul, Bernd M. Fischer, Samuel P. Mickan, Derek Abbott}, journal={Proceedings of the Royal Society A: Mathematical, Physical & Engineering Sciences, vol. 464, no. 2097, pp 2435-2456, 2008}, year={2007}, doi={10.1098/rspa.2007.0294}, archivePrefix={arXiv}, eprint={0710.3621}, primaryClass={cs.CE physics.comp-ph} }
withayachumnankul2007numerical
arxiv-1434
0710.3642
On the Complexity of Spill Everywhere under SSA Form
<|reference_start|>On the Complexity of Spill Everywhere under SSA Form: Compilation for embedded processors can be either aggressive (time-consuming cross-compilation) or just in time (embedded and usually dynamic). The heuristics used in dynamic compilation are highly constrained by limited resources, time and memory in particular. Recent results on the SSA form open promising directions for the design of new register allocation heuristics for embedded systems and especially for embedded compilation. In particular, heuristics based on tree scan with two separate phases -- one for spilling, then one for coloring/coalescing -- seem good candidates for designing memory-friendly, fast, and competitive register allocators. Still, also because of the side effect on power consumption, the minimization of load and store overhead (spilling problem) is an important issue. This paper provides an exhaustive study of the complexity of the ``spill everywhere'' problem in the context of the SSA form. Unfortunately, contrary to our initial hopes, many of the questions we raised lead to NP-completeness results. We identify some polynomial cases, but they are impractical in a JIT context. Nevertheless, they can give hints to simplify formulations for the design of aggressive allocators.<|reference_end|>
arxiv
@article{bouchez2007on, title={On the Complexity of Spill Everywhere under SSA Form}, author={Florent Bouchez (LIP), Alain Darte (LIP), Fabrice Rastello (LIP)}, journal={ACM SIGPLAN Notices Issue 7, Volume 42 (2007) 103 - 112}, year={2007}, doi={10.1145/1254766.1254782}, archivePrefix={arXiv}, eprint={0710.3642}, primaryClass={cs.DS cs.CC} }
bouchez2007on
arxiv-1435
0710.3757
Inferring the conditional mean
<|reference_start|>Inferring the conditional mean: Consider a stationary real-valued time series $\{X_n\}_{n=0}^{\infty}$ with a priori unknown distribution. The goal is to estimate the conditional expectation $E(X_{n+1}|X_0,..., X_n)$ based on the observations $(X_0,..., X_n)$ in a pointwise consistent way. It is well known that this is not possible at all values of $n$. We will estimate it along stopping times.<|reference_end|>
arxiv
@article{morvai2007inferring, title={Inferring the conditional mean}, author={Gusztav Morvai, Benjamin Weiss}, journal={Theory Stoch. Process. 11 (2005), no. 1-2, pp. 112--120}, year={2007}, archivePrefix={arXiv}, eprint={0710.3757}, primaryClass={math.PR cs.IT math.IT} }
morvai2007inferring
arxiv-1436
0710.3760
Guessing the output of a stationary binary time series
<|reference_start|>Guessing the output of a stationary binary time series: The forward prediction problem for a binary time series $\{X_n\}_{n=0}^{\infty}$ is to estimate the probability that $X_{n+1}=1$ based on the observations $X_i$, $0\le i\le n$ without prior knowledge of the distribution of the process $\{X_n\}$. It is known that this is not possible if one estimates at all values of $n$. We present a simple procedure which will attempt to make such a prediction infinitely often at carefully selected stopping times chosen by the algorithm. The growth rate of the stopping times is also exhibited.<|reference_end|>
arxiv
@article{morvai2007guessing, title={Guessing the output of a stationary binary time series}, author={Gusztav Morvai}, journal={Foundations of statistical inference (Shoresh, 2000), 207--215, Contrib. Statist., Physica, Heidelberg, 2003}, year={2007}, archivePrefix={arXiv}, eprint={0710.3760}, primaryClass={math.PR cs.IT math.IT} }
morvai2007guessing
arxiv-1437
0710.3764
Design of a Distributed Reachability Algorithm for Analysis of Linear Hybrid Automata
<|reference_start|>Design of a Distributed Reachability Algorithm for Analysis of Linear Hybrid Automata: This paper presents the design of a novel distributed algorithm d-IRA for the reachability analysis of linear hybrid automata. Recent work on iterative relaxation abstraction (IRA) is leveraged to distribute the computational problem among multiple computational nodes in a non-redundant manner by performing careful infeasibility analysis of linear programs corresponding to spurious counterexamples. The d-IRA algorithm is resistant to failure of multiple computational nodes. The experimental results provide promising evidence for the possible successful application of this technique.<|reference_end|>
arxiv
@article{jha2007design, title={Design of a Distributed Reachability Algorithm for Analysis of Linear Hybrid Automata}, author={Sumit Kumar Jha}, journal={arXiv preprint arXiv:0710.3764}, year={2007}, archivePrefix={arXiv}, eprint={0710.3764}, primaryClass={cs.LO} }
jha2007design
arxiv-1438
0710.3773
Limitations on intermittent forecasting
<|reference_start|>Limitations on intermittent forecasting: Bailey showed that the general pointwise forecasting for stationary and ergodic time series has a negative solution. However, it is known that for Markov chains the problem can be solved. Morvai showed that there is a stopping time sequence $\{\lambda_n\}$ such that $P(X_{\lambda_n+1}=1|X_0,...,X_{\lambda_n})$ can be estimated from samples $(X_0,...,X_{\lambda_n})$ such that the difference between the conditional probability and the estimate vanishes along these stopping times for all stationary and ergodic binary time series. We will show that it is not possible to estimate the above conditional probability along a stopping time sequence for all stationary and ergodic binary time series in a pointwise sense such that if the time series turns out to be a Markov chain, the predictor will predict eventually for all $n$.<|reference_end|>
arxiv
@article{morvai2007limitations, title={Limitations on intermittent forecasting}, author={Gusztav Morvai and Benjamin Weiss}, journal={Statist. Probab. Lett. 72 (2005), no. 4, 285--290}, year={2007}, archivePrefix={arXiv}, eprint={0710.3773}, primaryClass={math.PR cs.IT math.IT} }
morvai2007limitations
arxiv-1439
0710.3775
On classifying processes
<|reference_start|>On classifying processes: We prove several results concerning classifications, based on successive observations $(X_1,..., X_n)$ of an unknown stationary and ergodic process, for membership in a given class of processes, such as the class of all finite order Markov chains.<|reference_end|>
arxiv
@article{morvai2007on, title={On classifying processes}, author={Gusztav Morvai and Benjamin Weiss}, journal={Bernoulli 11 (2005), no. 3, pp. 523--532}, year={2007}, archivePrefix={arXiv}, eprint={0710.3775}, primaryClass={math.PR cs.IT math.IT} }
morvai2007on
arxiv-1440
0710.3777
A Deterministic Approach to Wireless Relay Networks
<|reference_start|>A Deterministic Approach to Wireless Relay Networks: We present a deterministic channel model which captures several key features of multiuser wireless communication. We consider a model for a wireless network with nodes connected by such deterministic channels, and present an exact characterization of the end-to-end capacity when there is a single source and a single destination and an arbitrary number of relay nodes. This result is a natural generalization of the max-flow min-cut theorem for wireline networks. Finally, to demonstrate the connections between the deterministic model and the Gaussian model, we look at two examples: the single-relay channel and the diamond network. We show that in each of these two examples, the capacity-achieving scheme in the corresponding deterministic model naturally suggests a scheme in the Gaussian model that is within 1 bit and 2 bits, respectively, of the cut-set upper bound, for all values of the channel gains. This is the first part of a two-part paper; the sequel [1] will focus on the proof of the max-flow min-cut theorem for a class of deterministic networks of which our model is a special case.<|reference_end|>
arxiv
@article{avestimehr2007a, title={A Deterministic Approach to Wireless Relay Networks}, author={A. S. Avestimehr, S. N. Diggavi, D. N. C. Tse}, journal={arXiv preprint arXiv:0710.3777}, year={2007}, archivePrefix={arXiv}, eprint={0710.3777}, primaryClass={cs.IT cs.DM math.IT math.PR} }
avestimehr2007a
arxiv-1441
0710.3779
Testing D-Sequences for their Randomness
<|reference_start|>Testing D-Sequences for their Randomness: This paper examines the randomness of d-sequences, which are decimal sequences to an arbitrary base. Our motivation is to check their suitability for application to cryptography, spread-spectrum systems, and use as pseudorandom sequences.<|reference_end|>
arxiv
@article{gangasani2007testing, title={Testing D-Sequences for their Randomness}, author={Sumanth Kumar Reddy Gangasani}, journal={arXiv preprint arXiv:0710.3779}, year={2007}, archivePrefix={arXiv}, eprint={0710.3779}, primaryClass={cs.CR} }
gangasani2007testing
arxiv-1442
0710.3781
Wireless Network Information Flow
<|reference_start|>Wireless Network Information Flow: We present an achievable rate for general deterministic relay networks, with broadcasting at the transmitters and interference at the receivers. In particular we show that if the optimizing distribution for the information-theoretic cut-set bound is a product distribution, then we have a complete characterization of the achievable rates for such networks. For linear deterministic finite-field models discussed in a companion paper [3], this is indeed the case, and we have a generalization of the celebrated max-flow min-cut theorem for such a network.<|reference_end|>
arxiv
@article{avestimehr2007wireless, title={Wireless Network Information Flow}, author={A. S. Avestimehr, S. N. Diggavi, D. N. C. Tse}, journal={arXiv preprint arXiv:0710.3781}, year={2007}, archivePrefix={arXiv}, eprint={0710.3781}, primaryClass={cs.IT cs.DM math.IT math.PR} }
avestimehr2007wireless
arxiv-1443
0710.3789
Frequency Analysis of Decoupling Capacitors for Three Voltage Supplies in SoC
<|reference_start|>Frequency Analysis of Decoupling Capacitors for Three Voltage Supplies in SoC: Reduction in power consumption has become a major design criterion in modern ICs. One such scheme to reduce power consumption by an IC is the use of multiple power supplies for critical and non-critical paths. To maintain the impedance of a power distribution system below a specified level, multiple decoupling capacitors are placed at different levels of the power grid hierarchy. This paper describes three-voltage-supply power distribution systems. The noise at one power supply can propagate to the other power supply, causing power and signal integrity problems in the overall system. Effects such as anti-resonance and remedies for these effects are studied. Impedance of the three-voltage-supply power distribution system is calculated in terms of an RLC model of the decoupling capacitors. Further, the obtained impedance depends on frequency; hence a brief frequency analysis of the impedance is performed.<|reference_end|>
arxiv
@article{abubakr2007frequency, title={Frequency Analysis of Decoupling Capacitors for Three Voltage Supplies in SoC}, author={Mohd Abubakr}, journal={arXiv preprint arXiv:0710.3789}, year={2007}, archivePrefix={arXiv}, eprint={0710.3789}, primaryClass={cs.AR} }
abubakr2007frequency
arxiv-1444
0710.3802
A Posteriori Equivalence: A New Perspective for Design of Optimal Channel Shortening Equalizers
<|reference_start|>A Posteriori Equivalence: A New Perspective for Design of Optimal Channel Shortening Equalizers: The problem of channel shortening equalization for optimal detection in ISI channels is considered. The problem is to choose a linear equalizer and a partial response target filter such that the combination produces the best detection performance. Instead of using the traditional approach of MMSE equalization, we directly seek all equalizer and target pairs that yield optimal detection performance in terms of the sequence or symbol error rate. This leads to a new notion of a posteriori equivalence between the equalized and target channels with a simple characterization in terms of their underlying probability distributions. Using this characterization we show the surprising existence of an infinite family of equalizer and target pairs for which any maximum a posteriori (MAP) based detector designed for the target channel is simultaneously MAP optimal for the equalized channel. For channels whose input symbols have equal energy, such as q-PSK, the MMSE equalizer designed with a monic target constraint yields a solution belonging to this optimal family of designs. Although these designs produce IIR target filters, the ideas are extended to design good FIR targets. For an arbitrary choice of target and equalizer, we derive an expression for the probability of sequence detection error. This expression is used to design optimal FIR targets and IIR equalizers and to quantify the FIR approximation penalty.<|reference_end|>
arxiv
@article{venkataramani2007a, title={A Posteriori Equivalence: A New Perspective for Design of Optimal Channel Shortening Equalizers}, author={Raman Venkataramani, M. Fatih Erden}, journal={arXiv preprint arXiv:0710.3802}, year={2007}, archivePrefix={arXiv}, eprint={0710.3802}, primaryClass={cs.IT math.IT} }
venkataramani2007a
arxiv-1445
0710.3804
Random subcubes as a toy model for constraint satisfaction problems
<|reference_start|>Random subcubes as a toy model for constraint satisfaction problems: We present an exactly solvable random-subcube model inspired by the structure of hard constraint satisfaction and optimization problems. Our model reproduces the structure of the solution space of the random k-satisfiability and k-coloring problems, and undergoes the same phase transitions as these problems. The comparison becomes quantitative in the large-k limit. Distance properties, as well as the x-satisfiability threshold, are studied. The model is also generalized to define a continuous energy landscape useful for studying several aspects of glassy dynamics.<|reference_end|>
arxiv
@article{mora2007random, title={Random subcubes as a toy model for constraint satisfaction problems}, author={Thierry Mora and Lenka Zdeborova}, journal={J. Stat. Phys. 131, n. 6 (2008), 1121-1138}, year={2007}, doi={10.1007/s10955-008-9543-x}, archivePrefix={arXiv}, eprint={0710.3804}, primaryClass={cs.CC cond-mat.dis-nn} }
mora2007random
arxiv-1446
0710.3817
A Note on Comparison of Error Correction Codes
<|reference_start|>A Note on Comparison of Error Correction Codes: The use of an error correction code in a given transmission channel can be regarded as a statistical experiment. Therefore, powerful results from the theory of comparison of experiments can be applied to compare the performance of different error correction codes. We present results on the comparison of block error correction codes using the representation of an error correction code as a linear experiment. In this case the code comparison is based on the Loewner matrix ordering of the respective code matrices. Next, we demonstrate a bit-error-rate code performance comparison based on the representation of the codes as dichotomies, in which case the comparison is based on the matrix majorization ordering of their respective equivalent code matrices.<|reference_end|>
arxiv
@article{djonin2007a, title={A Note on Comparison of Error Correction Codes}, author={Dejan V. Djonin}, journal={arXiv preprint arXiv:0710.3817}, year={2007}, archivePrefix={arXiv}, eprint={0710.3817}, primaryClass={cs.IT math.IT math.ST stat.TH} }
djonin2007a
arxiv-1447
0710.3824
Deterministic Secure Positioning in Wireless Sensor Networks
<|reference_start|>Deterministic Secure Positioning in Wireless Sensor Networks: Properly locating sensor nodes is an important building block for a large subset of wireless sensor network (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does \emph{not} rely on a subset of \emph{trusted} nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSNs. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most $\lfloor \frac{n}{2} \rfloor-2$ faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most $\lfloor \frac{n}{2} \rfloor - 3$ misbehaving sensors. On the negative side, we prove that no deterministic protocol can identify faking sensors if their number is $\lceil \frac{n}{2}\rceil -1$. Thus our scheme is almost optimal with respect to the number of faking sensors. We discuss the application of our technique to the trusted sensor model. More precisely, our results can be used to minimize the number of trusted sensors that are needed to defeat faking ones.<|reference_end|>
arxiv
@article{delaët2007deterministic, title={Deterministic Secure Positioning in Wireless Sensor Networks}, author={Sylvie Dela"et (LRI), Partha Sarathi Mandal (INRIA Futurs), Mariusz Rokicki (LRI), S'ebastien Tixeuil (INRIA Futurs, LIP6)}, journal={arXiv preprint arXiv:0710.3824}, year={2007}, archivePrefix={arXiv}, eprint={0710.3824}, primaryClass={cs.CR cs.DC cs.DS cs.NI} }
delaët2007deterministic
arxiv-1448
0710.3861
Optimal encoding on discrete lattice with translational invariant constrains using statistical algorithms
<|reference_start|>Optimal encoding on discrete lattice with translational invariant constrains using statistical algorithms: In this paper we present a methodology for encoding information in valuations of a discrete lattice subject to translationally invariant constraints, in an asymptotically optimal way. The method is based on finding a statistical description of such valuations and turning it into a statistical algorithm, which allows a valuation with given statistics to be constructed deterministically. Optimal statistics allow us to generate valuations with a uniform distribution, and in this way we attain the maximum information capacity. It will be shown that the optimum can be reached for one-dimensional models using the maximal entropy random walk, and that for the general case we can get practically as close to the capacity of the model as we want (found numerically: a loss of 10^{-10} bit/node for the Hard Square model). We also present a simpler alternative to arithmetic coding, which can additionally be used as a cryptosystem and a data correction method.<|reference_end|>
arxiv
@article{duda2007optimal, title={Optimal encoding on discrete lattice with translational invariant constrains using statistical algorithms}, author={Jarek Duda}, journal={arXiv preprint arXiv:0710.3861}, year={2007}, archivePrefix={arXiv}, eprint={0710.3861}, primaryClass={cs.IT math.IT} }
duda2007optimal
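The record above states that the optimum is reached for one-dimensional models by a maximal entropy random walk. Below is a minimal sketch of that construction for the simplest one-dimensional translationally invariant constraint (no two adjacent 1s), assuming the standard MERW formula built from the Perron eigenvector of the transfer matrix; the two-dimensional Hard Square model treated in the paper is not reproduced here.

```python
import numpy as np

# 1-D analogue of a hard constraint: binary sequences with no two adjacent 1s.
# A[i, j] = 1 iff symbol j may follow symbol i.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
lam = eigvals[k].real                    # Perron eigenvalue (golden ratio)
psi = np.abs(eigvecs[:, k].real)         # Perron eigenvector

# Maximal entropy random walk: P[i, j] = A[i, j] * psi[j] / (lam * psi[i]).
P = A * psi[np.newaxis, :] / (lam * psi[:, np.newaxis])

# Stationary distribution of the walk is proportional to psi_i^2.
pi = psi ** 2 / np.sum(psi ** 2)

# Entropy rate of the walk (bits/symbol) equals the constraint capacity log2(lam).
logP = np.log2(np.where(P > 0, P, 1.0))
H = -np.sum(pi[:, np.newaxis] * P * logP)
print(round(H, 6), round(np.log2(lam), 6))   # both ~0.694242
```

The walk's entropy rate matching log2 of the Perron eigenvalue is exactly the sense in which such a statistical algorithm reaches the capacity of a one-dimensional constraint.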
arxiv-1449
0710.3888
Cooperative Multi-Cell Networks: Impact of Limited-Capacity Backhaul and Inter-Users Links
<|reference_start|>Cooperative Multi-Cell Networks: Impact of Limited-Capacity Backhaul and Inter-Users Links: Cooperative technology is expected to have a great impact on the performance of cellular or, more generally, infrastructure networks. Both multicell processing (cooperation among base stations) and relaying (cooperation at the user level) are currently being investigated. In this presentation, recent results regarding the performance of multicell processing and user cooperation under the assumption of limited-capacity interbase station and inter-user links, respectively, are reviewed. The survey focuses on related results derived for non-fading uplink and downlink channels of simple cellular system models. The analytical treatment, facilitated by these simple setups, enhances the insight into the limitations imposed by limited-capacity constraints on the gains achievable by cooperative techniques.<|reference_end|>
arxiv
@article{shamai2007cooperative, title={Cooperative Multi-Cell Networks: Impact of Limited-Capacity Backhaul and Inter-Users Links}, author={Shlomo Shamai, Oren Somekh, Osvaldo Simeone, Amichai Sanderovich, Benjamin M. Zaidel, H. Vincent Poor}, journal={In the Proceedings of the Joint Workshop on Coding and Communications, Durnstein, Austria, Oct. 14-16 2007}, year={2007}, archivePrefix={arXiv}, eprint={0710.3888}, primaryClass={cs.IT math.IT} }
shamai2007cooperative
arxiv-1450
0710.3901
A recursive linear time modular decomposition algorithm via LexBFS
<|reference_start|>A recursive linear time modular decomposition algorithm via LexBFS: A module of a graph G is a set of vertices that have the same set of neighbours outside it. The modules of a graph form a so-called partitive family and can thereby be represented by a unique tree MD(G), called the modular decomposition tree. Motivated by the central role of modules in numerous algorithmic graph theory questions, the problem of efficiently computing MD(G) has been investigated since the early 70's. To date the best algorithms run in linear time but are all rather complicated. By combining previous algorithmic paradigms developed for the problem, we are able to present a simpler linear-time algorithm that relies on very simple data structures, namely slice decomposition and sequences of rooted ordered trees.<|reference_end|>
arxiv
@article{corneil2007a, title={A recursive linear time modular decomposition algorithm via LexBFS}, author={Derek Corneil and Michel Habib and Christophe Paul and Marc Tedder}, journal={arXiv preprint arXiv:0710.3901}, year={2007}, archivePrefix={arXiv}, eprint={0710.3901}, primaryClass={cs.DM} }
corneil2007a
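The record above builds its modular decomposition algorithm on LexBFS. A minimal partition-refinement sketch of LexBFS itself is given below; it runs in quadratic time, not the linear-time implementation with slice decomposition that the paper relies on, and the graph in the last two lines is a made-up example.

```python
def lex_bfs(adj):
    """Lexicographic breadth-first search by partition refinement.
    adj maps each vertex to the set of its neighbours."""
    order = []
    slices = [list(adj)]                  # unvisited vertices, grouped by label
    while slices:
        v = slices[0].pop(0)              # take a vertex from the first slice
        if not slices[0]:
            slices.pop(0)
        order.append(v)
        refined = []
        for s in slices:                  # split every slice: neighbours of v first
            inside = [u for u in s if u in adj[v]]
            outside = [u for u in s if u not in adj[v]]
            if inside:
                refined.append(inside)
            if outside:
                refined.append(outside)
        slices = refined
    return order

g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(lex_bfs(g))        # e.g. [1, 2, 3, 4]
```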
arxiv-1451
0710.3916
Optimized Design of Survivable MPLS over Optical Transport Networks Optical Switching and Networking
<|reference_start|>Optimized Design of Survivable MPLS over Optical Transport Networks Optical Switching and Networking: In this paper we study different options for the survivability implementation in MPLS over Optical Transport Networks in terms of network resource usage and configuration cost. We investigate two approaches to the survivability deployment: single layer and multilayer survivability and present various methods for spare capacity allocation (SCA) to reroute disrupted traffic. The comparative analysis shows the influence of the traffic granularity on the survivability cost: for high bandwidth LSPs, close to the optical channel capacity, the multilayer survivability outperforms the single layer one, whereas for low bandwidth LSPs the single layer survivability is more cost-efficient. For the multilayer survivability we demonstrate that by mapping efficiently the spare capacity of the MPLS layer onto the resources of the optical layer one can achieve up to 22% savings in the total configuration cost and up to 37% in the optical layer cost. Further savings (up to 9 %) in the wavelength use can be obtained with the integrated approach to network configuration over the sequential one, however, at the increase in the optimization problem complexity. These results are based on a cost model with actual technology pricing and were obtained for networks targeted to a nationwide coverage.<|reference_end|>
arxiv
@article{bigos2007optimized, title={Optimized Design of Survivable MPLS over Optical Transport Networks. Optical Switching and Networking}, author={Wojtek Bigos (IRISA), St'ephane Gosselin (IRISA), Bernard Cousin (IRISA), Morgane Le Foll (IRISA), Hisao Nakajima (IRISA)}, journal={Optical Switching and Networking 3, 3-4 (2006) 202-218}, year={2007}, doi={10.1016/j.osn.2006.08.001}, archivePrefix={arXiv}, eprint={0710.3916}, primaryClass={cs.NI cs.PF} }
bigos2007optimized
arxiv-1452
0710.3917
Heuristic Solution to Protect Communications in WDM Networks using P-cycles
<|reference_start|>Heuristic Solution to Protect Communications in WDM Networks using P-cycles: Optical WDM mesh networks are able to transport huge amounts of information. The use of such technology, however, poses the problem of protection against failures such as fibre cuts. One of the principal methods for link protection used in optical WDM networks is the pre-configured protection cycle (p-cycle). The major difficulty with this protection method lies in finding the optimal set of p-cycles that protects the network for a given distribution of working capacity. Existing heuristics generate a large set of candidate p-cycles, entirely independent of the network state, and then select from it a good subset of p-cycles to protect the network. In this paper, we propose a new p-cycle generation algorithm based on the incremental aggregation of the shortest cycles. Our generation of p-cycles depends on the state of the network, which enables us to choose an efficient set of p-cycles to protect it. The set of p-cycles that we generate is the final set that protects the network; in other words, our heuristic does not go through the additional step of p-cycle selection.<|reference_end|>
arxiv
@article{drid2007heuristic, title={Heuristic Solution to Protect Communications in WDM Networks using P-cycles}, author={Hamza Drid (IRISA), Bernard Cousin (IRISA), Miklos Molnar (IRISA)}, journal={Workshop on Traffic Engineering, Protection and Restoration for Futur Generation Internet, Oslo : Norv\`ege (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0710.3917}, primaryClass={cs.NI} }
drid2007heuristic
arxiv-1453
0710.3918
Dependable k-coverage algorithms for sensor networks
<|reference_start|>Dependable k-coverage algorithms for sensor networks: Redundant sensing capabilities are often required in sensor network applications for various reasons, e.g. robustness, fault tolerance, or increased accuracy. At the same time, high sensor redundancy offers the possibility of increasing network lifetime by scheduling sleep intervals for some sensors while still providing continuous service with the help of the remaining active sensors. In this paper centralized and distributed algorithms are proposed to solve the k-coverage sensing problem and maximize network lifetime. When physically possible, the proposed robust Controlled Greedy Sleep Algorithm provides guaranteed service independently of node and communication errors in the network. The performance of the algorithm is illustrated by simulation examples and compared with that of a random solution.<|reference_end|>
arxiv
@article{gyula2007dependable, title={Dependable k-coverage algorithms for sensor networks}, author={Simon Gyula (IRISA), Miklos Molnar (IRISA), Laszlo Gonczy (IRISA), Bernard Cousin (IRISA)}, journal={Dans Instrumentation and Measurement Technology Conference Proceedings - IEEE Instrumentation and Measurement Technology Conference, Varsovie : Pologne (2007)}, year={2007}, doi={10.1109/IMTC.2007.379153}, archivePrefix={arXiv}, eprint={0710.3918}, primaryClass={cs.NI} }
gyula2007dependable
arxiv-1454
0710.3928
Message passing for the coloring problem: Gallager meets Alon and Kahale
<|reference_start|>Message passing for the coloring problem: Gallager meets Alon and Kahale: Message passing algorithms are popular in many combinatorial optimization problems. For example, experimental results show that {\em survey propagation} (a certain message passing algorithm) is effective in finding proper $k$-colorings of random graphs in the near-threshold regime. In 1962 Gallager introduced the concept of Low Density Parity Check (LDPC) codes, and suggested a simple decoding algorithm based on message passing. In 1994 Alon and Kahale exhibited a coloring algorithm and proved its usefulness for finding a $k$-coloring of graphs drawn from a certain planted-solution distribution over $k$-colorable graphs. In this work we show an interpretation of Alon and Kahale's coloring algorithm in light of Gallager's decoding algorithm, thus showing a connection between the two problems - coloring and decoding. This also provides a rigorous evidence for the usefulness of the message passing paradigm for the graph coloring problem. Our techniques can be applied to several other combinatorial optimization problems and networking-related issues.<|reference_end|>
arxiv
@article{ben-shimon2007message, title={Message passing for the coloring problem: Gallager meets Alon and Kahale}, author={Sonny Ben-Shimon and Dan Vilenchik}, journal={DMTCS Proceedings of the 13th Annual Conference on Analysis of Algorithms (AofA'07), Juan-les-pins, France, 2007. pp. 217--226.}, year={2007}, archivePrefix={arXiv}, eprint={0710.3928}, primaryClass={math.CO cs.DM math.PR} }
ben-shimon2007message
arxiv-1455
0710.3955
On the Behavior of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability under General Transmission Conditions
<|reference_start|>On the Behavior of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability under General Transmission Conditions: The aim of this paper is threefold. First, it presents a multi-dimensional Markovian state transition model characterizing the behavior of the IEEE 802.11 protocol at the Medium Access Control layer, accounting for packet transmission failures due to channel errors and modeling both saturated and non-saturated traffic conditions. Second, it provides a throughput analysis of the IEEE 802.11 protocol at the data link layer in both saturated and non-saturated traffic conditions, taking into account the impact of both the physical propagation channel and multirate transmission in a Rayleigh fading environment. The general traffic model assumed is M/M/1/K. Finally, it shows that the throughput in non-saturated traffic conditions is a linear combination of two system parameters: the payload size and the packet rate, $\lambda^{(s)}$, of each contending station. The validity interval of the proposed model is also derived. Simulation results closely match the theoretical derivations, confirming the effectiveness of the proposed models.<|reference_end|>
arxiv
@article{daneshgaran2007on, title={On the Behavior of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability under General Transmission Conditions}, author={F. Daneshgaran, Massimiliano Laddomada, F. Mesiti, M. Mondin}, journal={arXiv preprint arXiv:0710.3955}, year={2007}, archivePrefix={arXiv}, eprint={0710.3955}, primaryClass={cs.NI cs.PF} }
daneshgaran2007on
arxiv-1456
0710.3961
On a New Type of Information Processing for Efficient Management of Complex Systems
<|reference_start|>On a New Type of Information Processing for Efficient Management of Complex Systems: It is a challenge to manage complex systems efficiently without confronting NP-hard problems. To address this situation we suggest using self-organization processes of prime integer relations for information processing. Self-organization processes of prime integer relations define correlation structures of a complex system and can be equivalently represented by transformations of two-dimensional geometrical patterns determining the dynamics of the system and revealing its structural complexity. Computational experiments raise the possibility of an optimality condition for complex systems, presenting the structural complexity of a system as a key to its optimization. From this perspective the optimization of a system could be all about controlling the structural complexity of the system to make it consistent with the structural complexity of the problem. The experiments also indicate that the performance of a complex system may behave as a concave function of the structural complexity. Therefore, once the structural complexity could be controlled as a single entity, the optimization of a complex system would potentially be reduced to a one-dimensional concave optimization, irrespective of the number of variables involved in its description. This might open a way to a new type of information processing for efficient management of complex systems.<|reference_end|>
arxiv
@article{korotkikh2007on, title={On a New Type of Information Processing for Efficient Management of Complex Systems}, author={Victor Korotkikh and Galina Korotkikh}, journal={arXiv preprint arXiv:0710.3961}, year={2007}, archivePrefix={arXiv}, eprint={0710.3961}, primaryClass={cs.CC} }
korotkikh2007on
arxiv-1457
0710.3974
Distributed source coding in dense sensor networks
<|reference_start|>Distributed source coding in dense sensor networks: We study the problem of the reconstruction of a Gaussian field defined in [0,1] using N sensors deployed at regular intervals. The goal is to quantify the total data rate required for the reconstruction of the field with a given mean square distortion. We consider a class of two-stage mechanisms which a) send information to allow the reconstruction of the sensors' samples with sufficient accuracy, and then b) use these reconstructions to estimate the entire field. To implement the first stage, the heavy correlation between the sensor samples suggests the use of distributed coding schemes to reduce the total rate. We demonstrate the existence of a distributed block coding scheme that achieves, for a given fidelity criterion for the reconstruction of the field, a total information rate that is bounded by a constant, independent of the number $N$ of sensors. The constant in general depends on the autocorrelation function of the field and the desired distortion criterion for the sensor samples. We then describe a scheme which can be implemented using only scalar quantizers at the sensors, without any use of distributed source coding, and which also achieves a total information rate that is a constant, independent of the number of sensors. While this scheme operates at a rate greater than the rate achievable through distributed coding and entails greater delay in reconstruction, its simplicity makes it attractive for implementation in sensor networks.<|reference_end|>
arxiv
@article{kashyap2007distributed, title={Distributed source coding in dense sensor networks}, author={Akshay Kashyap, Luis Alfonso Lastras-Monta~no, Cathy Xia, Zhen Liu}, journal={arXiv preprint arXiv:0710.3974}, year={2007}, archivePrefix={arXiv}, eprint={0710.3974}, primaryClass={cs.IT math.IT} }
kashyap2007distributed
arxiv-1458
0710.3979
Toward Trusted Sharing of Network Packet Traces Using Anonymization: Single-Field Privacy/Analysis Tradeoffs
<|reference_start|>Toward Trusted Sharing of Network Packet Traces Using Anonymization: Single-Field Privacy/Analysis Tradeoffs: Network data needs to be shared for distributed security analysis. Anonymization of network data for sharing sets up a fundamental tradeoff between privacy protection and security analysis capability. This privacy/analysis tradeoff has been acknowledged by many researchers, but this is the first paper to provide empirical measurements that characterize the tradeoff for an enterprise dataset. Specifically, we apply anonymization options to single fields within network packet traces and then make measurements using intrusion detection system alarms as a proxy for security analysis capability. Our results show: (1) two fields have a zero-sum tradeoff (more privacy lessens security analysis and vice versa) and (2) eight fields have a more complex tradeoff (that is not zero-sum) in which privacy and analysis can be accomplished simultaneously.<|reference_end|>
arxiv
@article{yurcik2007toward, title={Toward Trusted Sharing of Network Packet Traces Using Anonymization: Single-Field Privacy/Analysis Tradeoffs}, author={William Yurcik, Clay Woolam, Greg Hellings, Latifur Khan, Bhavani Thuraisingham}, journal={arXiv preprint arXiv:0710.3979}, year={2007}, archivePrefix={arXiv}, eprint={0710.3979}, primaryClass={cs.CR cs.NI} }
yurcik2007toward
arxiv-1459
0710.4031
On the critical exponent of generalized Thue-Morse words
<|reference_start|>On the critical exponent of generalized Thue-Morse words: For certain generalized Thue-Morse words t, we compute the "critical exponent", i.e., the supremum of the set of rational numbers that are exponents of powers in t, and determine exactly the occurrences of powers realizing it.<|reference_end|>
arxiv
@article{blondin-massé2007on, title={On the critical exponent of generalized Thue-Morse words}, author={Alexandre Blondin-Mass'e, Srecko Brlek, Amy Glen, S'ebastien Labb'e}, journal={Discrete Mathematics and Theoretical Computer Science 9 (2007) 293-304}, year={2007}, archivePrefix={arXiv}, eprint={0710.4031}, primaryClass={math.CO cs.DM} }
blondin-massé2007on
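For the classical binary Thue-Morse word, a special case of the generalized words treated in the record above, the critical exponent is known to be 2. The brute-force sketch below estimates the largest exponent of a factor occurring in a finite prefix; it only illustrates the definition and is not the paper's computation for generalized Thue-Morse words.

```python
def thue_morse(n):
    """First n letters of the classical binary Thue-Morse word."""
    return [bin(i).count("1") % 2 for i in range(n)]

def max_exponent(w):
    """Largest exponent (length/period) over all factors of w, found by
    scanning runs of positions i with w[i] == w[i + p] for each period p."""
    best = 0.0
    for p in range(1, len(w) // 2 + 1):
        run = 0
        for i in range(len(w) - p):
            if w[i] == w[i + p]:
                run += 1
                best = max(best, (run + p) / p)
            else:
                run = 0
    return best

# The classical Thue-Morse word is overlap-free: no factor has exponent > 2,
# while squares (exponent exactly 2) do occur, so the value printed is 2.0.
print(max_exponent(thue_morse(2000)))
```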
arxiv-1460
0710.4046
Bit-interleaved coded modulation in the wideband regime
<|reference_start|>Bit-interleaved coded modulation in the wideband regime: The wideband regime of bit-interleaved coded modulation (BICM) in Gaussian channels is studied. The Taylor expansion of the coded modulation capacity for generic signal constellations at low signal-to-noise ratio (SNR) is derived and used to determine the corresponding expansion for the BICM capacity. Simple formulas for the minimum energy per bit and the wideband slope are given. BICM is found to be suboptimal in the sense that its minimum energy per bit can be larger than the corresponding value for coded modulation schemes. The minimum energy per bit using standard Gray mapping on M-PAM or M^2-QAM is given by a simple formula and shown to approach -0.34 dB as M increases. Using the low SNR expansion, a general trade-off between power and bandwidth in the wideband regime is used to show how a power loss can be traded off against a bandwidth gain.<|reference_end|>
arxiv
@article{martinez2007bit-interleaved, title={Bit-interleaved coded modulation in the wideband regime}, author={Alfonso Martinez, Albert Guillen i Fabregas, Giuseppe Caire, and Frans Willems}, journal={arXiv preprint arXiv:0710.4046}, year={2007}, archivePrefix={arXiv}, eprint={0710.4046}, primaryClass={cs.IT math.IT} }
martinez2007bit-interleaved
arxiv-1461
0710.4051
On the capacity achieving covariance matrix for Rician MIMO channels: an asymptotic approach
<|reference_start|>On the capacity achieving covariance matrix for Rician MIMO channels: an asymptotic approach: The capacity-achieving input covariance matrices for coherent block-fading correlated MIMO Rician channels are determined. In this case, no closed-form expressions for the eigenvectors of the optimum input covariance matrix are available. An approximation of the average mutual information is evaluated in this paper in the asymptotic regime where the number of transmit and receive antennas converge to $+\infty$. New results related to the accuracy of the corresponding large system approximation are provided. An attractive optimization algorithm of this approximation is proposed and we establish that it yields an effective way to compute the capacity achieving covariance matrix for the average mutual information. Finally, numerical simulation results show that, even for a moderate number of transmit and receive antennas, the new approach provides the same results as direct maximization approaches of the average mutual information, while being much more computationally attractive.<|reference_end|>
arxiv
@article{dumont2007on, title={On the capacity achieving covariance matrix for Rician MIMO channels: an asymptotic approach}, author={Julien Dumont (IGM-LabInfo), W. Hachem (LTCI), Samson Lasaulce, Philippe Loubaton (IGM-LabInfo), Jamal Najim (LTCI)}, journal={IEEE Transactions on Information Theory 56, 3 (2010) 1048--1069}, year={2007}, archivePrefix={arXiv}, eprint={0710.4051}, primaryClass={math.PR cs.IT math.IT} }
dumont2007on
arxiv-1462
0710.4076
Some information-theoretic computations related to the distribution of prime numbers
<|reference_start|>Some information-theoretic computations related to the distribution of prime numbers: We illustrate how elementary information-theoretic ideas may be employed to provide proofs for well-known, nontrivial results in number theory. Specifically, we give an elementary and fairly short proof of the following asymptotic result: The sum of (log p)/p, taken over all primes p not exceeding n, is asymptotic to log n as n tends to infinity. We also give finite-n bounds refining the above limit. This result, originally proved by Chebyshev in 1852, is closely related to the celebrated prime number theorem.<|reference_end|>
arxiv
@article{kontoyiannis2007some, title={Some information-theoretic computations related to the distribution of prime numbers}, author={Ioannis Kontoyiannis}, journal={arXiv preprint arXiv:0710.4076}, year={2007}, archivePrefix={arXiv}, eprint={0710.4076}, primaryClass={cs.IT math.IT math.NT math.PR} }
kontoyiannis2007some
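A quick numerical check of the asymptotic stated in the record above, namely that the sum over primes p <= n of (log p)/p behaves like log n. The sieve is a plain Eratosthenes sieve and the cutoffs are arbitrary.

```python
from math import log

def primes_up_to(n):
    """Plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return (p for p in range(2, n + 1) if sieve[p])

# Chebyshev-type asymptotic from the note: sum_{p <= n} log(p)/p ~ log(n).
for n in (10 ** 3, 10 ** 5, 10 ** 7):
    s = sum(log(p) / p for p in primes_up_to(n))
    print(n, round(s, 4), round(log(n), 4), round(s - log(n), 4))
```

The difference between the sum and log n stays bounded as n grows, which is the behavior the note's finite-n bounds quantify.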
arxiv-1463
0710.4105
A Note on the Secrecy Capacity of the Multi-antenna Wiretap Channel
<|reference_start|>A Note on the Secrecy Capacity of the Multi-antenna Wiretap Channel: Recently, the secrecy capacity of the multi-antenna wiretap channel was characterized by Khisti and Wornell [1] using a Sato-like argument. This note presents an alternative characterization using a channel enhancement argument. This characterization relies on an extremal entropy inequality recently proved in the context of multi-antenna broadcast channels, and is directly built on the physical intuition regarding to the optimal transmission strategy in this communication scenario.<|reference_end|>
arxiv
@article{liu2007a, title={A Note on the Secrecy Capacity of the Multi-antenna Wiretap Channel}, author={Tie Liu and Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:0710.4105}, year={2007}, archivePrefix={arXiv}, eprint={0710.4105}, primaryClass={cs.IT math.IT} }
liu2007a
arxiv-1464
0710.4117
From the entropy to the statistical structure of spike trains
<|reference_start|>From the entropy to the statistical structure of spike trains: We use statistical estimates of the entropy rate of spike train data in order to make inferences about the underlying structure of the spike train itself. We first examine a number of different parametric and nonparametric estimators (some known and some new), including the ``plug-in'' method, several versions of Lempel-Ziv-based compression algorithms, a maximum likelihood estimator tailored to renewal processes, and the natural estimator derived from the Context-Tree Weighting method (CTW). The theoretical properties of these estimators are examined, several new theoretical results are developed, and all estimators are systematically applied to various types of synthetic data and under different conditions. Our main focus is on the performance of these entropy estimators on the (binary) spike trains of 28 neurons recorded simultaneously for a one-hour period from the primary motor and dorsal premotor cortices of a monkey. We show how the entropy estimates can be used to test for the existence of long-term structure in the data, and we construct a hypothesis test for whether the renewal process model is appropriate for these spike trains. Further, by applying the CTW algorithm we derive the maximum a posteriori (MAP) tree model of our empirical data, and comment on the underlying structure it reveals.<|reference_end|>
arxiv
@article{gao2007from, title={From the entropy to the statistical structure of spike trains}, author={Yun Gao, Ioannis Kontoyiannis and Elie Bienenstock}, journal={In Proceedings of the 2006 International Symposium on Information Theory, Seattle, WA, July 2006}, year={2007}, archivePrefix={arXiv}, eprint={0710.4117}, primaryClass={q-bio.NC cs.IT math.IT math.PR stat.AP} }
gao2007from
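The plug-in method is one of the entropy estimators examined in the record above. A minimal sketch of a plug-in entropy-rate estimate on a synthetic binary "spike train" follows; the Bernoulli(0.1) data and the block lengths are arbitrary choices, and the CTW and Lempel-Ziv estimators from the paper are not shown.

```python
import numpy as np
from collections import Counter

def plugin_entropy_rate(spikes, block_len):
    """Plug-in (empirical) estimate of the entropy rate, in bits per bin,
    from overlapping blocks of length block_len of a binary spike train."""
    blocks = [tuple(spikes[i:i + block_len])
              for i in range(len(spikes) - block_len + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    block_entropy = -np.sum(probs * np.log2(probs))
    return block_entropy / block_len

# Hypothetical spike train: independent Bernoulli(0.1) bins, whose true
# entropy rate is the binary entropy H(0.1) ~ 0.469 bits/bin.
rng = np.random.default_rng(0)
spikes = (rng.random(200_000) < 0.1).astype(int)
for L in (1, 2, 4, 8):
    print(L, round(plugin_entropy_rate(spikes, L), 4))
```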
arxiv-1465
0710.4180
A quick search method for audio signals based on a piecewise linear representation of feature trajectories
<|reference_start|>A quick search method for audio signals based on a piecewise linear representation of feature trajectories: This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Lo\`{e}ve (KL) transform. The proposed search method guarantees the same search results as the search method without the proposed feature-dimension reduction method in principle. Experiment results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12 that of previous methods and detected queries in approximately 0.3 seconds from a 200-hour audio database.<|reference_end|>
arxiv
@article{kimura2007a, title={A quick search method for audio signals based on a piecewise linear representation of feature trajectories}, author={Akisato Kimura, Kunio Kashino, Takayuki Kurozumi, Hiroshi Murase}, journal={IEEE Transactions on Audio, Speech and Language Processing, Vol.16, No.2, pp.396-407, February 2008.}, year={2007}, doi={10.1109/TASL.2007.912362}, archivePrefix={arXiv}, eprint={0710.4180}, primaryClass={cs.MM cs.DB} }
kimura2007a
arxiv-1466
0710.4182
Beyond Feedforward Models Trained by Backpropagation: a Practical Training Tool for a More Efficient Universal Approximator
<|reference_start|>Beyond Feedforward Models Trained by Backpropagation: a Practical Training Tool for a More Efficient Universal Approximator: The Cellular Simultaneous Recurrent Neural Network (SRN) has been shown to be a function approximator more powerful than the MLP. This means that the complexity of the MLP would be prohibitively large for some problems, while the SRN could realize the desired mapping within acceptable computational constraints. The speed of training of complex recurrent networks is crucial to their successful application. The present work improves on previous results by training the network with an extended Kalman filter (EKF). We implemented a generic Cellular SRN and applied it to solving two challenging problems: 2D maze navigation and a subset of the connectedness problem. The speed of convergence has been improved by several orders of magnitude in comparison with the earlier results in the case of maze navigation, and superior generalization has been demonstrated in the case of connectedness. The implications of these improvements are discussed.<|reference_end|>
arxiv
@article{ilin2007beyond, title={Beyond Feedforward Models Trained by Backpropagation: a Practical Training Tool for a More Efficient Universal Approximator}, author={Roman Ilin, Robert Kozma, Paul J. Werbos}, journal={arXiv preprint arXiv:0710.4182}, year={2007}, archivePrefix={arXiv}, eprint={0710.4182}, primaryClass={cs.NE} }
ilin2007beyond
arxiv-1467
0710.4187
Universal coding for correlated sources with complementary delivery
<|reference_start|>Universal coding for correlated sources with complementary delivery: This paper deals with a universal coding problem for a certain kind of multiterminal source coding system that we call the complementary delivery coding system. In this system, messages from two correlated sources are jointly encoded, and each decoder has access to one of the two messages to enable it to reproduce the other message. Both fixed-to-fixed length and fixed-to-variable length lossless coding schemes are considered. Explicit constructions of universal codes and bounds of the error probabilities are clarified via type-theoretical and graph-theoretical analyses. Keywords: multiterminal source coding, complementary delivery, universal coding, types of sequences, bipartite graphs<|reference_end|>
arxiv
@article{kimura2007universal, title={Universal coding for correlated sources with complementary delivery}, author={Akisato Kimura, Tomohiko Uyematsu, Shigeaki Kuzuoka}, journal={IEICE Transactions on Fundamentals, Vol.E90-A, No.9, pp.1840-1847, September 2007}, year={2007}, doi={10.1093/ietfec/e90-a.9.1840}, archivePrefix={arXiv}, eprint={0710.4187}, primaryClass={cs.IT math.IT} }
kimura2007universal
arxiv-1468
0710.4231
Analyzing covert social network foundation behind terrorism disaster
<|reference_start|>Analyzing covert social network foundation behind terrorism disaster: This paper presents a method for analyzing the covert social network foundation hidden behind a terrorism disaster. The method solves a node discovery problem: discovering a node that plays a relevant role in a social network but has escaped the monitoring of the presence and mutual relationships of nodes. It aims at integrating the expert investigator's prior understanding, insight into the nature of terrorists' social networks derived from complex graph theory, and computational data processing. The social network responsible for the 9/11 attack in 2001 is used in a simulation experiment to evaluate the performance of the method.<|reference_end|>
arxiv
@article{maeno2007analyzing, title={Analyzing covert social network foundation behind terrorism disaster}, author={Yoshiharu Maeno, and Yukio Ohsawa}, journal={International Journal of Services Sciences Vol.2, pp.125-141 (2009)}, year={2007}, doi={10.1504/IJSSCI.2009.024936}, archivePrefix={arXiv}, eprint={0710.4231}, primaryClass={cs.AI} }
maeno2007analyzing
arxiv-1469
0710.4255
Analysis of a Mixed Strategy for Multiple Relay Networks
<|reference_start|>Analysis of a Mixed Strategy for Multiple Relay Networks: In their landmark paper, Cover and El Gamal proposed different coding strategies for the relay channel with a single relay supporting a communication pair. These strategies are the decode-and-forward and compress-and-forward approaches, as well as a general lower bound on the capacity of a relay network which relies on the mixed application of the previous two strategies. So far, only parts of their work - the decode-and-forward and the compress-and-forward strategies - have been applied to networks with multiple relays. This paper derives a mixed strategy for multiple relay networks using a combined approach of partial decode-and-forward with N+1 levels and the ideas of successive refinement with different side information at the receivers. After describing the protocol structure, we present the achievable rates for the discrete memoryless relay channel as well as Gaussian multiple relay networks. Using these results we compare the mixed strategy with some special cases, e.g., multilevel decode-and-forward, distributed compress-and-forward, and a mixed approach where one relay node operates in decode-and-forward and the other in compress-and-forward mode.<|reference_end|>
arxiv
@article{rost2007analysis, title={Analysis of a Mixed Strategy for Multiple Relay Networks}, author={P. Rost and G. Fettweis}, journal={arXiv preprint arXiv:0710.4255}, year={2007}, archivePrefix={arXiv}, eprint={0710.4255}, primaryClass={cs.IT math.IT} }
rost2007analysis
arxiv-1470
0710.4261
Survivable MPLS Over Optical Transport Networks: Cost and Resource Usage Analysis
<|reference_start|>Survivable MPLS Over Optical Transport Networks: Cost and Resource Usage Analysis: In this paper we study different options for the survivability implementation in MPLS over Optical Transport Networks (OTN) in terms of network resource usage and configuration cost. We investigate two approaches to the survivability deployment: single layer and multilayer survivability and present various methods for spare capacity allocation (SCA) to reroute disrupted traffic. The comparative analysis shows the influence of the offered traffic granularity and the physical network structure on the survivability cost: for high bandwidth LSPs, close to the optical channel capacity, the multilayer survivability outperforms the single layer one, whereas for low bandwidth LSPs the single layer survivability is more cost-efficient. On the other hand, sparse networks of low connectivity parameter use more wavelengths for optical path routing and increase the configuration cost, as compared with dense networks. We demonstrate that by mapping efficiently the spare capacity of the MPLS layer onto the resources of the optical layer one can achieve up to 22% savings in the total configuration cost and up to 37% in the optical layer cost. Further savings (up to 9 %) in the wavelength use can be obtained with the integrated approach to network configuration over the sequential one, however, at the increase in the optimization problem complexity. These results are based on a cost model with different cost variations, and were obtained for networks targeted to a nationwide coverage.<|reference_end|>
arxiv
@article{bigos2007survivable, title={Survivable MPLS Over Optical Transport Networks: Cost and Resource Usage Analysis}, author={Wojtek Bigos (FT R&D), Bernard Cousin (IRISA), St'ephane Gosselin (FT R&D), Morgane Le Foll (FT R&D), Hisao Nakajima (FT R&D)}, journal={IEEE Journal on Selected Areas in Communications 25, 5 (2007) 949-962}, year={2007}, doi={10.1109/JSAC.2007.070608}, archivePrefix={arXiv}, eprint={0710.4261}, primaryClass={cs.NI cs.PF} }
bigos2007survivable
arxiv-1471
0710.4272
An approximation trichotomy for Boolean #CSP
<|reference_start|>An approximation trichotomy for Boolean #CSP: We give a trichotomy theorem for the complexity of approximately counting the number of satisfying assignments of a Boolean CSP instance. Such problems are parameterised by a constraint language specifying the relations that may be used in constraints. If every relation in the constraint language is affine then the number of satisfying assignments can be exactly counted in polynomial time. Otherwise, if every relation in the constraint language is in the co-clone IM_2 from Post's lattice, then the problem of counting satisfying assignments is complete with respect to approximation-preserving reductions in the complexity class #RH\Pi_1. This means that the problem of approximately counting satisfying assignments of such a CSP instance is equivalent in complexity to several other known counting problems, including the problem of approximately counting the number of independent sets in a bipartite graph. For every other fixed constraint language, the problem is complete for #P with respect to approximation-preserving reductions, meaning that there is no fully polynomial randomised approximation scheme for counting satisfying assignments unless NP=RP.<|reference_end|>
arxiv
@article{dyer2007an, title={An approximation trichotomy for Boolean #CSP}, author={Martin Dyer, Leslie Ann Goldberg and Mark Jerrum}, journal={arXiv preprint arXiv:0710.4272}, year={2007}, archivePrefix={arXiv}, eprint={0710.4272}, primaryClass={cs.CC} }
dyer2007an
arxiv-1472
0710.4318
Differential invariants of a Lie group action: syzygies on a generating set
<|reference_start|>Differential invariants of a Lie group action: syzygies on a generating set: Given a group action, known by its infinitesimal generators, we exhibit a complete set of syzygies on a generating set of differential invariants. For that we elaborate on the reinterpretation of Cartan's moving frame by Fels and Olver (1999). This provides constructive tools for exploring algebras of differential invariants.<|reference_end|>
arxiv
@article{hubert2007differential, title={Differential invariants of a Lie group action: syzygies on a generating set}, author={Evelyne Hubert}, journal={arXiv preprint arXiv:0710.4318}, year={2007}, doi={10.1016/j.jsc.2008.08.003}, archivePrefix={arXiv}, eprint={0710.4318}, primaryClass={cs.SC math.DG} }
hubert2007differential
arxiv-1473
0710.4410
A Multi-level Blocking Distinct Degree Factorization Algorithm
<|reference_start|>A Multi-level Blocking Distinct Degree Factorization Algorithm: We give a new algorithm for performing the distinct-degree factorization of a polynomial P(x) over GF(2), using a multi-level blocking strategy. The coarsest level of blocking replaces GCD computations by multiplications, as suggested by Pollard (1975), von zur Gathen and Shoup (1992), and others. The novelty of our approach is that a finer level of blocking replaces multiplications by squarings, which speeds up the computation in GF(2)[x]/P(x) of certain interval polynomials when P(x) is sparse. As an application we give a fast algorithm to search for all irreducible trinomials x^r + x^s + 1 of degree r over GF(2), while producing a certificate that can be checked in less time than the full search. Naive algorithms cost O(r^2) per trinomial, thus O(r^3) to search over all trinomials of given degree r. Under a plausible assumption about the distribution of factors of trinomials, the new algorithm has complexity O(r^2 (log r)^{3/2}(log log r)^{1/2}) for the search over all trinomials of degree r. Our implementation achieves a speedup of greater than a factor of 560 over the naive algorithm in the case r = 24036583 (a Mersenne exponent). Using our program, we have found two new primitive trinomials of degree 24036583 over GF(2) (the previous record degree was 6972593).<|reference_end|>
arxiv
@article{brent2007a, title={A Multi-level Blocking Distinct Degree Factorization Algorithm}, author={Richard Brent, Paul Zimmermann (INRIA Lorraine - LORIA)}, journal={Contemporary Mathematics 461 (2008) 47-58}, year={2007}, number={INRIA Tech. Report RR-6331, Oct. 2007}, archivePrefix={arXiv}, eprint={0710.4410}, primaryClass={cs.DS} }
brent2007a
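A small sketch of the squaring arithmetic over GF(2) that the record above builds on: polynomials are encoded as Python integers, and r modular squarings check the classical necessary condition x^(2^r) = x (mod x^r + x^s + 1). This is not the paper's multi-level blocking algorithm, and a full irreducibility test needs additional gcd checks beyond this condition.

```python
def pmul(a, b):
    """Carry-less product of GF(2)[x] polynomials encoded as Python ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    """Remainder of a modulo m in GF(2)[x]."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def passes_squaring_test(r, s):
    """Necessary condition for x^r + x^s + 1 to be irreducible over GF(2):
    x^(2^r) must reduce to x modulo the trinomial."""
    P = (1 << r) | (1 << s) | 1
    t = 2                            # the polynomial x
    for _ in range(r):
        t = pmod(pmul(t, t), P)      # one squaring step modulo P
    return t == 2

# x^3 + x + 1 is irreducible; x^4 + x^2 + 1 = (x^2 + x + 1)^2 is not.
print(passes_squaring_test(3, 1), passes_squaring_test(4, 2))   # True False
```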
arxiv-1474
0710.4486
Non-linear estimation is easy
<|reference_start|>Non-linear estimation is easy: Non-linear state estimation and some related topics, like parametric estimation, fault diagnosis, and perturbation attenuation, are tackled here via a new methodology in numerical differentiation. The corresponding basic system-theoretic definitions and properties are presented within the framework of differential algebra, which makes it possible to handle system variables and their derivatives of any order. Several academic examples and their computer simulations, with on-line estimations, illustrate our viewpoint.<|reference_end|>
arxiv
@article{fliess2007non-linear, title={Non-linear estimation is easy}, author={Michel Fliess (INRIA Futurs), C'edric Join (INRIA Futurs, CRAN), Hebertt Sira-Ramirez}, journal={Int. J. Modelling Identification and Control 4, 1 (2008) 12-27}, year={2007}, doi={10.1504/IJMIC.2008.020996}, archivePrefix={arXiv}, eprint={0710.4486}, primaryClass={cs.CE cs.NA cs.PF math.AC math.NA math.OC} }
fliess2007non-linear
arxiv-1475
0710.4499
Remarks on Jurdzinski and Lorys' proof that palindromes are not a Church-Rosser language
<|reference_start|>Remarks on Jurdzinski and Lorys' proof that palindromes are not a Church-Rosser language: In 2002 Jurdzinski and Lorys settled a long-standing conjecture that palindromes are not a Church-Rosser language. Their proof required a sophisticated theory about computation graphs of 2-stack automata. We present their proof in terms of 1-tape Turing machines. We also provide an alternative proof of Buntrock and Otto's result that the set of non-square bitstrings, which is context-free, is not Church-Rosser.<|reference_end|>
arxiv
@article{dunlaing2007remarks, title={Remarks on Jurdzinski and Lorys' proof that palindromes are not a Church-Rosser language}, author={Colm O. Dunlaing and Natalie Schluter}, journal={arXiv preprint arXiv:0710.4499}, year={2007}, number={TCDMATH 07-10}, archivePrefix={arXiv}, eprint={0710.4499}, primaryClass={cs.LO} }
dunlaing2007remarks
arxiv-1476
0710.4508
A Numerical Algorithm for Zero Counting I: Complexity and Accuracy
<|reference_start|>A Numerical Algorithm for Zero Counting I: Complexity and Accuracy: We describe an algorithm to count the number of distinct real zeros of a polynomial (square) system f. The algorithm performs O(n D kappa(f)) iterations where n is the number of polynomials (as well as the dimension of the ambient space), D is a bound on the polynomials' degree, and kappa(f) is a condition number for the system. Each iteration uses an exponential number of operations. The algorithm uses finite-precision arithmetic and a polynomial bound for the precision required to ensure the returned output is correct is exhibited. This bound is a major feature of our algorithm since it is in contrast with the exponential precision required by the existing (symbolic) algorithms for counting real zeros. The algorithm parallelizes well in the sense that each iteration can be computed in parallel polynomial time with an exponential number of processors.<|reference_end|>
arxiv
@article{cucker2007a, title={A Numerical Algorithm for Zero Counting. I: Complexity and Accuracy}, author={Felipe Cucker, Teresa Krick, Gregorio Malajovich and Mario Wschebor}, journal={Journal of Complexity 24 Issues 5-6, pp 582-605 (Oct-Dec 2008)}, year={2007}, doi={10.1016/j.jco.2008.03.001}, archivePrefix={arXiv}, eprint={0710.4508}, primaryClass={cs.CC cs.NA cs.SC math.NA} }
cucker2007a
arxiv-1477
0710.4516
The predictability of letters in written english
<|reference_start|>The predictability of letters in written english: We show that the predictability of letters in written English texts depends strongly on their position in the word. The first letters are usually the least easy to predict. This agrees with the intuitive notion that words are well defined subunits in written languages, with much weaker correlations across these units than within them. It implies that the average entropy of a letter deep inside a word is roughly 4 times smaller than the entropy of the first letter.<|reference_end|>
arxiv
@article{schürmann2007the, title={The predictability of letters in written english}, author={Thomas Sch"urmann and Peter Grassberger}, journal={Fractals, Vol. 4, No. 1 (1996) 1-5}, year={2007}, doi={10.1142/S0218348X96000029}, archivePrefix={arXiv}, eprint={0710.4516}, primaryClass={physics.soc-ph cs.CL stat.ML} }
schürmann2007the
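A toy illustration related to the record above: tabulating the empirical entropy of the letter found at each within-word position. Note that the paper's statement concerns conditional entropies (predictability given the preceding text) estimated from large samples; the unconditional per-position entropies below, computed on a made-up word list, only illustrate the bookkeeping.

```python
from collections import Counter
from math import log2

def positional_entropies(words, max_pos=5):
    """Empirical Shannon entropy (bits) of the letter occurring at each
    position within a word, estimated from a list of words."""
    out = []
    for pos in range(max_pos):
        counts = Counter(w[pos] for w in words if len(w) > pos)
        total = sum(counts.values())
        h = -sum((c / total) * log2(c / total) for c in counts.values())
        out.append(round(h, 3))
    return out

# Hypothetical toy corpus; a real experiment would use a large text sample.
corpus = ("the quick brown fox jumps over the lazy dog and then runs back "
          "into the quiet forest while the tired hunter waits").split()
print(positional_entropies(corpus))
```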
arxiv-1478
0710.4596
Discrete differential geometry of tetrahedrons and encoding of local protein structure
<|reference_start|>Discrete differential geometry of tetrahedrons and encoding of local protein structure: Local structure analysis is informative for protein structure analysis and has been used successfully in protein structure prediction, among other applications. Proteins have recurring structural features, such as helix caps and beta turns, which often have strong amino acid sequence preferences. The challenges for local structure analysis have been the identification and assignment of such common short structural motifs. This paper proposes a new mathematical framework that can be applied to the analysis of the local structure of proteins, where local conformations of protein backbones are described using the differential geometry of folded tetrahedron sequences. Using the framework, we can capture the recurring structural features without any structural templates, which makes local structure analysis not only simpler but also more objective. Programs and examples are available from http://www.genocript.com .<|reference_end|>
arxiv
@article{morikawa2007discrete, title={Discrete differential geometry of tetrahedrons and encoding of local protein structure}, author={Naoto Morikawa}, journal={arXiv preprint arXiv:0710.4596}, year={2007}, archivePrefix={arXiv}, eprint={0710.4596}, primaryClass={math.CO cs.CG math.MG q-bio.BM} }
morikawa2007discrete
arxiv-1479
0710.4629
Space-Efficient Bounded Model Checking
<|reference_start|>Space-Efficient Bounded Model Checking: Current algorithms for bounded model checking use SAT methods for checking satisfiability of Boolean formulae. These methods suffer from the potential memory explosion problem. Methods based on the validity of Quantified Boolean Formulae (QBF) allow an exponentially more succinct representation of formulae to be checked, because no "unrolling" of the transition relation is required. These methods have not been widely used, because of the lack of an efficient decision procedure for QBF. We evaluate the usage of QBF in bounded model checking (BMC), using general-purpose SAT and QBF solvers. We develop a special-purpose decision procedure for QBF used in BMC, and compare our technique with the methods using general-purpose SAT and QBF solvers on real-life industrial benchmarks.<|reference_end|>
arxiv
@article{katz2007space-efficient, title={Space-Efficient Bounded Model Checking}, author={Jacob Katz, Ziyad Hanna, Nachum Dershowitz}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, doi={10.1109/DATE.2005.276}, archivePrefix={arXiv}, eprint={0710.4629}, primaryClass={cs.LO} }
katz2007space-efficient
arxiv-1480
0710.4630
CAFFEINE: Template-Free Symbolic Model Generation of Analog Circuits via Canonical Form Functions and Genetic Programming
<|reference_start|>CAFFEINE: Template-Free Symbolic Model Generation of Analog Circuits via Canonical Form Functions and Genetic Programming: This paper presents a method to automatically generate compact symbolic performance models of analog circuits with no prior specification of an equation template. The approach takes SPICE simulation data as input, which enables modeling of any nonlinear circuits and circuit characteristics. Genetic programming is applied as a means of traversing the space of possible symbolic expressions. A grammar is specially designed to constrain the search to a canonical form for functions. Novel evolutionary search operators are designed to exploit the structure of the grammar. The approach generates a set of symbolic models which collectively provide a tradeoff between error and model complexity. Experimental results show that the symbolic models generated are compact and easy to understand, making this an effective method for aiding understanding in analog design. The models also demonstrate better prediction quality than posynomials.<|reference_end|>
arxiv
@article{mcconaghy2007caffeine:, title={CAFFEINE: Template-Free Symbolic Model Generation of Analog Circuits via Canonical Form Functions and Genetic Programming}, author={Trent Mcconaghy, Tom Eeckelaert, Georges Gielen}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4630}, primaryClass={cs.AR} }
mcconaghy2007caffeine:
arxiv-1481
0710.4632
Hardware Support for Arbitrarily Complex Loop Structures in Embedded Applications
<|reference_start|>Hardware Support for Arbitrarily Complex Loop Structures in Embedded Applications: In this paper, the program control unit of an embedded RISC processor is enhanced with a novel zero-overhead loop controller (ZOLC) supporting arbitrary loop structures with multiple-entry/exit nodes. The ZOLC has been incorporated into an open RISC processor core to evaluate the performance of the proposed unit for alternative configurations of the selected processor. It is proven that speed improvements of 8.4% to 48.2% are feasible for the benchmarks used.<|reference_end|>
arxiv
@article{kavvadias2007hardware, title={Hardware Support for Arbitrarily Complex Loop Structures in Embedded Applications}, author={Nikolaos Kavvadias, Spiridon Nikolaidis}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4632}, primaryClass={cs.AR} }
kavvadias2007hardware
arxiv-1482
0710.4633
Nano-Sim: A Step Wise Equivalent Conductance based Statistical Simulator for Nanotechnology Circuit Design
<|reference_start|>Nano-Sim: A Step Wise Equivalent Conductance based Statistical Simulator for Nanotechnology Circuit Design: New nanotechnology-based devices are replacing CMOS devices to overcome CMOS technology's scaling limitations. However, many such devices exhibit non-monotonic I-V characteristics and uncertain properties, which lead to the negative differential resistance (NDR) problem and chaotic performance. This paper proposes a new circuit simulation approach that can effectively simulate nanotechnology devices with uncertain input sources and the negative differential resistance (NDR) problem. The experimental results show a 20-30 times speedup compared with existing simulators.<|reference_end|>
arxiv
@article{sukhwani2007nano-sim:, title={Nano-Sim: A Step Wise Equivalent Conductance based Statistical Simulator for Nanotechnology Circuit Design}, author={Bharat Sukhwani, Uday Padmanabhan, Janet M. Wang}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, doi={10.1109/DATE.2005.221}, archivePrefix={arXiv}, eprint={0710.4633}, primaryClass={cs.PF} }
sukhwani2007nano-sim:
arxiv-1483
0710.4634
A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching
<|reference_start|>A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching: Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes the impact of both process variations and multiple input switching. The proposed model uses an orthogonal-polynomial-based probabilistic collocation method to construct an analytical delay equation from circuit timing performance. From the experimental results, our approach has less than 0.2% error on the mean delay of gates and less than 3% error on the standard deviation.<|reference_end|>
arxiv
@article{kumar2007a, title={A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching}, author={Y. Satish Kumar, Jun Li, Claudio Talarico, Janet Wang}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, doi={10.1109/DATE.2005.31}, archivePrefix={arXiv}, eprint={0710.4634}, primaryClass={cs.AR} }
kumar2007a
arxiv-1484
0710.4635
OS Debugging Method Using a Lightweight Virtual Machine Monitor
<|reference_start|>OS Debugging Method Using a Lightweight Virtual Machine Monitor: Demands for implementing original OSs that can achieve high I/O performance on PC/AT compatible hardware have recently been increasing, but conventional OS debugging environments have not been able to simultaneously assure their stability, be easily customized to new OSs and new I/O devices, and assure efficient execution of I/O operations. We therefore developed a novel OS debugging method using a lightweight virtual machine. We evaluated this debugging method experimentally and confirmed that it can transfer data about 5.4 times as fast as the conventional virtual machine monitor.<|reference_end|>
arxiv
@article{takeuchi2007os, title={OS Debugging Method Using a Lightweight Virtual Machine Monitor}, author={Tadashi Takeuchi}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4635}, primaryClass={cs.OS} }
takeuchi2007os
arxiv-1485
0710.4636
Why Systems-on-Chip Needs More UML like a Hole in the Head
<|reference_start|>Why Systems-on-Chip Needs More UML like a Hole in the Head: Let's be clear from the outset: SoC can most certainly make use of UML; SoC just doesn't need more UML, or even all of it. The advent of model mappings, coupled with marks that indicate which mapping rule to apply, enable a major simplification of the use of UML in SoC.<|reference_end|>
arxiv
@article{mellor2007why, title={Why Systems-on-Chip Needs More UML like a Hole in the Head}, author={Stephen J. Mellor, John R. Wolfe, Campbell Mccausland}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4636}, primaryClass={cs.AR} }
mellor2007why
arxiv-1486
0710.4637
The Accidental Detection Index as a Fault Ordering Heuristic for Full-Scan Circuits
<|reference_start|>The Accidental Detection Index as a Fault Ordering Heuristic for Full-Scan Circuits: We investigate a new fault ordering heuristic for test generation in full-scan circuits. The heuristic is referred to as the accidental detection index. It associates a value ADI(f) with every circuit fault f. The heuristic estimates the number of faults that will be detected by a test generated for f. Fault ordering is done such that a fault with a higher accidental detection index appears earlier in the ordered fault set and is targeted earlier during test generation. This order is effective for generating compact test sets, and for obtaining a test set with a steep fault coverage curve. Such a test set has several applications. We present experimental results to demonstrate the effectiveness of the heuristic.<|reference_end|>
arxiv
@article{pomeranz2007the, title={The Accidental Detection Index as a Fault Ordering Heuristic for Full-Scan Circuits}, author={Irith Pomeranz, Sudhakar M. Reddy}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4637}, primaryClass={cs.OH} }
pomeranz2007the
arxiv-1487
0710.4638
Buffer Insertion for Bridges and Optimal Buffer Sizing for Communication Sub-System of Systems-on-Chip
<|reference_start|>Buffer Insertion for Bridges and Optimal Buffer Sizing for Communication Sub-System of Systems-on-Chip: We have presented an optimal buffer sizing and buffer insertion methodology which uses stochastic models of the architecture and Continuous Time Markov Decision Processes (CTMDPs). Such a methodology is useful in managing the scarce buffer resources available on chip, as compared to network-based data communication which can have large buffer space. The modeling of this problem in terms of a CTMDP framework leads to a nonlinear formulation due to the use of bridges in the bus architecture. We present a methodology to split the problem into several smaller, linear subsystems, which we then solve.<|reference_end|>
arxiv
@article{kallakuri2007buffer, title={Buffer Insertion for Bridges and Optimal Buffer Sizing for Communication Sub-System of Systems-on-Chip}, author={Sankalp S. Kallakuri, Alex Doboli, Eugene A. Feinberg}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4638}, primaryClass={cs.AR} }
kallakuri2007buffer
arxiv-1488
0710.4639
Modeling the Non-Linear Behavior of Library Cells for an Accurate Static Noise Analysis
<|reference_start|>Modeling the Non-Linear Behavior of Library Cells for an Accurate Static Noise Analysis: In signal integrity analysis, the joint effect of propagated noise through library cells, and of the noise injected on a quiet net by neighboring switching nets through coupling capacitances, must be considered in order to accurately estimate the overall noise impact on design functionality and performance. In this work the impact of cell non-linearity on the noise glitch waveform is analyzed in detail, and a new macromodel is presented that allows accurate and efficient modeling of the non-linear effects of the victim driver in noise analysis. Experimental results demonstrate the effectiveness of our method, and confirm that existing noise analysis approaches based on linear superposition of the propagated and crosstalk-injected noise can be highly inaccurate, thus impairing the sign-off functional verification phase.<|reference_end|>
arxiv
@article{forzan2007modeling, title={Modeling the Non-Linear Behavior of Library Cells for an Accurate Static Noise Analysis}, author={Cristiano Forzan, Davide Pandini}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4639}, primaryClass={cs.AR} }
forzan2007modeling
arxiv-1489
0710.4640
FORAY-GEN: Automatic Generation of Affine Functions for Memory Optimizations
<|reference_start|>FORAY-GEN: Automatic Generation of Affine Functions for Memory Optimizations: In today's embedded applications a significant portion of energy is spent in the memory subsystem. Several approaches have been proposed to minimize this energy, including the use of scratch pad memories, with many based on static analysis of a program. However, often it is not possible to perform static analysis and optimization of a program's memory access behavior unless the program is specifically written for this purpose. In this paper we introduce the FORAY model of a program that permits aggressive analysis of the application's memory behavior that further enables such optimizations since it consists of 'for' loops and array accesses which are easily analyzable. We present FORAY-GEN: an automated profile-based approach for extraction of the FORAY model from the original program. We also demonstrate how FORAY-GEN enhances applicability of other memory subsystem optimization approaches, resulting in an average of two times increase in the number of memory references that can be analyzed by existing static approaches.<|reference_end|>
arxiv
@article{issenin2007foray-gen:, title={FORAY-GEN: Automatic Generation of Affine Functions for Memory Optimizations}, author={Ilya Issenin, Nikil Dutt}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4640}, primaryClass={cs.PL} }
issenin2007foray-gen:
arxiv-1490
0710.4641
UML 2.0 - Overview and Perspectives in SoC Design
<|reference_start|>UML 20 - Overview and Perspectives in SoC Design: The design productivity gap requires more efficient design methods. Software systems have faced the same challenge and seem to have mastered it with the introduction of more abstract design methods. The UML has become the standard for software systems modeling and thus the foundation of new design methods. Although the UML is defined as a general purpose modeling language, its application to hardware and hardware/software codesign is very limited. In order to successfully apply the UML at these fields, it is essential to understand its capabilities and to map it to a new domain.<|reference_end|>
arxiv
@article{schattkowsky2007uml, title={UML 2.0 - Overview and Perspectives in SoC Design}, author={Tim Schattkowsky}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4641}, primaryClass={cs.SE} }
schattkowsky2007uml
arxiv-1491
0710.4642
Modeling and Propagation of Noisy Waveforms in Static Timing Analysis
<|reference_start|>Modeling and Propagation of Noisy Waveforms in Static Timing Analysis: A technique based on the sensitivity of the output to the input waveform is presented for accurate propagation of delay information through a gate for the purpose of static timing analysis (STA) in the presence of noise. Conventional STA tools represent a waveform by its arrival time and slope. However, this is not an accurate way of modeling the waveform for the purpose of noise analysis. The key contribution of our work is the development of a method that allows efficient propagation of equivalent waveforms throughout the circuit. Experimental results demonstrate the higher accuracy of the proposed sensitivity-based gate delay propagation technique, SGDP, compared to the best of existing approaches. SGDP is compatible with the current level of gate characterization in conventional ASIC cell libraries, and as a result, it can be easily incorporated into commercial STA tools to improve their accuracy.<|reference_end|>
arxiv
@article{nazarian2007modeling, title={Modeling and Propagation of Noisy Waveforms in Static Timing Analysis}, author={Shahin Nazarian, Massoud Pedram, Emre Tuncer, Tao Lin, Amir H. Ajami}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, doi={10.1109/DATE.2005.211}, archivePrefix={arXiv}, eprint={0710.4642}, primaryClass={cs.OH} }
nazarian2007modeling
arxiv-1492
0710.4643
Generic Pipelined Processor Modeling and High Performance Cycle-Accurate Simulator Generation
<|reference_start|>Generic Pipelined Processor Modeling and High Performance Cycle-Accurate Simulator Generation: Detailed modeling of processors and high performance cycle-accurate simulators are essential for today's hardware and software design. These problems are challenging enough by themselves and have seen many previous research efforts. Addressing both simultaneously is even more challenging, with many existing approaches focusing on one over the other. In this paper, we propose the Reduced Colored Petri Net (RCPN) model that has two advantages: first, it offers a very simple and intuitive way of modeling pipelined processors; second, it can generate high performance cycle-accurate simulators. RCPN benefits from all the useful features of Colored Petri Nets without suffering from their exponential growth in complexity. RCPN processor models are very intuitive since they are a mirror image of the processor pipeline block diagram. Furthermore, in our experiments on the generated cycle-accurate simulators for XScale and StrongArm processor models, we achieved an order of magnitude (~15 times) speedup over the popular SimpleScalar ARM simulator.<|reference_end|>
arxiv
@article{reshadi2007generic, title={Generic Pipelined Processor Modeling and High Performance Cycle-Accurate Simulator Generation}, author={Mehrdad Reshadi, Nikil Dutt}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, doi={10.1109/DATE.2005.166}, archivePrefix={arXiv}, eprint={0710.4643}, primaryClass={cs.AR cs.PF} }
reshadi2007generic
arxiv-1493
0710.4644
Cycle Accurate Binary Translation for Simulation Acceleration in Rapid Prototyping of SoCs
<|reference_start|>Cycle Accurate Binary Translation for Simulation Acceleration in Rapid Prototyping of SoCs: In this paper, the application of a cycle accurate binary translator for rapid prototyping of SoCs will be presented. This translator generates code to run on a rapid prototyping system consisting of a VLIW processor and FPGAs. The generated code is annotated with information that triggers cycle generation for the hardware in parallel to the execution of the translated program. The VLIW processor executes the translated program whereas the FPGAs contain the hardware for the parallel cycle generation and the bus interface that adapts the bus of the VLIW processor to the SoC bus of the emulated processor core.<|reference_end|>
arxiv
@article{schnerr2007cycle, title={Cycle Accurate Binary Translation for Simulation Acceleration in Rapid Prototyping of SoCs}, author={Jurgen Schnerr, Oliver Bringmann, Wolfgang Rosenstiel}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4644}, primaryClass={cs.AR} }
schnerr2007cycle
arxiv-1494
0710.4645
At-Speed Logic BIST for IP Cores
<|reference_start|>At-Speed Logic BIST for IP Cores: This paper describes a flexible logic BIST scheme that features high fault coverage achieved by fault-simulation guided test point insertion, real at-speed test capability for multi-clock designs without clock frequency manipulation, and easy physical implementation due to the use of a low-speed SE signal. Application results of this scheme to two widely used IP cores are also reported.<|reference_end|>
arxiv
@article{cheon2007at-speed, title={At-Speed Logic BIST for IP Cores}, author={B. Cheon, E. Lee, L.-T. Wang, X. Wen, P. Hsu, J. Cho, J. Park, H. Chao, S. Wu}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4645}, primaryClass={cs.AR} }
cheon2007at-speed
arxiv-1495
0710.4646
Fast Dynamic Memory Integration in Co-Simulation Frameworks for Multiprocessor System on-Chip
<|reference_start|>Fast Dynamic Memory Integration in Co-Simulation Frameworks for Multiprocessor System on-Chip: In this paper, a technique is proposed to integrate and simulate a dynamic memory in a multiprocessor framework based on C/C++/SystemC. Using the host machine's memory management capabilities, dynamic data processing is supported without compromising the speed and accuracy of the simulation. A first prototype in a shared memory context is presented.<|reference_end|>
arxiv
@article{villa2007fast, title={Fast Dynamic Memory Integration in Co-Simulation Frameworks for Multiprocessor System on-Chip}, author={O. Villa, P. Schaumont, I. Verbauwhede, M. Monchiero, G. Palermo}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4646}, primaryClass={cs.AR} }
villa2007fast
arxiv-1496
0710.4649
Stochastic Power Grid Analysis Considering Process Variations
<|reference_start|>Stochastic Power Grid Analysis Considering Process Variations: In this paper, we investigate the impact of interconnect and device process variations on voltage fluctuations in power grids. We consider random variations in the power grid's electrical parameters as spatial stochastic processes and propose a new and efficient method to compute the stochastic voltage response of the power grid. Our approach provides an explicit analytical representation of the stochastic voltage response using orthogonal polynomials in a Hilbert space. The approach has been implemented in a prototype software called OPERA (Orthogonal Polynomial Expansions for Response Analysis). Use of OPERA on industrial power grids demonstrated speed-ups of up to two orders of magnitude. The results also show a significant variation of about $\pm$ 35% in the nominal voltage drops at various nodes of the power grids and demonstrate the need for variation-aware power grid analysis.<|reference_end|>
arxiv
@article{ghanta2007stochastic, title={Stochastic Power Grid Analysis Considering Process Variations}, author={Praveen Ghanta, Sarma Vrudhula, Rajendran Panda, Janet Wang}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4649}, primaryClass={cs.AR} }
ghanta2007stochastic
arxiv-1497
0710.4652
Locality-Aware Process Scheduling for Embedded MPSoCs
<|reference_start|>Locality-Aware Process Scheduling for Embedded MPSoCs: Utilizing on-chip caches in embedded multiprocessor-system-on-a-chip (MPSoC) based systems is critical from both performance and power perspectives. While most of the prior work that targets cache behavior optimization is performed at the hardware and compilation levels, the operating system (OS) can also play a major role, as it sees the global access pattern information across applications. This paper proposes a cache-conscious OS process scheduling strategy based on data reuse. The proposed scheduler implements two complementary approaches. First, the processes that do not share any data between them are scheduled on different cores if it is possible to do so. Second, the processes that cannot be executed at the same time (due to dependences) but share data with each other are mapped to the same processor core so that they share the cache contents. Our experimental results using this new data-locality-aware OS scheduling strategy are promising, and show significant improvements in task completion times.<|reference_end|>
arxiv
@article{kandemir2007locality-aware, title={Locality-Aware Process Scheduling for Embedded MPSoCs}, author={Mahmut Kandemir, Guilin Chen}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4652}, primaryClass={cs.AR} }
kandemir2007locality-aware
arxiv-1498
0710.4653
Simultaneous Reduction of Dynamic and Static Power in Scan Structures
<|reference_start|>Simultaneous Reduction of Dynamic and Static Power in Scan Structures: Power dissipation during test is a major challenge in testing integrated circuits. Dynamic power has been the dominant part of power dissipation in CMOS circuits; however, in future technologies the static portion of power dissipation will exceed the dynamic portion. This paper proposes an efficient technique to reduce both dynamic and static power dissipation in scan structures. Scan cell outputs which are not on the critical path(s) are multiplexed to fixed values during scan mode. These constant values and primary inputs are selected such that the transitions occurring on non-multiplexed scan cells are suppressed and the leakage current during scan mode is decreased. A method for finding these vectors is also proposed. The effectiveness of this technique is demonstrated by experiments performed on ISCAS89 benchmark circuits.<|reference_end|>
arxiv
@article{sharifi2007simultaneous, title={Simultaneous Reduction of Dynamic and Static Power in Scan Structures}, author={Shervin Sharifi, Javid Jaffari, Mohammad Hosseinabady, Ali Afzali-Kusha, Zainalabedin Navabi}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4653}, primaryClass={cs.AR} }
sharifi2007simultaneous
arxiv-1499
0710.4654
Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction
<|reference_start|>Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction: Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.<|reference_end|>
arxiv
@article{li2007modeling, title={Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction}, author={Peng Li, Frank Liu, Xin Li, Lawrence T. Pileggi, Sani R. Nassif}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4654}, primaryClass={cs.AR} }
li2007modeling
arxiv-1500
0710.4655
A Fast Diagnosis Scheme for Distributed Small Embedded SRAMs
<|reference_start|>A Fast Diagnosis Scheme for Distributed Small Embedded SRAMs: This paper proposes a diagnosis scheme aimed at reducing the diagnosis time of distributed small embedded SRAMs (e-SRAMs). This scheme improves the one proposed in [A parallel built-in self-diagnostic method for embedded memory buffers, A parallel built-in self-diagnostic method for embedded memory arrays]. The improvements are mainly two-fold. On one hand, the diagnosis of time-consuming Data Retention Faults (DRFs), which is neglected by the diagnosis architecture in [A parallel built-in self-diagnostic method for embedded memory buffers, A parallel built-in self-diagnostic method for embedded memory arrays], is now considered and performed via a DFT technique referred to as the "No Write Recovery Test Mode (NWRTM)". On the other hand, a pair comprising a Serial to Parallel Converter (SPC) and a Parallel to Serial Converter (PSC) is utilized to replace the bi-directional serial interface, to avoid the problems of serial fault masking and defect rate dependent diagnosis. Results from our evaluations show that the proposed diagnosis scheme achieves increased diagnosis coverage and reduces diagnosis time compared to those obtained in [A parallel built-in self-diagnostic method for embedded memory buffers, A parallel built-in self-diagnostic method for embedded memory arrays], with negligible extra area cost.<|reference_end|>
arxiv
@article{wang2007a, title={A Fast Diagnosis Scheme for Distributed Small Embedded SRAMs}, author={Baosheng Wang, Yuejian Wu, Andre Ivanov}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4655}, primaryClass={cs.AR} }
wang2007a