corpus_id: string, lengths 7-12
paper_id: string, lengths 9-16
title: string, lengths 1-261
abstract: string, lengths 70-4.02k
source: string, 1 class
bibtex: string, lengths 208-20.9k
citation_key: string, lengths 6-100
arxiv-1601
0710.4798
System Synthesis for Networks of Programmable Blocks
<|reference_start|>System Synthesis for Networks of Programmable Blocks: The advent of sensor networks presents untapped opportunities for synthesis. We examine the problem of synthesis of behavioral specifications into networks of programmable sensor blocks. The particular behavioral specification we consider is an intuitive user-created network diagram of sensor blocks, each block having a pre-defined combinational or sequential behavior. We synthesize this specification to a new network that utilizes a minimum number of programmable blocks in place of the pre-defined blocks, thus reducing network size and hence network cost and power. We focus on the main task of this synthesis problem, namely partitioning pre-defined blocks onto a minimum number of programmable blocks, introducing the efficient but effective PareDown decomposition algorithm for the task. We describe the synthesis and simulation tools we developed. We provide results showing excellent network size reductions through such synthesis, and significant speedups of our algorithm over exhaustive search while obtaining near-optimal results for 15 real network designs as well as nearly 10,000 randomly generated designs.<|reference_end|>
arxiv
@article{mannion2007system, title={System Synthesis for Networks of Programmable Blocks}, author={Ryan Mannion, Harry Hsieh, Susan Cotterell, Frank Vahid}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4798}, primaryClass={cs.OH} }
mannion2007system
arxiv-1602
0710.4799
Access Pattern-Based Code Compression for Memory-Constrained Embedded Systems
<|reference_start|>Access Pattern-Based Code Compression for Memory-Constrained Embedded Systems: As compared to a large spectrum of performance optimizations, relatively little effort has been dedicated to optimizing other aspects of embedded applications such as memory space requirements, power, real-time predictability, and reliability. In particular, many modern embedded systems operate under tight memory space constraints. One way of satisfying these constraints is to compress executable code and data as much as possible. While research on code compression has studied efficient hardware- and software-based compression strategies, many of these techniques do not take application behavior into account; that is, the same compression/decompression strategy is used irrespective of the application being optimized. This paper presents a code compression strategy based on a control flow graph (CFG) representation of the embedded program. The idea is to start with a memory image wherein all basic blocks are compressed, and decompress only the blocks that are predicted to be needed in the near future. When the current access to a basic block is over, our approach also decides the point at which the block could be compressed. We propose several compression and decompression strategies that try to reduce memory requirements without excessively increasing the original instruction cycle counts.<|reference_end|>
arxiv
@article{ozturk2007access, title={Access Pattern-Based Code Compression for Memory-Constrained Embedded Systems}, author={O. Ozturk, H. Saputra, M. Kandemir, I. Kolcu}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4799}, primaryClass={cs.OH} }
ozturk2007access
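A minimal sketch of the decompress-on-demand idea described in the ozturk2007access abstract above: the memory image starts with every basic block compressed, a block is decompressed when fetched, and it may be re-compressed once the current access is over. This is a hedged illustration only; zlib stands in for the paper's compression scheme, the prediction of blocks needed in the near future is not modelled, and all names are illustrative.

```python
import zlib


class BlockManager:
    """Toy decompress-on-demand manager for compressed basic blocks."""

    def __init__(self, basic_blocks):
        # memory image: every basic block starts out compressed
        self.image = {bid: zlib.compress(code) for bid, code in basic_blocks.items()}
        self.resident = {}  # blocks currently held in decompressed form

    def fetch(self, bid):
        # decompress a block the first time it is needed
        if bid not in self.resident:
            self.resident[bid] = zlib.decompress(self.image[bid])
        return self.resident[bid]

    def release(self, bid):
        # the point at which the block may be compressed again
        self.resident.pop(bid, None)


if __name__ == "__main__":
    mgr = BlockManager({"bb0": b"\x90" * 64, "bb1": b"\xcc" * 64})
    code = mgr.fetch("bb0")
    mgr.release("bb0")
```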
arxiv-1603
0710.4800
An Improved FPGA Implementation of the Modified Hybrid Hiding Encryption Algorithm (MHHEA) for Data Communication Security
<|reference_start|>An Improved FPGA Implementation of the Modified Hybrid Hiding Encryption Algorithm (MHHEA) for Data Communication Security: The hybrid hiding encryption algorithm, as its name implies, embraces concepts from both steganography and cryptography. In this work, an improved micro-architecture Field Programmable Gate Array (FPGA) implementation of this algorithm is presented. This design overcomes the observed limitations of a previously-designed micro-architecture. These observed limitations are: no exploitation of the possibility of parallel bit replacement, and the fact that the input plaintext was encrypted serially, which caused a dependency between the throughput and the nature of the used secret key. This dependency can be viewed by some as a vulnerability in the security of the implemented micro-architecture. The proposed modified micro-architecture is constructed using five basic modules. These modules are: the message cache, the message alignment module, the key cache, the comparator, and finally the encryption module. In this work, we provide comprehensive simulation and implementation results. These are: the timing diagrams, the post-implementation timing and routing reports, and finally the floor plan. Moreover, a detailed comparison with other FPGA implementations is made available and discussed.<|reference_end|>
arxiv
@article{farouk2007an, title={An Improved FPGA Implementation of the Modified Hybrid Hiding Encryption Algorithm (MHHEA) for Data Communication Security}, author={Hala A. Farouk, Magdy Saeb}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4800}, primaryClass={cs.CR} }
farouk2007an
arxiv-1604
0710.4801
Behavioural Transformation to Improve Circuit Performance in High-Level Synthesis
<|reference_start|>Behavioural Transformation to Improve Circuit Performance in High-Level Synthesis: Early scheduling algorithms usually adjusted the clock cycle duration to the execution time of the slowest operation. This resulted in large slack times wasted in those cycles executing faster operations. To reduce the wasted time, multi-cycle and chaining techniques have been employed. While these techniques have produced successful designs, their effectiveness is often limited due to the area increment that may derive from chaining, and the extra latencies that may derive from multicycling. In this paper we present an optimization method that solves the time-constrained scheduling problem by transforming behavioural specifications into new ones whose subsequent synthesis substantially improves circuit performance. Our proposal breaks up some of the specification operations, allowing their execution during several, possibly non-consecutive, cycles, and also the calculation of several data-dependent operation fragments in the same cycle. To do so, it takes into account the circuit latency and the execution time of every specification operation. The experimental results show that circuits obtained from the optimized specification are on average 60% faster than those synthesized from the original specification, with only slight increments in the circuit area.<|reference_end|>
arxiv
@article{ruiz-sautua2007behavioural, title={Behavioural Transformation to Improve Circuit Performance in High-Level Synthesis}, author={R. Ruiz-Sautua, M. C. Molina, J. M. Mendias, R. Hermida}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4801}, primaryClass={cs.AR} }
ruiz-sautua2007behavioural
arxiv-1605
0710.4802
Mutation Sampling Technique for the Generation of Structural Test Data
<|reference_start|>Mutation Sampling Technique for the Generation of Structural Test Data: Our goal is to produce validation data that can be used as an efficient (pre) test set for structural stuck-at faults. In this paper, we detail an original test-oriented mutation sampling technique used for generating such data and we present a first evaluation on these validation data with regard to a structural test.<|reference_end|>
arxiv
@article{scholive2007mutation, title={Mutation Sampling Technique for the Generation of Structural Test Data}, author={M. Scholive, V. Beroulle, C. Robach, M. L. Flottes (LIRMM), B. Rouzeyre (LIRMM)}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4802}, primaryClass={cs.OH} }
scholive2007mutation
arxiv-1606
0710.4803
Hardware Engines for Bus Encryption: A Survey of Existing Techniques
<|reference_start|>Hardware Engines for Bus Encryption: A Survey of Existing Techniques: The widening spectrum of applications and services provided by portable and embedded devices brings a new dimension of concerns in security. Most of those embedded systems (pay-TV, PDAs, mobile phones, etc.) make use of external memory. As a result, the main problem is that data and instructions are constantly exchanged between memory (RAM) and CPU in clear form on the bus. This memory may contain confidential data like commercial software or private contents, which either the end-user or the content provider is willing to protect. The goal of this paper is to clearly describe the problem of processor-memory bus communication in this regard and the existing techniques applied to secure the communication channel through encryption. The performance overheads implied by these solutions are extensively discussed.<|reference_end|>
arxiv
@article{elbaz2007hardware, title={Hardware Engines for Bus Encryption: A Survey of Existing Techniques}, author={R. Elbaz, L. Torres (LIRMM), G. Sassatelli (LIRMM), P. Guillemin, C. Anguille, M. Bardouillet, C. Buatois, J. B. Rigaud}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4803}, primaryClass={cs.CR} }
elbaz2007hardware
arxiv-1607
0710.4805
Modeling of a Reconfigurable OFDM IP Block Family For an RF System Simulator
<|reference_start|>Modeling of a Reconfigurable OFDM IP Block Family For an RF System Simulator: The idea of a design-domain-specific Mother Model of an IP block family as a basis for modeling system integration is presented here. A common reconfigurable Mother Model for ten different standardized digital OFDM transmitters has been developed. By means of a set of parameters, the Mother Model can be reconfigured to any of the ten selected standards. So far, the applicability of the proposed reconfiguration and analog-digital co-modeling methods has been demonstrated by modeling the function of the digital parts of three transmitters (802.11a, ADSL and DRM) in an RF system simulator. The model is intended to be used as a signal source template in RF system simulations. The concept is not restricted to signal sources; it can be applied to any IP block development. The idea of the Mother Model will be applied in other design domains to prove that in certain application areas, OFDM transceivers in this case, the design process can progress simultaneously in different design domains - mixed signal, system and RTL-architectural - without the need for high-level synthesis. Only the Mother Models of the three design domains need to be formally proven to function as specified.<|reference_end|>
arxiv
@article{heusala2007modeling, title={Modeling of a Reconfigurable OFDM IP Block Family For an RF System Simulator}, author={Hannu Heusala, Jussi Liedes}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4805}, primaryClass={cs.AR} }
heusala2007modeling
arxiv-1608
0710.4806
A VLSI Design Flow for Secure Side-Channel Attack Resistant ICs
<|reference_start|>A VLSI Design Flow for Secure Side-Channel Attack Resistant ICs: This paper presents a digital VLSI design flow to create secure, side-channel attack (SCA) resistant integrated circuits. The design flow starts from a normal design in a hardware description language such as VHDL or Verilog and provides a direct path to a SCA resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. We discuss the basis for side-channel attack resistance and adjust the library databases and constraints files of the synthesis and place & route procedures accordingly. Experimental results show that a DPA attack on a regular single ended CMOS standard cell implementation of a module of the DES algorithm discloses the secret key after 200 measurements. The same attack on a secure version still does not disclose the secret key after more than 2000 measurements.<|reference_end|>
arxiv
@article{tiri2007a, title={A VLSI Design Flow for Secure Side-Channel Attack Resistant ICs}, author={Kris Tiri, Ingrid Verbauwhede}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4806}, primaryClass={cs.AR} }
tiri2007a
arxiv-1609
0710.4807
A Constraint Network Based Approach to Memory Layout Optimization
<|reference_start|>A Constraint Network Based Approach to Memory Layout Optimization: While loop restructuring based code optimization for array intensive applications has been successful in the past, it has several problems such as the requirement of checking dependences (legality issues) and transformation of all of the array references within the loop body indiscriminately (while some of the references can benefit from the transformation, others may not). As a result, data transformations, i.e., transformations that modify memory layout of array data instead of loop structure have been proposed. One of the problems associated with data transformations is the difficulty of selecting a memory layout for an array that is acceptable to the entire program (not just to a single loop). In this paper, we formulate the problem of determining the memory layouts of arrays as a constraint network, and explore several methods of solution in a systematic way. Our experiments provide strong support in favor of employing constraint processing, and point out future research directions.<|reference_end|>
arxiv
@article{chen2007a, title={A Constraint Network Based Approach to Memory Layout Optimization}, author={G. Chen, M. Kandemir, M. Karakoy}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4807}, primaryClass={cs.PL} }
chen2007a
arxiv-1610
0710.4808
Fast and Accurate Transaction Level Modeling of an Extended AMBA2.0 Bus Architecture
<|reference_start|>Fast and Accurate Transaction Level Modeling of an Extended AMBA2.0 Bus Architecture: The Transaction Level Modeling (TLM) approach is used to meet the simulation speed as well as the cycle accuracy required for large scale SoC performance analysis. We implemented a transaction-level model of a proprietary bus called AHB+ which supports an extended AMBA2.0 protocol. The AHB+ transaction-level model is 353 times faster than the pin-accurate RTL model while maintaining 97% accuracy on average. We also present the development procedure for the TLM of a bus architecture.<|reference_end|>
arxiv
@article{kim2007fast, title={Fast and Accurate Transaction Level Modeling of an Extended AMBA2.0 Bus Architecture}, author={Young-Taek Kim, Taehun Kim, Youngduk Kim, Chulho Shin, Eui-Young Chung, Kyu-Myung Choi, Jeong-Taek Kong, Soo-Kwan Eo}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4808}, primaryClass={cs.AR} }
kim2007fast
arxiv-1611
0710.4809
C Based Hardware Design for Wireless Applications
<|reference_start|>C Based Hardware Design for Wireless Applications: The algorithms used in wireless applications are increasingly more sophisticated and consequently more challenging to implement in hardware. Traditional design flows require developing the micro architecture, coding the RTL, and verifying the generated RTL against the original functional C or MATLAB specification. This paper describes a C-based design flow that is well suited for the hardware implementation of DSP algorithms commonly found in wireless applications. The C design flow relies on guided synthesis to generate the RTL directly from the untimed C algorithm. The specifics of the C-based design flow are described using a simple DSP filtering algorithm consisting of a forward adaptive equalizer, a 64-QAM slicer and an adaptive decision feedback equalizer. The example illustrates some of the capabilities and advantages offered by this flow.<|reference_end|>
arxiv
@article{takach2007c, title={C Based Hardware Design for Wireless Applications}, author={Andres Takach, Bryan Bowyer, Thomas Bollaert}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4809}, primaryClass={cs.AR} }
takach2007c
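The takach2007c abstract above mentions a forward adaptive equalizer feeding a slicer. The hedged sketch below shows the textbook LMS tap update such an equalizer performs, written in Python for brevity rather than the untimed C the paper's flow starts from; a QPSK-style slicer stands in for the paper's 64-QAM slicer, the decision feedback section is omitted, and the function name and all parameter values are illustrative.

```python
import numpy as np


def lms_equalizer(received, taps=9, mu=0.01, training=None):
    """Feed-forward adaptive equalizer with an LMS tap update and a
    QPSK-style slicer (decision-directed once training symbols run out)."""
    w = np.zeros(taps, dtype=complex)
    w[taps // 2] = 1.0                          # centre-spike initialisation
    buf = np.zeros(taps, dtype=complex)
    out = np.empty(len(received), dtype=complex)
    for n, sample in enumerate(received):
        buf = np.roll(buf, 1)
        buf[0] = sample
        y = np.dot(w.conj(), buf)               # equalizer output
        if training is not None and n < len(training):
            ref = training[n]                   # known training symbol
        else:
            ref = np.sign(y.real) + 1j * np.sign(y.imag)   # slicer decision
        e = ref - y
        w += mu * e.conj() * buf                # LMS tap update
        out[n] = y
    return out, w
```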
arxiv-1612
0710.4810
Area Efficient Hardware Implementation of Elliptic Curve Cryptography by Iteratively Applying Karatsuba's Method
<|reference_start|>Area Efficient Hardware Implementation of Elliptic Curve Cryptography by Iteratively Applying Karatsuba's Method: Securing communication channels is especially needed in wireless environments. But applying cipher mechanisms in software is limited by the calculation and energy resources of the mobile devices. If hardware is applied to realize cryptographic operations, cost becomes an issue. In this paper we describe an approach which tackles all three points. We implemented a hardware accelerator for polynomial multiplication in extended Galois fields (GF) applying Karatsuba's method iteratively. With this approach the area consumption is reduced to 2.1 mm^2, in comparison to 6.2 mm^2 for the standard application of Karatsuba's method, i.e. for recursive application. Our approach also reduces the energy consumption to 60 per cent of the original approach. The price we have to pay for these achievements is the increased execution time. In our implementation a polynomial multiplication takes 3 clock cycles whereas the recursive Karatsuba approach needs only one clock cycle. But considering area, energy and calculation speed, we are convinced that the benefits of our approach outweigh its drawback.<|reference_end|>
arxiv
@article{dyka2007area, title={Area Efficient Hardware Implementation of Elliptic Curve Cryptography by Iteratively Applying Karatsuba's Method}, author={Zoya Dyka, Peter Langendoerfer}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4810}, primaryClass={cs.CR} }
dyka2007area
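As background for the dyka2007area abstract above, the sketch below shows the textbook recursive Karatsuba multiplication of polynomials over GF(2), encoded as Python integers with bit i holding the coefficient of x^i; addition is XOR, so the middle term needs no subtraction. The paper's contribution, an iterative hardware-oriented application of the method, is not reproduced here, and the cutoff value is illustrative.

```python
def gf2_schoolbook(a: int, b: int) -> int:
    """Carry-less (XOR) multiplication, used below the recursion cutoff."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result


def gf2_karatsuba(a: int, b: int, cutoff_bits: int = 8) -> int:
    """Recursive Karatsuba multiplication of GF(2) polynomials."""
    n = max(a.bit_length(), b.bit_length())
    if n <= cutoff_bits:
        return gf2_schoolbook(a, b)
    k = n // 2
    mask = (1 << k) - 1
    a0, a1 = a & mask, a >> k                  # a = a1*x^k + a0
    b0, b1 = b & mask, b >> k
    low = gf2_karatsuba(a0, b0, cutoff_bits)
    high = gf2_karatsuba(a1, b1, cutoff_bits)
    mid = gf2_karatsuba(a0 ^ a1, b0 ^ b1, cutoff_bits) ^ low ^ high
    return (high << (2 * k)) ^ (mid << k) ^ low


if __name__ == "__main__":
    a, b = 0b1011011101, 0b1100101
    assert gf2_karatsuba(a, b) == gf2_schoolbook(a, b)
    print(hex(gf2_karatsuba(a, b)))
```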
arxiv-1613
0710.4811
System Level Analysis of the Bluetooth Standard
<|reference_start|>System Level Analysis of the Bluetooth Standard: The SystemC modules of the Link Manager Layer and Baseband Layer have been designed in this work at the behavioral level to analyze the performance of the Bluetooth standard. In particular, the probability of creating a piconet in the presence of channel noise and the power reduction obtained using the sniff and hold modes have been investigated.<|reference_end|>
arxiv
@article{conti2007system, title={System Level Analysis of the Bluetooth Standard}, author={Massimo Conti, Daniele Moretti}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4811}, primaryClass={cs.OH} }
conti2007system
arxiv-1614
0710.4812
Area and Throughput Trade-Offs in the Design of Pipelined Discrete Wavelet Transform Architectures
<|reference_start|>Area and Throughput Trade-Offs in the Design of Pipelined Discrete Wavelet Transform Architectures: The JPEG2000 standard defines the discrete wavelet transform (DWT) as a linear space-to-frequency transform of the image domain in an irreversible compression. This irreversible discrete wavelet transform is implemented by an FIR filter using 9/7 Daubechies coefficients or by a lifting scheme of factorized coefficients derived from the 9/7 Daubechies coefficients. This work investigates the trade-offs between area, power and data throughput (or operating frequency) of several implementations of the discrete wavelet transform using the lifting scheme in various pipeline designs. This paper shows the results of five different architectures synthesized and simulated on FPGAs. It concludes that the descriptions with pipelined operators provide the best area-power-operating frequency trade-off over non-pipelined operator descriptions. Those descriptions require around 40% more hardware to increase the maximum operating frequency by up to 100% and reduce power consumption to less than 50%. Starting from behavioral HDL descriptions provides the best area-power-operating frequency trade-off, improving hardware cost and maximum operating frequency by around 30% in comparison to structural descriptions for the same power requirement.<|reference_end|>
arxiv
@article{silva2007area, title={Area and Throughput Trade-Offs in the Design of Pipelined Discrete Wavelet Transform Architectures}, author={Sandro V. Silva, Sergio Bampi}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4812}, primaryClass={cs.AR} }
silva2007area
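To make the lifting formulation mentioned in the silva2007area abstract above concrete, here is a hedged software sketch of one 9/7 DWT level built from the four lifting steps plus scaling. The coefficient values are those commonly quoted for the JPEG2000 irreversible transform, but the boundary handling (simple clamping) and the scaling convention are simplifications, so the code only claims exact round-trip reconstruction against its own inverse, not bit-exact agreement with a JPEG2000 codec or with the paper's FPGA pipelines.

```python
import numpy as np

# Lifting coefficients commonly quoted for the irreversible 9/7 transform.
ALPHA, BETA, GAMMA, DELTA, K = -1.586134342, -0.052980118, 0.882911076, 0.443506852, 1.230174105


def _lift_odd(odd, even, coeff):
    """odd[i] += coeff * (even[i] + even[i+1]), clamping at the right edge."""
    right = np.append(even[1:], even[-1])
    return odd + coeff * (even + right)


def _lift_even(even, odd, coeff):
    """even[i] += coeff * (odd[i-1] + odd[i]), clamping at the left edge."""
    left = np.append(odd[0], odd[:-1])
    return even + coeff * (left + odd)


def dwt97_forward(x):
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd = _lift_odd(odd, even, ALPHA)       # predict 1
    even = _lift_even(even, odd, BETA)      # update 1
    odd = _lift_odd(odd, even, GAMMA)       # predict 2
    even = _lift_even(even, odd, DELTA)     # update 2
    return even / K, odd * K                # (low-pass, high-pass)


def dwt97_inverse(low, high):
    even, odd = low * K, high / K
    even = _lift_even(even, odd, -DELTA)
    odd = _lift_odd(odd, even, -GAMMA)
    even = _lift_even(even, odd, -BETA)
    odd = _lift_odd(odd, even, -ALPHA)
    x = np.empty(len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x


if __name__ == "__main__":
    signal = np.random.rand(64)
    lo, hi = dwt97_forward(signal)
    assert np.allclose(dwt97_inverse(lo, hi), signal)
```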
arxiv-1615
0710.4813
Queue Management in Network Processors
<|reference_start|>Queue Management in Network Processors: One of the main bottlenecks when designing a network processing system is very often its memory subsystem. This is mainly due to the state-of-the-art network links operating at very high speeds and to the fact that, in order to support advanced Quality of Service (QoS), a large number of independent queues is desirable. In this paper we analyze the performance bottlenecks of various data memory managers integrated in typical Network Processing Units (NPUs). We expose the performance limitations of software implementations utilizing the RISC processing cores typically found in most NPU architectures and we identify the requirements for hardware-assisted memory management in order to achieve wire-speed operation at gigabit per second rates. Furthermore, we describe the architecture and performance of a hardware memory manager that fulfills those requirements. This memory manager, although implemented in a reconfigurable technology, can provide up to 6.2Gbps of aggregate throughput while handling 32K independent queues.<|reference_end|>
arxiv
@article{papaefstathiou2007queue, title={Queue Management in Network Processors}, author={I. Papaefstathiou, T. Orphanoudakis, G. Kornaros, C. Kachris, I. Mavroidis, A. Nikologiannis}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4813}, primaryClass={cs.AR} }
papaefstathiou2007queue
arxiv-1616
0710.4814
picoArray Technology: The Tool's Story
<|reference_start|>picoArray Technology: The Tool's Story: This paper briefly describes the picoArray architecture, and in particular its deterministic internal communication fabric. The methods that have been developed for debugging and verifying systems using devices from the picoArray family are explained. In order to maximize the computational ability of these devices, hardware debugging support has been kept to a minimum, and the methods and tools have been developed to take this into account.<|reference_end|>
arxiv
@article{duller2007picoarray, title={picoArray Technology: The Tool's Story}, author={Andrew Duller, Daniel Towner, Gajinder Panesar, Alan Gray, Will Robbins}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4814}, primaryClass={cs.AR} }
duller2007picoarray
arxiv-1617
0710.4815
Direct Conversion Pulsed UWB Transceiver Architecture
<|reference_start|>Direct Conversion Pulsed UWB Transceiver Architecture: Ultra-wideband (UWB) communication is an emerging wireless technology that promises high data rates over short distances and precise locationing. The large available bandwidth and the constraint of a maximum power spectral density drives a unique set of system challenges. This paper addresses these challenges using two UWB transceivers and a discrete prototype platform.<|reference_end|>
arxiv
@article{blazquez2007direct, title={Direct Conversion Pulsed UWB Transceiver Architecture}, author={Raul Blazquez, Fred Lee, David Wentzloff, Brian Ginsburg, Johnna Powell, Anantha Chandrakasan}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4815}, primaryClass={cs.NI} }
blazquez2007direct
arxiv-1618
0710.4816
Power Saving Techniques for Wireless LANs
<|reference_start|>Power Saving Techniques for Wireless LANs: Fast wireless access has rapidly become commonplace. Wireless access points and Hotspot servers are sprouting everywhere. Battery lifetime continues to be a critical issue in mobile computing. This paper first gives an overview of WLAN energy saving strategies, followed by an illustration of a system-level methodology for saving power in heterogeneous wireless environments.<|reference_end|>
arxiv
@article{simunic2007power, title={Power Saving Techniques for Wireless LANs}, author={T. Simunic}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4816}, primaryClass={cs.NI} }
simunic2007power
arxiv-1619
0710.4817
Performance Considerations for an Embedded Implementation of OMA DRM 2
<|reference_start|>Performance Considerations for an Embedded Implementation of OMA DRM 2: As digital content services gain importance in the mobile world, Digital Rights Management (DRM) applications will become a key component of mobile terminals. This paper examines the effect dedicated hardware macros for specific cryptographic functions have on the performance of a mobile terminal that supports version 2 of the open standard for Digital Rights Management defined by the Open Mobile Alliance (OMA). Following a general description of the standard, the paper contains a detailed analysis of the cryptographic operations that have to be carried out before protected content can be accessed. The combination of this analysis with data on execution times for specific algorithms realized in hardware and software has made it possible to build a model which has allowed us to assert that hardware acceleration for specific cryptographic algorithms can significantly reduce the impact DRM has on a mobile terminal's processing performance and battery life.<|reference_end|>
arxiv
@article{thull2007performance, title={Performance Considerations for an Embedded Implementation of OMA DRM 2}, author={Daniel Thull, Roberto Sannino}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4817}, primaryClass={cs.CR} }
thull2007performance
arxiv-1620
0710.4818
Wireless LAN: Past, Present, and Future
<|reference_start|>Wireless LAN: Past, Present, and Future: This paper retraces the historical development of wireless LAN technology in the context of the pursuit of ever higher data rate, describes the significant technical breakthroughs that are now occurring, and speculates on future directions that the technology may take over the remainder of the decade. The challenges that these developments have created for low power operation are considered, as well as some of the opportunities that are presented to mitigate them. The importance of MIMO as an emerging technology for 802.11 is specifically highlighted, both in terms of the significant increase in data rate and range that it enables as well as the considerable challenge that it presents for the development of low power wireless LAN products.<|reference_end|>
arxiv
@article{holt2007wireless, title={Wireless LAN: Past, Present, and Future}, author={Keith Holt}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4818}, primaryClass={cs.NI} }
holt2007wireless
arxiv-1621
0710.4819
A High Quality/Low Computational Cost Technique for Block Matching Motion Estimation
<|reference_start|>A High Quality/Low Computational Cost Technique for Block Matching Motion Estimation: Motion estimation is the most critical process in video coding systems. First of all, it has a definitive impact on the rate-distortion performance given by the video encoder. Secondly, it is the most computationally intensive process within the encoding loop. For these reasons, the design of high-performance, low-cost motion estimators is a crucial task in the video compression field. An adaptive cost block matching (ACBM) motion estimation technique is presented in this paper, featuring an excellent trade-off between the quality of the reconstructed video sequences and the computational effort. Simulation results demonstrate that the ACBM algorithm achieves slightly better rate-distortion performance than the well-known full search block matching algorithm, with reductions of up to 95% in the computational load.<|reference_end|>
arxiv
@article{lopez2007a, title={A High Quality/Low Computational Cost Technique for Block Matching Motion Estimation}, author={S. Lopez, G. M. Callico, J. F. Lopez, R. Sarmiento}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4819}, primaryClass={cs.MM} }
lopez2007a
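For context on the baseline that the ACBM technique in the lopez2007a abstract above is compared against, here is a hedged sketch of exhaustive (full search) block matching with a sum-of-absolute-differences cost. The block size, search range and frames are illustrative, and the adaptive cost computation that gives ACBM its savings is not shown.

```python
import numpy as np


def full_search_sad(cur_block, ref_frame, top, left, search_range=8):
    """Exhaustive block matching: return the motion vector minimizing SAD."""
    bh, bw = cur_block.shape
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref_frame.shape[0] or x + bw > ref_frame.shape[1]:
                continue                      # candidate falls outside the frame
            cand = ref_frame[y:y + bh, x:x + bw]
            sad = np.abs(cur_block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = ref[18:34, 20:36]                   # a 16x16 block displaced by (2, 4)
    print(full_search_sad(cur, ref, top=16, left=16))
```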
arxiv-1622
0710.4820
ISEGEN: Generation of High-Quality Instruction Set Extensions by Iterative Improvement
<|reference_start|>ISEGEN: Generation of High-Quality Instruction Set Extensions by Iterative Improvement: Customization of processor architectures through Instruction Set Extensions (ISEs) is an effective way to meet the growing performance demands of embedded applications. A high-quality ISE generation approach needs to obtain results close to those achieved by experienced designers, particularly for complex applications that exhibit regularity: expert designers are able to exploit manually such regularity in the data flow graphs to generate high-quality ISEs. In this paper, we present ISEGEN, an approach that identifies high-quality ISEs by iterative improvement following the basic principles of the well-known Kernighan-Lin (K-L) min-cut heuristic. Experimental results on a number of MediaBench, EEMBC and cryptographic applications show that our approach matches the quality of the optimal solution obtained by exhaustive search. We also show that our ISEGEN technique is on average 20x faster than a genetic formulation that generates equivalent solutions. Furthermore, the ISEs identified by our technique exhibit 35% more speedup than the genetic solution on a large cryptographic application (AES) by effectively exploiting its regular structure.<|reference_end|>
arxiv
@article{biswas2007isegen:, title={ISEGEN: Generation of High-Quality Instruction Set Extensions by Iterative Improvement}, author={Partha Biswas, Sudarshan Banerjee, Nikil Dutt, Laura Pozzi, Paolo Ienne}, journal={Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4820}, primaryClass={cs.AR} }
biswas2007isegen:
arxiv-1623
0710.4821
Multimedia Applications of Multiprocessor Systems-on-Chips
<|reference_start|>Multimedia Applications of Multiprocessor Systems-on-Chips: This paper surveys the characteristics of multimedia systems. Multimedia applications today are dominated by compression and decompression, but multimedia devices must also implement many other functions such as security and file management. We introduce some basic concepts of multimedia algorithms and the larger set of functions that multimedia systems-on-chips must implement.<|reference_end|>
arxiv
@article{wolf2007multimedia, title={Multimedia Applications of Multiprocessor Systems-on-Chips}, author={Wayne Wolf}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4821}, primaryClass={cs.MM} }
wolf2007multimedia
arxiv-1624
0710.4823
A Coprocessor for Accelerating Visual Information Processing
<|reference_start|>A Coprocessor for Accelerating Visual Information Processing: Visual information processing will play an increasingly important role in future electronics systems. In many applications, e.g. video surveillance cameras, the data throughput of microprocessors is not sufficient and the power consumption is too high. Instruction profiling on a typical test algorithm has shown that pixel address calculations are the dominant operations to be optimized. Therefore AddressLib, a structured scheme for pixel addressing, was developed that can be accelerated by AddressEngine, a coprocessor for visual information processing. In this paper, the architectural design of AddressEngine is described, which in a first step supports a subset of the AddressLib. Dataflow and memory organization are optimized during architectural design. AddressEngine was implemented on an FPGA and tested with the MPEG-7 Global Motion Estimation algorithm. Results on processing speed and circuit complexity are given and compared to a pure software implementation. The next step will be support for the full AddressLib, including segment addressing. An outlook on further investigations into dynamic reconfiguration capabilities is given.<|reference_end|>
arxiv
@article{stechele2007a, title={A Coprocessor for Accelerating Visual Information Processing}, author={W. Stechele, L. Alvado Carcel, S. Herrmann, J. Lidon Simon}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4823}, primaryClass={cs.MM} }
stechele2007a
arxiv-1625
0710.4824
FPGA based Agile Algorithm-On-Demand Co-Processor
<|reference_start|>FPGA based Agile Algorithm-On-Demand Co-Processor: With the growing computational needs of many real-world applications, frequently changing specifications of standards, and the high design and NRE costs of ASICs, an algorithm-agile FPGA based co-processor has become a viable alternative. In this article, we report on the general design of an algorithm-agile co-processor and a proof-of-concept implementation.<|reference_end|>
arxiv
@article{pradeep2007fpga, title={FPGA based Agile Algorithm-On-Demand Co-Processor}, author={R. Pradeep, S. Vinay, Sanjay Burman, V. Kamakoti}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4824}, primaryClass={cs.AR} }
pradeep2007fpga
arxiv-1626
0710.4825
Meeting the Embedded Design Needs of Automotive Applications
<|reference_start|>Meeting the Embedded Design Needs of Automotive Applications: The importance of embedded systems in driving innovation in automotive applications continues to grow. Understanding the specific needs of developers targeting this market is also helping to drive innovation in RISC core design. This paper describes how a RISC instruction set architecture has evolved to better meet those needs, and the key implementation features in two very different RISC cores are used to demonstrate the challenges of designing for real-time automotive systems.<|reference_end|>
arxiv
@article{lyons2007meeting, title={Meeting the Embedded Design Needs of Automotive Applications}, author={Wayne Lyons}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4825}, primaryClass={cs.AR} }
lyons2007meeting
arxiv-1627
0710.4826
The Integration of On-Line Monitoring and Reconfiguration Functions using IEEE 1149.4 Into a Safety Critical Automotive Electronic Control Unit
<|reference_start|>The Integration of On-Line Monitoring and Reconfiguration Functions using IEEE 1149.4 Into a Safety Critical Automotive Electronic Control Unit: This paper presents an innovative application of IEEE 1149.4 and the Integrated Diagnostic Reconfiguration (IDR) as tools for the implementation of an embedded test solution for an Automotive Electronic Control Unit implemented as a fully integrated mixed signal system. The paper describes how the test architecture can be used for fault avoidance, with results from a hardware prototype presented. The paper concludes that fault avoidance can be integrated into mixed signal electronic systems to handle key failure modes.<|reference_end|>
arxiv
@article{jeffrey2007the, title={The Integration of On-Line Monitoring and Reconfiguration Functions using IEEE 1149.4 Into a Safety Critical Automotive Electronic Control Unit}, author={C. Jeffrey, R. Cutajar, S. Prosser, M. Lickess, A. Richardson, S. Riches}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4826}, primaryClass={cs.AR} }
jeffrey2007the
arxiv-1628
0710.4827
Debug Support, Calibration and Emulation for Multiple Processor and Powertrain Control SoCs
<|reference_start|>Debug Support, Calibration and Emulation for Multiple Processor and Powertrain Control SoCs: The introduction of complex SoCs with multiple processor cores presents new development challenges, such that development support is now a decisive factor when choosing a System-on-Chip (SoC). The presented development support strategy addresses the challenges using both architecture and technology approaches. The Multi-Core Debug Support (MCDS) architecture provides flexible triggering using cross triggers and a multiple core break and suspend switch. Temporal trace ordering is guaranteed down to cycle level by on-chip time stamping. The Package Sized-ICE (PSI) approach is a novel method of including trace buffers, overlay memories, processing resources and communication interfaces without changing device behavior. PSI requires no external emulation box, as the debug host interfaces directly with the SoC using a standard interface.<|reference_end|>
arxiv
@article{mayer2007debug, title={Debug Support, Calibration and Emulation for Multiple Processor and Powertrain Control SoCs}, author={A. Mayer, H. Siebert, K.D. Mcdonald-Maier}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4827}, primaryClass={cs.AR} }
mayer2007debug
arxiv-1629
0710.4828
Bounds for Visual Cryptography Schemes
<|reference_start|>Bounds for Visual Cryptography Schemes: In this paper, we investigate the best pixel expansion of various models of visual cryptography schemes. In this regard, we consider the visual cryptography schemes introduced by Tzeng and Hu [13]. In such a model, only minimal qualified sets can recover the secret image, and the recovered secret image can be darker or lighter than the background. Blundo et al. [4] introduced a lower bound for the best pixel expansion of this scheme in terms of minimal qualified sets. We present another lower bound for the best pixel expansion of the scheme. As a corollary, we introduce a lower bound, based on an induced matching of the hypergraph of qualified sets, for the best pixel expansion of the aforementioned model and the traditional model of visual cryptography realized by basis matrices. Finally, we study access structures based on graphs and present an upper bound for the smallest pixel expansion in terms of the strong chromatic index.<|reference_end|>
arxiv
@article{hajiabolhassan2007bounds, title={Bounds for Visual Cryptography Schemes}, author={Hossein Hajiabolhassan and Abbas Cheraghi}, journal={arXiv preprint arXiv:0710.4828}, year={2007}, archivePrefix={arXiv}, eprint={0710.4828}, primaryClass={cs.CR} }
hajiabolhassan2007bounds
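As a concrete illustration of the basis-matrix model and pixel expansion discussed in the hajiabolhassan2007bounds abstract above, the classical 2-out-of-2 scheme (a standard textbook example, not taken from the paper) uses pixel expansion m = 2 with the basis matrices

```latex
S^{0} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix},
\qquad
S^{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
```

Each secret pixel is expanded into two subpixels; the two shares are the rows of a random column permutation of S^0 (for a white pixel) or S^1 (for a black pixel). Stacking the shares (Boolean OR of the rows) yields one black subpixel for a white pixel and two for a black pixel, giving contrast 1/2, while each share on its own is a uniformly random pair of subpixels and reveals nothing about the secret.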
arxiv-1630
0710.4829
AutoMoDe - Model-Based Development of Automotive Software
<|reference_start|>AutoMoDe - Model-Based Development of Automotive Software: This paper describes first results from the AutoMoDe (Automotive Model-Based Development) project. The overall goal of the project is to develop an integrated methodology for model-based development of automotive control software, based on problem-specific design notations with an explicit formal foundation. Based on the existing AutoFOCUS framework, a tool prototype is being developed in order to illustrate and validate the key elements of our approach.<|reference_end|>
arxiv
@article{ziegenbein2007automode, title={AutoMoDe - Model-Based Development of Automotive Software}, author={Dirk Ziegenbein, Peter Braun, Ulrich Freund, Andreas Bauer, Jan Romberg, Bernhard Schatz}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4829}, primaryClass={cs.SE} }
ziegenbein2007automode
arxiv-1631
0710.4831
LC Oscillator Driver for Safety Critical Applications
<|reference_start|>LC Oscillator Driver for Safety Critical Applications: A CMOS harmonic signal LC oscillator driver for automotive applications working in a harsh environment with high safety-critical requirements is described. The driver can be used with a wide range of external component parameters (the LC resonance network of a sensor). The quality factor of the external LC network can vary over two decades. Amplitude regulation of the driver is digitally controlled, and the DAC is constructed as exponential with a piece-wise-linear (PWL) approximation. Low current consumption for high quality resonance networks is achieved. The realized oscillator is robust, is used in a safety critical application, and has low EMC emissions.<|reference_end|>
arxiv
@article{horsky2007lc, title={LC Oscillator Driver for Safety Critical Applications}, author={Pavel Horsky}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4831}, primaryClass={cs.OH} }
horsky2007lc
arxiv-1632
0710.4832
SystemC Analysis of a New Dynamic Power Management Architecture
<|reference_start|>SystemC Analysis of a New Dynamic Power Management Architecture: This paper presents a new dynamic power management architecture of a System on Chip. The Power State Machine describing the status of the core follows the recommendations of the ACPI standard. The algorithm controls the power states of each block on the basis of battery status, chip temperature and a user defined task priority.<|reference_end|>
arxiv
@article{conti2007systemc, title={SystemC Analysis of a New Dynamic Power Management Architecture}, author={Massimo Conti}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4832}, primaryClass={cs.AR} }
conti2007systemc
arxiv-1633
0710.4833
Exploiting Real-Time FPGA Based Adaptive Systems Technology for Real-Time Sensor Fusion in Next Generation Automotive Safety Systems
<|reference_start|>Exploiting Real-Time FPGA Based Adaptive Systems Technology for Real-Time Sensor Fusion in Next Generation Automotive Safety Systems: We present a system for the boresighting of sensors using inertial measurement devices as the basis for developing a range of dynamic real-time sensor fusion applications. The proof of concept utilizes a COTS FPGA platform for sensor fusion and real-time correction of a misaligned video sensor. We exploit a custom-designed 32-bit soft processor core and C-based design & synthesis for rapid, platform-neutral development. Kalman filter and sensor fusion techniques established in advanced aviation systems are applied to automotive vehicles with results exceeding typical industry requirements for sensor alignment. Results of the static and dynamic tests demonstrate that inexpensive accelerometers mounted on a sensor (or fitted during its assembly), together with an Inertial Measurement Unit (IMU) fixed to the vehicle, can be used to compute the misalignment of the sensor relative to the IMU and thus to the vehicle. In some cases the model predictions and test results exceeded the requirements by an order of magnitude with a 3-sigma or 99% confidence.<|reference_end|>
arxiv
@article{chappell2007exploiting, title={Exploiting Real-Time FPGA Based Adaptive Systems Technology for Real-Time Sensor Fusion in Next Generation Automotive Safety Systems}, author={Steve Chappell, Alistair Macarthur, Dan Preston, Dave Olmstead, Bob Flint, Chris Sullivan}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4833}, primaryClass={cs.AR} }
chappell2007exploiting
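As a toy illustration of the boresighting idea in the chappell2007exploiting abstract above, the sketch below runs a scalar Kalman filter that estimates a constant misalignment angle from noisy per-sample offsets between a sensor-mounted accelerometer and the vehicle IMU. It is a deliberately simplified, one-dimensional stand-in for the authors' multi-axis, FPGA-hosted formulation, and every numeric value is illustrative.

```python
import numpy as np


def estimate_misalignment(measured_offsets, meas_var=0.01, init_var=1.0):
    """Scalar Kalman filter for a constant state: the misalignment angle.

    measured_offsets: per-sample angle difference (radians) between the
    sensor-mounted accelerometer and the vehicle IMU."""
    x_hat, p = 0.0, init_var            # state estimate and its variance
    for z in measured_offsets:
        # no process noise: the misalignment is assumed constant
        k = p / (p + meas_var)          # Kalman gain
        x_hat += k * (z - x_hat)        # measurement update
        p *= (1.0 - k)
    return x_hat, p


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_angle = 0.035                  # ~2 degrees of misalignment
    z = true_angle + 0.1 * rng.standard_normal(500)
    est, var = estimate_misalignment(z)
    print(f"estimated misalignment: {est:.4f} rad (variance {var:.2e})")
```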
arxiv-1634
0710.4834
Platform Based Design for Automotive Sensor Conditioning
<|reference_start|>Platform Based Design for Automotive Sensor Conditioning: In this paper a general architecture suitable for interfacing several kinds of sensors for automotive applications is presented. A platform based design approach is pursued to improve system performance while minimizing time-to-market. The platform is composed of an analog front-end and a digital section. The latter is based on a microcontroller core (8051 IP by Oregano) plus a set of dedicated hardware blocks for the complex signal processing required for sensor conditioning. The microcontroller also handles the communication with external devices (such as a PC) for data output and fast prototyping. A case study is presented concerning the conditioning of a gyro yaw rate sensor for automotive applications. Measured performance outperforms current state-of-the-art commercial devices.<|reference_end|>
arxiv
@article{fanucci2007platform, title={Platform Based Design for Automotive Sensor Conditioning}, author={L. Fanucci, A. Giambastiani, F. Iozzi, C. Marino, A. Rocchi}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4834}, primaryClass={cs.AR} }
fanucci2007platform
arxiv-1635
0710.4835
A CMOS-Based Tactile Sensor for Continuous Blood Pressure Monitoring
<|reference_start|>A CMOS-Based Tactile Sensor for Continuous Blood Pressure Monitoring: A monolithic integrated tactile sensor array is presented, which is used to perform non-invasive blood pressure monitoring of a patient. The advantage of this device compared to a cuff-based approach is the capability of recording continuous blood pressure data. The capacitive, membrane-based sensor device is fabricated in an industrial CMOS technology combined with post-CMOS micromachining. The capacitance change is detected by a Sigma-Delta modulator. The modulator is operated at a sampling rate of 128kS/s and achieves a resolution of 12bit with an external decimation filter and an OSR of 128.<|reference_end|>
arxiv
@article{kirstein2007a, title={A CMOS-Based Tactile Sensor for Continuous Blood Pressure Monitoring}, author={K.-U. Kirstein, J. Sedivy, T. Salo, C. Hagleitner, T. Vancura, A. Hierlemann}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4835}, primaryClass={cs.OH} }
kirstein2007a
arxiv-1636
0710.4836
A Tool and Methodology for AC-Stability Analysis of Continuous-Time Closed-Loop Systems
<|reference_start|>A Tool and Methodology for AC-Stability Analysis of Continuous-Time Closed-Loop Systems: Presented are a methodology and a DFII-based tool for AC-stability analysis of a wide variety of closed-loop continuous-time circuits (operational amplifiers and other linear circuits). The methodology used allows for easy identification and diagnostics of AC-stability problems, including not only main-loop effects but also local instability loops in current mirrors, bias circuits and emitter or source followers, without breaking the loop. The results of the analysis are easy to interpret. Estimated phase margin is readily available. Instability nodes and loops, along with their respective oscillation frequencies, are immediately identified and mapped to the existing circuit nodes, thus offering significant advantages compared to traditional "black-box" methods of stability analysis (transient overshoot, Bode and phase margin plots, etc.). The tool for AC-stability analysis is written in SKILL and is fully integrated in the DFII environment. Its "push-button" graphical user interface (GUI) is easy to use and understand. The tool can be invoked directly from a Composer schematic and does not require an active Analog Artist session. The tool is not dependent on the use of a specific fabrication technology or Process Design Kit customization. It requires OCEAN, Spectre and Waveform Calculator capabilities to run.<|reference_end|>
arxiv
@article{milev2007a, title={A Tool and Methodology for AC-Stability Analysis of Continuous-Time Closed-Loop Systems}, author={Momchil Milev, Rod Burt}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4836}, primaryClass={cs.OH} }
milev2007a
arxiv-1637
0710.4838
A 6bit, 1.2GSps Low-Power Flash-ADC in 0.13$\mu$m Digital CMOS
<|reference_start|>A 6bit, 1.2GSps Low-Power Flash-ADC in 0.13$\mu$m Digital CMOS: A 6bit flash-ADC with 1.2GSps, wide analog bandwidth and low power, realized in a standard digital 0.13 $\mu$m CMOS copper technology is presented. Employing capacitive interpolation gives various advantages when designing for low power: no need for a reference resistor ladder, implicit sample-and-hold operation, no edge effects in the interpolation network (as compared to resistive interpolation), and a very low input capacitance of only 400fF, which leads to an easily drivable analog converter interface. Operating at 1.2GSps the ADC achieves an effective resolution bandwidth (ERBW) of 700MHz, while consuming 160mW of power. At 600MSps we achieve an ERBW of 600MHz with only 90mW power consumption, both from a 1.5V supply. This corresponds to outstanding Figure-of-Merit numbers (FoM) of 2.2 and 1.5pJ/convstep, respectively. The module area is 0.12mm^2.<|reference_end|>
arxiv
@article{sandner2007a, title={A 6bit, 1.2GSps Low-Power Flash-ADC in 0.13$\mu$m Digital CMOS}, author={Christoph Sandner, Martin Clara, Andreas Santner, Thomas Hartig, Franz Kuttner}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4838}, primaryClass={cs.AR} }
sandner2007a
arxiv-1638
0710.4839
A 97mW 110MS/s 12b Pipeline ADC Implemented in 0.18$\mu$m Digital CMOS
<|reference_start|>A 97mW 110MS/s 12b Pipeline ADC Implemented in 0.18$\mu$m Digital CMOS: A 12 bit pipeline ADC fabricated in a 0.18 $\mu$m pure digital CMOS technology is presented. Its nominal conversion rate is 110MS/s and the nominal supply voltage is 1.8V. The effective number of bits is 10.4 when a 10MHz input signal with 2V_{P-P} signal swing is applied. The occupied silicon area is 0.86mm^2 and the power consumption equals 97mW. A switched capacitor bias current circuit scales the bias current automatically with the conversion rate, which gives scalable power consumption and full performance of the ADC from 20 to 140MS/s.<|reference_end|>
arxiv
@article{andersen2007a, title={A 97mW 110MS/s 12b Pipeline ADC Implemented in 0.18$\mu$m Digital CMOS}, author={Terje N. Andersen, Atle Briskemyr, Frode Telsto, Johnny Bjornsen, Thomas E. Bonnerud, Bjornar Hernes, Oystein Moldsvor}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4839}, primaryClass={cs.AR} }
andersen2007a
arxiv-1639
0710.4840
Testing Logic Cores using a BIST P1500 Compliant Approach: A Case of Study
<|reference_start|>Testing Logic Cores using a BIST P1500 Compliant Approach: A Case of Study: In this paper we describe how we applied a BIST-based approach to the test of a logic core to be included in System-on-a-chip (SoC) environments. The approach advantages are the ability to protect the core IP, the simple test interface (thanks also to the adoption of the P1500 standard), the possibility to run the test at-speed, the reduced test time, and the good diagnostic capabilities. The paper reports figures about the achieved fault coverage, the required area overhead, and the performance slowdown, and compares the figures with those for alternative approaches, such as those based on full scan and sequential ATPG.<|reference_end|>
arxiv
@article{bernardi2007testing, title={Testing Logic Cores using a BIST P1500 Compliant Approach: A Case of Study}, author={P. Bernardi, G. Masera, F. Quaglio, M. Sonza Reorda}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4840}, primaryClass={cs.AR} }
bernardi2007testing
arxiv-1640
0710.4842
Using Mobilize Power Management IP for Dynamic & Static Power Reduction in SoC at 130 nm
<|reference_start|>Using Mobilize Power Management IP for Dynamic & Static Power Reduction in SoC at 130 nm: At 130 nm and 90 nm, power consumption (both dynamic and static) has become a barrier in the roadmap for SoC designs targeting battery powered, mobile applications. This paper presents the results of dynamic and static power reduction achieved implementing Tensilica's 32-bit Xtensa microprocessor core, using Virtual Silicon's Power Management IP. Independent voltage islands are created using Virtual Silicon's VIP PowerSaver standard cells by using voltage level shifting cells and voltage isolation cells to implement power islands. The VIP PowerSaver standard cells are characterized at 1.2V, 1.0V and 0.8V, to accommodate voltage scaling. Power islands can also be turned off completely. Designers can significantly lower both the dynamic power and the quiescent or leakage power of their SoC designs, with very little impact on speed or area using Virtual Silicon's VIP Gate Bias standard cells.<|reference_end|>
arxiv
@article{hillman2007using, title={Using Mobilize Power Management IP for Dynamic & Static Power Reduction in SoC at 130 nm}, author={Dan Hillman}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4842}, primaryClass={cs.AR} }
hillman2007using
arxiv-1641
0710.4843
MultiNoC: A Multiprocessing System Enabled by a Network on Chip
<|reference_start|>MultiNoC: A Multiprocessing System Enabled by a Network on Chip: The MultiNoC system implements a programmable on-chip multiprocessing platform built on top of an efficient, low area overhead intra-chip interconnection scheme. The employed interconnection structure is a Network on Chip, or NoC. NoCs are emerging as a viable alternative to increasing demands on interconnection architectures, due to the following characteristics: (i) energy efficiency and reliability; (ii) scalability of bandwidth, when compared to traditional bus architectures; (iii) reusability; (iv) distributed routing decisions. An external host computer feeds MultiNoC with application instructions and data. After this initialization procedure, MultiNoC executes some algorithm. After finishing execution of the algorithm, output data can be read back by the host. Sequential or parallel algorithms conveniently adapted to the MultiNoC structure can be executed. The main motivation to propose this design is to enable the investigation of current trends to increase the number of embedded processors in SoCs, leading to the concept of "sea of processors" systems.<|reference_end|>
arxiv
@article{mello2007multinoc:, title={MultiNoC: A Multiprocessing System Enabled by a Network on Chip}, author={Aline Mello, Leandro Moller, Ney Calazans, Fernando Moraes}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4843}, primaryClass={cs.AR} }
mello2007multinoc:
arxiv-1642
0710.4844
A Partitioning Methodology for Accelerating Applications in Hybrid Reconfigurable Platforms
<|reference_start|>A Partitioning Methodology for Accelerating Applications in Hybrid Reconfigurable Platforms: In this paper, we propose a methodology for partitioning and mapping computationally intensive applications onto reconfigurable hardware blocks of different granularity. A generic hybrid reconfigurable architecture is considered so that the methodology can be applied to a large number of heterogeneous reconfigurable platforms. The methodology mainly consists of two stages, the analysis and the mapping of the application onto fine and coarse-grain hardware resources. A prototype framework consisting of analysis, partitioning and mapping tools has also been developed. For the coarse-grain reconfigurable hardware, we use our previously developed high-performance coarse-grain data-path. In this work, the methodology is validated using two real-world applications, an OFDM transmitter and a JPEG encoder. In the case of the OFDM transmitter, a maximum decrease of 82% in clock cycles relative to an all fine-grain mapping solution is achieved. The corresponding performance improvement for the JPEG encoder is 43%.<|reference_end|>
arxiv
@article{galanis2007a, title={A Partitioning Methodology for Accelerating Applications in Hybrid Reconfigurable Platforms}, author={M. D. Galanis, A. Milidonis, G. Theodoridis, D. Soudris, C. E. Goutis}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4844}, primaryClass={cs.AR} }
galanis2007a
arxiv-1643
0710.4845
Evaluation of SystemC Modelling of Reconfigurable Embedded Systems
<|reference_start|>Evaluation of SystemC Modelling of Reconfigurable Embedded Systems: This paper evaluates the use of pin and cycle accurate SystemC models for embedded system design exploration and early software development. The target system is MicroBlaze VanillaNet Platform running MicroBlaze uClinux operating system. The paper compares Register Transfer Level (RTL) Hardware Description Language (HDL) simulation speed to the simulation speed of several different SystemC models. It is shown that simulation speed of pin and cycle accurate models can go up to 150 kHz, compared to 100 Hz range of HDL simulation. Furthermore, utilising techniques that temporarily compromise cycle accuracy, effective simulation speed of up to 500 kHz can be obtained.<|reference_end|>
arxiv
@article{rissa2007evaluation, title={Evaluation of SystemC Modelling of Reconfigurable Embedded Systems}, author={Tero Rissa, Adam Donlin, Wayne Luk}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4845}, primaryClass={cs.AR} }
rissa2007evaluation
arxiv-1644
0710.4846
An Integrated Design and Verification Methodology for Reconfigurable Multimedia Systems
<|reference_start|>An Integrated Design and Verification Methodology for Reconfigurable Multimedia Systems: Recently a lot of multimedia applications are emerging on portable appliances. They require both the flexibility of upgradeable devices (traditionally software based) and a powerful computing engine (typically hardware). In this context, programmable HW and dynamic reconfiguration allow novel approaches to the migration of algorithms from SW to HW. Thus, in the frame of the Symbad project, we propose an industrial design flow for reconfigurable SoC's. The goal of Symbad consists of developing a system level design platform for hardware and software SoC systems including formal and semi-formal verification techniques.<|reference_end|>
arxiv
@article{borgatti2007an, title={An Integrated Design and Verification Methodology for Reconfigurable Multimedia Systems}, author={M. Borgatti, A. Capello, U. Rossi, J.-L. Lambert, I. Moussa, F. Fummi, G. Pravadelli}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4846}, primaryClass={cs.MM cs.LO} }
borgatti2007an
arxiv-1645
0710.4847
Bayesian sequential change diagnosis
<|reference_start|>Bayesian sequential change diagnosis: Sequential change diagnosis is the joint problem of detection and identification of a sudden and unobservable change in the distribution of a random sequence. In this problem, the common probability law of a sequence of i.i.d. random variables suddenly changes at some disorder time to one of finitely many alternatives. This disorder time marks the start of a new regime, whose fingerprint is the new law of observations. Both the disorder time and the identity of the new regime are unknown and unobservable. The objective is to detect the regime-change as soon as possible, and, at the same time, to determine its identity as accurately as possible. Prompt and correct diagnosis is crucial for quick execution of the most appropriate measures in response to the new regime, as in fault detection and isolation in industrial processes, and target detection and identification in national defense. The problem is formulated in a Bayesian framework. An optimal sequential decision strategy is found, and an accurate numerical scheme is described for its implementation. Geometrical properties of the optimal strategy are illustrated via numerical examples. The traditional problems of Bayesian change-detection and Bayesian sequential multi-hypothesis testing are solved as special cases. In addition, a solution is obtained for the problem of detection and identification of component failure(s) in a system with suspended animation.<|reference_end|>
arxiv
@article{dayanik2007bayesian, title={Bayesian sequential change diagnosis}, author={Savas Dayanik, Christian Goulding, H. Vincent Poor}, journal={arXiv preprint arXiv:0710.4847}, year={2007}, archivePrefix={arXiv}, eprint={0710.4847}, primaryClass={math.PR cs.IT math.IT math.ST stat.TH} }
dayanik2007bayesian
arxiv-1646
0710.4848
A Formal Verification Methodology for Checking Data Integrity
<|reference_start|>A Formal Verification Methodology for Checking Data Integrity: Formal verification techniques have been playing an important role in pre-silicon validation processes. One of the most important points considered in performing formal verification is to define good verification scopes; we should define clearly what is to be verified formally on the designs under test. We considered the following three practical requirements when we defined the scope of formal verification. They are (a) hard to verify, (b) small to handle, and (c) easy to understand. Our novel approach is to break down generic system-level properties into stereotype properties at the block level and to define requirements for Verifiable RTL. Consequently, each designer, instead of verification experts, can describe properties of the design easily, and formal model checking can be applied systematically and thoroughly to all the leaf modules. During the development of a component chip for server platforms, we focused on RAS (Reliability, Availability, and Serviceability) features and described more than 2000 properties in PSL. As a result of the formal verification, we found several critical logic bugs in a short time with limited resources, and successfully verified all of them. This paper presents a study of the functional verification methodology.<|reference_end|>
arxiv
@article{umezawa2007a, title={A Formal Verification Methodology for Checking Data Integrity}, author={Yasushi Umezawa, Takeshi Shimizu}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4848}, primaryClass={cs.LO} }
umezawa2007a
arxiv-1647
0710.4850
Hardware Support for QoS-based Function Allocation in Reconfigurable Systems
<|reference_start|>Hardware Support for QoS-based Function Allocation in Reconfigurable Systems: This contribution presents a new approach for allocating suitable function-implementation variants depending on given quality-of-service function-requirements for run-time reconfigurable multi-device systems. Our approach adapts methodologies from the domain of knowledge-based systems which can be used for doing run-time hardware/software resource usage optimizations.<|reference_end|>
arxiv
@article{ullmann2007hardware, title={Hardware Support for QoS-based Function Allocation in Reconfigurable Systems}, author={Michael Ullmann, Wansheng Jin, Jurgen Becker}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4850}, primaryClass={cs.AR} }
ullmann2007hardware
arxiv-1648
0710.4851
Common Reusable Verification Environment for BCA and RTL Models
<|reference_start|>Common Reusable Verification Environment for BCA and RTL Models: This paper deals with a common verification methodology and environment for SystemC BCA and RTL models. The aim is to save effort by avoiding the same work done twice by different people and to reuse the same environment for the two design views. Applying this methodology the verification task starts as soon as the functional specification is signed off and it runs in parallel to the models and design development. The verification environment is modeled with the aid of dedicated verification languages and it is applied to both the models. The test suite is exactly the same and thus it's possible to verify the alignment between the two models. In fact the final step is to check the cycle-by-cycle match of the interface behavior. A regression tool and a bus analyzer have been developed to help the verification and the alignment process. The former is used to automate the testbench generation and to run the two test suites. The latter is used to verify the alignment between the two models comparing the waveforms obtained in each run. The quality metrics used to validate the flow are full functional coverage and full alignment at each IP port.<|reference_end|>
arxiv
@article{falconeri2007common, title={Common Reusable Verification Environment for BCA and RTL Models}, author={Giuseppe Falconeri, Walid Naifer, Nizar Romdhane}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4851}, primaryClass={cs.LO} }
falconeri2007common
arxiv-1649
0710.4852
An Assembler Driven Verification Methodology (ADVM)
<|reference_start|>An Assembler Driven Verification Methodology (ADVM): This paper presents an overview of an assembler driven verification methodology (ADVM) that was created and implemented for a chip card project at Infineon Technologies AG. The primary advantage of this methodology is that it enables rapid porting of directed tests to new targets and derivatives, with only a minimum amount of code refactoring. As a consequence, considerable verification development time and effort was saved.<|reference_end|>
arxiv
@article{macbeth2007an, title={An Assembler Driven Verification Methodology (ADVM)}, author={John S. Macbeth, Dietmar Heinz, Ken Gray}, journal={Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4852}, primaryClass={cs.OH} }
macbeth2007an
arxiv-1650
0710.4903
Anonymous Networking amidst Eavesdroppers
<|reference_start|>Anonymous Networking amidst Eavesdroppers: The problem of security against timing based traffic analysis in wireless networks is considered in this work. An analytical measure of anonymity in eavesdropped networks is proposed using the information theoretic concept of equivocation. For a physical layer with orthogonal transmitter directed signaling, scheduling and relaying techniques are designed to maximize achievable network performance for any given level of anonymity. The network performance is measured by the achievable relay rates from the sources to destinations under latency and medium access constraints. In particular, analytical results are presented for two scenarios: For a two-hop network with maximum anonymity, achievable rate regions for a general m x 1 relay are characterized when nodes generate independent Poisson transmission schedules. The rate regions are presented for both strict and average delay constraints on traffic flow through the relay. For a multihop network with an arbitrary anonymity requirement, the problem of maximizing the sum-rate of flows (network throughput) is considered. A selective independent scheduling strategy is designed for this purpose, and using the analytical results for the two-hop network, the achievable throughput is characterized as a function of the anonymity level. The throughput-anonymity relation for the proposed strategy is shown to be equivalent to an information theoretic rate-distortion function.<|reference_end|>
arxiv
@article{venkitasubramaniam2007anonymous, title={Anonymous Networking amidst Eavesdroppers}, author={Parvathinathan Venkitasubramaniam, Ting He and Lang Tong}, journal={arXiv preprint arXiv:0710.4903}, year={2007}, archivePrefix={arXiv}, eprint={0710.4903}, primaryClass={cs.IT math.IT} }
venkitasubramaniam2007anonymous
arxiv-1651
0710.4905
Distributed Source Coding in the Presence of Byzantine Sensors
<|reference_start|>Distributed Source Coding in the Presence of Byzantine Sensors: The distributed source coding problem is considered when the sensors, or encoders, are under Byzantine attack; that is, an unknown group of sensors have been reprogrammed by a malicious intruder to undermine the reconstruction at the fusion center. Three different forms of the problem are considered. The first is a variable-rate setup, in which the decoder adaptively chooses the rates at which the sensors transmit. An explicit characterization of the variable-rate achievable sum rates is given for any number of sensors and any groups of traitors. The converse is proved constructively by letting the traitors simulate a fake distribution and report the generated values as the true ones. This fake distribution is chosen so that the decoder cannot determine which sensors are traitors while maximizing the required rate to decode every value. Achievability is proved using a scheme in which the decoder receives small packets of information from a sensor until its message can be decoded, before moving on to the next sensor. The sensors use randomization to choose from a set of coding functions, which makes it probabilistically impossible for the traitors to cause the decoder to make an error. Two forms of the fixed-rate problem are considered, one with deterministic coding and one with randomized coding. The achievable rate regions are given for both these problems, and it is shown that lower rates can be achieved with randomized coding.<|reference_end|>
arxiv
@article{kosut2007distributed, title={Distributed Source Coding in the Presence of Byzantine Sensors}, author={Oliver Kosut, Lang Tong}, journal={arXiv preprint arXiv:0710.4905}, year={2007}, doi={10.1109/TIT.2008.921867}, archivePrefix={arXiv}, eprint={0710.4905}, primaryClass={cs.IT math.IT} }
kosut2007distributed
arxiv-1652
0710.4911
Social Media as Windows on the Social Life of the Mind
<|reference_start|>Social Media as Windows on the Social Life of the Mind: This is a programmatic paper, marking out two directions in which the study of social media can contribute to broader problems of social science: understanding cultural evolution and understanding collective cognition. Under the first heading, I discuss some difficulties with the usual, adaptationist explanations of cultural phenomena, alternative explanations involving network diffusion effects, and some ways these could be tested using social-media data. Under the second I describe some of the ways in which social media could be used to study how the social organization of an epistemic community supports its collective cognitive performance.<|reference_end|>
arxiv
@article{shalizi2007social, title={Social Media as Windows on the Social Life of the Mind}, author={Cosma Rohilla Shalizi}, journal={arXiv preprint arXiv:0710.4911}, year={2007}, archivePrefix={arXiv}, eprint={0710.4911}, primaryClass={cs.CY physics.soc-ph} }
shalizi2007social
arxiv-1653
0710.4965
On compositions of numbers and graphs
<|reference_start|>On compositions of numbers and graphs: The main purpose of this note is to pose a couple of problems which are easily formulated, though some seem to be not yet solved. These problems are of general interest for discrete mathematics, including a new twig of a bough of the theory of graphs, i.e. compositions of a given graph. The problems result from, and are served in the entourage of, a series of exercises with hints based predominantly on the second reference and other related recent papers.<|reference_end|>
arxiv
@article{kwasniewski2007on, title={On compositions of numbers and graphs}, author={A.K.Kwasniewski}, journal={Bull. Soc. Sci. Lett. Lodz, 59, No1 , (2009),103-116}, year={2007}, archivePrefix={arXiv}, eprint={0710.4965}, primaryClass={math.CO cs.DM} }
kwasniewski2007on
arxiv-1654
0710.4975
Node discovery problem for a social network
<|reference_start|>Node discovery problem for a social network: Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to the nodes which are not observable directly. They transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes is identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization.<|reference_end|>
arxiv
@article{maeno2007node, title={Node discovery problem for a social network}, author={Yoshiharu Maeno}, journal={Connections vol.29, pp.62-76 (2009)}, year={2007}, archivePrefix={arXiv}, eprint={0710.4975}, primaryClass={cs.AI} }
maeno2007node
arxiv-1655
0710.4982
First to Market is not Everything: an Analysis of Preferential Attachment with Fitness
<|reference_start|>First to Market is not Everything: an Analysis of Preferential Attachment with Fitness: In this paper, we provide a rigorous analysis of preferential attachment with fitness, a random graph model introduced by Bianconi and Barabasi. Depending on the shape of the fitness distribution, we observe three distinct phases: a first-mover-advantage phase, a fit-get-richer phase and an innovation-pays-off phase.<|reference_end|>
arxiv
@article{borgs2007first, title={First to Market is not Everything: an Analysis of Preferential Attachment with Fitness}, author={Christian Borgs, Jennifer Chayes, Constantinos Daskalakis, Sebastien Roch}, journal={arXiv preprint arXiv:0710.4982}, year={2007}, archivePrefix={arXiv}, eprint={0710.4982}, primaryClass={math.PR cs.SI} }
borgs2007first
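The abstract above analyzes preferential attachment with fitness (the Bianconi-Barabasi model) but, being a summary, gives none of the mechanics. The following Python sketch is an illustrative simulation of that growth rule only: each new node draws a fitness and attaches to an existing node with probability proportional to degree times fitness. The uniform fitness distribution, seed graph, and node count are arbitrary choices for illustration, not taken from the paper.

import random

def grow_network(n_nodes, fitness_sampler, n_seed=2):
    """Grow a preferential-attachment-with-fitness network; returns degrees,
    fitness values and the edge list. Each new node attaches to one existing
    node chosen with probability proportional to degree * fitness."""
    fitness = [fitness_sampler() for _ in range(n_seed)]
    degree = [1] * n_seed                      # tiny seed: two linked nodes
    edges = [(0, 1)]
    for new in range(n_seed, n_nodes):
        weights = [degree[i] * fitness[i] for i in range(new)]
        target = random.choices(range(new), weights=weights, k=1)[0]
        fitness.append(fitness_sampler())
        degree.append(1)
        degree[target] += 1
        edges.append((new, target))
    return degree, fitness, edges

if __name__ == "__main__":
    random.seed(0)
    deg, fit, _ = grow_network(5000, fitness_sampler=random.random)
    hub = max(range(len(deg)), key=deg.__getitem__)
    # With heterogeneous fitness the eventual hub need not be an early node.
    print(f"hub node {hub}: degree {deg[hub]}, fitness {fit[hub]:.3f}")

Repeated runs of this toy model illustrate the paper's titular point: with heterogeneous fitness, the largest hub is often a fit late arrival rather than the first node.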
arxiv-1656
0710.4987
Universal source coding over generalized complementary delivery networks
<|reference_start|>Universal source coding over generalized complementary delivery networks: This paper deals with a universal coding problem for a certain kind of multiterminal source coding network called a generalized complementary delivery network. In this network, messages from multiple correlated sources are jointly encoded, and each decoder has access to some of the messages to enable it to reproduce the other messages. Both fixed-to-fixed length and fixed-to-variable length lossless coding schemes are considered. Explicit constructions of universal codes and the bounds of the error probabilities are clarified by using methods of types and graph-theoretical analysis.<|reference_end|>
arxiv
@article{kimura2007universal, title={Universal source coding over generalized complementary delivery networks}, author={Akisato Kimura, Tomohiko Uyematsu, Shigeaki Kuzuoka, Shun Watanabe}, journal={IEEE Transactions on Information Theory, Vol.55, No.3, pp.1360-1373, March 2009.}, year={2007}, doi={10.1109/TIT.2008.2011438}, archivePrefix={arXiv}, eprint={0710.4987}, primaryClass={cs.IT math.IT} }
kimura2007universal
arxiv-1657
0710.4999
L'analyse de l'expertise du point de vue de l'ergonomie cognitive
<|reference_start|>L'analyse de l'expertise du point de vue de l'ergonomie cognitive: This paper presents a review of methods for collecting and analysing data on complex activities. Starting with methods developed for design, we examine the possibility of transposing them to other complex activities, especially activities referring to sensorial expertise. Résumé: This text presents a review of methods for collecting and analysing data on complex activities. Starting from methods developed for design activities, we examine the possibility of transposing them to other complex activities, in particular activities that draw on sensory expertise.<|reference_end|>
arxiv
@article{visser2007l'analyse, title={L'analyse de l'expertise du point de vue de l'ergonomie cognitive}, author={Willemien Visser (INRIA Rocquencourt)}, journal={Dans Les expertises sensorielles : Nature et acquisition (2006) 1-12}, year={2007}, archivePrefix={arXiv}, eprint={0710.4999}, primaryClass={cs.HC} }
visser2007l'analyse
arxiv-1658
0710.5002
The entropy of keys derived from laser speckle
<|reference_start|>The entropy of keys derived from laser speckle: Laser speckle has been proposed in a number of papers as a high-entropy source of unpredictable bits for use in security applications. Bit strings derived from speckle can be used for a variety of security purposes such as identification, authentication, anti-counterfeiting, secure key storage, random number generation and tamper protection. The choice of laser speckle as a source of random keys is quite natural, given the chaotic properties of speckle. However, this same chaotic behaviour also causes reproducibility problems. Cryptographic protocols require either zero noise or very low noise in their inputs; hence the issue of error rates is critical to applications of laser speckle in cryptography. Most of the literature uses an error reduction method based on Gabor filtering. Though the method is successful, it has not been thoroughly analysed. In this paper we present a statistical analysis of Gabor-filtered speckle patterns. We introduce a model in which perturbations are described as random phase changes in the source plane. Using this model we compute the second and fourth order statistics of Gabor coefficients. We determine the mutual information between perturbed and unperturbed Gabor coefficients and the bit error rate in the derived bit string. The mutual information provides an absolute upper bound on the number of secure bits that can be reproducibly extracted from noisy measurements.<|reference_end|>
arxiv
@article{skoric2007the, title={The entropy of keys derived from laser speckle}, author={B. Skoric}, journal={arXiv preprint arXiv:0710.5002}, year={2007}, archivePrefix={arXiv}, eprint={0710.5002}, primaryClass={cs.CR cs.CV} }
skoric2007the
arxiv-1659
0710.5006
CANE: The Content Addressed Network Environment
<|reference_start|>CANE: The Content Addressed Network Environment: The fragmented nature and asymmetry of local and remote file access and network access, combined with the current lack of robust authenticity and privacy, hamstrings the current internet. The collection of disjoint and often ad-hoc technologies currently in use are at least partially responsible for the magnitude and potency of the plagues besetting the information economy, of which spam and email borne virii are canonical examples. The proposed replacement for the internet, Internet Protocol Version 6 (IPv6), does little to tackle these underlying issues, instead concentrating on addressing the technical issues of a decade ago. This paper introduces CANE, a Content Addressed Network Environment, and compares it against current internet and related technologies. Specifically, CANE presents a simple computing environment in which location is abstracted away in favour of identity, and trust is explicitly defined. Identity is cryptographically verified and yet remains pervasively open in nature. It is argued that this approach is capable of being generalised such that file storage and network access can be unified and subsequently combined with human interfaces to result in a Unified Theory of Access, which addresses many of the significant problems besetting the internet community of the early 21st century.<|reference_end|>
arxiv
@article{gardner-stephen2007cane:, title={CANE: The Content Addressed Network Environment}, author={Paul Gardner-Stephen}, journal={arXiv preprint arXiv:0710.5006}, year={2007}, archivePrefix={arXiv}, eprint={0710.5006}, primaryClass={cs.NI cs.CR cs.DC} }
gardner-stephen2007cane:
arxiv-1660
0710.5054
WWW Spiders: an introduction
<|reference_start|>WWW Spiders: an introduction: In recent years, the study of complex networks has received a lot of attention. Real systems have gained importance in scientific publications, despite an important drawback: the difficulty of retrieving and managing such a great quantity of information. This paper is intended as an introduction to the construction of spiders and scrapers: specifically, how to program and safely deploy this kind of software application. The aim is to show how software can be prepared to automatically surf the net and retrieve information for the user with high efficiency and safety.<|reference_end|>
arxiv
@article{zanin2007www, title={WWW Spiders: an introduction}, author={Massimiliano Zanin}, journal={arXiv preprint arXiv:0710.5054}, year={2007}, archivePrefix={arXiv}, eprint={0710.5054}, primaryClass={cs.CY} }
zanin2007www
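As a companion to the abstract above, which discusses how to program and safely deploy spiders, here is a minimal illustrative sketch of a polite breadth-first crawler using only the Python standard library. The start URL, page limit, and delay are made-up example values; a real deployment would also honour robots.txt and set a descriptive User-Agent, which this sketch omits.

import time
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href targets of all anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10, delay=1.0):
    """Breadth-first crawl, bounded by max_pages, with a politeness delay."""
    seen, queue = set(), deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue                       # skip unreachable or non-text pages
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            queue.append(urljoin(url, href))
        time.sleep(delay)                  # be gentle with remote servers
    return seen

if __name__ == "__main__":
    print(crawl("https://example.com", max_pages=3))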
arxiv-1661
0710.5116
Combining haplotypers
<|reference_start|>Combining haplotypers: Statistically resolving the underlying haplotype pair for a genotype measurement is an important intermediate step in gene mapping studies, and has received much attention recently. Consequently, a variety of methods for this problem have been developed. Different methods employ different statistical models, and thus implicitly encode different assumptions about the nature of the underlying haplotype structure. Depending on the population sample in question, their relative performance can vary greatly, and it is unclear which method to choose for a particular sample. Instead of choosing a single method, we explore combining predictions returned by different methods in a principled way, and thereby circumvent the problem of method selection. We propose several techniques for combining haplotype reconstructions and analyze their computational properties. In an experimental study on real-world haplotype data we show that such techniques can provide more accurate and robust reconstructions, and are useful for outlier detection. Typically, the combined prediction is at least as accurate as or even more accurate than the best individual method, effectively circumventing the method selection problem.<|reference_end|>
arxiv
@article{kääriäinen2007combining, title={Combining haplotypers}, author={Matti K{\"a}{\"a}ri{\"a}inen, Niels Landwehr, Sampsa Lappalainen and Taneli Mielik{\"a}inen}, journal={arXiv preprint arXiv:0710.5116}, year={2007}, number={C-2007-57}, archivePrefix={arXiv}, eprint={0710.5116}, primaryClass={cs.LG cs.CE q-bio.QM} }
kääriäinen2007combining
arxiv-1662
0710.5130
A Proof of the Factorization Forest Theorem
<|reference_start|>A Proof of the Factorization Forest Theorem: We show that for every homomorphism $\Gamma^+ \to S$, where $S$ is a finite semigroup, there exists a factorization forest of height $\leq 3|S|$. The proof is based on Green's relations.<|reference_end|>
arxiv
@article{kufleitner2007a, title={A Proof of the Factorization Forest Theorem}, author={Manfred Kufleitner (LaBRI)}, journal={arXiv preprint arXiv:0710.5130}, year={2007}, archivePrefix={arXiv}, eprint={0710.5130}, primaryClass={cs.LO} }
kufleitner2007a
arxiv-1663
0710.5144
Forecasting for stationary binary time series
<|reference_start|>Forecasting for stationary binary time series: The forecasting problem for a stationary and ergodic binary time series $\{X_n\}_{n=0}^{\infty}$ is to estimate the probability that $X_{n+1}=1$ based on the observations $X_i$, $0\le i\le n$ without prior knowledge of the distribution of the process $\{X_n\}$. It is known that this is not possible if one estimates at all values of $n$. We present a simple procedure which will attempt to make such a prediction infinitely often at carefully selected stopping times chosen by the algorithm. We show that the proposed procedure is consistent under certain conditions, and we estimate the growth rate of the stopping times.<|reference_end|>
arxiv
@article{morvai2007forecasting, title={Forecasting for stationary binary time series}, author={Gusztav Morvai and Benjamin Weiss}, journal={Acta Appl. Math. 79 (2003), no. 1-2, 25--34}, year={2007}, archivePrefix={arXiv}, eprint={0710.5144}, primaryClass={math.PR cs.IT math.IT} }
morvai2007forecasting
arxiv-1664
0710.5161
Decomposable Subspaces, Linear Sections of Grassmann Varieties, and Higher Weights of Grassmann Codes
<|reference_start|>Decomposable Subspaces, Linear Sections of Grassmann Varieties, and Higher Weights of Grassmann Codes: Given a homogeneous component of an exterior algebra, we characterize those subspaces in which every nonzero element is decomposable. In geometric terms, this corresponds to characterizing the projective linear subvarieties of the Grassmann variety with its Plucker embedding. When the base field is finite, we consider the more general question of determining the maximum number of points on sections of Grassmannians by linear subvarieties of a fixed (co)dimension. This corresponds to a known open problem of determining the complete weight hierarchy of linear error correcting codes associated to Grassmann varieties. We recover most of the known results as well as prove some new results. In the process we obtain, and utilize, a simple generalization of the Griesmer-Wei bound for arbitrary linear codes.<|reference_end|>
arxiv
@article{ghorpade2007decomposable, title={Decomposable Subspaces, Linear Sections of Grassmann Varieties, and Higher Weights of Grassmann Codes}, author={Sudhir R. Ghorpade, Arunkumar R. Patil, and Harish K. Pillai}, journal={Finite Fields Appl. 15 (2009), no. 1, 54--68.}, year={2007}, doi={10.1016/j.ffa.2008.08.001}, archivePrefix={arXiv}, eprint={0710.5161}, primaryClass={math.AG cs.IT math.IT} }
ghorpade2007decomposable
arxiv-1665
0710.5190
Identifying statistical dependence in genomic sequences via mutual information estimates
<|reference_start|>Identifying statistical dependence in genomic sequences via mutual information estimates: Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5' untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unknown alternative splicing mechanisms or structural scaffolds. Second, using data from the FBI's Combined DNA Index System (CODIS), we demonstrate that our approach is particularly well suited for the problem of discovering short tandem repeats, an application of importance in genetic profiling.<|reference_end|>
arxiv
@article{aktulga2007identifying, title={Identifying statistical dependence in genomic sequences via mutual information estimates}, author={H.M. Aktulga, I. Kontoyiannis, L.A. Lyznik, L. Szpankowski, A.Y. Grama and W. Szpankowski}, journal={arXiv preprint arXiv:0710.5190}, year={2007}, archivePrefix={arXiv}, eprint={0710.5190}, primaryClass={q-bio.GN cs.IT math.IT} }
aktulga2007identifying
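The abstract above relies on mutual information estimates between segments of biological sequences. The following sketch shows only the basic plug-in (empirical) estimator of mutual information between two aligned symbol segments; it illustrates the underlying quantity, not the paper's methodology or threshold function, and the example sequences are made up.

import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in (empirical) mutual information, in bits, between two
    equal-length symbol sequences treated as paired samples."""
    assert len(x) == len(y)
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), count in pxy.items():
        # count/n divided by (px[a]/n)*(py[b]/n), simplified to count*n/(px*py)
        mi += (count / n) * math.log2(count * n / (px[a] * py[b]))
    return mi

if __name__ == "__main__":
    s1 = "ACGTACGTACGTACGT"
    s2 = "ACGTACGTACGTACGT"   # identical segment: MI equals the entropy of s1
    print(round(mutual_information(s1, s2), 3))   # 2.0 bits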
arxiv-1666
0710.5194
Rate-Constrained Wireless Networks with Fading Channels: Interference-Limited and Noise-Limited Regimes
<|reference_start|>Rate-Constrained Wireless Networks with Fading Channels: Interference-Limited and Noise-Limited Regimes: A network of $n$ wireless communication links is considered in a Rayleigh fading environment. It is assumed that each link can be active and transmit with a constant power $P$ or remain silent. The objective is to maximize the number of active links such that each active link can transmit with a constant rate $\lambda$. An upper bound is derived that shows the number of active links scales at most like $\frac{1}{\lambda} \log n$. To obtain a lower bound, a decentralized link activation strategy is described and analyzed. It is shown that for small values of $\lambda$, the number of supported links by this strategy meets the upper bound; however, as $\lambda$ grows, this number becomes far below the upper bound. To shrink the gap between the upper bound and the achievability result, a modified link activation strategy is proposed and analyzed based on some results from random graph theory. It is shown that this modified strategy performs very close to the optimum. Specifically, this strategy is \emph{asymptotically almost surely} optimum when $\lambda$ approaches $\infty$ or 0. It turns out the optimality results are obtained in an interference-limited regime. It is demonstrated that, by proper selection of the algorithm parameters, the proposed scheme also allows the network to operate in a noise-limited regime in which the transmission rates can be adjusted by the transmission powers. The price for this flexibility is a decrease in the throughput scaling law by a multiplicative factor of $\log \log n$.<|reference_end|>
arxiv
@article{ebrahimi2007rate-constrained, title={Rate-Constrained Wireless Networks with Fading Channels: Interference-Limited and Noise-Limited Regimes}, author={Masoud Ebrahimi and Amir K. Khandani}, journal={arXiv preprint arXiv:0710.5194}, year={2007}, archivePrefix={arXiv}, eprint={0710.5194}, primaryClass={cs.IT math.IT} }
ebrahimi2007rate-constrained
arxiv-1667
0710.5230
Generalized reliability-based syndrome decoding for LDPC codes
<|reference_start|>Generalized reliability-based syndrome decoding for LDPC codes: Aiming at bridging the gap between the maximum likelihood decoding (MLD) and the suboptimal iterative decodings for short or medium length LDPC codes, we present a generalized ordered statistic decoding (OSD) in the form of syndrome decoding, to cascade with the belief propagation (BP) or enhanced min-sum decoding. The OSD is invoked only when decoding failures are obtained for the preceding iterative decoding method. With respect to the existing OSD, which is based on the accumulated log-likelihood ratio (LLR) metric, we extend the accumulative metric to the situation where the BP decoding is in the probability domain. Moreover, after generalizing the accumulative metric to the context of the normalized or offset min-sum decoding, the OSD shows an appealing tradeoff between performance and complexity. In the OSD implementation, when deciding the true error pattern among many candidates, a proposed alternative proves effective in reducing the number of real additions without performance loss. Simulation results demonstrate that the cascade connection of enhanced min-sum and OSD decodings significantly outperforms BP alone, in terms of either performance or complexity.<|reference_end|>
arxiv
@article{li2007generalized, title={Generalized reliability-based syndrome decoding for LDPC codes}, author={Guangwen Li, Guangzeng Feng}, journal={arXiv preprint arXiv:0710.5230}, year={2007}, archivePrefix={arXiv}, eprint={0710.5230}, primaryClass={cs.IT math.IT} }
li2007generalized
arxiv-1668
0710.5235
Unsaturated Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects
<|reference_start|>Unsaturated Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects: In this paper, we provide a throughput analysis of the IEEE 802.11 protocol at the data link layer in non-saturated traffic conditions, taking into account the impact of both transmission channel and capture effects in a Rayleigh fading environment. The impact of both the non-ideal channel and capture becomes important in terms of the actual observed throughput in typical network conditions whereby traffic is mainly unsaturated, especially in an environment of high interference. We extend the multi-dimensional Markovian state transition model characterizing the behavior at the MAC layer by including transmission states that account for packet transmission failures due to errors caused by propagation through the channel, along with a state characterizing the system when there are no packets to be transmitted in the buffer of a station. Finally, we derive a linear model of the throughput along with its interval of validity. Simulation results closely match the theoretical derivations, confirming the effectiveness of the proposed model.<|reference_end|>
arxiv
@article{daneshgaran2007unsaturated, title={Unsaturated Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects}, author={F. Daneshgaran, Massimiliano Laddomada, F. Mesiti, M. Mondin}, journal={arXiv preprint arXiv:0710.5235}, year={2007}, doi={10.1109/TWC.2008.060859}, archivePrefix={arXiv}, eprint={0710.5235}, primaryClass={cs.NI} }
daneshgaran2007unsaturated
arxiv-1669
0710.5236
Saturation Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects
<|reference_start|>Saturation Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects: In this paper, we provide a saturation throughput analysis of the IEEE 802.11 protocol at the data link layer by including the impact of both transmission channel and capture effects in a Rayleigh fading environment. Impacts of both non-ideal channel and capture effects, especially in an environment of high interference, become important in terms of the actual observed throughput. As far as the 4-way handshaking mechanism is concerned, we extend the multi-dimensional Markovian state transition model characterizing the behavior at the MAC layer by including transmission states that account for packet transmission failures due to errors caused by propagation through the channel. This way, any channel model characterizing the physical transmission medium can be accommodated, including AWGN and fading channels. We also extend the Markov model in order to consider the behavior of the contention window when employing the basic 2-way handshaking mechanism. Under the usual assumptions regarding the traffic generated per node and independence of packet collisions, we solve for the stationary probabilities of the Markov chain and develop expressions for the saturation throughput as a function of the number of terminals, packet sizes, raw channel error rates, capture probability, and other key system parameters. The theoretical derivations are then compared to simulation results, confirming the effectiveness of the proposed models.<|reference_end|>
arxiv
@article{daneshgaran2007saturation, title={Saturation Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects}, author={F. Daneshgaran, Massimiliano Laddomada, F. Mesiti, M. Mondin}, journal={arXiv preprint arXiv:0710.5236}, year={2007}, doi={10.1109/TCOMM.2008.060397}, archivePrefix={arXiv}, eprint={0710.5236}, primaryClass={cs.NI} }
daneshgaran2007saturation
arxiv-1670
0710.5238
On The Linear Behaviour of the Throughput of IEEE 802.11 DCF in Non-Saturated Conditions
<|reference_start|>On The Linear Behaviour of the Throughput of IEEE 802.11 DCF in Non-Saturated Conditions: We propose a linear model of the throughput of the IEEE 802.11 Distributed Coordination Function (DCF) protocol at the data link layer in non-saturated traffic conditions. We show that the throughput is a linear function of the packet arrival rate (PAR) $\lambda$ with a slope depending on both the number of contending stations and the average payload length. We also derive the interval of validity of the proposed model by showing the presence of a critical $\lambda$, above which the station begins operating in saturated traffic conditions. The analysis is based on the multi-dimensional Markovian state transition model proposed by Liaw \textit{et al.} with the aim of describing the behaviour of the MAC layer in unsaturated traffic conditions. Simulation results closely match the theoretical derivations, confirming the effectiveness of the proposed linear model.<|reference_end|>
arxiv
@article{daneshgaran2007on, title={On The Linear Behaviour of the Throughput of IEEE 802.11 DCF in Non-Saturated Conditions}, author={F. Daneshgaran, Massimiliano Laddomada, F. Mesiti, M. Mondin}, journal={arXiv preprint arXiv:0710.5238}, year={2007}, doi={10.1109/LCOMM.2007.071149}, archivePrefix={arXiv}, eprint={0710.5238}, primaryClass={cs.NI} }
daneshgaran2007on
arxiv-1671
0710.5241
Connection Between System Parameters and Localization Probability in Network of Randomly Distributed Nodes
<|reference_start|>Connection Between System Parameters and Localization Probability in Network of Randomly Distributed Nodes: This article deals with localization probability in a network of randomly distributed communication nodes contained in a bounded domain. A fraction of the nodes denoted as L-nodes are assumed to have localization information while the rest of the nodes denoted as NL nodes do not. The basic model assumes each node has a certain radio coverage within which it can make relative distance measurements. We model both the case radio coverage is fixed and the case radio coverage is determined by signal strength measurements in a Log-Normal Shadowing environment. We apply the probabilistic method to determine the probability of NL-node localization as a function of the coverage area to domain area ratio and the density of L-nodes. We establish analytical expressions for this probability and the transition thresholds with respect to key parameters whereby marked change in the probability behavior is observed. The theoretical results presented in the article are supported by simulations.<|reference_end|>
arxiv
@article{daneshgaran2007connection, title={Connection Between System Parameters and Localization Probability in Network of Randomly Distributed Nodes}, author={F. Daneshgaran, Massimiliano Laddomada, M. Mondin}, journal={arXiv preprint arXiv:0710.5241}, year={2007}, doi={10.1109/TWC.2007.05785}, archivePrefix={arXiv}, eprint={0710.5241}, primaryClass={cs.NI cs.DM cs.IT math.IT} }
daneshgaran2007connection
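The abstract above studies the probability that nodes without localization information (NL-nodes) can be localized given randomly placed L-nodes and a fixed radio coverage. The Monte Carlo sketch below is purely illustrative: it assumes an NL-node is localizable when at least three L-nodes fall within its radio range (a plain trilateration criterion chosen for illustration, not the paper's exact model), and all parameter values are arbitrary.

import random

def localization_probability(n_nodes=500, frac_l=0.3, radius=0.1, trials=200):
    """Monte Carlo estimate of the fraction of NL-nodes with at least three
    L-nodes inside their radio range, for nodes uniform in the unit square."""
    hits = total = 0
    for _ in range(trials):
        nodes = [(random.random(), random.random()) for _ in range(n_nodes)]
        n_l = int(frac_l * n_nodes)
        l_nodes, nl_nodes = nodes[:n_l], nodes[n_l:]
        for (x, y) in nl_nodes:
            in_range = sum((x - a) ** 2 + (y - b) ** 2 <= radius ** 2
                           for (a, b) in l_nodes)
            hits += in_range >= 3          # assumed trilateration criterion
            total += 1
    return hits / total

if __name__ == "__main__":
    random.seed(1)
    print(f"estimated localization probability: {localization_probability():.3f}")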
arxiv-1672
0710.5242
A Model of the IEEE 802.11 DCF in Presence of Non Ideal Transmission Channel and Capture Effects
<|reference_start|>A Model of the IEEE 802.11 DCF in Presence of Non Ideal Transmission Channel and Capture Effects: In this paper, we provide a throughput analysis of the IEEE 802.11 protocol at the data link layer in non-saturated traffic conditions taking into account the impact of both transmission channel and capture effects in a Rayleigh fading environment. Impacts of both non-ideal channel and capture become important in terms of the actual observed throughput in typical network conditions whereby traffic is mainly unsaturated, especially in an environment of high interference. We extend the multi-dimensional Markovian state transition model characterizing the behavior at the MAC layer by including transmission states that account for packet transmission failures due to errors caused by propagation through the channel, along with a state characterizing the system when there are no packets to be transmitted in the buffer of a station.<|reference_end|>
arxiv
@article{daneshgaran2007a, title={A Model of the IEEE 802.11 DCF in Presence of Non Ideal Transmission Channel and Capture Effects}, author={F. Daneshgaran, Massimiliano Laddomada, F. Mesiti, M. Mondin}, journal={arXiv preprint arXiv:0710.5242}, year={2007}, doi={10.1109/GLOCOM.2007.969}, archivePrefix={arXiv}, eprint={0710.5242}, primaryClass={cs.NI} }
daneshgaran2007a
arxiv-1673
0710.5327
Escalating The War On SPAM Through Practical POW Exchange
<|reference_start|>Escalating The War On SPAM Through Practical POW Exchange: Proof-of-work (POW) schemes have been proposed in the past. One prominent system is HASHCASH (Back, 2002), which uses cryptographic puzzles. However, work by Laurie and Clayton (2004) has shown that for a uniform proof-of-work scheme on email to have an impact on SPAM, it would also be onerous enough to impact on senders of "legitimate" email. I suggest that a non-uniform proof-of-work scheme on email may be a solution to this problem, and describe a framework that has the potential to limit SPAM, without unduly penalising legitimate senders, and is constructed using only current SPAM filter technology, and a small change to the SMTP (Simple Mail Transfer Protocol). Specifically, I argue that it is possible to make sending SPAM 1,000 times more expensive than sending "legitimate" email (so-called HAM). Also, unlike the system proposed by Debin Liu and Jean Camp (2006), it does not require the complications of maintaining a reputation system.<|reference_end|>
arxiv
@article{gardner-stephen2007escalating, title={Escalating The War On SPAM Through Practical POW Exchange}, author={Paul Gardner-Stephen}, journal={arXiv preprint arXiv:0710.5327}, year={2007}, doi={10.1109/ICON.2007.4444132}, archivePrefix={arXiv}, eprint={0710.5327}, primaryClass={cs.NI cs.CR} }
gardner-stephen2007escalating
arxiv-1674
0710.5333
Neutrosophic Relational Data Model
<|reference_start|>Neutrosophic Relational Data Model: In this paper, we present a generalization of the relational data model based on interval neutrosophic sets. Our data model is capable of manipulating incomplete as well as inconsistent information. Fuzzy relations or intuitionistic fuzzy relations can only handle incomplete information. Associated with each relation are two membership functions: one is called the truth-membership function T, which keeps track of the extent to which we believe the tuple is in the relation; the other is called the falsity-membership function F, which keeps track of the extent to which we believe that it is not in the relation. A neutrosophic relation is inconsistent if there exists one tuple a such that T(a) + F(a) > 1. In order to handle inconsistent situations, we propose an operator called "split" to transform inconsistent neutrosophic relations into pseudo-consistent neutrosophic relations, do the set-theoretic and relation-theoretic operations on them, and finally use another operator called "combine" to transform the result back to a neutrosophic relation. For this data model, we define algebraic operators that are generalizations of the usual operators such as intersection, union, selection, and join on fuzzy relations. Our data model can underlie any database and knowledge-base management system that deals with incomplete and inconsistent information.<|reference_end|>
arxiv
@article{wang2007neutrosophic, title={Neutrosophic Relational Data Model}, author={Haibin Wang, Rajshekhar Sunderraman, Florentin Smarandache, Andre Rogatko}, journal={arXiv preprint arXiv:0710.5333}, year={2007}, archivePrefix={arXiv}, eprint={0710.5333}, primaryClass={cs.DB} }
wang2007neutrosophic
arxiv-1675
0710.5338
Weighted Random Popular Matchings
<|reference_start|>Weighted Random Popular Matchings: For a set A of n applicants and a set I of m items, we consider the problem of computing a matching of applicants to items, i.e., a function M mapping A to I; here we assume that each applicant $x \in A$ provides a preference list on items in I. We say that an applicant $x \in A$ prefers an item p to an item q if p is located at a higher position than q in its preference list, and we say that x prefers a matching M over a matching M' if x prefers M(x) over M'(x). For a given matching problem A, I, and preference lists, we say that M is more popular than M' if the number of applicants preferring M over M' is larger than that of applicants preferring M' over M, and M is called a popular matching if there is no other matching that is more popular than M. Here we consider the situation where A is partitioned into $A_{1},A_{2},...,A_{k}$, and each $A_{i}$ is assigned a weight $w_{i}>0$ such that $w_{1}>w_{2}>...>w_{k}>0$. For such a matching problem, we say that M is more popular than M' if the total weight of applicants preferring M over M' is larger than that of applicants preferring M' over M, and we call M a k-weighted popular matching if there is no other matching that is more popular than M. In this paper, we analyze the 2-weighted matching problem, and we show that (lower bound) if $m/n^{4/3}=o(1)$, then a random instance of the 2-weighted matching problem with $w_{1} \geq 2w_{2}$ has a 2-weighted popular matching with probability o(1); and (upper bound) if $n^{4/3}/m = o(1)$, then a random instance of the 2-weighted matching problem with $w_{1} \geq 2w_{2}$ has a 2-weighted popular matching with probability 1-o(1).<|reference_end|>
arxiv
@article{itoh2007weighted, title={Weighted Random Popular Matchings}, author={Toshiya Itoh and Osamu Watanabe}, journal={Random Structures and Algorithms, 37(4), pp.477-494, 2010}, year={2007}, doi={10.1002/rsa.20316}, archivePrefix={arXiv}, eprint={0710.5338}, primaryClass={cs.DM cs.CC} }
itoh2007weighted
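The abstract above defines weighted popularity by comparing, for two matchings, the total weight of applicants preferring one over the other. The short Python sketch below simply evaluates that comparison on a made-up instance; it does not construct popular matchings and is not the paper's algorithm.

def more_popular(M1, M2, prefs, weight):
    """Return the signed weight margin of applicants preferring M1 over M2.
    prefs[x] is x's preference list (most preferred first); weight[x] > 0."""
    margin = 0.0
    for x, plist in prefs.items():
        r1, r2 = plist.index(M1[x]), plist.index(M2[x])
        if r1 < r2:
            margin += weight[x]        # x prefers its item under M1
        elif r2 < r1:
            margin -= weight[x]        # x prefers its item under M2
    return margin

if __name__ == "__main__":
    prefs = {"x1": ["p", "q", "r"], "x2": ["p", "r", "q"], "x3": ["q", "p", "r"]}
    weight = {"x1": 2.0, "x2": 1.0, "x3": 1.0}        # two weight classes
    M1 = {"x1": "p", "x2": "r", "x3": "q"}
    M2 = {"x1": "q", "x2": "p", "x3": "r"}
    print(more_popular(M1, M2, prefs, weight))        # 2.0 > 0: M1 is more popular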
arxiv-1676
0710.5340
Bounds on the Network Coding Capacity for Wireless Random Networks
<|reference_start|>Bounds on the Network Coding Capacity for Wireless Random Networks: Recently, it has been shown that the max flow capacity can be achieved in a multicast network using network coding. In this paper, we propose and analyze a more realistic model for wireless random networks. We prove that the capacity of network coding for this model is concentrated around the expected value of its minimum cut. Furthermore, we establish upper and lower bounds for wireless nodes using Chernoff bound. Our experiments show that our theoretical predictions are well matched by simulation results.<|reference_end|>
arxiv
@article{aly2007bounds, title={Bounds on the Network Coding Capacity for Wireless Random Networks}, author={Salah A. Aly, Vishal Kapoor, Jie Meng, Andreas Klappenecker}, journal={arXiv preprint arXiv:0710.5340}, year={2007}, archivePrefix={arXiv}, eprint={0710.5340}, primaryClass={cs.IT cs.NI math.IT} }
aly2007bounds
arxiv-1677
0710.5348
Towards Grid Monitoring and deployment in Jade, using ProActive
<|reference_start|>Towards Grid Monitoring and deployment in Jade, using ProActive: This document describes our current effort to gridify Jade, a java-based environment for the autonomic management of clustered J2EE application servers, developed in the INRIA SARDES research team. Towards this objective, we use the java ProActive grid technology. We first present some of the challenges to turn such an autonomic management system initially dedicated to distributed applications running on clusters of machines, into one that can provide self-management capabilities to large-scale systems, i.e. deployed on grid infrastructures. This leads us to a brief state of the art on grid monitoring systems. Then, we recall the architecture of Jade, and consequently propose to reorganize it in a potentially more scalable way. Practical experiments pertain to the use of the grid deployment feature offered by ProActive to easily conduct the deployment of the Jade system or its revised version on any sort of grid.<|reference_end|>
arxiv
@article{ruz2007towards, title={Towards Grid Monitoring and deployment in Jade, using ProActive}, author={Cristian Ruz (INRIA Sophia Antipolis), Fran{\c{c}}oise Baude (INRIA Sophia Antipolis), Virginie Legrand Contes (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:0710.5348}, year={2007}, archivePrefix={arXiv}, eprint={0710.5348}, primaryClass={cs.DC} }
ruz2007towards
arxiv-1678
0710.5376
Broadcasting Correlated Gaussians
<|reference_start|>Broadcasting Correlated Gaussians: We consider the transmission of a memoryless bivariate Gaussian source over an average-power-constrained one-to-two Gaussian broadcast channel. The transmitter observes the source and describes it to the two receivers by means of an average-power-constrained signal. Each receiver observes the transmitted signal corrupted by a different additive white Gaussian noise and wishes to estimate the source component intended for it. That is, Receiver 1 wishes to estimate the first source component and Receiver 2 wishes to estimate the second source component. Our interest is in the pairs of expected squared-error distortions that are simultaneously achievable at the two receivers. We prove that an uncoded transmission scheme that sends a linear combination of the source components achieves the optimal power-versus-distortion trade-off whenever the signal-to-noise ratio is below a certain threshold. The threshold is a function of the source correlation and the distortion at the receiver with the weaker noise.<|reference_end|>
arxiv
@article{bross2007broadcasting, title={Broadcasting Correlated Gaussians}, author={Shraga Bross, Amos Lapidoth, Stephan Tinguely}, journal={arXiv preprint arXiv:0710.5376}, year={2007}, archivePrefix={arXiv}, eprint={0710.5376}, primaryClass={cs.IT math.IT} }
bross2007broadcasting
arxiv-1679
0710.5382
Some Reflections on the Task of Content Determination in the Context of Multi-Document Summarization of Evolving Events
<|reference_start|>Some Reflections on the Task of Content Determination in the Context of Multi-Document Summarization of Evolving Events: Despite its importance, the task of summarizing evolving events has received little attention from researchers in the field of multi-document summarization. In a previous paper (Afantenos et al. 2007) we presented a methodology for the automatic summarization of documents, emitted by multiple sources, which describe the evolution of an event. At the heart of this methodology lies the identification of similarities and differences between the various documents, in two axes: the synchronic and the diachronic. This is achieved by the introduction of the notion of Synchronic and Diachronic Relations. Those relations connect the messages that are found in the documents, thus resulting in a graph which we call the grid. Although the creation of the grid completes the Document Planning phase of a typical NLG architecture, it can be the case that the number of messages contained in a grid is very large, thus exceeding the required compression rate. In this paper we provide some initial thoughts on a probabilistic model which can be applied at the Content Determination stage, and which tries to alleviate this problem.<|reference_end|>
arxiv
@article{afantenos2007some, title={Some Reflections on the Task of Content Determination in the Context of Multi-Document Summarization of Evolving Events}, author={Stergos D. Afantenos}, journal={Edited by Galia Angelova, Kalina Bontcheva, Ruslan Mitkov, Nicolas Nicolov, and Nikolai Nikolov, Recent Advances in Natural Language Processing (RANLP 2007). Borovets, Bulgaria: INCOMA, 12-16}, year={2007}, archivePrefix={arXiv}, eprint={0710.5382}, primaryClass={cs.CL} }
afantenos2007some
arxiv-1680
0710.5386
Acquisition of Information is Achieved by the Measurement Process in Classical and Quantum Physics
<|reference_start|>Acquisition of Information is Achieved by the Measurement Process in Classical and Quantum Physics: No consensus seems to exist as to what constitutes a measurement, which is still considered somewhat mysterious in many respects in quantum mechanics. At successive stages, the mathematical theory of measure, metrology and measurement theory tried to systematize this field, but significant questions remain open about the nature of measurement, the characterization of the observer, the reliability of measurement processes, etc. The present paper attempts to address these questions through information science. We start from the idea, rather common and intuitive, that the measurement process basically acquires information. Next we expand this idea through four formal definitions and infer some corollaries regarding the measurement process from those definitions. Relativity emerges as the basic property of measurement from the present logical framework, and this rather surprising result collides with the feeling of physicists who take measurement as a myth. In closing, this paper shows how measurement relativity is wholly consistent with some effects calculated in QM and in Einstein's theory.<|reference_end|>
arxiv
@article{rocchi2007acquisition, title={Acquisition of Information is Achieved by the Measurement Process in Classical and Quantum Physics}, author={Paolo Rocchi and Orlando Panella}, journal={AIP Conf. Proc. -- December 3, 2007 -- Volume 962, pp. 206-214. ISBN:978-0-7354-0479-3, ISSN: 0094-243X}, year={2007}, doi={10.1063/1.2827305}, archivePrefix={arXiv}, eprint={0710.5386}, primaryClass={quant-ph cs.IT hep-th math.IT} }
rocchi2007acquisition
arxiv-1681
0710.5425
Fuzzy Private Matching (Extended Abstract)
<|reference_start|>Fuzzy Private Matching (Extended Abstract): In the private matching problem, a client and a server each hold a set of $n$ input elements. The client wants to privately compute the intersection of these two sets: he learns which elements he has in common with the server (and nothing more), while the server gains no information at all. In certain applications it would be useful to have a private matching protocol that reports a match even if two elements are only similar instead of equal. Such a private matching protocol is called \emph{fuzzy}, and is useful, for instance, when elements may be inaccurate or corrupted by errors. We consider the fuzzy private matching problem in a semi-honest environment. Elements are similar if they match on $t$ out of $T$ attributes. First we show that the original solution proposed by Freedman et al. is incorrect. Subsequently we present two fuzzy private matching protocols. The first, simple, protocol has bit message complexity $O(n \binom{T}{t} (T \log{|D|}+k))$. The second, improved, protocol has a much better bit message complexity of $O(n T (\log{|D|}+k))$, but here the client incurs an O(n) factor in time complexity. Additionally, we present protocols based on the computation of the Hamming distance and on oblivious transfer, which have different, sometimes more efficient, performance characteristics.<|reference_end|>
arxiv
@article{chmielewski2007fuzzy, title={Fuzzy Private Matching (Extended Abstract)}, author={Łukasz Chmielewski and Jaap-Henk Hoepman}, journal={arXiv preprint arXiv:0710.5425}, year={2007}, archivePrefix={arXiv}, eprint={0710.5425}, primaryClass={cs.CR} }
chmielewski2007fuzzy
arxiv-1682
0710.5455
Analog Chaos-based Secure Communications and Cryptanalysis: A Brief Survey
<|reference_start|>Analog Chaos-based Secure Communications and Cryptanalysis: A Brief Survey: A large number of analog chaos-based secure communication systems have been proposed since the early 1990s exploiting the technique of chaos synchronization. A brief survey of these chaos-based cryptosystems and of related cryptanalytic results is given. Some recently proposed countermeasures against known attacks are also introduced.<|reference_end|>
arxiv
@article{li2007analog, title={Analog Chaos-based Secure Communications and Cryptanalysis: A Brief Survey}, author={Shujun Li, Gonzalo Alvarez, Zhong Li and Wolfgang A. Halang}, journal={3rd International IEEE Scientific Conference on Physics and Control (PhysCon 2007), http://lib.physcon.ru/?item=1368}, year={2007}, archivePrefix={arXiv}, eprint={0710.5455}, primaryClass={nlin.CD cs.CR} }
li2007analog
arxiv-1683
0710.5465
Cryptanalysis of an image encryption scheme based on a new total shuffling algorithm
<|reference_start|>Cryptanalysis of an image encryption scheme based on a new total shuffling algorithm: Chaotic systems have been broadly exploited through the last two decades to build encryption methods. Recently, two new image encryption schemes have been proposed, where the encryption process involves a permutation operation and an XOR-like transformation of the shuffled pixels, which are controlled by three chaotic systems. This paper discusses some defects of the schemes and how to break them with a chosen-plaintext attack.<|reference_end|>
arxiv
@article{arroyo2007cryptanalysis, title={Cryptanalysis of an image encryption scheme based on a new total shuffling algorithm}, author={David Arroyo, Chengqing Li, Shujun Li, Gonzalo Alvarez and Wolfgang A. Halang}, journal={Chaos, Solitons & Fractals, vol. 41, no. 5, pp. 2613-2616, 2009}, year={2007}, doi={10.1016/j.chaos.2008.09.051}, archivePrefix={arXiv}, eprint={0710.5465}, primaryClass={nlin.CD cs.CR cs.MM} }
arroyo2007cryptanalysis
arxiv-1684
0710.5471
Cryptanalysis of a computer cryptography scheme based on a filter bank
<|reference_start|>Cryptanalysis of a computer cryptography scheme based on a filter bank: This paper analyzes the security of a recently-proposed signal encryption scheme based on a filter bank. A very critical weakness of this new signal encryption procedure is exploited in order to successfully recover the associated secret key.<|reference_end|>
arxiv
@article{arroyo2007cryptanalysis, title={Cryptanalysis of a computer cryptography scheme based on a filter bank}, author={David Arroyo, Chengqing Li, Shujun Li and Gonzalo Alvarez}, journal={Chaos, Solitons & Fractals, vol. 41, no. 1, pp. 410-413, 2009}, year={2007}, doi={10.1016/j.chaos.2008.01.020}, archivePrefix={arXiv}, eprint={0710.5471}, primaryClass={nlin.CD cs.CR} }
arroyo2007cryptanalysis
arxiv-1685
0710.5501
Discriminated Belief Propagation
<|reference_start|>Discriminated Belief Propagation: Near optimal decoding of good error control codes is generally a difficult task. However, for a certain type of (sufficiently) good codes an efficient decoding algorithm with near optimal performance exists. These codes are defined via a combination of constituent codes with low complexity trellis representations. Their decoding algorithm is an instance of (loopy) belief propagation and is based on an iterative transfer of constituent beliefs. The beliefs are thereby given by the symbol probabilities computed in the constituent trellises. Even though weak constituent codes are employed, close to optimal performance is obtained, i.e., the encoder/decoder pair (almost) achieves the information theoretic capacity. However, (loopy) belief propagation only performs well for a rather specific set of codes, which limits its applicability. In this paper a generalisation of iterative decoding is presented. It is proposed to transfer more values than just the constituent beliefs. This is achieved by the transfer of beliefs obtained by independently investigating parts of the code space. This leads to the concept of discriminators, which are used to improve the decoder resolution within certain areas and which define discriminated symbol beliefs. It is shown that these beliefs approximate the overall symbol probabilities. This leads to an iteration rule that (below channel capacity) typically only admits the solution of the overall decoding problem. Via a Gauss approximation, a low-complexity version of this algorithm is derived. Moreover, the approach may then be applied to a wide range of channel maps without significant complexity increase.<|reference_end|>
arxiv
@article{sorger2007discriminated, title={Discriminated Belief Propagation}, author={Uli Sorger}, journal={arXiv preprint arXiv:0710.5501}, year={2007}, number={TR-CSC-07-01, University of Luxembourg}, archivePrefix={arXiv}, eprint={0710.5501}, primaryClass={cs.IT cs.AI math.IT} }
sorger2007discriminated
arxiv-1686
0710.5512
Risk Minimization and Optimal Derivative Design in a Principal Agent Game
<|reference_start|>Risk Minimization and Optimal Derivative Design in a Principal Agent Game: We consider the problem of Adverse Selection and optimal derivative design within a Principal-Agent framework. The principal's income is exposed to non-hedgeable risk factors arising, for instance, from weather or climate phenomena. She evaluates her risk using a coherent and law-invariant risk measure and tries to minimize her exposure by selling derivative securities on her income to individual agents. The agents have mean-variance preferences with heterogeneous risk aversion coefficients. An agent's degree of risk aversion is private information and hidden from the principal, who only knows the overall distribution. We show that the principal's risk minimization problem has a solution and illustrate the effects of risk transfer on her income by means of two specific examples. Our model extends earlier work of Barrieu and El Karoui (2005) and Carlier, Ekeland and Touzi (2007).<|reference_end|>
arxiv
@article{horst2007risk, title={Risk Minimization and Optimal Derivative Design in a Principal Agent Game}, author={U. Horst, S. Moreno}, journal={arXiv preprint arXiv:0710.5512}, year={2007}, archivePrefix={arXiv}, eprint={0710.5512}, primaryClass={cs.CE} }
horst2007risk
arxiv-1687
0710.5547
Code Similarity on High Level Programs
<|reference_start|>Code Similarity on High Level Programs: This paper presents a new approach to code similarity for high-level programs. Our technique is based on Fast Dynamic Time Warping, which builds a warp path, or point-to-point relation, under local restrictions. The source code is represented as a time series using the operators of the programming language, which makes the comparison possible. This enables the detection of subsequences that represent similar code instructions. In contrast with other code similarity algorithms, we do not perform feature extraction. The experiments show that two source codes are similar when their respective time series are similar.<|reference_end|>
arxiv
@article{bernal2007code, title={Code Similarity on High Level Programs}, author={M. Miron Bernal, H. Coyote Estrada, J. Figueroa Nazuno}, journal={arXiv preprint arXiv:0710.5547}, year={2007}, archivePrefix={arXiv}, eprint={0710.5547}, primaryClass={cs.CV cs.DS} }
bernal2007code
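The entry above (arXiv:0710.5547) describes turning source code into a numeric time series over its operators and comparing programs with Fast Dynamic Time Warping. The following Python fragment is only an illustrative sketch under assumptions not taken from the paper: the operator-to-value mapping is hypothetical, and a plain O(nm) dynamic-programming DTW is used in place of the FastDTW variant with local restrictions.

import re

# Hypothetical operator-to-value mapping; the paper's own encoding is not reproduced here.
OPERATOR_VALUES = {"==": 1, "=": 2, "+": 3, "-": 4, "*": 5, "/": 6, "<": 7, ">": 8}

def to_series(source):
    # Keep only the operators of a program and map them to numbers.
    tokens = re.findall(r"==|[=+\-*/<>]", source)
    return [OPERATOR_VALUES[t] for t in tokens]

def dtw_distance(a, b):
    # Plain dynamic-programming DTW; a low distance indicates similar operator structure.
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two small fragments with the same operator sequence yield distance 0.
print(dtw_distance(to_series("x = a + b * c"), to_series("y = a + b * d")))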
arxiv-1688
0710.5582
Computing Equilibria in Anonymous Games
<|reference_start|>Computing Equilibria in Anonymous Games: We present efficient approximation algorithms for finding Nash equilibria in anonymous games, that is, games in which the players' utilities, though different, do not differentiate between other players. Our results pertain to such games with many players but few strategies. We show that any such game has an approximate pure Nash equilibrium, computable in polynomial time, with approximation O(s^2 L), where s is the number of strategies and L is the Lipschitz constant of the utilities. Finally, we show that there is a PTAS for finding an epsilon-approximate Nash equilibrium.<|reference_end|>
arxiv
@article{daskalakis2007computing, title={Computing Equilibria in Anonymous Games}, author={Constantinos Daskalakis, Christos Papadimitriou}, journal={arXiv preprint arXiv:0710.5582}, year={2007}, archivePrefix={arXiv}, eprint={0710.5582}, primaryClass={cs.GT cs.CC cs.DM} }
daskalakis2007computing
arxiv-1689
0710.5640
LDPC-Based Iterative Algorithm for Compression of Correlated Sources at Rates Approaching the Slepian-Wolf Bound
<|reference_start|>LDPC-Based Iterative Algorithm for Compression of Correlated Sources at Rates Approaching the Slepian-Wolf Bound: This article proposes a novel iterative algorithm based on Low Density Parity Check (LDPC) codes for compression of correlated sources at rates approaching the Slepian-Wolf bound. The setup considered in the article looks at the problem of compressing one source at a rate determined based on the knowledge of the mean source correlation at the encoder, and employing the other correlated source as side information at the decoder which decompresses the first source based on the estimates of the actual correlation. We demonstrate that depending on the extent of the actual source correlation estimated through an iterative paradigm, significant compression can be obtained relative to the case the decoder does not use the implicit knowledge of the existence of correlation.<|reference_end|>
arxiv
@article{daneshgaran2007ldpc-based, title={LDPC-Based Iterative Algorithm for Compression of Correlated Sources at Rates Approaching the Slepian-Wolf Bound}, author={F. Daneshgaran, Massimiliano Laddomada, M. Mondin}, journal={arXiv preprint arXiv:0710.5640}, year={2007}, archivePrefix={arXiv}, eprint={0710.5640}, primaryClass={cs.IT math.IT} }
daneshgaran2007ldpc-based
arxiv-1690
0710.5659
Model Checking Synchronized Products of Infinite Transition Systems
<|reference_start|>Model Checking Synchronized Products of Infinite Transition Systems: Formal verification using the model checking paradigm has to deal with two aspects: The system models are structured, often as products of components, and the specification logic has to be expressive enough to allow the formalization of reachability properties. The present paper is a study on what can be achieved for infinite transition systems under these premises. As models we consider products of infinite transition systems with different synchronization constraints. We introduce finitely synchronized transition systems, i.e. product systems which contain only finitely many (parameterized) synchronized transitions, and show that the decidability of FO(R), first-order logic extended by reachability predicates, of the product system can be reduced to the decidability of FO(R) of the components. This result is optimal in the following sense: (1) If we allow semifinite synchronization, i.e. just in one component infinitely many transitions are synchronized, the FO(R)-theory of the product system is in general undecidable. (2) We cannot extend the expressive power of the logic under consideration. Already a weak extension of first-order logic with transitive closure, where we restrict the transitive closure operators to arity one and nesting depth two, is undecidable for an asynchronous (and hence finitely synchronized) product, namely for the infinite grid.<|reference_end|>
arxiv
@article{wöhrle2007model, title={Model Checking Synchronized Products of Infinite Transition Systems}, author={Stefan Wöhrle and Wolfgang Thomas}, journal={Logical Methods in Computer Science, Volume 3, Issue 4 (November 5, 2007) lmcs:755}, year={2007}, doi={10.2168/LMCS-3(4:5)2007}, archivePrefix={arXiv}, eprint={0710.5659}, primaryClass={cs.LO} }
wöhrle2007model
arxiv-1691
0710.5666
The Entropy Photon-Number Inequality and its Consequences
<|reference_start|>The Entropy Photon-Number Inequality and its Consequences: Determining the ultimate classical information carrying capacity of electromagnetic waves requires quantum-mechanical analysis to properly account for the bosonic nature of these waves. Recent work has established capacity theorems for bosonic single-user, broadcast, and wiretap channels, under the presumption of two minimum output entropy conjectures. Despite considerable accumulated evidence that supports the validity of these conjectures, they have yet to be proven. Here we show that the preceding minimum output entropy conjectures are simple consequences of an Entropy Photon-Number Inequality, which is a conjectured quantum-mechanical analog of the Entropy Power Inequality (EPI) from classical information theory.<|reference_end|>
arxiv
@article{guha2007the, title={The Entropy Photon-Number Inequality and its Consequences}, author={Saikat Guha, Baris I. Erkmen, Jeffrey H. Shapiro}, journal={arXiv preprint arXiv:0710.5666}, year={2007}, archivePrefix={arXiv}, eprint={0710.5666}, primaryClass={quant-ph cs.IT math.IT} }
guha2007the
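For reference, the classical Entropy Power Inequality mentioned in the entry above (arXiv:0710.5666) states that for independent random vectors $X$ and $Y$ taking values in $\mathbb{R}^n$,
\[
e^{2h(X+Y)/n} \ge e^{2h(X)/n} + e^{2h(Y)/n},
\]
where $h(\cdot)$ denotes differential entropy. The conjectured Entropy Photon-Number Inequality is its quantum analog for bosonic modes combined on a beamsplitter; its precise statement is not reproduced here.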
arxiv-1692
0710.5674
Key Substitution in the Symbolic Analysis of Cryptographic Protocols (extended version)
<|reference_start|>Key Substitution in the Symbolic Analysis of Cryptographic Protocols (extended version): Key substitution vulnerable signature schemes are signature schemes that permit an intruder, given a public verification key and a signed message, to compute a pair of signature and verification keys such that the message appears to be signed with the new signature key. A digital signature scheme is said to be vulnerable to the destructive exclusive ownership (DEO) property if it is computationally feasible for an intruder, given a public verification key and a pair of a message and its valid signature relative to the given public key, to compute a pair of signature and verification keys and a new message such that the given signature appears to be valid for the new message relative to the new verification key. In this paper, we prove decidability of the insecurity problem for cryptographic protocols where the signature schemes employed in the concrete realisation have these two properties.<|reference_end|>
arxiv
@article{chevalier2007key, title={Key Substitution in the Symbolic Analysis of Cryptographic Protocols (extended version)}, author={Yannick Chevalier (IRIT), Mounira Kourjieh (IRIT)}, journal={arXiv preprint arXiv:0710.5674}, year={2007}, archivePrefix={arXiv}, eprint={0710.5674}, primaryClass={cs.CR} }
chevalier2007key
arxiv-1693
0710.5697
Social Browsing & Information Filtering in Social Media
<|reference_start|>Social Browsing & Information Filtering in Social Media: Social networks are a prominent feature of many social media sites, a new generation of Web sites that allow users to create and share content. Sites such as Digg, Flickr, and Del.icio.us allow users to designate others as "friends" or "contacts" and provide a single-click interface to track friends' activity. How are these social networks used? Unlike pure social networking sites (e.g., LinkedIn and Facebook), which allow users to articulate their online professional and personal relationships, social media sites are not, for the most part, aimed at helping users create or foster online relationships. Instead, we claim that social media users create social networks to express their tastes and interests, and use them to filter the vast stream of new submissions to find interesting content. Social networks, in fact, facilitate new ways of interacting with information: what we call social browsing. Through an extensive analysis of data from Digg and Flickr, we show that social browsing is one of the primary usage modalities on these social media sites. This finding has implications for how social media sites rate and personalize content.<|reference_end|>
arxiv
@article{lerman2007social, title={Social Browsing & Information Filtering in Social Media}, author={Kristina Lerman}, journal={arXiv preprint arXiv:0710.5697}, year={2007}, archivePrefix={arXiv}, eprint={0710.5697}, primaryClass={cs.CY cs.HC} }
lerman2007social
arxiv-1694
0710.5758
Grassmannian Beamforming for MIMO Amplify-and-Forward Relaying
<|reference_start|>Grassmannian Beamforming for MIMO Amplify-and-Forward Relaying: In this paper, we derive the optimal transmitter/receiver beamforming vectors and relay weighting matrix for the multiple-input multiple-output amplify-and-forward relay channel. The analysis is accomplished in two steps. In the first step, the direct link between the transmitter (Tx) and receiver (Rx) is ignored and we show that the transmitter and the relay should map their signals to the strongest right singular vectors of the Tx-relay and relay-Rx channels. Based on the distributions of these vectors for independent identically distributed (i.i.d.) Rayleigh channels, the Grassmannian codebooks are used for quantizing and sending back the channel information to the transmitter and the relay. The simulation results show that even a small number of bits can considerably increase the link reliability in terms of bit error rate. For the second step, the direct link is considered in the problem model and we derive the optimization problem that identifies the optimal Tx beamforming vector. For i.i.d. Rayleigh channels, we show that the solution to this problem is uniformly distributed on the unit sphere and we justify the appropriateness of the Grassmannian codebook (for determining the optimal beamforming vector), both analytically and by simulation. Finally, a modified quantizing scheme is presented which introduces a negligible degradation in the system performance but significantly reduces the required number of feedback bits.<|reference_end|>
arxiv
@article{khoshnevis2007grassmannian, title={Grassmannian Beamforming for MIMO Amplify-and-Forward Relaying}, author={Behrouz Khoshnevis, Wei Yu, and Raviraj Adve}, journal={arXiv preprint arXiv:0710.5758}, year={2007}, doi={10.1109/JSAC.2008.081006}, archivePrefix={arXiv}, eprint={0710.5758}, primaryClass={cs.IT math.IT} }
khoshnevis2007grassmannian
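As an illustration of the first step described in the entry above (arXiv:0710.5758), the Python sketch below computes the strongest right singular vector of an i.i.d. Rayleigh channel and quantizes it to the codebook entry with the largest absolute inner product. The randomly generated codebook is only a stand-in for a true Grassmannian codebook, and the antenna counts and feedback-bit budget are assumed values, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, bits = 4, 4, 3  # assumed antenna counts and feedback bits

# i.i.d. Rayleigh (complex Gaussian) channel matrix.
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Optimal unquantized beamformer: right singular vector for the largest singular value.
_, _, Vh = np.linalg.svd(H)
v_opt = Vh.conj().T[:, 0]

# Stand-in codebook of 2**bits unit-norm vectors (a real Grassmannian codebook would
# maximize the minimum distance between its entries).
codebook = rng.standard_normal((2**bits, Nt)) + 1j * rng.standard_normal((2**bits, Nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

# Feedback index: codebook entry best aligned with the optimal beamforming vector.
idx = int(np.argmax(np.abs(codebook @ v_opt.conj())))
print("feedback index:", idx, "alignment:", abs(codebook[idx] @ v_opt.conj()))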
arxiv-1695
0710.5830
Stability analysis of a max-min fair Rate Control Protocol (RCP) in a small buffer regime
<|reference_start|>Stability analysis of a max-min fair Rate Control Protocol (RCP) in a small buffer regime: In this note we analyse various stability properties of the max-min fair Rate Control Protocol (RCP) operating with small buffers. We first tackle the issue of stability for networks with arbitrary topologies. We prove that the max-min fair RCP fluid model is globally stable in the absence of propagation delays, and also derive a set of conditions for local stability when arbitrary heterogeneous propagation delays are present. The network delay stability result assumes that, at equilibrium, there is only one bottleneck link along each route. Lastly, in the simpler setting of a single link, single delay model, we investigate the impact of the loss of local stability via a Hopf bifurcation.<|reference_end|>
arxiv
@article{voice2007stability, title={Stability analysis of a max-min fair Rate Control Protocol (RCP) in a small buffer regime}, author={Thomas Voice, Gaurav Raina}, journal={arXiv preprint arXiv:0710.5830}, year={2007}, doi={10.1109/TAC.2009.2022115}, archivePrefix={arXiv}, eprint={0710.5830}, primaryClass={cs.NI} }
voice2007stability
arxiv-1696
0710.5893
Codes from Zero-divisors and Units in Group Rings
<|reference_start|>Codes from Zero-divisors and Units in Group Rings: We describe and present a new construction method for codes using encodings from group rings. They consist primarily of two types: zero-divisor and unit-derived codes. Previous codes from group rings focused on ideals; for example cyclic codes are ideals in the group ring over a cyclic group. The fresh focus is on the encodings themselves, which only under very limited conditions result in ideals. We use the result that a group ring is isomorphic to a certain well-defined ring of matrices, and thus every group ring element has an associated matrix. This allows matrix algebra to be used as needed in the study and production of codes, enabling the creation of standard generator and check matrices. Group rings are a fruitful source of units and zero-divisors from which new codes result. Many code properties, such as being LDPC or self-dual, may be expressed as properties within the group ring thus enabling the construction of codes with these properties. The methods are general enabling the construction of codes with many types of group rings. There is no restriction on the ring and thus codes over the integers, over matrix rings or even over group rings themselves are possible and fruitful.<|reference_end|>
arxiv
@article{hurley2007codes, title={Codes from Zero-divisors and Units in Group Rings}, author={Paul Hurley, Ted Hurley}, journal={arXiv preprint arXiv:0710.5893}, year={2007}, archivePrefix={arXiv}, eprint={0710.5893}, primaryClass={cs.IT math.IT} }
hurley2007codes
arxiv-1697
0710.5895
Source-to-source optimizing transformations of Prolog programs based on abstract interpretation
<|reference_start|>Source-to-source optimizing transformations of Prolog programs based on abstract interpretation: Making a Prolog program more efficient by transforming its source code, without changing its operational semantics, is not an obvious task. It requires the user to have a clear understanding of how the Prolog compiler works, and in particular, of the effects of impure features like the cut. The way a Prolog code is written - e.g., the order of clauses, the order of literals in a clause, the use of cuts or negations - influences its efficiency. Furthermore, different optimization techniques may be redundant or conflicting when they are applied together, depending on the way a procedure is called - e.g., inserting cuts and enabling indexing. We present an optimiser, based on abstract interpretation, that automatically performs safe code transformations of Prolog procedures in the context of some class of input calls. The method is more effective if procedures are annotated with additional information about modes, types, sharing, number of solutions and the like. Thus the approach is similar to Mercury. It applies to any Prolog program, however.<|reference_end|>
arxiv
@article{gobert2007source-to-source, title={Source-to-source optimizing transformations of Prolog programs based on abstract interpretation}, author={Francois Gobert, Baudouin Le Charlier}, journal={arXiv preprint arXiv:0710.5895}, year={2007}, archivePrefix={arXiv}, eprint={0710.5895}, primaryClass={cs.PL cs.LO cs.SE} }
gobert2007source-to-source
arxiv-1698
0711.0048
Declarative Diagnosis of Floundering
<|reference_start|>Declarative Diagnosis of Floundering: Many logic programming languages have delay primitives which allow coroutining. This introduces a class of bug symptoms -- computations can flounder when they are intended to succeed or finitely fail. For concurrent logic programs this is normally called deadlock. Similarly, constraint logic programs can fail to invoke certain constraint solvers because variables are insufficiently instantiated or constrained. Diagnosing such faults has received relatively little attention to date. Since delay primitives affect the procedural but not the declarative view of programs, it may be expected that debugging would have to consider the often complex details of interleaved execution. However, recent work on semantics has suggested an alternative approach. In this paper we show how the declarative debugging paradigm can be used to diagnose unexpected floundering, insulating the user from the complexities of the execution. Keywords: logic programming, coroutining, delay, debugging, floundering, deadlock, constraints<|reference_end|>
arxiv
@article{naish2007declarative, title={Declarative Diagnosis of Floundering}, author={Lee Naish}, journal={arXiv preprint arXiv:0711.0048}, year={2007}, archivePrefix={arXiv}, eprint={0711.0048}, primaryClass={cs.PL cs.SE} }
naish2007declarative
arxiv-1699
0711.0086
Convex and linear models of NP-problems
<|reference_start|>Convex and linear models of NP-problems: Reducing NP-problems to convex/linear analysis on the Birkhoff polytope.<|reference_end|>
arxiv
@article{gubin2007convex, title={Convex and linear models of NP-problems}, author={Sergey Gubin}, journal={arXiv preprint arXiv:0711.0086}, year={2007}, archivePrefix={arXiv}, eprint={0711.0086}, primaryClass={cs.DM cs.CC cs.DS math.CO} }
gubin2007convex
arxiv-1700
0711.0110
Phase Transitions and Computational Difficulty in Random Constraint Satisfaction Problems
<|reference_start|>Phase Transitions and Computational Difficulty in Random Constraint Satisfaction Problems: We review the understanding of the random constraint satisfaction problems, focusing on the q-coloring of large random graphs, that has been achieved using the cavity method of the physicists. We also discuss the properties of the phase diagram in temperature, the connections with the glass transition phenomenology in physics, and the related algorithmic issues.<|reference_end|>
arxiv
@article{krzakala2007phase, title={Phase Transitions and Computational Difficulty in Random Constraint Satisfaction Problems}, author={Florent Krzakala and Lenka Zdeborová}, journal={2008 J. Phys.: Conf. Ser. 95 012012}, year={2007}, doi={10.1088/1742-6596/95/1/012012}, archivePrefix={arXiv}, eprint={0711.0110}, primaryClass={cs.CC cond-mat.stat-mech} }
krzakala2007phase