Dataset columns: abstract (string, lengths 0–11.1k), authors (string, lengths 9–1.96k), title (string, lengths 4–353), __index_level_0__ (int64, values 3–1,000k).
Collaboration technologies are seeing widespread deployment even though it is difficult to assess the effectiveness of these systems. This paper presents an evaluation method that addresses this issue and describes the use of the method in the field. The method uses a framework to structure evaluations by mapping system goals to evaluation objectives, metrics, and measures. The uppermost levels of the framework are conceptual in nature, while the bottom level is implementation-specific, i.e., evaluation-specific. Capitalizing on this top-down approach, an evaluation template specifying the conceptual elements can be constructed for a series of evaluations; implementation-specific measures are then specified for individual experiments. This structure makes the framework well suited to comparing the effectiveness of collaboration tools in a particular environment. We present our findings from using the method in the field to assess the performance of a particular collaboration technology deployment and its impact on the work process.
['Michelle Potts Steves', 'Jean Scholtz']
A Framework for Evaluating Collaborative Systems in the Real World
24,735
One of the prominent issues in Genetic Algorithms (GA) is premature convergence on local optima, which restricts the search for better solutions across the entire search space. Population size is one of the influencing factors in a Genetic Algorithm: increasing it improves randomized searching and maintains diversity in the population, but it also increases computational complexity. Especially in GA Biclustering (GABiC), the search should be randomized to find more optimal patterns. In this paper, a novel approach to population setup in the MapReduce framework is proposed. The maximal population is split into population sets, and these groups proceed with the search in parallel using the MapReduce framework. This approach is applied to biclustering a gene expression dataset in this paper. The performance of the proposed work appears promising when its results are compared with those obtained from previous hybridized optimization approaches. The approach also handles data scalability issues and is applicable to big data biclustering problems.
['R. Gowri', 'R. Rathipriya']
Local Optima Avoidance in GA Biclustering using Map Reduce
945,448
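As a rough, hedged illustration of the idea in the GABiC abstract above (splitting one large GA population into sets that are evaluated in parallel), here is a minimal Python sketch. It is not the authors' MapReduce implementation: it stands in for the map step with Python's multiprocessing, and the bicluster encoding and fitness function are invented placeholders.

```python
# Minimal sketch (not the authors' GABiC code): split one large GA population into
# sub-populations and evaluate them in parallel, mimicking the "map" step of MapReduce.
import random
from multiprocessing import Pool

def fitness(individual):
    # Placeholder fitness: count of selected rows/columns in a binary bicluster mask.
    return sum(individual)

def evaluate_subpopulation(subpop):
    # "Map" task: score every individual in one sub-population.
    return [(ind, fitness(ind)) for ind in subpop]

if __name__ == "__main__":
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(1000)]
    # Split the maximal population into 4 sets that search in parallel.
    chunks = [population[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        mapped = pool.map(evaluate_subpopulation, chunks)
    # "Reduce" step: merge the scored sub-populations and keep the best individual.
    scored = [pair for chunk in mapped for pair in chunk]
    best = max(scored, key=lambda pair: pair[1])
    print("best fitness:", best[1])
```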
Almost difference sets have important applications in cryptography and coding theory. In this paper, new families of almost difference sets are constructed. Some necessary conditions for the existence of (v, k, λ, t)-almost difference sets in Z_v are derived.
['Yuan Zhang', 'Jian Guo Lei', 'Shao Pu Zhang']
A new family of almost difference sets and some necessary conditions
298,712
This paper aims to extend current research in the area of on-line business-to-business clients' preferences. A quantitative study in the form of transaction log analysis (TLA) performed by the authors allows us to conclude that search logs and shopping basket logs are sources of viable information about business customers. Transaction log analysis makes it possible to identify different customer groups, such as searchers and buyers. It also allows for better customization of the tag cloud, so that results are better tailored to customers' preferences. It turns out that a long list of products (the result of the search process) is not a deterrent to purchasing on-line. However, in order to facilitate the purchasing process, a B2B platform should offer additional filtering mechanisms.
['Lukasz Wiechetek', 'Mieczyslaw Pawlowski']
Searching for information and making purchase decisions in b2b online stores. The case of the technical articles wholesale
898,013
The problem of projected clustering was first proposed in the ACMSIGMOD Conference in 1999, and the Probabilistic Latent Semantic Indexing (PLSI) technique was independently proposed in the ACMSIGIR Conference in the same year. Since then, more than two thousand papers have been written on these problems by the database, data mining and information retrieval communities, along completely independent lines of work . In this paper, we show that these two problems are essentially equivalent, under a probabilistic interpretation to the projected clustering problem. We will show that the EM-algorithm, when applied to the probabilistic version of the projected clustering problem, can be almost identically interpreted as the PLSI technique. The implications of this equivalence are significant, in that they imply the cross-usability of many of the techniques which have been developed for these problems over the last decade. We hope that our observations about the equivalence of these problems will stimulate further research which can significantly improve the currently available solutions for either of these problems.
['Charu C. Aggarwal']
On the equivalence of PLSI and projected clustering
454,490
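For reference on the EM updates the abstract above alludes to, the following is a minimal PLSI sketch on a toy document-term matrix. It is illustrative only: it shows the textbook PLSI E- and M-steps, not the paper's projected-clustering reformulation, and the toy data and variable names are invented here.

```python
# Minimal PLSI via EM on a toy document-term count matrix N (docs x words).
import numpy as np

rng = np.random.default_rng(0)
N = rng.integers(0, 5, size=(8, 12)).astype(float)   # toy counts
D, W, K = N.shape[0], N.shape[1], 3                   # K latent aspects

p_z = np.full(K, 1.0 / K)                                                # P(z)
p_d_z = rng.random((K, D)); p_d_z /= p_d_z.sum(axis=1, keepdims=True)    # P(d|z)
p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)    # P(w|z)

for _ in range(50):
    # E-step: responsibilities P(z|d,w), shape (K, D, W)
    joint = p_z[:, None, None] * p_d_z[:, :, None] * p_w_z[:, None, :]
    post = joint / joint.sum(axis=0, keepdims=True)
    # M-step: re-estimate the multinomial parameters from expected counts
    weighted = N[None, :, :] * post                    # n(d,w) * P(z|d,w)
    p_w_z = weighted.sum(axis=1); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_d_z = weighted.sum(axis=2); p_d_z /= p_d_z.sum(axis=1, keepdims=True)
    p_z = weighted.sum(axis=(1, 2)); p_z /= p_z.sum()

print("aspect priors P(z):", np.round(p_z, 3))
```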
Visual exploration has proven to be a powerful tool for multivariate data mining and knowledge discovery. Most visualization algorithms aim to find a projection from the data space down to a visually perceivable rendering space. To reveal all of the interesting aspects of multimodal data sets living in a high-dimensional space, a hierarchical visualization algorithm is introduced which allows the complete data set to be visualized at the top level, with clusters and subclusters of data points visualized at deeper levels. The methods involve hierarchical use of standard finite normal mixtures and probabilistic principal component projections, whose parameters are estimated using the expectation-maximization and principal component neural networks under the information theoretic criteria. We demonstrate the principle of the approach on several multimodal numerical data sets, and we then apply the method to the visual explanation in computer-aided diagnosis for breast cancer detection from digital mammograms.
['Yue Joseph Wang', 'Lan Luo', 'Matthew T. Freedman', 'Sun-Yuan Kung']
Probabilistic principal component subspaces: a hierarchical finite mixture model for data visualization
39,555
['Martin Dowd']
An Extension of the Lebesgue Measure Pertaining to the Repeated Experiment
193,803
Experience shows that data warehouse solutions and reporting systems widely used in enterprises can support the information needs of municipal managers only to a certain degree. Their need for information is generated, in part, by ad hoc press reports, citizen requests or actions of the political opposition. Regularly produced reports or figure-based information often do not provide satisfactory answers to unpredictable questions. The development of a management information system for the municipality in Stuttgart, Germany, concentrates on ad hoc information retrieval which is semantically supported by topic map technology. The article describes the characteristics of unstructured ad hoc information needs and shows on a conceptual level how topic maps can be developed as an instrument to support municipal management.
['Petra Wolf', 'Helmut Krcmar']
Topic Map Technology for Municipal Management Information Systems
25,198
This work is intended to be an objective introduction to the topic of secure image transcoding for ubiquitous P2P environments. The focus of the article is on "XMLization" (or computer imaging in XML) in general, and SVG in particular. The article develops a framework for achieving this goal using the JXTA delivery infrastructure, the Batik or XSL APIs for transcoding, and XML encryption based on lightweight encipherment techniques. The ultimate goal is the development of a comprehensive secure medical imaging delivery system which can work effectively in ubiquitous and P2P environments.
['Sabah Mohammed', 'Jinan Fiaidhi']
Developing secure transcoding intermediary for SVG medical images within peer-to-peer ubiquitous environment
364,181
Since heterogeneous translucent materials, such as natural jades and marble, are complex hybrids of different materials, it is difficult to set precise optical parameters for a subsurface scattering model that match the real material. In this paper, an inverse rendering approach is presented for heterogeneous translucent materials from a single input photograph. Given one photograph of an object made of a certain heterogeneous translucent material, our approach can generate the material distribution and estimate heterogeneous optical parameters to render images that look similar to the input photograph. We initialize the material distribution using 3D Simplex Noise combined with Fractal Brownian Motion, and set the color pattern of the noise using a histogram matching method. The volume data with heterogeneous optical parameters is initialized based on the value of the color-pattern-matched noise, and it is rendered under a certain lighting condition using a Monte Carlo ray marching method. An iteration process is designed to refine the optical parameters so as to minimize the difference between the rendering result and the input photograph. The volume data with optimal heterogeneous optical parameters is then obtained, which can be used for rendering any geometry model under different lighting conditions. Experimental results show that, with our approach, heterogeneous translucent objects can be rendered to closely match the material in the photograph.
['Jingjie Yang', 'Shuangjiu Xiao']
An inverse rendering approach for heterogeneous translucent materials
966,408
Feature selection is an important technique in machine learning and pattern classification. Most existing studies of feature selection use batch learning methods. Such methods are not appropriate for real-world applications, especially when data arrive sequentially. Recently, this problem has been addressed by feature selection techniques based on online learning. Despite their efficiency advantages, online feature selection methods are not always accurate enough when handling real-world data. In this paper, we address this limitation by integrating an automated negotiation process. We present a novel method based on negotiation theory for online feature selection (ANOFS) and demonstrate its application to several public datasets.
['Fatma Ben Said', 'Adel M. Alimi']
ANOFS: Automated negotiation based online feature selection method
821,878
Trajectory data visualizations are usually 3-dimensional. Due to potential cluttering and occlusion problems, slow rendering and poor interactivity, as well as the influence of perspective on the displayed result, 3D visualization techniques have been controversial in the field of information visualization. To address these problems, this paper proposes a novel 3D trajectory visualization method employing immersive virtual reality technology. Compared with traditional 3D trajectory visualization methods, our method breaks the limitations of the traditional 2D display of 3D visualizations and engages users in a virtual world with a freely chosen perspective from which to observe the 3D visualization without over-plotting. Practical experiments show that our work can reduce the over-plotting of 3D visualization results and provide smooth transitions between visualization views and interactions.
['Meng-Jia Zhang', 'Jie Li', 'Kang Zhang']
Using Virtual Reality Technique to Enhance Experience of Exploring 3D Trajectory Visualizations
616,302
Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly large or exactly zero. From a learning-theoretic perspective, these methods control the capacity of the learned classifier by minimizing the number of basis functions used, resulting in better generalization. This paper presents three contributions related to learning sparse classifiers. First, we introduce a true multiclass formulation based on multinomial logistic regression. Second, by combining a bound optimization approach with a component-wise update procedure, we derive fast exact algorithms for learning sparse multiclass classifiers that scale favorably in both the number of training samples and the feature dimensionality, making them applicable even to large data sets in high-dimensional feature spaces. To the best of our knowledge, these are the first algorithms to perform exact multinomial logistic regression with a sparsity-promoting prior. Third, we show how nontrivial generalization bounds can be derived for our classifier in the binary case. Experimental results on standard benchmark data sets attest to the accuracy, sparsity, and efficiency of the proposed methods.
['Balaji Krishnapuram', 'Lawrence Carin', 'Mário A. T. Figueiredo', 'Alexander J. Hartemink']
Sparse multinomial logistic regression: fast algorithms and generalization bounds
327,447
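As a rough, hedged illustration of the sparsity effect described in the abstract above: the snippet below is not the authors' bound-optimization algorithm, it simply fits an off-the-shelf L1-penalized (sparsity-promoting) multinomial logistic regression with scikit-learn's SAGA solver; the dataset and regularization strength are arbitrary choices for demonstration.

```python
# Off-the-shelf L1-penalized multinomial logistic regression (illustration only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # simple scaling to help the solver converge
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=5000)
clf.fit(X_train, y_train)

sparsity = np.mean(clf.coef_ == 0)  # fraction of weights driven exactly to zero
print(f"test accuracy: {clf.score(X_test, y_test):.3f}, weight sparsity: {sparsity:.2%}")
```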
Wellness and healthcare are central to the lives of all people, young or old, healthy or ill, rich or poor. New computing and behavioral research can lead to transformative changes in the cost-effective delivery of quality and personalized healthcare. Beyond the daily practice of healthcare and wellbeing, basic information technology research can also provide the foundations for new directions in the clinical sciences via tools and analyses that identify subtle but important causal signals in the fusion of clinical, behavioral, environmental and genetic data. In this paper we describe a system that analyzes images from laparoscopic videos. It indicates the possibility of an injury to the cystic artery by automatically detecting the proximity of the surgical instruments to the cystic artery. The system uses a machine learning algorithm to classify images and warn surgeons against probable unsafe actions.
['Ashwini Lahane', 'Yelena Yesha', 'Michael A. Grasso', 'Anupam Joshi', 'Adrian Park', 'Jimmy Lo']
Detection of unsafe action from laparoscopic cholecystectomy video
87,737
Driven by the high sensing resolution provided by advanced nano technologies, Electromagnetic-based Wireless Nano Sensor Networks (EM-WNSNs) operating in the terahertz (THz) band are becoming an integral part of the Internet of Things (IoT). Data acquisition for EM-WNSNs faces two challenges: the dynamic IoT backhaul bandwidth and the THz channel conditions, which jointly impact resource utilization efficiency. Specifically, the mismatch between the demands of the EM-WNSNs and the available bandwidth of the backhaul link reduces the bandwidth efficiency of the IoT backhaul or the energy efficiency of the EM-WNSNs. To address the new constraints that emerge from the EM-WNSN, On-demand Efficient (OE) polling is proposed for the backhaul tier of EM-WNSNs, which is composed of nano sinks that aggregate and transport data from nano sensors to the IoT gateway. OE polling adjusts the packet aggregation process of nano sinks according to up-to-date network conditions, and we show that it achieves both high bandwidth efficiency for the IoT backhaul and high energy efficiency for the EM-WNSNs. To the best of our knowledge, OE polling is the first data acquisition scheme that connects EM-WNSNs with the IoT by taking into consideration the dynamic IoT backhaul bandwidth and THz channel conditions.
['Hang Yu', 'Bryan Ng', 'Winston Khoon Guan Seah']
On-demand efficient polling for nanonetworks under dynamic IoT backhaul network conditions
987,104
At the level of individual neurons, catecholamine release increases the responsivity of cells to excitatory and inhibitory inputs. We present a model of catecholamine effects in a network of neural-like elements. We argue that changes in the responsivity of individual elements do not affect their ability to detect a signal and ignore noise. However, the same changes in cell responsivity in a network of such elements do improve the signal detection performance of the network as a whole. We show how this result can be used in a computer simulation of behavior to account for the effect of CNS stimulants on the signal detection performance of human subjects.
['David Servan-Schreiber', 'Harry William Printz', 'Jonathan D. Cohen']
The Effect of Catecholamines on Performance: From Unit to System Behavior
258,843
This paper solves the dynamic traveling salesman problem (DTSP) using a dynamic Gaussian Process Regression (DGPR) method. The problem of a varying correlation tour is alleviated by a nonstationary covariance function interleaved with DGPR to generate a predictive distribution for the DTSP tour. This approach is combined with the Nearest Neighbor (NN) method and iterated local search to track dynamic optima. Experimental results were obtained on DTSP instances, and comparisons were performed with Genetic Algorithm and Simulated Annealing. The proposed approach demonstrates superiority in finding good traveling salesman problem (TSP) tours with less computational time under nonstationary conditions.
['Stephen M. Akandwanaho', 'Aderemi Oluyinka Adewumi', 'A. A. Adebiyi']
Solving Dynamic Traveling Salesman Problem Using Dynamic Gaussian Process Regression
309,709
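For context on the Nearest Neighbor component mentioned above, a minimal sketch follows: a plain NN tour construction on random placeholder cities. The paper's DGPR and iterated local search parts are not reproduced here, and the city coordinates are made up.

```python
# Plain Nearest Neighbor tour construction (baseline heuristic only).
import math
import random

def nearest_neighbor_tour(cities, start=0):
    unvisited = set(range(len(cities)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]
tour = nearest_neighbor_tour(cities)
print("NN tour length:", round(tour_length(cities, tour), 3))
```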
Most of the present methods for multi-objective decision making can only deal with linearly ordered preference information. In this paper, we focus on investigating methods for multi-objective decision making when the preference information set includes incomparable natural language terms. A logical algebraic structure, lattice implication algebra, is then applied to represent both comparable and incomparable information simultaneously. We present a model for multi-objective decision making in which the preference information set is a kind of linguistic-valued lattice implication algebra, and we extend the model to handle multi-objective decision making when the preference information set is a generalized linguistic-valued lattice. In these cases, decision makers can supply lattice information on their preferences and the weights of the individual objectives.
['Xiaobing Li', 'Da Ruan', 'Jun Liu', 'Yang Xu']
A linguistic lattice-valued approach for fuzzy multi-objective decision making
855,137
This paper proposes an efficient architecture which combines the context-based adaptive variable length coding (CAVLC) decoder and inverse quantization (IQ) to simplify the H.264/AVC decoder. The IQ function is effectively moved to the run-before stage of the CAVLC decoder. With this arrangement, the interface between the CAVLC decoder and IQ can be implemented without additional logic circuitry. The authors also use pipelining to improve performance; because of data dependencies in the CAVLC decoder, the algorithm in the standard has to be modified to realize the pipeline. The authors implement this architecture with a UMC 0.18 μm cell library. The simulation results show that the operating frequency can reach 200 MHz. The total number of logic gates is 9.23k. For the real-time requirement, the design achieves 1080HD (1920×1088) @ 30 frames/sec when the clock frequency is set to 195 MHz.
['Yi-Chih Chao', 'Shih-Tse Wei', 'Jar-Ferr Yang', 'Bin-Da Liu']
Combined CAVLC Decoder and Inverse Quantizer for Efficient H.264/AVC Decoding
412,047
['Masaru Ito', 'Hiroshi Inoue', 'Kenjiro Taura']
Fragmented BWT: An Extended BWT for Full-Text Indexing
891,037
Summary. Spatio-temporal clustering is a process of grouping objects based on their spatial and temporal similarity. It is a relatively new subfield of data mining which has gained high popularity, especially in the geographic information sciences, due to the pervasiveness of all kinds of location-based or environmental devices that record the position, time and/or environmental properties of an object or set of objects in real time. As a consequence, different types and large amounts of spatio-temporal data have become available, introducing new challenges to data analysis and requiring novel approaches to knowledge discovery. In this chapter we concentrate on spatio-temporal clustering in geographic space. First, we provide a classification of different types of spatio-temporal data. Then, we focus on one type of spatio-temporal clustering - trajectory clustering - provide an overview of the state-of-the-art approaches and methods of spatio-temporal clustering, and finally present several scenarios in different application domains such as movement, cellular networks and environmental studies.
['Slava Kisilevich', 'Florian Mansmann', 'Mirco Nanni', 'Salvatore Rinzivillo']
Spatio-temporal clustering
361,868
In insurance applications, there are typical problems which arise as the complexity of products increases, which is the reason product line engineering is required. For easy channel extension and new product release, insurance product line engineering was launched. In this paper, we describe our experiences with product line engineering. In particular, this paper proposes ACM (Adaptable Component Model) to deal with component variability by applying the rule concept. For component reuse, variability management is most important in product line engineering. It is a basic and essential principle to identify common elements as components and to make them assets of the product line. But the problem is that it is not easy to identify the variable parts of components and to handle them. These days, business rules are becoming a more critical key for a successful RTE (Real Time Enterprise): separating rules makes it possible to change or adapt quickly to a new business context without modifying the application. The elements of variability which change with each application are viewed as variability of business rules. Therefore, when business requirements in the product line need to change, this is resolved through rule changes.
['Jeong Ah Kim']
Variability Management with ACM (Adaptable Component Model) for Insurance Product Line
233,255
Summary: We offer a tool, denoted VISTAL, for two-dimensional visualization of protein structural alignments. VISTAL describes aligned structures as a series of matched secondary structure elements, colored according to the three-dimensional distance of their Cα atoms. Availability: VISTAL can be downloaded from http://trantor.bioc.columbia.edu/~kolodny/software.html Contact: [email protected]
['Rachel Kolodny', 'Barry Honig']
VISTAL---a new 2D visualization tool of protein 3D structural alignments
351,962
['C. Eckardt', 'Danilo Kardel', 'Christof Leng', 'Jan Marco Leimeister', 'Michael Mörike', 'Welf Schröter', 'Hans-Dieter Zimmermann']
Privatheit in der E-Society - Zwischenbilanz eines Diskussionsprozesses.
764,594
Documents and web pages share many similarities. Thus classification methods used in documents can be applied to advanced web content, with or even without modifications. Algorithms for document and web classification are presented as an introduction. One out of many tools that can be used in method evaluation, application and modification is WEKA (Waikato Environment for Knowledge Analysis). Testing results and conclusions strengthen the principles and bases of classification, while demonstrating the need for a new interlayer in the evaluation of classification methods.
['Ioannis Charalampopoulos', 'Ioannis Anagnostopoulos']
A Comparable Study Employing WEKA Clustering/Classification Algorithms for Web Page Classification
450,492
['Maureen Donnelly']
Layered mereotopology
837,244
['Zhebin Hu', 'Chaoyi Huang', 'Jun Peng', 'Weiming Shi', 'Songbai He', 'Fei You', 'Haodong Lin']
Concurrent Tri-Band Power Amplifier Based on Novel Tri-band Impedance Transformer
898,457
With the introduction of small cells into the current macro cell structure, the ever-growing demand for mobile data services has the opportunity to be fulfilled. But the correspondingly dense, overlaid deployment caused by the Heterogeneous Network (HetNet) also raises interference and mobility management problems. In order to solve these problems, we propose a joint Voronoi diagram and game theory-based power control scheme for HetNet small cell networks with a two-step approach. The first step focuses on the optimization of small cell cluster deployment planning within the coverage of the macro cell. The intra-tier interferences are mitigated by a min-max power allocation algorithm, and mobility management performance can also be improved by the Voronoi diagram-based scheme. The second step then addresses the mitigation of cross-tier interferences while protecting the guaranteed users and high-mobility users usually served by the macro cells. A game theory-based dynamic power control scheme is proposed via a non-cooperative game model with a convex pricing function. The existence and uniqueness of the Nash equilibrium for the proposed game model are verified, which provides a feasible solution for cross-tier power control in the heterogeneous network. System-level simulation results show that the proposed scheme can bring a significant increase in system throughput, reduce the outage probability, and enhance energy efficiency compared with current works.
['Xiaodong Xu', 'Yi Li', 'Rui Gao', 'Xiaofeng Tao']
Joint Voronoi diagram and game theory-based power control scheme for the HetNet small cell networks
490,936
An asynchronous multicarrier code division multiple access (MC-CDMA) scheme over frequency selective multipath Rayleigh fading channels for the reverse link of a mobile communication system is investigated when the delay spread is in excess of a symbol interval. An architecture of a centralized decision feedback equalizer (CDFE) based on the minimum mean-square-error (MMSE) criterion is suggested to reduce both inter-symbol-interference (ISI) and multiple access interference (MAI) due to the multi-path propagation, and the MAI due to the effect of the reverse asynchronous reception mode at the base station. The receiver architecture is for multi-user detection (MUD) consisting of a multiple-layer feed-forward filter (ML-FFF) and a centralized feed back filter (CFBF). Results indicate that an enhancement in capacity and good interference resistance are obtained by the proposed multi-user detection scheme over the structure without CFBF (NCDFE), which only uses ML-FFF and FBF. Additionally, it is demonstrated that the structure performs multi-path energy Rake combining.
['R. Liu', 'E.G. Chester', 'Bayan S. Sharif', 'S.J. Yi']
Performance of asynchronous multicarrier CDMA multiuser receiver over frequency selective fading channels
433,734
In a time-sharing system that is intended to serve a number of console users simultaneously, there are two related, but distinct, functions to be performed. One is time slicing, which is the allocation of bursts of processor time to the various active programs according to a suitable algorithm. The other is core space allocation, which arises because, in a modern multi-programmed system, there will be space in core for more than one active program at the same time. If, as will normally be the case, there are more active programs than can be accommodated in core, some of them must be held on a drum and brought into core periodically; this is swapping. Confusion has sometimes arisen between time slicing and swapping, since, in the early time-sharing systems, there was only one active object program resident in core at any time, all the others being on the drum. In these circumstances, swapping and time slicing go together; when a program is in core, it is receiving processor time, and as soon as it ceases to receive processor time it is removed from core. In a multi-programmed system, however, space allocation and time slicing can proceed independently. It is the responsibility of the space allocation algorithm to ensure that, as far as possible, there is always at least one program in core that is ready to run. The time-slicing algorithm is responsible for dividing up the available processor time between the various programs that are in core.
['Maurice V. Wilkes']
A model for core space allocation in a time-sharing system
16,741
Image noise can present a serious problem in motion deblurring. While most state-of-the-art motion deblurring algorithms can deal with small levels of noise, in many cases such as low-light imaging, the noise is large enough in the blurred image that it cannot be handled effectively by these algorithms. In this paper, we propose a technique for jointly denoising and deblurring such images that elevates the performance of existing motion deblurring algorithms. Our method takes advantage of estimated motion blur kernels to improve denoising, by constraining the denoised image to be consistent with the estimated camera motion (i.e., no high frequency noise features that do not match the motion blur). This improved denoising then leads to higher quality blur kernel estimation and deblurring performance. The two operations are iterated in this manner to obtain results superior to suppressing noise effects through regularization in deblurring or by applying denoising as a preprocess. This is demonstrated in experiments both quantitatively and qualitatively using various image examples.
['Yu-Wing Tai', 'Stephen Lin']
Motion-aware noise filtering for deblurring of noisy and blurry images
11,347
The polynomial chaos of Wiener provides a framework for the statistical analysis of dynamical systems, with computational cost far superior to Monte Carlo simulations. It is a useful tool for control systems analysis because it allows probabilistic description of the effects of uncertainty, especially in systems having nonlinearities and where other techniques, such as Lyapunov's method, may fail. We show that stability of a system can be inferred from the evolution of modal amplitudes, covering nearly the full support of the uncertain parameters with a finite series. By casting uncertain parameters as unknown gains, we show that the separation of stochastic from deterministic elements in the response points to fast iterative design methods for nonlinear control.
['Franz S. Hover', 'Michael S. Triantafyllou']
Application of polynomial chaos in stability and control
522,158
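For reference, the generic form of a polynomial chaos expansion in standard textbook notation (not taken from this paper): a random response $u$ is written as a finite series in basis polynomials $\Psi_i$ of the uncertain parameters $\xi$, orthogonal with respect to the parameter density $\rho$, and it is the evolution of the modal amplitudes $a_i(t)$ that the abstract refers to.

$u(t,\xi) \approx \sum_{i=0}^{P} a_i(t)\,\Psi_i(\xi), \qquad \langle \Psi_i, \Psi_j \rangle = \int \Psi_i(\xi)\,\Psi_j(\xi)\,\rho(\xi)\,d\xi = \delta_{ij}\,\langle \Psi_i^2 \rangle$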
['Tiago Pinto', 'Zita Vale', 'Isabel Praça', 'Gabriel Santos']
Demonstration of ALBidS: Adaptive Learning Strategic Bidding System
824,178
This paper presents how the TAAABLE project addresses the textual case-based reasoning challenge of the CCC, thanks to a combination of principles, methods, and technologies of various fields of knowledge-based system technologies, namely CBR, ontology engineering (manual and semi-automatic), data and text-mining using textual resources of the Web, text annotation (used as an indexing technique), knowledge representation, and hierarchical classification. Indeed, to be able to reason on textual cases, indexing them by a formal representation language using a formal vocabulary has proven to be useful.
['Fadi Badra', 'Rokia Bendaoud', 'Rim Bentebibel', 'Pierre-Antoine Champin', 'Julien Cojan', 'Amélie Cordier', 'Sylvie Després', 'Stéphanie Jean-Daubias', 'Jean Lieber', 'Thomas Meilender', 'Alain Mille', 'Emmanuel Nauer', 'Amedeo Napoli', 'Yannick Toussaint']
TAAABLE: Text Mining, Ontology Engineering, and Hierarchical Classification for Textual Case-Based Cooking
183,605
This paper proposes an English adverb ordering method based on adverb grammatical functions (subjuncts, adjuncts, disjuncts and conjuncts) and meanings (process, space, time etc.), preferred positions in sentences (initial, medial, end, pre, post), and priorities between adverbs with the same preferred position.
['Kentaro Ogura', 'Francis Bond', 'Satoru Ikehara']
English Adverb Generation in Japanese to English Machine Translation
358,901
['Darya Chyzhyk', 'Manuel Graña']
Findings in resting-state fMRI by differences from K-means clustering.
809,100
Although a significant number of public organizations have embraced the idea of open data, many are still reluctant to do so. One root cause is that the publicizing of data represents a shift from a closed to an open system of governance, which has a significant impact upon the relationships between public agencies and the users of open data. Yet no systematic research is available which compares the benefits of open data with the barriers to its adoption. Based on interviews and a workshop, the benefits of and adoption barriers for open data have been derived. The findings show that a gap exists between the promised benefits and the barriers. They furthermore suggest that a conceptually simplistic view is often adopted with regard to open data, one which automatically correlates the publicizing of data with use and benefits. Five 'myths' are formulated that promote the use of open data and place the expectations within a realistic perspective. Further, the recommendation is given to take a user's view and to actively govern the relationship between government and its users.
['Marijn Janssen', 'Yannis Charalabidis', 'Anneke Zuiderwijk']
Benefits, Adoption Barriers and Myths of Open Data and Open Government
537,448
Susceptibility-weighted imaging (SWI) venography can produce detailed venous contrast and complement arterial-dominated MR angiography (MRA) techniques. However, these dense reversed-contrast SWI venograms pose new segmentation challenges. We present an automatic method for whole-brain venous blood segmentation in SWI using Conditional Random Fields (CRF). The CRF model combines different first- and second-order potentials. First-order association potentials are modeled as the composite of an appearance potential, a Hessian-based shape potential and a non-linear location potential. Second-order interaction potentials are modeled using an auto-logistic (smoothing) potential and a data-dependent (edge) potential. Minimal post-processing is used for excluding voxels outside the brain parenchyma and visualizing the surface vessels. The CRF model is trained and validated using 30 SWI venograms acquired within a population of deep brain stimulation (DBS) patients (age range 43-73 years). Results demonstrate robust and consistent segmentation in deep and sub-cortical regions (median kappa = 0.84 and 0.82), as well as in challenging mid-sagittal and surface regions (median kappa = 0.81 and 0.83). Overall, this CRF model produces high-quality segmentation of SWI venous vasculature that finds applications in DBS for minimizing hemorrhagic risks and in other surgical and non-surgical applications.
['Silvain Bériault', 'Yiming Xiao', 'D. Louis Collins', 'G. Bruce Pike']
Automatic SWI Venography Segmentation Using Conditional Random Fields
554,289
['Sri Harish Reddy Mallidi', 'Sriram Ganapathy', 'Hynek Hermansky']
Modulation Spectrum Analysis for Recognition of Reverberant Speech.
749,069
Betweenness Centrality (BC) is steadily growing in popularity as a metric of the influence of a vertex in a graph. The BC score of a vertex is proportional to the number of all-pairs shortest paths passing through it. However, complete and exact BC computation for a large-scale graph is an extraordinary challenge that requires high performance computing techniques to provide results in a reasonable amount of time. Our approach combines bi-dimensional (2-D) decomposition of the graph and multi-level parallelism, together with a suitable data-thread mapping that overcomes most of the difficulties caused by the irregularity of the computation on GPUs. In order to reduce the time and space requirements of BC computation, a heuristic based on a 1-degree reduction technique is developed as well. Experimental results on synthetic and real-world graphs show that the proposed techniques are well suited to compute BC scores in graphs which are too large to fit in the memory of a single computational node.
['Massimo Bernaschi', 'Giancarlo Carbone', 'Flavio Vella']
Scalable betweenness centrality on multi-GPU systems
815,536
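For readers who want the baseline definition in code, below is a minimal single-threaded Brandes' betweenness centrality sketch on a toy graph. It is reference material only: the paper's 2-D decomposition, multi-GPU mapping and 1-degree reduction heuristic are not shown, and the example graph is made up.

```python
# Brandes' algorithm for betweenness centrality on an unweighted graph.
from collections import deque

def brandes_bc(adj):
    """adj: dict mapping each vertex to an iterable of neighbours (undirected)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and recording predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        pred = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Back-propagation of pair dependencies.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Note: for undirected graphs each pair is counted twice; halving is omitted here.
    return bc

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
print(brandes_bc(graph))
```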
Animations and videos are often designed to present information that involves change over time, in such a way as to aid understanding and facilitate learning. However, in many studies, static displays have been found to be just as beneficial and sometimes better. In this study, we investigated the impact of presenting a video recording and a series of static pictures together. In experiment 1, we compared three conditions: (1) video shown alone, (2) static pictures displayed alone, and (3) video plus static pictures. On average the best learning scores were found for the third condition. In experiment 2 we investigated how best to present the static pictures, by examining the number of pictures required (low vs. high frequency) and their appearance type (static vs. dynamic). We found that the dynamic presentation of pictures was superior to the static pictures mode, and showing fewer pictures (low frequency) was more beneficial. Overall the findings support the effectiveness of combining instructional animation with static pictures. However, the number of static pictures used is an important moderating factor.
['Amaël Arguel', 'Eric Jamet']
Using video and static pictures to improve learning of procedural contents
339,900
We present a system to enhance signal-to-interference plus noise ratio (SINR) for multiple-input-multiple-output (MIMO) direct-sequence code-division multiple-access (DS/CDMA) communications in the downlink for frequency-selective fading environments. The proposed system utilizes a transmit antenna array at the base station and a receive antenna array at the mobile station with finite-impulse response filters at both the transmitter and receiver. We arrive at our system by attempting to find the optimal solution to a general MIMO antenna system. A single user joint optimum scenario and a multiuser SINR enhancement scenario are derived. In addition, a simplified one-finger receiver structure is introduced. Numerical results reveal that significant system performance and capacity improvement over conventional approaches are possible. We also investigate the sensitivity of the proposed system to channel estimation errors.
['Ruly Lai-U Choi', 'Ross David Murch', 'Khaled Ben Letaief']
MIMO CDMA antenna system for SINR enhancement
321,218
Immune computation is a burgeoning bioinformatics technique inspired by the natural immune system that can solve information security problems such as antivirus and fault detection. The immune model is a crucial problem of the artificial immune system. In this paper, an immune model is proposed for the application of a mobile robot simulator infected by some worms, such as the love worm and the happy-time worm. The immune model is comprised of three tiers: the inherent immune tier, the adaptive immune tier and the parallel computing tier. This immune model is built on the theories of the natural immune system and has many excellent features, such as adaptability, immunity, memory, learning, and robustness. The application example of the immune model in the mobile robot simulator shows that the artificial immune system can detect, recognize, learn and eliminate computer viruses, and can detect and repair faults such as software bugs, so immune computation is an excellent approach for antivirus security. Moreover, the application fields and prospects of immune computation are expected to be rich and successful in the near future.
['Tao Gong', 'Sigma Xi', 'Zixing Cai']
An immune model and its application to a mobile robot simulator
365,721
We present a new approach for humanoid gait generation based on movement primitives learned from optimal and dynamically feasible motion trajectories. As testing platform we consider the humanoid robot HRP-2, so far only in simulation. Training data is generated by solving a set of optimal control problems for a minimum-torque optimality criterion and five different step lengths. As the dynamic robot model with all its kinematic and dynamic constraints is considered in the optimal control problem formulation, the resulting motion trajectories are not only optimal but also dynamically feasible. For the learning process we consider the joint angle trajectories of all actuated joints, the ZMP trajectory and the pelvis trajectory, which are sufficient quantities to control the robot. From the training data we learn morphable movement primitives based on Gaussian processes and principal component analysis. We show that five morphable primitives are sufficient to generate steps with 24 different lengths, which are close enough to both dynamical feasibility and optimality to be useful for fast on-line movement generation.
['Kai Henning Koch', 'Debora Clever', 'Katja D. Mombaur', 'Dominik Endres']
Learning movement primitives from optimal and dynamically feasible trajectories for humanoid walking
589,098
The subclass method is a classifier based on approximation of class regions. It assumes that all classes are separable (but not necessarily linearly separable). We extend the method to handle cases in which class-conditional probability density functions (PDFs) overlap each other. In this extension, the method becomes a histogram approach for approximating PDFs, but unlike usual histogram approaches it allows overlapping of bins. It is shown that this method is consistent in the sense that the error rate approaches the Bayes error rate as the number of samples tends to infinity. It is also shown that the convergence rate is faster than that of a previous MDL-based histogram approach in the range of practical numbers of samples.
['Mineichi Kudo', 'Hideyuki Imai', 'Masaru Shimbo']
A histogram-based classifier on overlapped bins
504,747
With the additional constraint of requiring only two codeword lengths, lossless codes of blocks of size n generated by stationary memoryless binary sources are studied. For arbitrary δ > 0, classical large-deviation inequalities imply the existence of codes attaining an expected redundancy of order O(n^{-1/2+δ}). It is shown that it is not possible to construct lossless codes with two codeword lengths having redundancy of order better than or equal to O(n^{-1/2}).
['Enrique Juan De Dios Fernández Figueroa', 'Christian Houdré']
On the asymptotic redundancy of lossless block coding with two codeword lengths
262,356
In the study of sports image classification, the characteristics of human pose have attracted increasing attention from researchers. However, the same human posture may result from different scenes and scene objects that express diverse action states and meanings. Thus, a combination of human pose and event scene should be considered so as to improve the performance of sports image classification. In recent years, spatial pyramid matching (SPM) has attracted more and more attention in the field of natural scene categorization, and high accuracy in image retrieval and image classification has been shown in multiple works. However, SPM merely considers the absolute locations of the visual words in images. Hence, this paper takes spatial pyramid matching as the basic idea and combines it with Visual Words Spatial Dependence Matrices, which describe the relative spatial information. As shown in the experimental results, the classification accuracy of the proposed method is improved by approximately 19% compared with SPM, and is superior to some other improved SPM methods in sports image classification.
['Yue Gao', 'Kazuki Katagishi']
Improved Spatial Pyramid Matching for Sports Image Classification
692,194
['Jun Ding', 'Xiaohui Cai', 'Ying Wang', 'Haiyan Hu', 'Xiaoman Shawn Li']
ChIPModule: systematic discovery of transcription factors and their cofactors from ChIP-seq data.
734,790
One man's "magic" is another man's engineering. Robert A. Heinlein Some beginning students have fuzzy mental models of how the computer works, or worse, sincerely believe that the computer works unpredictably, "by magic". We seek to demystify computing for these students using analogy, by showing them something that even magic itself isn't really mystical, it is just computation. This is a continuation of our standing-room only SIGCSE 2012 special session. Magic is one of the most colorful examples of "unplugged" (i.e., without-computer, active learning) activities. It adds a unique facet in that it holds a hidden secret that the audience can be challenged to unfold. Once solved, students are often enthusiastic to perform the magic in front of others. In this session, we will share a variety of new magic tricks whose answer is grounded in computer science: modulo arithmetic, human-computer interfaces, algorithms, binary encoding, invariants, etc. For each trick, we will have an interactive discussion of its underlying computing fundamentals, and tips for successful showmanship. Audience participation will be critical, for helping us perform the magic, discussing the solution, and contributing other magic tricks.
['Daniel D. Garcia', 'David Ginat']
Demystifying computing with magic, continued
295,365
['Andrea Pazienza', 'Floriana Esposito', 'Stefano Ferilli']
An authority degree-based evaluation strategy for abstract argumentation frameworks.
791,870
Background: The number of genes declared differentially expressed is a random variable and its variability can be assessed by resampling techniques. Another important stability indicator is the frequency with which a given gene is selected across subsamples. We have conducted studies to assess stability and some other properties of several gene selection procedures with biological and simulated data.
['Xing Qiu', 'Yuanhui Xiao', 'Alexander Y. Gordon', 'Andrei Yakovlev']
Assessing stability of gene selection in microarray data analysis.
2,182
What makes an image appear realistic? In this work, we are answering this question from a data-driven perspective by learning the perception of visual realism directly from large amounts of data. In particular, we train a Convolutional Neural Network (CNN) model that distinguishes natural photographs from automatically generated composite images. The model learns to predict visual realism of a scene in terms of color, lighting and texture compatibility, without any human annotations pertaining to it. Our model outperforms previous works that rely on hand-crafted heuristics, for the task of classifying realistic vs. unrealistic photos. Furthermore, we apply our learned model to compute optimal parameters of a compositing method, to maximize the visual realism score predicted by our CNN model. We demonstrate its advantage against existing methods via a human perception study.
['Jun-Yan Zhu', 'Philipp Krähenbühl', 'Eli Shechtman', 'Alexei A. Efros']
Learning a Discriminative Model for the Perception of Realism in Composite Images
568,107
The objective to attain fault-tolerant computing has been gaining an increasing amount of attention in the past several years. A digital computer is said to be fault-tolerant when it can carry out its programs correctly in the presence of logic faults, which are defined as any deviations of the logic variables in a computer from the design values. Faults can be either of transient or permanent duration. Their principal causes are: (1) component failures (either permanent or intermittent) in the circuits of the computer, and (2) external interference with the functioning of the computer, such as electric noise or transient variations in power supplies, electromagnetic interference, etc.
['F. P. Mathur', 'Algirdas Avižienis']
Reliability analysis and architecture of a hybrid-redundant digital system: generalized triple modular redundancy with self-repair
151,750
Biological knowledge has been, to date, coded by biologists in axiomatically lean bio-ontologies. To facilitate axiomatic enrichment, complex semantics can be encapsulated as Ontology Design Patterns (ODPs). These can be applied across an ontology to make the domain knowledge explicit and therefore available for computational inference. The same ODP is often required in many different parts of the same ontology and the manual construction of often complex ODP semantics is loaded with the possibility of slips, inconsistencies and other errors. To address this issue we present the Ontology PreProcessor Language (OPPL), an axiom-based language for selecting and transforming portions of OWL ontologies, offering a means for applying ODPs. Example ODPs for the common need to represent "modifiers" of independent entities are presented and one of them is used as a demonstration of how to use OPPL to apply it.
['Mikel Egaña', 'Alan L. Rector', 'Robert Stevens', 'Erick Antezana']
Applying Ontology Design Patterns in Bio-ontologies
195,613
['Xuehan Ma']
A Novel Audio Segmentation for Audio Diarization
930,337
ALT (accelerated life tests) are widely used to quickly provide information about the life distributions of products. Life data at elevated stresses are extrapolated to estimate the life distribution at the design stress. The existing estimation methods are efficient and easy to implement, given sufficient life data. However, ALT frequently results in few or no failures at low-level stress, making it difficult to estimate the life distribution. For products whose failures are defined in terms of performance characteristics exceeding their critical values, reliability assessment can be based on degradation measurements by using degradation models. The estimation, however, is usually mathematically complicated and computationally intensive. This paper presents a method for the estimation of the life distribution by using life data from degradation measurements. Since the time-to-failure depends on the level of a critical value, more life data can be obtained by tightening the critical value. The relationship between life and the critical value and stress is modeled and used to estimate the life distribution at the usual critical value and design stress. The model parameters are estimated by using maximum likelihood. The optimum test plans, which choose the critical values, stress levels, and proportions of the sample size allocated to each stress level, are devised by minimizing the asymptotic variance of the mean (log) life at the usual critical value and design stress. The comparison between the proposed and existing 2-level test plans shows that the proposed plans have smaller asymptotic variance and are less sensitive to the uncertainty of the pre-estimates of unknown parameters.
['Guangbin Yang', 'Kai Yang']
Accelerated degradation-tests with tightened critical values
416,398
Automatic face identification of characters in movies has drawn significant research interest and led to various applications. It is a challenging problem due to the huge variation in the appearance of each character. Although existing methods demonstrate promising results in clean environments, their performance is limited in complex movie scenes due to the noise generated during the face tracking and face clustering process. In this paper we present a robust character identification approach by incorporating a noise-insensitive relationship representation and a graph matching algorithm. Beyond existing character identification approaches, we further perform an explicit sensitivity analysis on character identification by introducing two types of simulated noise. Experiments validate the advantage of the proposed method.
['Jitao Sang', 'Chao Liang', 'Changsheng Xu', 'Jian Cheng']
Robust movie character identification and the sensitivity analysis
261,846
An adaptive technique for scanning rate conversion and interpolation is proposed. This technique performs better than the edge-based line average algorithm, especially for an image with more horizontal edges. Moreover, it is easy to implement, and a simple VLSI architecture is proposed in this paper. Computer simulation shows that a 37.0 dB image can be obtained via our proposed technique, while the edge-based line average algorithm only achieves 35.2 dB.
['Chung J. Kuo', 'Ching Liao', 'Ching C. Lin']
Adaptive edge-based interpolation for scanning rate conversion
394,601
Peptide–protein interactions are among the most prevalent and important interactions in the cell, but a large fraction of those interactions lack detailed structural characterization. The Rosetta FlexPepDock web server (http://flexpepdock.furmanlab.cs.huji.ac.il/) provides an interface to a high-resolution peptide docking (refinement) protocol for the modeling of peptide–protein complexes, implemented within the Rosetta framework. Given a protein receptor structure and an approximate, possibly inaccurate model of the peptide within the receptor binding site, the FlexPepDock server refines the peptide to high resolution, allowing full flexibility to the peptide backbone and to all side chains. This protocol was extensively tested and benchmarked on a wide array of non-redundant peptide–protein complexes, and was proven effective when applied to peptide starting conformations within 5.5 Å backbone root mean square deviation from the native conformation. FlexPepDock has been applied to several systems that are mediated and regulated by peptide–protein interactions. This easy to use and general web server interface allows non-expert users to accurately model their specific peptide–protein interaction of interest.
['Nir London', 'Barak Raveh', 'Eyal Cohen', 'Guy Fathi', 'Ora Schueler-Furman']
Rosetta FlexPepDock web server—high resolution modeling of peptide–protein interactions
252,617
The problem of software code security analysis is considered. The significance of using dynamic code analysis methods when the source code is unavailable is justified. Modern approaches to the problem are examined. A class of dynamic code analysis methods based on virtualization technology is selected, and the methodology of using emulators to carry out dynamic software code analysis is presented.
['A. Yu. Chernov', 'Artem S. Konoplev']
The use of virtualization technology in the dynamic analysis of software code
787,833
This paper assumes a set of identical wireless hosts, each one aware of its location. The network is described by a unit distance graph whose vertices are points on the plane two of which are connected if their distance is at most one. The goal of this paper is to design local distributed solutions that require a constant number of communication rounds, independently of the network size or diameter. This is achieved through a combination of distributed computing and computational complexity tools. Starting with a unit distance graph, the paper shows: 1. How to extract a triangulated planar spanner; 2. Several algorithms are proposed to construct spanning trees of the triangulation. Also, it is described how to construct three spanning trees of the Delaunay triangulation having pairwise empty intersection, with high probability. These algorithms are interesting in their own right, since trees are a popular structure used by many network algorithms; 3. A load balanced distributed storage strategy on top of the trees is presented, that spreads replicas of data stored in the hosts in a way that the difference between the number of replicas stored by any two hosts is small. Each of the algorithms presented is local, and hence so is the final distributed storage solution, obtained by composing all of them. This implies that the solution adapts very quickly, in constant time, to network topology changes. We present a thorough experimental evaluation of each of the algorithms supporting our claims.
['Constantinos Georgiou', 'Evangelos Kranakis', 'Ricardo Marcelín-Jiménez', 'Sergio Rajsbaum', 'Jorge Urrutia']
Distributed Dynamic Storage in Wireless Networks
444,323
Truth discovery is the problem of detecting true values from the conflicting data provided by multiple sources on the same data items. Since sources' reliability is unknown a priori, a truth discovery method usually estimates sources' reliability along with the truth discovery process. A major limitation of existing truth discovery methods is that they commonly assume exactly one true value on each data item and therefore cannot deal with the more general case that a data item may have multiple true values (or multi-truth). Since the number of true values may vary from data item to data item, this requires truth discovery methods to be able to detect varying numbers of true values from the multi-source data. In this paper, we propose a multi-truth discovery approach, which addresses the above challenges by providing a generic framework for enhancing existing truth discovery methods. In particular, we treat the numbers of true values as an important clue for facilitating multi-truth discovery. We present the procedure and components of our approach, and propose three models, namely the byproduct model, the joint model, and the synthesis model, to implement our approach. We further propose two extensions to enhance our approach, by leveraging the implications of similar numerical values and values' co-occurrence information in sources' claims to improve the truth discovery accuracy. Experimental studies on real-world datasets demonstrate the effectiveness of our approach.
['Xianzhi Wang', 'Quan Z. Sheng', 'Lina Yao', 'Xue Li', 'Xiu Susie Fang', 'Xiaofei Xu', 'Boualem Benatallah']
Empowering Truth Discovery with Multi-Truth Prediction
916,058
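The following is a hedged Python sketch of a generic iterative truth-discovery loop with a per-value confidence threshold, included only to make the multi-truth setting above concrete. It is not the byproduct, joint, or synthesis model of the paper, and the initial reliability, threshold, and field names are assumptions.

```python
# Hedged sketch of a generic truth-discovery iteration (not the authors' models):
# source reliabilities and value confidences are updated alternately; every value
# whose confidence reaches the threshold is kept, so an item may get several truths.
from collections import defaultdict

def discover_truths(claims, iterations=10, threshold=0.5):
    """claims: list of (source, item, value) triples."""
    sources = {s for s, _, _ in claims}
    reliability = {s: 0.8 for s in sources}      # assumed initial reliability
    confidence = {}                               # (item, value) -> confidence
    for _ in range(iterations):
        votes, norm = defaultdict(float), defaultdict(float)
        for s, item, value in claims:
            votes[(item, value)] += reliability[s]
            norm[item] += reliability[s]
        confidence = {k: votes[k] / norm[k[0]] for k in votes}
        for s in sources:                         # reliability = mean confidence of own claims
            scores = [confidence[(item, value)]
                      for s2, item, value in claims if s2 == s]
            reliability[s] = sum(scores) / len(scores)
    truths = defaultdict(set)
    for (item, value), c in confidence.items():
        if c >= threshold:
            truths[item].add(value)
    return dict(truths), reliability

claims = [("A", "lang", "en"), ("A", "lang", "fr"),
          ("B", "lang", "en"), ("C", "lang", "fr")]
truths, reliability = discover_truths(claims)
print(truths)   # e.g. {'lang': {'en', 'fr'}} -- two truths detected for one item
```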
Monte Carlo simulation techniques that use function approximations have been successfully applied to approximately price multi-dimensional American options. However, for many pricing problems the time required to get accurate estimates can still be prohibitive, and this motivates the development of variance reduction techniques. In this paper, we describe a zero-variance importance sampling measure for American options. We then discuss how function approximation may be used to approximately learn this measure; we test this idea in simple examples. We also note that the zero-variance measure is fundamentally connected to a duality result for American options. While our methodology is geared towards developing an estimate of an accurate lower bound for the option price, we observe that importance sampling also reduces variance in estimating the upper bound that follows from the duality.
['Nomesh Bolia', 'Sandeep Juneja', 'Paul Glasserman']
Function-approximation-based importance sampling for pricing American options
463,645
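For orientation, the standard importance-sampling identity and the textbook zero-variance change of measure are recalled below for a non-negative payoff h; the paper's construction for American options builds on this idea but is more involved, so this is only a reference point.

```latex
% Generic importance-sampling identity and zero-variance measure (a standard
% textbook relation, not the paper's specific construction for American options).
\mu \;=\; \mathbb{E}_{P}\!\left[h(X)\right]
     \;=\; \mathbb{E}_{Q}\!\left[h(X)\,\frac{dP}{dQ}(X)\right],
\qquad
\frac{dQ^{*}}{dP}(x) \;=\; \frac{h(x)}{\mu}
\;\;\Longrightarrow\;\;
h(X)\,\frac{dP}{dQ^{*}}(X) \;=\; \mu
\quad \text{(zero variance, assuming } h \ge 0\text{).}
```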
Complex diseases such as allergy are thought to partly result from combinations of particular genetic variants, as well as additive effects of single variations acting independently. As a result, employing an epistatic interaction approach that focuses on identifying multiple single nucleotide polymorphism (SNP) interactions can build on genome wide association studies that focus on discovering associations between disease and individual variants, and can provide insights about the underlying disease mechanisms. In previous work, we identified a number of SNPs and genes potentially involved in nonsteroidal anti-inflammatory drugs (NSAIDs) hypersensitivity through the application of an epistatic analysis approach. In this study, we build on these approaches and use a weighted approach to identify additional SNPs and genes associated with this disorder. This is achieved through the implementation of a novel two stage weighted epistatic analysis approach. In the first step, epistatic analysis is carried out to identify SNP pairs associated with NSAIDs hypersensitivity, and weighted SNP interaction networks inferred based on their p-value. In the second step, these SNPs are mapped to their closest protein coding gene within a 500 Kb flanking distance, with a penalty applied to interactions involving SNPs not located within a gene, and gene interaction networks are constructed from this data. These networks are analysed using graph theory metrics, leading to the identification of several combinations of SNPs and genes potentially involved in and predictive of NSAIDs hypersensitivity. A number of potential asthma and atopy related genes are identified, such as KCNB2, as well as the gene CGNL1 , which is differentially expressed following aspirin intake. In addition, subsequent pathway analysis of the gene interaction subnets uncovers significant enrichment for a number of biological pathways with a potential role in NSAIDs hypersensitivity, such as ALK1 and TGF-beta signalling, both associated with allergy. This study shows that applying a weighted epistatic analysis approach can provide further insights into the underlying mechanisms of NSAIDs hypersensitivity.
['Alex Upton', 'Miguel Blanca', 'J.A. Cornejo-Garcia', 'James R. Perkins']
Weighted Epistatic Analysis of NSAIDs Hypersensitivity Data
892,368
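A hedged Python sketch of the two-stage construction described above: significant SNP-pair p-values become a weighted SNP network, which is then collapsed onto genes with a penalty for SNPs that map to no gene. The significance level, the -log10(p) edge weighting, the penalty factor, and the SNP identifiers are illustrative assumptions rather than the authors' exact settings.

```python
# Hedged sketch: p-values for SNP pairs -> weighted SNP network -> gene network,
# with a penalty for interactions involving SNPs not mapped to any gene.
import math

def snp_network(pairs, alpha=1e-4):
    """pairs: list of (snp_a, snp_b, p_value); keep significant pairs,
    weighting edges by -log10(p)."""
    edges = {}
    for a, b, p in pairs:
        if p < alpha:
            edges[frozenset((a, b))] = -math.log10(p)
    return edges

def gene_network(snp_edges, snp_to_gene, intergenic_penalty=0.5):
    """Collapse SNP edges onto genes; edges touching an unmapped SNP are
    down-weighted by the (assumed) penalty factor."""
    gene_edges = {}
    for pair, w in snp_edges.items():
        a, b = tuple(pair)
        ga, gb = snp_to_gene.get(a), snp_to_gene.get(b)
        if ga is None or gb is None:
            w *= intergenic_penalty
        ga, gb = ga or a, gb or b           # fall back to the SNP id itself
        key = frozenset((ga, gb))
        gene_edges[key] = gene_edges.get(key, 0.0) + w
    return gene_edges

pairs = [("rs1", "rs2", 1e-6), ("rs2", "rs3", 2e-5), ("rs1", "rs4", 0.3)]
print(gene_network(snp_network(pairs), {"rs1": "KCNB2", "rs2": "CGNL1"}))
```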
We prove that every countable group with solvable power problem embeds into a finitely presented 2-generated group with solvable power and conjugacy problems.
['Alexander Yu. Olshanskii', 'Mark V. Sapir']
SUBGROUPS OF FINITELY PRESENTED GROUPS WITH SOLVABLE CONJUGACY PROBLEM
163,334
Robotics is an engaging and natural application area for concurrent and parallel models of control. To explore these ideas, we have developed environments and materials to support the programming of robots to do interesting tasks in a fundamentally concurrent manner. Our most recent work involves the development of RoboDeb (short for "Robotics/Debian"), a "virtual computer" pre-installed with the open-source Player API and Stage simulator to support classroom exploration of concurrency and robotic control using the occam programming language.
['Christian L. Jacobsen', 'Matthew C. Jadud']
Concurrency, Robotics, and RoboDeb
174,780
Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited. In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values: a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO.
['Nicholas Nethercote', 'Julian Seward']
Valgrind: a framework for heavyweight dynamic binary instrumentation
390,875
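A toy Python illustration of the shadow-value idea discussed above, in the spirit of definedness tracking: every value carries a shadow describing it, and operations propagate both the real value and its shadow. This is only a conceptual sketch, not Valgrind's actual mechanism or API.

```python
# Toy shadow-value illustration: each value is paired with a shadow that records
# whether it is fully initialised; operations propagate value and shadow together.
class Shadowed:
    def __init__(self, value, defined=True):
        self.value = value      # the ordinary program value
        self.defined = defined  # the shadow: is this value fully initialised?

    def __add__(self, other):
        # the result is only as defined as its least-defined operand
        return Shadowed(self.value + other.value, self.defined and other.defined)

x = Shadowed(3)                    # initialised value
y = Shadowed(0, defined=False)     # e.g. a value read from uninitialised memory
z = x + y
print(z.value, z.defined)          # 3 False -> a checker could warn when z steers a branch
```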
In this paper, we introduce a projection technique that aims to place points representing individual images in a two-dimensional visualization space so that proximity in this space reflects some sort of similarity between the images. This visualization technique enables users to employ their visual ability to evaluate the significance of metadata as well as the characteristics of classification methods and distance functions. It can also be used to recognize and analyze patterns in large sets of images, and to get an overview of the entire body of pictures from a given set. The projection technique only uses a similarity function for calculating a suitable distribution of the points in the visualization space and has a linear time complexity.
['Hermann Pflüger', 'Thomas Ertl']
Analysis of Visual Arts Collections
836,227
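As a point of reference, the sketch below performs a generic similarity-based 2D projection using classical multidimensional scaling with NumPy. The paper's own projection technique is different (and linear-time), so this is only meant to illustrate the idea that proximity in the plane reflects pairwise similarity; the dissimilarity matrix is an assumption made for the example.

```python
# Hedged sketch: classical MDS projection of items into 2D from pairwise
# dissimilarities (a generic reference method, not the paper's algorithm).
import numpy as np

def project_2d(dissimilarity):
    """dissimilarity: symmetric (n, n) array of pairwise dissimilarities."""
    d2 = np.asarray(dissimilarity, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ d2 @ j                        # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:2]             # two largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

d = np.array([[0, 1, 4], [1, 0, 3], [4, 3, 0]], dtype=float)
print(project_2d(d))
```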
Sensor networks are a sensing, computing and communication infrastructure that are able to observe and respond to phenomena in the natural environment and in our physical and cyber infrastructure. The sensors themselves can range from small passive micro-sensors to larger scale, controllable weather-sensing platforms. To reduce the consumed energy of a large scale sensor network, we consider a mobile sink node in the observing area. In this work, we investigate how the sensor network performs in the case when the sink node moves. We compare the simulation results for two cases: when the sink node is mobile and stationary, considering lattice and random topologies using the AODV protocol. The simulation results show that the consumed energy for the mobile sink is lower than for the stationary sink (about half of the stationary-sink energy in the lattice topology). Also, for the mobile sink, the consumed energy of the lattice topology is lower than that of the random topology.
['Tao Yang', 'Makoto Ikeda', 'Gjergji Mino', 'Leonard Barolli', 'Arjan Durresi', 'Fatos Xhafa']
Performance Evaluation of Wireless Sensor Networks for Mobile Sink Considering Consumed Energy Metric
276,244
Search engines and other text retrieval systems use high-performance inverted indexes to provide efficient text query evaluation. Algorithms for fast query evaluation and index construction are well-known, but relatively little has been published concerning update. In this paper, we experimentally evaluate the two main alternative strategies for index maintenance in the presence of insertions, with the constraint that inverted lists remain contiguous on disk for fast query evaluation. The in-place and re-merge strategies are benchmarked against the baseline of a complete re-build. Our experiments with large volumes of web data show that re-merge is the fastest approach if large buffers are available, but that even a simple implementation of in-place update is suitable when the rate of insertion is low or memory buffer size is limited. We also show that with careful design of aspects of implementation such as free-space management, in-place update can be improved by around an order of magnitude over a naive implementation.
['Nicholas Lester', 'Justin Zobel', 'Hugh E. Williams']
Efficient online index maintenance for contiguous inverted lists
241,713
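A hedged Python sketch of the re-merge strategy benchmarked above: postings for newly inserted documents accumulate in an in-memory buffer and are periodically merged with the on-disk index, keeping each inverted list contiguous and sorted. The dict-based structures stand in for real on-disk lists and are illustrative only.

```python
# Hedged sketch of re-merge index maintenance: buffer postings in memory, then
# rebuild the (simulated) on-disk index by merging buffer and existing lists.
from collections import defaultdict

class TinyIndex:
    def __init__(self):
        self.on_disk = {}                    # term -> sorted, contiguous list of doc ids
        self.buffer = defaultdict(list)      # in-memory postings for new documents

    def add_document(self, doc_id, terms):
        for term in set(terms):
            self.buffer[term].append(doc_id)

    def re_merge(self):
        merged = {}
        for term in set(self.on_disk) | set(self.buffer):
            old = self.on_disk.get(term, [])
            new = self.buffer.get(term, [])
            merged[term] = sorted(set(old) | set(new))   # keep lists sorted and contiguous
        self.on_disk, self.buffer = merged, defaultdict(list)

idx = TinyIndex()
idx.add_document(1, ["inverted", "index"])
idx.add_document(2, ["index", "update"])
idx.re_merge()
print(idx.on_disk)
```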
Trajectory prediction is widespread in mobile computing, and helps support wireless network operation, location-based services, and applications in pervasive computing. However, most prediction methods are based on very coarse geometric information such as visited base transceiver stations, which cover tens of kilometers. These approaches undermine the prediction accuracy, and thus restrict the variety of applications. Recently, due to the advance and dissemination of mobile positioning technology, accurate location tracking has become prevalent. Prediction methods based on precise spatiotemporal information then become possible. Although the prediction accuracy can be raised, a massive amount of data gets involved, which undoubtedly has a huge impact on network bandwidth usage. Therefore, employing fine spatiotemporal information in an accurate prediction must be efficient. However, this problem is not addressed in many prediction methods. Consequently, this paper proposes a novel prediction framework that utilizes massive spatiotemporal samples efficiently. This is achieved by identifying and extracting the information that is beneficial to accurate prediction from the samples. The proposed prediction framework circumvents high bandwidth consumption while maintaining high accuracy and being feasible. The experiments in this study examine the performance of the proposed prediction framework. The results show that it outperforms other popular approaches.
['Addison Chan', 'Frederick W. B. Li']
Utilizing Massive Spatiotemporal Samples for Efficient and Accurate Trajectory Prediction
366,208
Millimeter wave (mmWave) wireless technologies are expected to become key enablers of multi-gigabit wireless access in next-generation cellular and local area networks. Due to unfavorable radio propagation, mmWave systems will exploit large-scale MIMO and adaptive antenna arrays at both the transmitter and receiver to realize sufficient link margin. Unfortunately, power and cost requirements in mmWave radio frontends make the use of fully-digital beamforming very challenging. In this paper, we focus on hybrid analog-digital beamforming and address two relevant aspects of the initial access procedure at mmWave frequencies. First, we propose a beam training protocol which effectively accelerates the link establishment by exploiting the ability of mobile users to simultaneously receive from multiple directions. Second, we deal with practical constraints of mmWave transceivers and propose a novel, geometric approach to synthesize multi-beamwidth beam patterns that can be leveraged for simultaneous multi-direction scanning. Simulation results show that the proposed hybrid codebooks are able to shape beam patterns very close to those attained by a fully-digital beamforming architecture, yet require lower complexity hardware compared with the state of the art. Furthermore, the reduced duration of the beam training phase, in turn enabled by the multi-beam characteristics of our hybrid codebooks, provides a 25% to 70% increase in spectral efficiency compared to existing sequential scanning strategies.
['Joan Palacios', 'Danilo De Donno', 'Domenico Giustiniano', 'Joerg Widmer']
Speeding up mmWave beam training through low-complexity hybrid transceivers
813,376
Covert channels exist in most communication systems and allow individuals to communicate truly undetectably and to exchange hidden information. Their detection is therefore a major concern for security systems. However, until now, security systems have not included dedicated processes for covert channel detection. In this paper, we first propose mechanisms to detect common covert channels. Then, within a whole security system, we propose an optimized order for executing the three major security processes: Firewall, Intrusion Detection System (IDS) and Covert Channel Detection System (CCDS). It is demonstrated that the proposed order allows security systems to offer better processing performance.
['Senda Hammouda', 'Lilia Maalej', 'Zouheir Trabelsi']
Towards Optimized TCP/IP Covert Channels Detection, IDS and Firewall Integration
301,624
In conventional two-tiered Wireless Sensor Networks (WSN), sensors in each cluster transmit observed data to a fusion center via an intermediate supernode. This structure is vulnerable to supernode failure. A double supernode system model with a new coding scheme is proposed to monitor a binary data source. A Distributed Joint Source Channel Code (D-JSCC) is proposed for sensors inside a cluster that provides two advantages of low complexity transmitters and scalability to a large number of sensors. In order to setup a robust communication channel from sensors to the data fusion center, Distributed Space-Time Block Coding (D-STBC) is employed at two supernodes prior to relaying that results in additional diversity gain. DeModulate and Forward (DMF) relaying mode is chosen to enable packet reformatting at the supernodes, which is not possible in widely used Amplify and Forward (AF) mode. The optimum power allocation for the two-hop multiple DMF relaying is calculated to minimize the system Bit Error Rate (BER). An upper bound is derived for the system end-to-end BER by analyzing a basic decoder operation over the system model. The simulation results validate this upper bound and also demonstrate considerable improvement in the system BER for the proposed coding scheme.
['Abolfazl Razi', 'Fatemeh Afghah', 'Ali Abedi']
Power Optimized DSTBC Assisted DMF Relaying in Wireless Sensor Networks with Redundant Super Nodes
509,948
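A hedged NumPy sketch of the distributed space-time block coding step: after demodulate-and-forward, the two supernodes can jointly emit a standard Alamouti pair, which the fusion center combines linearly. The mapping of matrix rows to supernodes and the assumption of channels constant over two symbol periods are illustrative simplifications, not the paper's exact scheme.

```python
# Hedged sketch: Alamouti encoding across two supernodes (rows) and two symbol
# periods (columns), plus the standard linear combiner at the fusion center.
import numpy as np

def alamouti_encode(s1, s2):
    """Return a 2x2 matrix: rows = supernodes, columns = symbol periods."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Combine the two received samples given channel gains h1, h2
    (assumed constant over the two periods); output is proportional to (s1, s2)."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

s1, s2 = 1 + 1j, -1 + 0j
h1, h2 = 0.9 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)
r = np.array([h1, h2]) @ alamouti_encode(s1, s2)    # noiseless reception
print(alamouti_combine(r[0], r[1], h1, h2))          # scaled copies of (s1, s2)
```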
['Theofilos Mailis', 'Anni-Yasmin Turhan', 'Erik Zenker']
A pragmatic approach to answering CQs over fuzzy DL-Lite-ontologies - introducing FLite.
752,347
['Douglas M. Guisi', 'Richardson Ribeiro', 'Marcelo Mendonça Teixeira', 'André Pinz Borges', 'Eden Ricardo Dosciatti', 'Fabrício Enembreck']
A Hybrid Interaction Model for Multi-Agent Reinforcement Learning.
728,892
['Indrajit Ray', 'Junxing Zhang']
A Secure Multi-Sited Version Control System
523,413
In recent years, several new threads of research have found their way into the Interaction Design and Children community. Two of these threads, designing for children with special needs and designing fabrication activities for children, have been especially fertile grounds for discussion and reflection. The intention of this workshop is to bring attention to these two realms simultaneously by looking at children's fabrication activities through the lens of accessibility. This paper presents the initial challenges of this enterprise, frameworks and best practices for inclusive fabrication activities with children, examples of current relevant research, as well as discussion and conclusions.
['Ben Leduc-Mills', 'Jaymes Dec', 'John Schimmel']
Evaluating accessibility in fabrication tools for children
274,463
A case-based computer-aided diagnosis system assists physicians and other medical personnel in the interpretation of optical biopsies obtained through confocal laser endomicroscopy. Extraction from CLE images shows promising results for inferring semantic metadata from low-level features. In order to ensure interoperability with potential third-party applications, the system provides an interface compliant with the recent standards ISO/IEC 15938-12:2008 (MPEG Query Format) and ISO/IEC 24800 (JPEG Search).
['Ruben Tous', 'Jaime Delgado', 'Thomas Zinkl', 'Pere Toran', 'Gabriela Alcalde', 'Martin Goetz', 'O. Ferrer Roca']
The Anatomy of an Optical Biopsy Semantic Retrieval System
464,731
In this paper, we present an efficient 3D shape rejection algorithm for unlabeled 3D markers. The problem is important in domains such as rehabilitation and the performing arts. There are three key innovations in our approach: (a) a multi-resolution shape representation using Haar wavelets for unlabeled markers, (b) a multi-resolution shape metric and (c) a shape rejection algorithm that is predicated on the simple idea that we do not need to compute the entire distance to conclude that two shapes are dissimilar. We tested the approach on a real-world pose classification problem with excellent results. We achieved a classification accuracy of 98% with an order of magnitude improvement in terms of computational complexity over a baseline shape matching algorithm.
['Yinpeng Chen', 'Hari Sundaram']
A computationally efficient 3D shape rejection algorithm
416,859
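A minimal Python sketch of the rejection idea stated above: accumulate the descriptor distance coarse-to-fine and stop as soon as the partial distance already exceeds the rejection threshold, so dissimilar shapes never require the full computation. The descriptor layout and the threshold value are illustrative assumptions.

```python
# Hedged sketch of early rejection: since the squared distance only grows as
# finer levels are added, a partial sum above the threshold is already a proof
# of dissimilarity and the remaining levels can be skipped.
def reject_if_dissimilar(desc_a, desc_b, threshold):
    """desc_a, desc_b: lists of coefficient vectors, coarsest level first.
    Returns True if the shapes can be rejected as dissimilar."""
    partial = 0.0
    for level_a, level_b in zip(desc_a, desc_b):
        partial += sum((a - b) ** 2 for a, b in zip(level_a, level_b))
        if partial > threshold:        # no need to look at finer levels
            return True
    return False

a = [[0.9], [0.1, 0.2], [0.0, 0.1, 0.0, 0.3]]
b = [[0.1], [0.4, 0.8], [0.2, 0.0, 0.5, 0.1]]
print(reject_if_dissimilar(a, b, threshold=0.5))
```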
This paper discusses a stereo vision processing system which processes color and monochrome stereo video signals on a single vision processing board. Here, we propose a "field mixing" technique for multiplexing multiple video signals. A compact color stereo vision system based on this technique is developed for a mobile robot. This system can process multiple video signals simultaneously, and realizes flexible color stereo vision processing. In order to show the feasibility of this vision system, we installed it on our mobile robot, and implemented a correlation-based EZDF method for stereo tracking of an object. The experimental result of the tracking is shown.
['Yoshio Matsumoto', 'Tomohiro Shibata', 'Katsuhiro Sakai', 'Masayuki Inaba', 'Hirochika Inoue']
Real-time color stereo vision system for a mobile robot based on field multiplexing
274,062
Several methods have been proposed so far for the analysis of the integral pulse frequency modulation (IPFM) model and detecting its corresponding physiological information. Most of these methods rely on the low-pass filtering method to extract the modulating signal of the model. In this paper, the authors present an entirely new approach based on vector space theory. The new method is developed for a more comprehensive form of the IPFM model, namely the time-varying threshold integral pulse frequency modulation (TVTIPFM) model. The new method decomposes the driving signals of the TVTIPFM model into a series of orthogonal basis functions and constructs a matrix identity through which the input signals can be obtained by a parametric solution. As a particular case, the authors apply this method to R-R intervals of the SA node to discriminate between its autonomic nervous modulation and the stretch induced effect.
['Saeid Seydnejad', 'Richard I. Kitney']
Time-varying threshold integral pulse frequency modulation
231,837
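A hedged NumPy sketch of the vector-space idea: expand an unknown modulating signal in a set of orthogonal basis functions and recover the coefficients by solving a linear least-squares system built from the observations. This is a generic reconstruction template with an assumed truncated Fourier basis, not the authors' TVTIPFM matrix identity.

```python
# Hedged sketch: fit observations with an orthogonal (Fourier) basis and recover
# the expansion coefficients by least squares.
import numpy as np

def estimate_coefficients(t, observations, n_harmonics=3, period=1.0):
    """Fit observations(t) with a truncated Fourier basis; return coefficients
    and the reconstructed signal."""
    columns = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        columns.append(np.cos(2 * np.pi * k * t / period))
        columns.append(np.sin(2 * np.pi * k * t / period))
    basis = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(basis, observations, rcond=None)
    return coeffs, basis @ coeffs

t = np.linspace(0, 1, 50)
y = 1.0 + 0.3 * np.sin(2 * np.pi * t)            # synthetic "modulating signal"
print(estimate_coefficients(t, y)[0].round(3))   # ~[1, 0, 0.3, 0, 0, 0, 0]
```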
The decision feedback (DF) transceiver, combining linear precoding and DF equalization, can establish point-to-point communication over a wireless multiple-input multiple-output channel. Matching the DF-transceiver design parameters to the channel characteristics can improve system performance, but requires channel knowledge. We consider the fast-fading channel scenario, with a receiver capable of tracking the channel-state variations accurately, while the transmitter only has long-term, channel-distribution information. The receiver design problem given channel-state information is well studied in the literature. We focus on transmitter optimization, which amounts to designing a statistical precoder to assist the channel-tailored DF equalizer. We develop a design framework that encompasses a wide range of performance metrics. Common cost functions for precoder optimization are analyzed, thereby identifying a structure of typical cost functions. Transmitter design is approached for typical cost functions in general, and we derive a precoder design formulation as a convex optimization problem. Two important subclasses of cost functions are considered in more detail. First, we explore a symmetry of DF transceivers with a uniform subchannel rate allocation, and derive a simplified convex optimization problem, which can be efficiently solved even as system dimensions grow. Second, we explore the tractability of a certain class of mean square error based cost functions, and solve the transmitter design problem with a simple algorithm that identifies the convex hull of a set of points in R 2 . The behavior of DF transceivers with optimal precoders is investigated by numerical means.
['Simon Järmyr', 'Björn E. Ottersten', 'Eduard A. Jorswieck']
Statistical Precoding With Decision Feedback Equalization Over a Correlated MIMO Channel
181,264
Bichromatic reverse nearest neighbor (BRNN) has been extensively studied in spatial database literature. In this paper, we study a related problem called MaxBRNN: find an optimal region that maximizes the size of BRNNs. Such a problem has many real life applications, including the problem of finding a new server point that attracts as many customers as possible by proximity. A straightforward approach is to determine the BRNNs for all possible points, which is not feasible since there is a large (or infinite) number of candidate points. To the best of our knowledge, the fastest known method has exponential time complexity on the data size. Based on some interesting properties of the problem, we come up with an efficient algorithm called MaxOverlap. Extensive experiments are conducted to show that our algorithm is many times faster than the best-known technique.
['Raymond Chi Wing Wong', 'M. Tamer Özsu', 'Philip S. Yu', 'Ada Wai Chee Fu', 'Lian Liu']
Efficient method for maximizing bichromatic reverse nearest neighbor
423,615
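A hedged brute-force Python baseline (not the MaxOverlap algorithm) that makes the objective concrete: for each candidate location of a new server, count the customers whose nearest existing server is farther away than the candidate, i.e. the size of the candidate's BRNN set. All coordinates and the candidate set are illustrative assumptions.

```python
# Hedged baseline: evaluate the BRNN size of candidate server locations by
# brute force (the paper's contribution is precisely avoiding this enumeration).
import math

def brnn_size(candidate, customers, servers):
    count = 0
    for c in customers:
        nearest_existing = min(math.dist(c, s) for s in servers)
        if math.dist(c, candidate) < nearest_existing:
            count += 1                      # this customer would switch to the new server
    return count

customers = [(0, 0), (1, 1), (4, 4), (5, 5)]
servers = [(0, 1), (5, 4)]
candidates = [(1, 0), (4.5, 4.5), (2.5, 2.5)]
best = max(candidates, key=lambda p: brnn_size(p, customers, servers))
print(best, brnn_size(best, customers, servers))
```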
['Peter Nazier Mosaad', 'Martin Fränzle', 'Bai Xue']
Temporal Logic Verification for Delay Differential Equations
890,675
The mass of data available today creates growing needs for decision-support methods adapted to the data being processed. Thus, new approaches based on text cubes have recently appeared for analysing and extracting knowledge from documents. The originality of these cubes is to extend traditional data warehouse and OLAP technologies to textual content. In this article, we focus on two new aggregation functions. The first proposes a new adaptive TF-IDF measure that takes into account the hierarchies associated with the dimensions. The second is a dynamic aggregation that allows groupings corresponding to a real situation to emerge. Experiments conducted on data from a university's HAL server confirm the interest of our proposals.
['Sandra Bringay', 'Anne Laurent', 'Pascal Poncelet', 'Mathieu Roche', 'Maguelonne Teisseire']
Bien cube, les données textuelles peuvent s'agréger !
801,955
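For reference, the sketch below computes plain TF-IDF weights over a toy corpus in Python; the adaptive, hierarchy-aware TF-IDF measure proposed in the article refines this classical weighting and is not reproduced here. The corpus and tokenisation are illustrative assumptions.

```python
# Hedged sketch: classical TF-IDF weighting, the baseline that the article's
# adaptive, hierarchy-aware measure builds upon.
import math
from collections import Counter

def tf_idf(corpus):
    """corpus: list of token lists; returns per-document term weights."""
    n = len(corpus)
    df = Counter(term for doc in corpus for term in set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

docs = [["cube", "texte", "olap"], ["cube", "entrepot"], ["texte", "document", "cube"]]
for w in tf_idf(docs):
    print({k: round(v, 3) for k, v in w.items()})
```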
We propose a new router architecture that supports wormhole switching and circuit switching concurrently. This architecture has been designed to take advantage of temporal communication locality. This can be done by establishing a circuit between nodes that are going to communicate frequently. Messages using those circuits face no contention. By combining circuit switching, pre-established physical circuits and wave pipelining across channels and switches, it is possible to increase network bandwidth considerably, also reducing latency for communications that use pre-established physical circuits. This router architecture also makes it possible to reduce the overhead of the software messaging layer in multicomputers by offering better hardware support. Preliminary performance evaluation results show a drastic reduction in latency and an increase in throughput when messages are long enough, even if circuits are established for a single transmission and locality is not exploited.
['José Duato', 'Pedro López', 'Federico Silla', 'Sudhakar Yalamanchili']
A high performance router architecture for interconnection networks
204,422
In a previous work, we introduced a spectrum sharing technique called Multi-User Vandermonde-subspace Frequency Division Multiplexing (MU-VFDM). This overlay technique allows the coexistence of a downlink Orthogonal Frequency Division Multiple Access (OFDMA) macro-cell and a cognitive multi-user small-cell system in time division duplex mode. In that work, MU-VFDM was shown to be able to completely cancel the interference towards a macro-cell system at the price of perfect channel state information (CSI) at the opportunistic small-cells. In this work we relax the perfect CSI constraint by introducing a channel estimation protocol that does not require cooperation between the two systems, but still provides harmless coexistence between them. The impact of this protocol is evaluated in terms of interference at the legacy and sum-rates at the opportunistic system. Simulation results show that, even with imperfect CSI estimation, MU-VFDM is able to achieve promising rates for the small-cells while incurring a small rate loss at the macro-cell due to interference.
['Marco Maso', 'Leonardo S. Cardoso', 'Merouane Debbah', 'Lorenzo Vangelista']
Channel estimation impact for LTE small cells based on MU-VFDM
431,164
Recent hearing aid systems (HASs) can connect to a wireless microphone worn by the talker of interest. This feature gives the HASs access to a noise-free version of the target signal. In this paper, we address the problem of estimating the target sound direction of arrival (DoA) for a binaural HAS given access to the noise-free content of the target signal. To estimate the DoA, we present a maximum-likelihood framework which takes the shadowing effect of the user's head on the received signals into account by modeling the relative transfer functions (RTFs) between the HAS's microphones. We propose three different RTF models which have different degrees of accuracy and individualization. Furthermore, we show that the proposed DoA estimators can be formulated in terms of inverse discrete Fourier transforms to evaluate the likelihood function computationally efficiently. We extensively assess the performance of the proposed DoA estimators for various DoAs, signal to noise ratios, and in different noisy and reverberant situations. The results show that the proposed estimators improve the performance markedly over other recently proposed “informed” DoA estimators.
['Mojtaba Farmani', 'Michael Syskind Pedersen', 'Zheng-Hua Tan', 'Jesper Jensen']
Informed Sound Source Localization Using Relative Transfer Functions for Hearing Aid Applications
974,395
Let $f_m(a,b,c,d)$ denote the maximum size of a family $\mathcal{F}$ of subsets of an $m$-element set for which there is no pair of subsets $A,B \in \mathcal{F}$ with $|A \cap B| \geq a$, $|\bar{A} \cap B| \geq b$, $|A \cap \bar{B}| \geq c$, and $|\bar{A} \cap \bar{B}| \geq d$. By symmetry we can assume $a \geq d$ and $b \geq c$. We show that $f_m(a,b,c,d)$ is $\Theta (m^{a+b-1})$ if either $b > c$ or $a,b \geq 1$. We also show that $f_m(0,b,b,0)$ is $\Theta (m^b)$ and $f_m(a,0,0,d)$ is $\Theta (m^a)$. This can be viewed as a result concerning forbidden configurations and is further evidence for a conjecture of Anstee and Sali. Our key tool is a strong stability version of the Complete Intersection Theorem of Ahlswede and Khachatrian, which is of independent interest.
['Richard P. Anstee', 'Peter Keevash']
Pairwise Intersections and Forbidden Configurations
224,545
BPMN 2.0 is a widely used notation to model business processes that has associated tools and techniques to facilitate process management, execution and monitoring. As a result, using BPMN to model a Software Development Process (SDP) can leverage BPMN's infrastructure to improve SDP quality. Nevertheless, when using BPMN to model software processes one can observe the lack of an important feature: means to represent process tailoring. This article introduces BPMNt, a conservative extension to BPMN that aims at aggregating a tailoring representation mechanism like the one found in SPEM 2.0. BPMNt uses the extensibility classes already present in the BPMN meta-model. Our work also presents an example to illustrate the approach.
['Raquel M. Pillat', 'Toacy Cavalcante de Oliveira', 'Fabio Luiz da Fonseca']
Introducing software process tailoring to BPMN: BPMNt
3,113
We propose a novel regularization method for compressive imaging in the context of the CS theory with coherent and redundant dictionaries. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We illustrate the performance of SARA in the context of Fourier imaging, for a particular application to radio interferometric (RI) imaging. We show through realistic simulations that the proposed approach outperforms state-of-the-art imaging methods in the field, which are based on the assumption of signal sparsity in a single frame.
['Rafael E. Carrillo', 'Jason D. McEwen', 'Yves Wiaux']
Sparsity averaging for radio-interferometric imaging
150,185
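For orientation, a generic analysis-prior reweighted-ℓ1 problem and weight update are stated below in LaTeX. SARA's exact constraints and weight rule may differ, so the notation (Ψ for the concatenated frames, Φ for the measurement operator, τ for the stabilising constant) should be read as assumptions rather than the paper's definitive formulation.

```latex
% Generic analysis-prior reweighted-\ell_1 reconstruction (reference form only):
\min_{\bar{x} \in \mathbb{R}^{N}} \; \big\| W \Psi^{\dagger} \bar{x} \big\|_{1}
\quad \text{subject to} \quad \big\| y - \Phi \bar{x} \big\|_{2} \le \epsilon ,
\qquad
W_{ii}^{(t+1)} \;=\; \frac{1}{\big|\big(\Psi^{\dagger} \bar{x}^{(t)}\big)_{i}\big| + \tau } .
```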
Sybil detection is an important task in cyber security research. Over the past years, many data mining algorithms have been adopted to fulfil this task. Using classification and regression for sybil detection is very challenging. Despite existing research on classification models for sybil detection and prediction, this work proposes a new solution for tracking sybil activity to address this challenging issue. Prediction of sybil behaviour is demonstrated by analysing graph-based classification and regression techniques using decision trees, and by describing dependencies across the different methods. The calculated gain and maxGain helped to trace some sybil users in the datasets.
['Anand Chinchore', 'G. F. Xu', 'Frank Jiang']
Classifying sybil in MSNs using C4.5
971,508
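A hedged Python sketch of the attribute-selection measure behind C4.5-style trees: the information gain of a candidate split, of which the maxGain mentioned above is simply the maximum over candidate attributes. The dataset fields are illustrative assumptions, not the paper's features.

```python
# Hedged sketch: entropy-based information gain of splitting on one attribute.
import math
from collections import Counter, defaultdict

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, attribute, label):
    """rows: list of dicts; gain of splitting on `attribute` w.r.t. `label`."""
    base = entropy([r[label] for r in rows])
    groups = defaultdict(list)
    for r in rows:
        groups[r[attribute]].append(r[label])
    remainder = sum(len(g) / len(rows) * entropy(g) for g in groups.values())
    return base - remainder

rows = [{"friends": "few", "sybil": True}, {"friends": "few", "sybil": True},
        {"friends": "many", "sybil": False}, {"friends": "many", "sybil": True}]
print(round(information_gain(rows, "friends", "sybil"), 3))   # 0.311
```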
In the problem of scheduling a single machine to minimize total late work, there are n jobs to be processed for which each has an integer processing time and a due date. The objective is to minimize the total late work, where the late work for a job is the amount of processing of this job that is performed after its due date. For the preemptive total late work problem, an O(n log n) algorithm is derived. The nonpreemptive total late work problem is shown to be NP-hard, although efficient algorithms are derived for the special cases in which all processing times are equal and all due dates are equal. A pseudopolynomial dynamic programming algorithm is presented for the general nonpreemptive total late work problem; it requires O(nUB) time, where UB is any upper bound on the total late work. Computational results for problems with up to 10,000 jobs are given.
['Chris N. Potts', 'L Van Wassenhove']
Single machine scheduling to minimize total late work
288,905
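A minimal Python sketch of the objective itself (not the O(n log n) preemptive algorithm or the dynamic program): for a given nonpreemptive sequence, a job's late work is the part of its processing performed after its due date, capped at its processing time.

```python
# Hedged sketch: total late work of a fixed nonpreemptive job sequence.
def total_late_work(sequence):
    """sequence: list of (processing_time, due_date) in processing order."""
    t, late = 0, 0
    for p, d in sequence:
        t += p                                   # completion time of this job
        late += min(max(0, t - d), p)            # portion processed after its due date
    return late

jobs = [(3, 4), (2, 4), (4, 10)]
print(total_late_work(jobs))   # 0 + min(max(0, 5-4), 2) + min(max(0, 9-10), 4) = 1
```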
The Zernike moments can achieve high accuracy and strong robustness for the classification and retrieval of images, but involve a huge amount of computation caused by their complex definition. This has limited their exploitation in online real-time applications or big data processing. Research on how to improve the computation speed of Zernike moments has therefore been carried out. One of the existing high-accuracy algorithms for Zernike moments, called the ZMGM algorithm, treats Zernike moments as a linear combination of geometric moments. Based on the ZMGM algorithm, we make two accelerating improvements and propose a fast algorithm. Firstly, a simplified linear combination is achieved by merging all the terms corresponding to the same geometric moment, so that the number of multiplications is reduced. In this case, combined coefficients can be separated, pre-computed and stored for further computation of Zernike moments. Secondly, to speed up the computation of the combined coefficients, a fast algorithm for the coefficient matrix of Zernike radial polynomials is proposed. The elements of this matrix are the main components of the combined coefficients. Complexity analysis and numerical experiments show that, compared with the ZMGM algorithm, our proposed algorithm can significantly reduce the complexity and improve the computation speed. The optimization effect becomes more obvious as the order increases.
['Yun Guo', 'Chunping Liu', 'Shengrong Gong']
Improved algorithm for Zernike moments
547,330
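A hedged Python sketch of the classical Zernike radial polynomial coefficients, the quantity whose coefficient matrix the proposed fast algorithm precomputes; this direct factorial form is exactly the kind of repeated work the paper aims to avoid, and is shown only to make the object concrete.

```python
# Hedged sketch: coefficients of the Zernike radial polynomial R_n^m(rho) from
# the classical factorial formula (the paper accelerates this computation).
from math import factorial

def radial_coefficients(n, m):
    """Coefficients of rho^(n-2s) in R_n^|m|(rho), for s = 0..(n-|m|)//2."""
    m = abs(m)
    assert (n - m) % 2 == 0 and n >= m >= 0
    coeffs = {}
    for s in range((n - m) // 2 + 1):
        coeffs[n - 2 * s] = ((-1) ** s * factorial(n - s)
                             // (factorial(s)
                                 * factorial((n + m) // 2 - s)
                                 * factorial((n - m) // 2 - s)))
    return coeffs

def radial_polynomial(n, m, rho):
    return sum(c * rho ** k for k, c in radial_coefficients(n, m).items())

print(radial_coefficients(4, 2))       # {4: 4, 2: -3}, i.e. R_4^2 = 4*rho^4 - 3*rho^2
print(radial_polynomial(4, 2, 1.0))    # R_n^m(1) = 1 for all valid (n, m)
```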
A fundamental task in a wireless sensor network is to broadcast some measured data from an origin sensor to a destination sensor. Since the sensors are typically small, power limited, and low cost, they are only able to broadcast low-power signals. As a result, the propagation loss from the origin to the destination nodes can attenuate the signals beyond detection. One way to deal with this problem is to pass the transmitted signal through relay nodes. In this paper we propose and study two-hop multisensor relay strategies that achieve minimum mean-square-error (MSE) performance subject to either local or global power constraints. The capacity of the resulting relay link and its diversity order are studied. The effect of channel uncertainties on system performance is examined and a modified relay scheme is proposed.
['Nima Khajehnouri', 'Ali H. Sayed']
Distributed MMSE Relay Strategies for Wireless Sensor Networks
395,638
['Jop Briët', 'Daniel Dadush', 'Sebastian Pokutta']
On the Existence of 0/1 Polytopes with High Semidefinite Extension Complexity
840,749
['Shao-Yen Tseng', 'Sandeep Nallan Chakravarthula', 'Brian R. Baucom', 'Panayiotis G. Georgiou']
Couples Behavior Modeling and Annotation Using Low-Resource LSTM Language Models.
865,694
Memory errors are a notorious source of security vulnerabilities that can lead to service interruptions, information leakage and unauthorized access. Because such errors are also difficult to debug, the absence of timely patches can leave users vulnerable to attack for long periods of time. A variety of approaches have been introduced to combat these errors, but these often incur large runtime overheads and generally abort on errors, threatening availability. This paper presents Archipelago, a runtime system that takes advantage of available address space to substantially reduce the likelihood that a memory error will affect program execution. Archipelago randomly allocates heap objects far apart in virtual address space, effectively isolating each object from buffer overflows. Archipelago also protects against dangling pointer errors by preserving the contents of freed objects after they are freed. Archipelago thus trades virtual address space---a plentiful resource on 64-bit systems---for significantly improved program reliability and security, while limiting physical memory consumption by tracking the working set of an application and compacting cold objects. We show that Archipelago allows applications to continue to run correctly in the face of thousands of memory errors. Across a suite of server applications, Archipelago's performance overhead is 6% on average (between -7% and 22%), making it especially suitable to protect servers that have known security vulnerabilities due to heap memory errors.
['Vitaliy B. Lvin', 'Gene Novark', 'Emery D. Berger', 'Benjamin G. Zorn']
Archipelago: trading address space for reliability and security
314,291
Aesthetic evaluation of computer generated patterns is a growing field with several challenges. This paper focuses on the quantitative evaluation of order and complexity in multi-state two-dimensional (2D) cellular automata (CA). CA are known for their ability to generate highly complex patterns through simple and well defined local interaction of rules. It is suggested that the order and complexity of 2D patterns can be quantified by using mean information gain. This measure, also known as conditional entropy, takes into account conditional and joint probabilities of the elements of a configuration in a 2D plane. A series of experiments is designed to demonstrate the effectiveness of the mean information gain in quantifying the structural order and complexity, including the orientation of symmetries, of multi-state 2D CA configurations.
['Mohammad Ali Javaheri Javid', 'Robert Zimmer', 'Anna Ursyn', 'Mohammad Majid al-Rifaie']
A Quantitative Approach for Detecting Symmetries and Complexity in 2D Plane
639,007
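A hedged Python sketch of the measure described above: the mean information gain (conditional entropy) of a cell's state given a neighbouring cell, estimated from neighbouring-pair frequencies in a 2D configuration. Restricting to the right-hand neighbour is an illustrative simplification; the paper considers several directions so that the orientation of symmetries can be captured.

```python
# Hedged sketch: mean information gain H(A | B) = H(A, B) - H(B) over horizontal
# neighbour pairs of a discrete 2D configuration. Perfectly ordered stripes give 0.
import math
from collections import Counter

def mean_information_gain(grid):
    """grid: list of equal-length rows of discrete cell states."""
    pairs = Counter()
    for row in grid:
        for a, b in zip(row, row[1:]):     # horizontal neighbour pairs
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    joint = {k: v / total for k, v in pairs.items()}
    marg = Counter()
    for (a, b), p in joint.items():
        marg[b] += p
    h_joint = -sum(p * math.log2(p) for p in joint.values())
    h_marg = -sum(p * math.log2(p) for p in marg.values())
    return h_joint - h_marg                # conditional entropy H(A | B)

ordered = ["0101", "0101", "0101"]
random_ish = ["0110", "1001", "0011"]
print(mean_information_gain(ordered), mean_information_gain(random_ish))
```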