Dataset schema: abstract (string, lengths 8–10.1k) | authors (string, lengths 9–1.96k) | title (string, lengths 6–367) | __index_level_0__ (int64, 13–1,000k)
A complete modular eigenspace feature extraction technique for hyperspectral images
['Y. A. Chang', 'Hsuan Ren']
A complete modular eigenspace feature extraction technique for hyperspectral images
196,117
A new three-dimensional (3-D) 12-lead electrocardiogram (ECG) display method is presented which employs a 3-D rectangular coordinate system to display the 12-lead cardiac electric signals in two 3-D graphs. The 3-D graph consists of a temporal axis representing the time domain of the cardiac signals, a spatial axis representing the lead positions, and an amplitude axis representing the voltages of the cardiac signals. The six horizontal plane leads and the other six frontal plane leads were displayed in two 3-D graphs, respectively. The voltages of the cardiac signals were represented in rainbow-like colors. Cubic interpolation was employed to insert interconnecting points between neighboring leads on each plane and to smooth the surface of the 3-D ECG graphs. The 3-D ECG graphs of a normal subject, a patient with myocardial infarction, and a patient with left bundle branch block are presented here. This new display method could not only be used as a complementary display method to the 12-lead ECG, but also provide physicians with an overall, integrated view of the spatial distribution of the cardiac signals.
['Huihua Kenny Chiang', 'Chao-Wei Chu', 'Gau-Yang Chen', 'Cheng-Deng Kuo']
A new 3-D display method for 12-lead ECG
57,942
We present an A* search algorithm that extracts feature curves from 3D meshes. The A* algorithm, which finds the shortest path on a weighted graph using a heuristic, is extended to extract a set of feature curves defined by a curvature-based weighting function. The extracted curve that minimizes the defined feature weight is further smoothed using a smoothing algorithm we present. Our scheme shows very robust and efficient feature extraction results on various 3D meshes.
['Kyungha Min']
An A* Algorithm for Extracting Feature Curves from 3D Meshes
595,188
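Since the record above builds on the classical A* shortest-path algorithm, here is a minimal generic sketch of A* over a weighted graph. The `neighbors` and `heuristic` callbacks and the queue layout are illustrative assumptions, not the authors' code; the paper's curvature-based weighting would enter through the edge weights, and its smoothing step is not shown.

```python
import heapq, itertools

def a_star(neighbors, start, goal, heuristic):
    """Generic A* search. neighbors(v) yields (u, w) edges; heuristic(v)
    must not overestimate the remaining cost to goal (admissible)."""
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, _, cost, v, path = heapq.heappop(frontier)
        if v == goal:
            return cost, path
        if cost > best.get(v, float("inf")):
            continue  # stale queue entry
        for u, w in neighbors(v):
            c = cost + w
            if c < best.get(u, float("inf")):
                best[u] = c
                heapq.heappush(frontier, (c + heuristic(u), next(tie), c, u, path + [u]))
    return None  # goal unreachable
```

With an admissible heuristic the returned path is cost-optimal, which is what makes the curvature-weighted extension well-defined.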
Due to the fractal nature of the domain geometry in geophysical flow simulations, a completely accurate description of the domain in terms of a computational mesh is frequently deemed infeasible. Shoreline and bathymetry simplification methods are used to remove small-scale details in the geometry, particularly in areas away from the region of interest. To that end, a novel method for shoreline and bathymetry simplification is presented. Existing shoreline simplification methods typically remove points if the resultant geometry satisfies particular geometric criteria. Bathymetry is usually simplified using traditional filtering techniques that remove unwanted Fourier modes. Principal Component Analysis (PCA) has been used in other fields to isolate small-scale structures from larger-scale coherent features in a robust way, underpinned by a rigorous but simple mathematical framework. Here we present a method based on principal component analysis aimed at the simplification of shorelines and bathymetry. We present the algorithm in detail and show simplified shorelines and bathymetry in the wider region around the North Sea. Finally, the methods are used in the context of unstructured mesh generation aimed at tidal resource assessment simulations in the coastal regions around the UK.
['Alexandros Avdis', 'Christian T. Jacobs', 'Jon Hill', 'Matthew D. Piggott', 'Gerard J. Gorman']
Shoreline and Bathymetry Approximation in Mesh Generation for Tidal Renewable Simulations
545,257
A stochastic approach to robot plan formation
['Ivan M. Havel', 'Ivan Kramosil']
A stochastic approach to robot plan formation
574,788
In this paper we give an enclosure for the solution of the biharmonic problem and also for its gradient and Laplacian in the $L_2$-norm, respectively.
['Borbála Fazekas', 'Michael Plum', 'Christian Wieners']
Enclosure for the Biharmonic Equation
439,015
Visual Simulation of Magnetic Fluids.
['Tomokazu Ishikawa', 'Yonghao Yue', 'Kei Iwasaki', 'Yoshinori Dobashi', 'Tomoyuki Nishita']
Visual Simulation of Magnetic Fluids.
759,362
This paper presents an architectural framework for customizing Object Request Broker (ORB) implementations to application-specific preferences for various non-functional requirements. ORB implementations are built by reusing a domain-specific component-based architecture that offers support for one or more non-functional requirements. The domain-specific architecture provides the mechanism that allows the ORB to reconfigure its own implementation at run-time on the basis of application-specific preferences. This mechanism is based on a run-time selection between alternative component implementations that guarantee different service-levels for non-functional requirements. Application-specific preferences are defined in policies and service-level guarantees are defined in component descriptors. Policies and component descriptors are expressed using descriptive languages. This gives application programmers an easy and powerful tool for customizing an ORB implementation. To validate the feasibility of our architectural framework we have applied it in the domain of robotic control applications.
['Bo Nørregaard Jørgensen', 'Eddy Truyen', 'Frank Matthijs', 'Wouter Joosen']
Customization of object request brokers by application specific policies
516,919
In this paper, we compare three initialization schemes for the KMEANS clustering algorithm: 1) random initialization (KMEANSRAND), 2) KMEANS++, and 3) KMEANSD++. Both KMEANSRAND and KMEANS++ have a major drawback: the value of k needs to be set by the user of the algorithms. (Kang 2013) recently proposed a novel use of determinantal point processes for sampling the initial centroids for the KMEANS algorithm (we call it KMEANSD++). They, however, do not provide any evaluation establishing that KMEANSD++ is better than other algorithms. In this paper, we show that the performance of KMEANSD++ is comparable to KMEANS++ (both of which are better than KMEANSRAND), with KMEANSD++ having the additional advantage that it can automatically approximate the value of k.
['Apoorv Agarwal', 'Anna Choromanska', 'Krzysztof Choromanski']
Notes on using Determinantal Point Processes for Clustering with Applications to Text Clustering
624,674
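For context, a minimal sketch of the D²-weighted seeding used by KMEANS++, the baseline discussed above; the function and variable names are mine. KMEANSD++ replaces this sampling step with a determinantal point process, which is not shown.

```python
import numpy as np

def kmeanspp_init(X, k, rng=None):
    """k-means++ seeding: each new center is drawn with probability
    proportional to the squared distance to the nearest chosen center."""
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]          # first center: uniform
    for _ in range(k - 1):
        diffs = X[:, None, :] - np.array(centers)[None, :, :]
        d2 = np.min((diffs ** 2).sum(-1), axis=1)  # distance to nearest center
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```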
The authors present a framework in which human and intelligent agents (IAs) can interact to facilitate the information flow and decision making in real-world enterprises. Underlying the framework is the notion of an enterprise model that is built by dividing complex enterprise operations into a collection of elementary tasks or activities. Each such task is then modeled in cognitive terms and entrusted to an IA for execution. Tasks that require human involvement are referred to the appropriate person through their personal assistant, a special type of IA that knows how to communicate both with humans, through multimedia interfaces, and with other IAs and the shared knowledge base. The computer-aided software engineering tools supported by a library of activity models permit every individual in an enterprise to model the activities with which they are personally most familiar. The preliminary experimental results suggest that this divide-and-conquer strategy, leading to cognitive models that are buildable and maintainable by end-users, is a viable approach to real-world distributed artificial intelligence.
['Jeff Yung-Choa Pan', 'Jay M. Tenenbaum']
An intelligent agent framework for enterprise integration
107,369
A novel dependence graph representation called the multiple-order dependence graph for nested-loop formulated multimedia signal processing algorithms is proposed. It allows a concise representation of an entire family of dependence graphs. This powerful representation facilitates the development of an innovative implementation approach for nested-loop formulated multimedia algorithms such as motion estimation, matrix-matrix product, 2D linear transform, and others. In particular, an algebraic linear mapping (assignment and scheduling) methodology can be applied to implement such algorithms on an array of simple processing elements. The feasibility of this new approach is demonstrated in three major target architectures: application-specific integrated circuit (ASIC), field programmable gate array (FPGA), and a programmable clustered VLIW processor.
['Surin Kittitornkun', 'Yu Hen Hu']
Efficient implementation of nested-loop multimedia algorithms
113,619
Inadequate thermal characteristics inside a truck during livestock transport might lead to an important decrease in meat quality and an increase in swine mortality. This study aims to assess the microenvironment characteristics of the pig transportation truck in tropical regions. These characteristics involve air temperature, relative humidity, wind and vehicle speed, and noise level. The present study compares two pig transportation schedules by assessing the ambient and pig surface temperature, and calculating the environmental temperature and humidity index (THI). The surface temperature was recorded using a thermal camera. Results showed that thermal characteristics inside the truck varied between the farm and the slaughterhouse plant (p < 0.0489). The unloading process at the slaughterhouse plant impacted the pigs' surface temperature (p < 0.027) more than the loading management at the farm.
['Sivanilza Teixeira Machado', 'Irenilza de Alencar Nääs', 'João Gilberto Mendes dos Reis', 'Rodrigo Couto Santos', 'Fabiana Ribeiro Caldara', 'Rodrigo Garófallo Garcia']
Logistics Issues in the Brazilian Pig Industry: A Case-Study of the Transport Micro-Environment
591,107
Harold Short recounts that his interest in Computing and the Humanities goes back to when he was an undergraduate in English and French at a university in the former Rhodesia (now Zimbabwe). There, whilst undertaking summer work in the library, he saw first-hand the potential of digital methods. After arriving in London in 1972 he took an Open University degree in mathematics, computing and systems. Among his early influences he identifies the reading he did on matters related to cognitive science whilst undertaking a postgraduate certificate in education. In the UK he worked at the BBC as programmer, systems analyst and then systems manager. In 1988 he moved to King's College London to take up the post of Assistant Director in Computing Services for Humanities and Information Management. One of his first tasks was to work with the Humanities Faculty to develop an undergraduate programme in humanities and computing. The first digital humanities conference he attended was the first joint international conference of ALLC and ACH, held at the University of Toronto in 1989, which c. 450 people attended. He reflects on aspects of the institutional shape of the field towards the end of the 1980s, including the key Centres that existed then, the first meeting of the Association for Literary and Linguistic Computing (ALLC) and those who were active in it such as Roy Wisbey, Susan Hockey and the late Antonio Zampolli. He gives a detailed discussion of the development of what is now the Department of Digital Humanities in King's College London, both in terms of the administrative and institutional issues involved, as well as the intellectual. He also reflects on some of the most successful collaborations that the Department has been involved in, for example, the AHRC-funded Henry III Fine Rolls project, and the conditions and working practices that characterised them. He closes by discussing his impressions about the movement of scholars into and out of the discipline and of the institutional issues that have had an impact on digital humanities centres.
['Harold Short', 'J Nyhan', 'Anne Welsh', 'Jessica M. Salmon']
Collaboration must be fundamental or it's not going to work: an Oral History Conversation between Harold Short and Julianne Nyhan
593,576
Lattice Queries for Search and Data Exploration.
['Boris A. Galitsky']
Lattice Queries for Search and Data Exploration.
751,726
Recent and next-generation wireless broadcasting standards, such as DVB-T2 or DVB-NGH, are considering distributed multi-antenna transmission in order to increase bandwidth efficiency and signal quality. Full-rate full-diversity (FRFD) space-time codes (STC), such as the Golden code, have been reported to be excellent candidates, their main drawback being their detection complexity, which increases when soft output is required in combination with a bit-interleaved coded modulation (BICM) scheme based on low-density parity-check (LDPC) codes. We present a novel low-complexity soft detection algorithm for the reception of Golden codes in LDPC-based orthogonal frequency-division multiplexing (OFDM) systems. Complexity and simulation-based performance results are provided which show that the proposed detector performs close to the optimal detector in a variety of DVB-T2 broadcasting scenarios.
['Iker Sobron', 'Maitane Barrenechea', 'Pello Ochandiano', 'Lorena Martinez', 'Mikel Mendicute', 'Jon Altuna']
Low-complexity detection of golden codes in LDPC-coded OFDM systems
126,618
Workflow management systems/business process management systems (BPMS) provide integral support for computer-based information processing, personal activities, business procedures and their relationships to organizational structures. They support the modeling and analysis of so-called business processes and offer means for the application-oriented design and implementation of computer-based business process assistance. BPMSs mainly concentrate on the support of enterprise-internal processes. Our approach extends the scope of business process management. Enterprise-internal processes are viewed as sub-processes of global inter-enterprise processes. Additional global process assistance is based on the definition of global activity models and global information models. Features of dynamic naming and binding can be provided by business process brokers, which extend the concepts of object trading to the trading of opportunities to participate in global processes.
['G. Graw', 'Volker Gruhn', 'Heiko Krumm']
Support of cooperating and distributed business processes
172,669
Published work in the IT services area is generally centered on the description of management best practices or specific technological issues. There is a lack of empirical studies on the relationship between service level agreements (the quality parameters of a service agreed between customer and provider) and the IT components required to deliver IT services. In the ITIL framework, the service level agreements process is fully described, albeit without a formal representation. Enterprise Architecture frameworks provide a means for the formal description of the IT and business parts of organizations and their interrelationships, however without reference to service level agreements. In this research, we intend to derive a formal specification of service level agreements by integrating IT Services Management within an Enterprise Architecture framework. This integration will facilitate the provision of business-aligned automatic checking of compliance between agreed and provided services.
['Anacleto Correia', 'Fernando Brito e Abreu']
Integrating IT Service Management within the Enterprise Architecture
258,218
Blind speed alleviation using a radar sensor network (RSN)
['Jing Liang', 'Qilian Liang']
Blind speed alleviation using a radar sensor network (RSN)
774,862
"Flexible modeling tools" hold the promise of bridging the gap between formal modeling and free-form authoring. This workshop will bring together researchers and practitioners to explore ideas and showcase early results in this emerging field. Both formal modeling and free-form authoring offer important benefits for software architects and designers, as well as others. Unfortunately, contemporary tools often force users to choose one style of work over the other. During the exploratory phases of design, it is more common to use white boards than modeling tools. During the early stages of architectural analysis, it is more common to use office tools like PowerPoint and Excel. These tools offer ease of use, freedom from strict representation rules, and the ability to readily prepare attractive presentations for a variety of stakeholders. However, users miss out on the clarity, consistency, and completeness that can accrue from using modeling tools, as well as the powerful visualization, navigation, manipulation, and guidance that semantics-driven tools can provide. At this workshop, people who build tools and people who use tools for software development will discuss the reasons for the current state of the practice, and will focus on tool users' needs and tool capabilities to address those needs. Papers and live demonstrations will present work on free-form authoring tools, formal modeling tools, and hybrid tools that aim to achieve the benefits of both
['Doug Kimelman', 'Harold Ossher', 'André van der Hoek', 'Margaret-Anne D. Storey']
SPLASH 2010 workshop on flexible modeling tools
1,150
This work investigates the state prediction problem for nonlinear stochastic differential systems, affected by multiplicative state noise. This problem is relevant in many state-estimation frameworks such as filtering of continuous-discrete systems (i.e. stochastic differential systems with discrete measurements) and time-delay systems. A very common heuristic to achieve the state prediction exploits the numerical integration of the deterministic nonlinear equation associated to the noise-free system. Unfortunately these methods provide the exact solution only for linear systems. Instead, here we provide the exact state prediction for nonlinear systems in terms of the series expansion of the expected value of the state conditioned on its value at a previous time instant, obtained according to the Carleman embedding technique. Truncation of the infinite series allows the prediction at future times to be computed with arbitrary approximation. Simulations support the effectiveness of the proposed state-prediction algorithm in comparison to the aforementioned heuristic method.
['Filippo Cacace', 'Valerio Cusimano', 'Alfredo Germani', 'Pasquale Palumbo']
A state predictor for continuous-time stochastic systems
931,217
Coronary CT angiography (CCTA, Coronary Computed Tomography Angiography) by MSCT (Multi-Slice Computed Tomography) offers not only a diagnostic capability matching that of CAG (Coronary Angiography), the gold standard of cardiovascular diagnosis, but also a much less invasive examination. However, if calcified plaque adheres to a vessel wall, high-brightness shading due to calcium in the calcified plaque, called blooming artefacts, makes it difficult to diagnose the region around the plaque. In this study, we propose a method to semi-automatically detect and remove calcified plaques, which hinder diagnosing a stenosed coronary artery, from a CCTA image. In addition, an analysis method to accurately and objectively measure the angiostenosis rate is provided.
['Yuki Yoshida', 'Kaori Fujisaku', 'Kei Sasaki', 'Tetsuya Yuasa', 'Koki Shibuya']
Semi-automatic detection of calcified plaque in coronary CT angiograms with 320-MSCT
964,140
In this paper, we propose an analytical technique to evaluate the statistics of the channel estimation error in a simple multi-user ad hoc networking scenario. This problem is very relevant in situations where advanced PHY techniques are used (e.g., MIMO or interference cancellation) and channel state information may be needed. The presence of several simultaneous and non-orthogonal signals makes the problem significantly more complicated than in traditional channel estimation. In particular, there is a direct dependence of the channel estimation error on the instantaneous channel matrix. The proposed model makes it possible to quickly evaluate the performance of channel estimation schemes as a function of the system parameters. In this light, we include the effect of channel estimation errors in an ad hoc networking protocol simulator and thoroughly evaluate their impact. Our results show that there exists a significant interplay between the performance of MAC protocols for MIMO networks and the accuracy of channel estimation. Moreover, we show that relevant tradeoffs arise between MAC- and PHY-level parameters which lead to the definition of design guidelines.
['Davide Chiarotto', 'Paolo Casari', 'Michele Zorzi']
On the Statistics and MAC Implications of Channel Estimation Errors in MIMO Ad Hoc Networks
332,899
In training radial basis function neural networks (RBFNNs), the locations of Gaussian neurons are commonly determined by clustering. Training inputs can be clustered in a fully unsupervised manner (input clustering), or some supervision can be introduced, for example, by concatenating the input vectors with weighted output vectors (input–output clustering). In this paper, we propose to apply clustering separately for each class (class-specific clustering). The idea has been used in some previous works, but without evaluating the benefits of the approach. We compare the class-specific, input, and input–output clustering approaches in terms of classification performance and computational efficiency when training RBFNNs. To accomplish this objective, we apply three different clustering algorithms and conduct experiments on 25 benchmark data sets. We show that the class-specific approach significantly reduces the overall complexity of the clustering, and our experimental results demonstrate that it can also lead to a significant gain in the classification performance, especially for networks with relatively few Gaussian neurons. Among other applied clustering algorithms, we combine, for the first time, a dynamic evolutionary optimization method, multidimensional particle swarm optimization, with class-specific clustering to optimize the number of cluster centroids and their locations.
['Jenni Raitoharju', 'Serkan Kiranyaz', 'Moncef Gabbouj']
Training Radial Basis Function Neural Networks for Classification via Class-Specific Clustering
722,702
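A minimal sketch of the class-specific clustering idea described above, assuming scikit-learn's KMeans and a fixed per-class neuron count; the paper additionally optimizes the number and locations of centroids with multidimensional particle swarm optimization, which is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def class_specific_centers(X, y, per_class):
    """Cluster each class separately and pool the centroids as RBF centers.

    Each clustering runs on a class-sized subset rather than on all of X,
    which is where the complexity reduction comes from, and every Gaussian
    neuron is tied to a single class."""
    centers = []
    for label in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10).fit(X[y == label])
        centers.append(km.cluster_centers_)
    return np.vstack(centers)
```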
This paper proposes a 3D human body pose reconstruction system based on videos captured from any perspective view of a monocular camera. The appearance, color and temporal information extracted from the video frames are effectively combined to accurately track 2D body features. This view invariant system overcomes the challenges of requiring the modeled human to be viewed from a pre-specified angular perspective so as to initialize the 3D body model configuration, as well as to continuously find the best match between the tracked 2D features with the 3D model based on the downhill simplex algorithm. The matching information of 3D poses are also fed back to assist 2D tracking, which eventually provides more reliable 3D tracking performance.
['Shian-Ru Ke', 'Jenq-Neng Hwang', 'Kung-Ming Lan', 'Shen-Zheng Wang']
View-invariant 3D human body pose reconstruction using a monocular video camera
152,391
In home-based care, reliable contextual information of remotely monitored patients should be generated by correctly recognizing the activities to prevent hazardous situations of the patient. It is difficult to achieve a higher confidence level of contextual information for several reasons. First, low-level data from multisensors have different degrees of uncertainty. Second, generated contexts can be conflicting, even though they are acquired by simultaneous operations. We propose the static evidential fusion process (SEFP) as a context-reasoning method. The context-reasoning method processes sensor data with an evidential form based on the Dezert-Smarandache theory (DSmT). The DSmT approach reduces ambiguous or conflicting contextual information in multisensor networks. Moreover, we compare SEFP based on DSmT with traditional fusion processes such as Bayesian networks and the Dempster-Shafer theory to understand the uncertainty analysis in decision making and to show the improvement of the DSmT approach compared to the others.
['Hyun Lee', 'Jae Sung Choi', 'Ramez Elmasri']
A Static Evidential Network for Context Reasoning in Home-Based Care
44,388
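As a point of reference for the comparison above, here is a minimal sketch of the classical Dempster-Shafer combination rule, one of the baselines the paper evaluates against. DSmT-based fusion (the paper's SEFP) handles conflicting mass differently and is not reproduced here; the example masses are made up.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster-Shafer combination of two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Mass assigned to empty intersections (conflict) is renormalized away,
    which is precisely what DSmT-style rules handle differently."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical sensor readings about a patient's activity:
m1 = {frozenset({"fall"}): 0.6, frozenset({"fall", "sit"}): 0.4}
m2 = {frozenset({"sit"}): 0.3, frozenset({"fall", "sit"}): 0.7}
print(dempster_combine(m1, m2))
```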
There has recently been a great deal of work focused on developing statistical models of graph structure---with the goal of modeling probability distributions over graphs from which new, similar graphs can be generated by sampling from the estimated distributions. Although current graph models can capture several important characteristics of social network graphs (e.g., degree, path lengths), many of them do not generate graphs with sufficient variation to reflect the natural variability in real world graph domains. One exception is the mixed Kronecker Product Graph Model (mKPGM), a generalization of the Kronecker Product Graph Model, which uses parameter tying to capture variance in the underlying distribution [10]. The enhanced representation of mKPGMs enables them to match both the mean graph statistics and their spread as observed in real network populations, but unfortunately to date, the only method to estimate mKPGMs involves an exhaustive search over the parameters. In this work, we present the first learning algorithm for mKPGMs. The O(|E|) algorithm searches over the continuous parameter space using constrained line search and is based on the simulated method of moments, where the objective function minimizes the distance between the observed moments in the training graph and the empirically estimated moments of the model. We evaluate the mKPGM learning algorithm by comparing it to several different graph models, including KPGMs. We use multi-dimensional KS distance to compare the generated graphs to the observed graphs and the results show mKPGMs are able to produce a closer match to real-world graphs (10-90% reduction in KS distance), while still providing natural variation in the generated graphs.
['Sebastian Moreno', 'Jennifer Neville', 'Sergey Kirshner']
Learning mixed kronecker product graph models with simulated method of moments
473,665
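A minimal sketch of a simulated-method-of-moments objective of the kind described above. `sample_graph_stats` is a hypothetical user-supplied sampler returning a moment vector for one generated graph; mKPGM's parameter tying and the constrained line search are omitted.

```python
import numpy as np

def smm_objective(theta, observed_moments, sample_graph_stats, n_samples=50):
    """Squared distance between the observed moment vector and the mean
    moment vector of graphs simulated from the model at parameters theta
    (e.g., degree and path-length statistics)."""
    sims = np.array([sample_graph_stats(theta) for _ in range(n_samples)])
    return float(np.sum((sims.mean(axis=0) - np.asarray(observed_moments)) ** 2))
```

A line search over theta then minimizes this objective, matching the training graph's moments in expectation.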
It is argued that the dynamics of an application domain is best modeled as patterns of change in the entities that make up the domain. An abstraction mechanism for semantic data models is described which represents the transition of domain entities among entity classes. The model of transitions is related to a general computational formalism with well-understood properties. It is shown that the transition abstraction mechanism facilitates the accurate conceptual modeling of the static nature of the domain, assists in the design of database transactions, enables certain kinds of inference, and leads to the ability of a database to actively respond at a high level to low-level updates of the data it contains.
['Gary Hall', 'Ranabir Gupta']
Modeling transition
685,533
A new processor allocation scheme for hypercube systems, called the HPA (heuristic processor allocation) strategy, is presented. In this scheme, an undirected graph, called the SC-graph (Subcube-graph), is used to maintain the free subcubes available in the system, which are represented by vertices. An allocation request for a k-cube is satisfied by finding a free subcube of dimension k in the SC-graph or by decomposing a nearest higher-dimension subcube. If there is more than one subcube of dimension k, a subcube which has minimum degree in the SC-graph is selected to reduce the external fragmentation. When deallocating a released subcube, a heuristic algorithm is used to keep the dimension of free subcubes as high as possible. It is theoretically shown that the HPA strategy is not only statically optimal but also has complete subcube recognition capability in a dynamic environment. Extensive simulation results show that the HPA strategy improves the performance and significantly reduces the allocation/deallocation time compared to previously proposed schemes.
['Sang Youl Yoon', 'Ohan Kang', 'Hyunsoo Yoon', 'Seung Ryoul Maeng', 'Jung Wan Cho']
A heuristic processor allocation strategy in hypercube systems
166,592
Of the nearly 4 million births that occur each year in the U.S., almost 1 in 3 is a cesarean delivery. Due to the various increased risks associated with cesarean sections (C-sections) and the potential major complications in subsequent pregnancies, a re-evaluation of the C-section rate has been a topic of major concern for patients and health care providers. To evaluate the current C-section rate due to a "failure-to-progress" diagnosis, we implement a percentile matching procedure to derive labor progression times needed to replicate the delivery process in a discrete event simulation for women undergoing a trial of labor. The goals are to: (1) model the natural progression of labor in absence of C-sections, (2) determine the underlying rules responsible for the current rate of cesarean deliveries due to a "failure-to-progress" diagnosis, and (3) develop stopping rules that reduce the number of cesarean deliveries and the rate of complications.
['Karen Hicklin', 'Julie S. Ivy', 'James R. Wilson', 'Evan R Myers']
Using percentile matching to simulate labor progression and the effect of labor duration on birth complications
660,192
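A minimal illustration of percentile matching under an assumed lognormal model for a labor progression time: two observed percentiles pin down the two parameters. The distribution choice and the numbers in the usage line are hypothetical, not taken from the study.

```python
from math import log
from scipy.stats import norm

def lognormal_from_percentiles(q1, p1, q2, p2):
    """Fit a lognormal by matching two observed percentiles.

    Solves ln(q) = mu + sigma * z_p at the two probability levels p1, p2,
    where z_p is the standard normal quantile."""
    z1, z2 = norm.ppf(p1), norm.ppf(p2)
    sigma = (log(q2) - log(q1)) / (z2 - z1)
    mu = log(q1) - sigma * z1
    return mu, sigma

# e.g., a median of 6 h and a 90th percentile of 14 h (hypothetical values):
mu, sigma = lognormal_from_percentiles(6.0, 0.50, 14.0, 0.90)
```

Sampling from the fitted distribution then drives the stage durations inside the discrete event simulation.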
Accurate tumor segmentation is an essential and crucial step for computer-aided brain tumor diagnosis and surgical planning. Subjective segmentations are widely adopted in clinical diagnosis and treatment, but they are neither accurate nor reliable. An automatic and objective system for brain tumor segmentation is strongly expected. However, such systems still face challenges such as low segmentation accuracy, the demand for a priori knowledge, or the requirement of human intervention. In this paper, a novel coarse-to-fine method is proposed to segment the brain tumor. This hierarchical framework consists of preprocessing, deep learning network based classification and post-processing. The preprocessing is used to extract image patches for each MR image and obtains the gray level sequences of image patches as the input of the deep learning network. The deep learning network based classification is implemented by a stacked auto-encoder network to extract the high-level abstract feature from the input, and utilizes the extracted feature to classify image patches. After mapping the classification result to a binary image, the post-processing is implemented by a morphological filter to get the final segmentation result. In order to evaluate the proposed method, the experiment was applied to segment the brain tumor for a real patient dataset. The final performance shows that the proposed brain tumor segmentation method is more accurate and efficient.
['Zhe Xiao', 'Ruohan Huang', 'Yi Ding', 'Tian Lan', 'RongFeng Dong', 'Zhiguang Qin', 'Xinjie Zhang', 'Wei Wang']
A deep learning-based segmentation method for brain tumor in MR images
964,997
In this paper, we propose and analyze a novel q-duplex radio frequency/free space optical (RF/FSO) relaying protocol for a mixed RF and hybrid RF/FSO communication system. In our scheme, several mobile users transmit their data over an RF link to a relay node (e.g. a small cell base station) and the relay forwards the information to a destination (e.g. a macro cell base station) over a hybrid RF/FSO backhaul link. The RF links are full-duplex with respect to the FSO link and half-duplex with respect to each other, i.e., either the user-relay RF link or the relay-destination RF link is active. Depending on the channel statistics, the q-duplex relaying protocol may reduce to full-duplex relaying, when the quality of the FSO link is sufficiently high, or to half-duplex relaying, when the FSO link becomes unavailable due to severe atmospheric conditions. We derive an analytical expression for the end-to-end outage probability of the proposed protocol when the fading for the user-relay RF link, the relay-destination RF link, and the relay-destination FSO link are modelled as Rayleigh, Ricean, and Gamma-Gamma distributed, respectively. Our simulation results confirm the analytical derivations and reveal the effectiveness of the proposed q-duplex protocol and its superiority compared to existing schemes.
['Vahid Jamali', 'Diomidis S. Michalopoulos', 'Murat Uysal', 'Robert Schober']
Outage analysis of q-duplex RF/FSO relaying
714,838
Transfer Metric Learning for Kinship Verification with Locality-Constrained Sparse Features
['Yanli Zhang', 'Bo Ma', 'Lianghua Huang', 'Hongwei Hu']
Transfer Metric Learning for Kinship Verification with Locality-Constrained Sparse Features
687,383
A distributed linear quadratic decision problem is considered, where several different controllers act as a team, but with access to different measurements. Previous contributions have shown how to state the optimal synthesis as a finite-dimensional convex optimization problem. This paper shows that the dynamic behavior can be optimized by a distributed iterative procedure, without any need for a globally available model or centralized coordination. An illustrative model with three agents is considered.
['Anders Rantzer']
On Prize Mechanisms in linear quadratic team theory
36,581
Research productivity distributions exhibit heavy tails because it is common for a few researchers to accumulate the majority of the top publications and their corresponding citations. Measurements of this productivity are very sensitive to the field being analyzed and the distribution used. In particular, distributions such as the lognormal distribution seem to systematically underestimate the productivity of the top researchers. In this article, we propose the use of a (log)semi-nonparametric distribution (log-SNP) that nests the lognormal and captures the heavy tail of the productivity distribution through the introduction of new parameters linked to high-order moments. The application uses scientific production data on 140,971 researchers who have produced 253,634 publications in 18 fields of knowledge (O’Boyle and Aguinis in Pers Psychol 65(1):79–119, 2012) and publications in the field of finance of 330 academic institutions (Borokhovich et al. in J Finance 50(5):1691–1717, 1995), and shows that the log-SNP distribution outperforms the lognormal and provides more accurate measures for the high quantiles of the productivity distribution.
['Lina M. Cortés', 'Andrés Mora-Valencia', 'Javier Perote']
The productivity of top researchers: A semi-nonparametric approach
690,924
Affective algorithmic composition is a growing field that combines perceptually motivated affective computing strategies with novel music generation. This article presents work toward the development of one application. The long-term goal is to develop a responsive and adaptive system for inducing affect that is both controlled and validated by biophysical measures. Literature documenting perceptual responses to music identifies a variety of musical features and possible affective correlations, but perceptual evaluations of these musical features for the purposes of inclusion in a music generation system are not readily available. A discrete feature, rhythmic density (a function of note duration in each musical bar, regardless of tempo), was selected because it was shown to be well-correlated with affective responses in existing literature. A prototype system was then designed to produce controlled degrees of variation in rhythmic density via a transformative algorithm. A two-stage perceptual evaluation of a stimulus set created by this prototype was then undertaken. First, listener responses from a pairwise scaling experiment were analyzed via Multidimensional Scaling Analysis (MDS). The statistical best-fit solution was rotated such that stimuli with the largest range of variation were placed across the horizontal plane in two dimensions. In this orientation, stimuli with deliberate variation in rhythmic density appeared farther from the source material used to generate them than from stimuli generated by random permutation. Second, the same stimulus set was then evaluated according to the order suggested in the rotated two-dimensional solution in a verbal elicitation experiment. A Verbal Protocol Analysis (VPA) found that listener perception of the stimulus set varied in at least two commonly understood emotional descriptors, which might be considered affective correlates of rhythmic density. Thus, these results further corroborate previous studies wherein musical parameters are monitored for changes in emotional expression and that some similarly parameterized control of perceived emotional content in an affective algorithmic composition system can be achieved and provide a methodology for evaluating and including further possible musical features in such a system. Some suggestions regarding the test procedure and analysis techniques are also documented here.
['Duncan Williams', 'Alexis Kirke', 'Eduardo Reck Miranda', 'Ian Daly', 'James Hallowell', 'James Weaver', 'Asad Malik', 'Etienne B. Roesch', 'Faustina Hwang', 'Slawomir J. Nasuto']
Investigating Perceived Emotional Correlates of Rhythmic Density in Algorithmic Music Composition
223,768
In this paper, a regulated dual-phase charge pump with compact size is presented. This charge pump uses the dual-phase technique to reduce the output ripple and proposes a new power stage to ensure the stability of the overall system. The charge pump provides an output voltage of 5V and a maximum load current of 50 mA with constant-frequency regulation. The design is based on TSMC 0.35 μm 3.3V/5V CMOS technology.
['Chun-Yu Hsieh', 'Po-Chin Fan', 'Ke-Horng Chen']
A Dual Phase Charge Pump with Compact Size
512,116
We consider the design of coding schemes for the wireless two-way relaying channel when there is no channel state information at the transmitter. In the spirit of the compute-and-forward paradigm, we present a multilevel coding scheme that permits reliable computation (or, decoding) of a class of functions at the relay. The function to be computed (or decoded) is then chosen depending on the channel realization. We define such a class of functions which can be decoded at the relay using the proposed coding scheme and derive rates that are universally achievable over a set of channel gains when this class of functions is used at the relay. We develop our framework with general modulation formats in mind, but numerical results are presented for the case where each node transmits using 4-ary and 8-ary modulation schemes. Numerical results demonstrate that the flexibility afforded by our proposed scheme results in substantially higher rates than those achievable by always using a fixed function or considering only linear functions over higher order fields.
['Brett Hern', 'Krishna R. Narayanan']
Multilevel Coding Schemes for Compute-and-Forward With Flexible Decoding
36,172
This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of the 2016 Clinical TempEval challenge. Clinical TempEval 2016 addressed the problem of temporal reasoning in the clinical domain by providing annotated clinical notes and pathology reports, similar to the Clinical TempEval challenge 2015. The Clinical TempEval challenge consisted of six subtasks. The Hitachi team participated in two time expression based subtasks: time expression span detection (TS) and time expression attribute identification (TA), for which we developed a hybrid of rule-based and machine learning based methods using the Stanford TokensRegex framework and the Stanford Named Entity Recognizer, and evaluated it on the THYME corpus. Our hybrid system achieved a maximum F-score of 0.73 for identification of time spans (TS) and 0.71 for identification of time attributes (TA).
['P R Sarath', 'Manikandan R', 'Yoshiki Niwa']
Hitachi at SemEval-2016 Task 12: A Hybrid Approach for Temporal Information Extraction from Clinical Notes.
822,553
Increased customer needs and intensified global competition require intelligent and flexible automation. The interaction technology of mobile robotics addresses this, so it holds great potential within the industry.
['Mads Hvilshøj', 'Simon Bøgh', 'Ole Madsen', 'Morten Kristiansen']
The mobile robot “Little Helper”: Concepts, ideas and working principles
258,593
For submicron integrated circuits, 3D numerical techniques are required to accurately compute the values of the interconnect capacitances. In this paper, we describe a hierarchical capacitance extraction method that efficiently extracts 3D interconnect capacitances of large regular layout structures such as RAMs and array multipliers. The method is based on a 3D capacitance extraction method that uses a boundary-element technique and approximate matrix inversion to efficiently compute 3D interconnect capacitances for flat layout descriptions. The latter method has a computational complexity O(Z), where Z is the size of the layout. In the worst case, the hierarchical extraction method has computational complexity O(B+U), where B is the total size of the boundary area between all circuit parts in which the circuit is decomposed, and U is the total size of the parts of the circuit that are unique. The method has been implemented in the layout-to-circuit extractor SPACE, which uses as input a hierarchical layout description of the circuit. It produces as output a netlist containing transistors, resistances, ground capacitances, and coupling capacitances between conductor parts that are near each other.
['A.J. van Genderen', 'N.P. van der Meijs']
Hierarchical extraction of 3D interconnect capacitances in large regular VLSI structures
530,897
We characterize publication and retrieval of structured documents in peer-to-peer (P2P) file-sharing systems, based on the abstract notion of community, encompassing a shared document schema, a network protocol and data presentation tools. We present an extension of this model to manage multiple communities, and to describe relations between documents or communities. Our approach is based on the idea of reifying complex concepts to structured documents, then sharing these documents in the P2P network. The design of our prototype P2P client involves components interacting asynchronously using the blackboard model. This decoupled architecture allows the system to dynamically extend its query processing functionality by creating new components that implement the processing described in downloaded documents.
['Alan Davoust', 'Babak Esfandiari']
Towards Semantically Enhanced File-Sharing
424,041
Discrete approaches for image analysis.
['Nicolas Passat']
Discrete approaches for image analysis.
778,915
The paper presents a new distribution network which is capable of concentrating and shifting the incoming active packets simultaneously, without requiring dummy destination address generation and extraction processes. It has the structure of a reverse banyan network (RBN) and consists of controlled switching elements (CSEs), which are obtained by extending the passive iterative cells introduced by Narasimha [1994]. The CSE-based RBN has a set of external control inputs (ECIs) in addition to the data input and output lines and can generate different output patterns according to the ECI values. It is shown through four properties that the CSE-based RBN can perform the distribution function of the conventional distributor. In addition, the properties rigorously describe how to determine the set of ECI values to achieve the desired distribution function, which includes distribution in the normal mode, in the reversed mode, and in alternation of these two modes. The proposed CSE-based distributor can be applied to a variety of occasions by modifying the use of the counter, the numbers to write on the registers, and the table to store the ECI values. Some useful examples are demonstrated through applications to shift-sequence permutation, N×R concentration, nonblocking point-to-point switching, and virtual FIFO queueing.
['Jeong Gyu Lee', 'Byeong Gi Lee']
A new distribution network based on controlled switching elements and its applications
267,634
Functional realism focuses on helping users with task execution through an enhanced perception of the augmented scene. This work applies a common visualization technique, Ghosting, to improve depth perception in Augmented Reality scenes. Computer Vision and Image Processing techniques are used to extract natural features from a real scene, which guide the assignment of transparency to each pixel of the virtual object and provide the ghosting effect while blending the virtual object into the real scene. A moving object in a real scene catches users' attention. So, it is expected that natural and important visual information of the scene does not get occluded when the moving object passes over it. Because of that, the main contribution of this work is the inclusion of a motion detection technique in the scene feature analysis step of the Ghosting technique pipeline. A qualitative evaluation of the results achieved shows that the case studies of this work, in indoor and outdoor environments, using the proposed technique led to a better depth perception of the augmented scene, preserving the most relevant information for visual attention.
['Arthur Padilha', 'Veronica Teichrieb']
Motion-Aware Ghosted Views for Single Layer Occlusions in Augmented Reality
557,670
In this paper, a MIMO Broadcast Channel (MIMO-BC) with a large number (K) of users is considered. It is assumed that all users have a hard delay constraint D. We propose a scheduling algorithm for maximizing the throughput of the system, while satisfying the delay constraint for all users. It is proved that by using the proposed algorithm, it is possible to achieve the maximum throughput and maximum fairness in the network, simultaneously, in the asymptotic case of K → ∞. We introduce a new performance metric for the network, called "minimum average throughput", and prove that the proposed algorithm is capable of maximizing the minimum average throughput in a MIMO-BC, in the asymptotic case of K → ∞. Finally, it is established that the proposed algorithm reaches the boundaries of the capacity region and stability region of the network, simultaneously, in the asymptotic case of K → ∞.
['Alireza Bayesteh', 'Mehdi Ansari Sadrabadi', 'Amir K. Khandani']
Is it possible to achieve the optimum throughput and fairness simultaneously in a MIMO Broadcast Channel
327,564
The artificial immune algorithm is a hot topic in much research, such as intrusion detection systems, information retrieval systems and data mining systems. The negative selection algorithm is the typical artificial immune algorithm. The common representation of binary strings for antibodies (detectors) and antigens has been associated with inefficiencies when generating detectors and inspecting antigens. We use a single integer to represent the detector, which provides the basis for improving the efficiency of the negative selection algorithm. In the detector generation algorithm, substrings of self whose length is larger than a threshold are extracted and converted to single integers in a numerical interval; the remaining integers in the interval are then selected as numerical detectors. This reduces the time and space overhead of detector generation and makes it easy to analyze the positive and negative errors in antigen inspection. The numerical matching rule is given. A B-tree is used to create an index of the numerical detectors. Substrings of an antigen whose length is larger than the threshold are extracted and converted to integers; if the same value as one of these integers exists in the index of numerical detectors, then the antigen matches one numerical detector. This improves the efficiency of antigen inspection. Finally, prototypes of the numerical negative selection algorithm and the negative selection algorithm are implemented to test the overhead of detector generation and antigen inspection using a live data set. The results show that the numerical negative selection algorithm can reduce the time and space overhead and avoid fluctuation of the overhead.
['Tao Cai', 'Shiguang Ju', 'Dejiao Niu']
NUMERICAL NEGATIVE SELECTION ALGORITHM
160,677
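A minimal sketch of the integer-encoding idea described above: self substrings are converted to integers, the remaining values of the interval become numerical detectors, and antigen inspection is a membership test. A plain Python set stands in for the paper's B-tree index, and the example strings are made up.

```python
def substrings_to_ints(s, r):
    """All length-r windows of a binary string, encoded as integers."""
    return {int(s[i:i + r], 2) for i in range(len(s) - r + 1)}

def generate_detectors(self_set, r):
    """Numerical negative selection: every r-bit value not covered by any
    self string becomes a detector."""
    covered = set()
    for s in self_set:
        covered |= substrings_to_ints(s, r)
    return set(range(2 ** r)) - covered

def matches(antigen, detectors, r):
    """An antigen matches if any of its r-bit windows equals a detector."""
    return not substrings_to_ints(antigen, r).isdisjoint(detectors)

detectors = generate_detectors({"11010010", "00101110"}, r=4)
print(matches("01100110", detectors, r=4))  # True: window 0110 is a detector
```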
The most serious problem in the area of quantitative security evaluation is the modeling of hackers' behavior. Because of the intelligent and complicated mental aspects of hackers, there are many challenges in modeling their behavior. Recently, there have been some efforts to use game theory for predicting hackers' behavior. However, it is necessary to revise the proposed approaches if there is a society of hackers with significant diversity in their behaviors. In this paper, we have examined our newly introduced approach to extend the basic ideas of using game theory to predict transition rates in stochastic models. The proposed method categorizes the society of hackers based on two main criteria used widely in hacker classification: motivations and skills. Markov chains are used to model the system. Based on the preferences of each class of hackers and the distribution of skills in each class, the transition rates between the states are computed. The resulting Markov chains can be solved to obtain the corresponding security measures of the system. We have explored some of the applications of the method and have shown that the method facilitates the study of relationships between important factors of hacker/defender societies and different security measures of the system.
['Behzad Zare Moayedi', 'Mohammad Abdollahi Azgomi']
A Game Theoretic Approach for Quantitative Evaluation of Security by Considering Hackers with Diverse Behaviors
270,920
LTE (Long Term Evolution) is a next major step in mobile radio communications, and will be introduced as Release 8 in the 3rd Generation Partnership Project (3GPP). The new evolution aims to reduce delays, improve spectrum flexibility and reduce cost for operators and end users [1]. To fulfil these targets, new enabling technologies need to be integrated into the current 3G radio network architectures. Multiple Input and Multiple Output (MIMO) is one of the crucial enabling technologies in the LTE system particularly in the downlink to achieve the required peak data rate. The unitary codebook based precoding technique is proposed in the standard to increase the capacity of the system. This paper presents a link level analysis of the LTE downlink and an investigation of the performance of both Single User (SU) MIMO and Multi User (MU) MIMO with codebook based unitary precoding.
['Kian Chung Beh', 'Angela Doufexi', 'Simon M D Armour']
On the performance of SU-MIMO and MU-MIMO in 3GPP LTE downlink
140,383
In this paper, a novel intra coding scheme is proposed. The proposed scheme improves H.264 intra coding from three aspects: 1) H.264 intra prediction is enhanced with additional bi-directional intra prediction modes; 2) H.264 integer transform is supplemented with directional transforms for some prediction modes; and 3) residual coefficient coding in CAVLC is improved. Compared to H.264, together the improvements can bring on average 7% and 10% coding gain for CABAC and for CAVLC, respectively, with average coding gain of 12% for HD sequences.
['Yan Ye', 'Marta Karczewicz']
Improved h.264 intra coding based on bi-directional intra prediction, directional transform, and adaptive coefficient scanning
473,346
An approach based on a hybrid genetic algorithm (HGA) is proposed for image denoising. In this problem, a digital image corrupted by a noise level must be recovered without losing important features such as edges, corners and texture. The HGA introduces a combination of a genetic algorithm (GA) with image denoising methods. During the evolutionary process, this approach applies some state-of-the-art denoising methods and filtering techniques, respectively, as local search and mutation operators. A set of digital images, commonly used by the scientific community as a benchmark, is contaminated by different levels of additive Gaussian noise. Another set composed of some Synthetic Aperture Radar (SAR) images, corrupted with multiplicative speckle noise, is also used during the tests. First, the computational tests evaluate several alternative designs of the proposed HGA. Next, our approach is compared against literature methods on the two mentioned sets of images. The HGA performance is competitive for the majority of the reported results, outperforming several state-of-the-art methods for images with high levels of noise.
['Jônatas Lopes de Paiva', 'Claudio Fabiano Motta Toledo', 'Helio Pedrini']
An approach based on hybrid genetic algorithm applied to image denoising problem
551,735
FSSGR: Feature Selection System to Dynamic Gesture Recognition
['Diego G. S. Santos', 'Rodrigo C. Neto', 'Bruno J. T. Fernandes', 'Byron L. D. Bezerra']
FSSGR: Feature Selection System to Dynamic Gesture Recognition
683,900
U-business uses ubiquitous computing technologies to support uninterrupted communications in business transactions to gain competitive advantage. This study seeks to identify patterns underlying successful U-business growth. It follows four stages: (1) dimensions important for the study of U-business growth strategies were identified through examining past cases; two dimensions emerged: nature of change (Improvement vs. Innovation) and implementation environment (Physical vs. Virtual Value Chain). (2) These dimensions were then used to build a taxonomy of U-business types, and a growth strategy type was defined as a transition from one of these types to another. (3) A focus group consisting of U-business and UbiComp experts was then used to validate these dimensions and generate hypotheses about successful strategies. (4) These strategies were then tested using the transition strategies that Apple has used in its product offerings. These findings are used to provide guidelines about growth strategies for U-business companies.
['Changsu Kim', 'Jintae Lee', 'Stephen Bradley']
U-business: a taxonomy and growth strategies
668,956
For acoustic modeling, recurrent neural networks (RNNs) using Long Short-Term Memory (LSTM) units have recently been shown to outperform deep neural network (DNN) models. This paper focuses on resolving two challenges faced by LSTM models: high model complexity and poor decoding efficiency. Motivated by our analysis of gate activations and functions, we present two LSTM simplifications: deriving input gates from forget gates, and removing recurrent inputs from output gates. To accelerate decoding of LSTMs, we propose to apply frame skipping during training, and frame skipping and posterior copying (FSPC) during decoding. In the experiments, the model simplifications reduce the size of LSTM models by 26%, resulting in a simpler model structure. Meanwhile, the application of FSPC speeds up model computation by 2 times during LSTM decoding. All these improvements are achieved at the cost of 1% WER degradation.
['Yajie Miao', 'Jinyu Li', 'Yongqiang Wang', 'Shi-Xiong Zhang', 'Yifan Gong']
Simplifying long short-term memory acoustic models for fast training and decoding
741,083
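A sketch of one cell update under one plausible reading of the two simplifications above: the input gate is derived as i = 1 − f, and the output gate drops its recurrent term. This is an interpretation of the abstract, not the authors' exact equations; the parameter containers `W`, `U`, `b` are assumed dicts of NumPy arrays keyed by gate name.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simplified_lstm_step(x, h, c, W, U, b):
    """One step of a simplified LSTM cell (illustrative assumption)."""
    f = sigmoid(W["f"] @ x + U["f"] @ h + b["f"])   # forget gate
    i = 1.0 - f                                     # input gate derived from f
    g = np.tanh(W["g"] @ x + U["g"] @ h + b["g"])   # candidate cell state
    c_new = f * c + i * g
    o = sigmoid(W["o"] @ x + b["o"])                # output gate: no recurrent input
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Coupling the gates removes one full set of input-gate parameters, which is consistent with the reported ~26% size reduction.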
The size-Ramsey number of a graph G is the minimum number of edges in a graph H such that every 2-edge-coloring of H yields a monochromatic copy of G. Size-Ramsey numbers of graphs have been studied for almost 40 years, with particular focus on the case of trees and bounded degree graphs. We initiate the study of size-Ramsey numbers for k-uniform hypergraphs. Analogous to the graph case, we consider the size-Ramsey number of cliques, paths, trees, and bounded degree hypergraphs. Our results suggest that size-Ramsey numbers for hypergraphs are extremely difficult to determine, and many open problems remain.
['Andrzej Dudek', 'Steven La Fleur', 'Dhruv Mubayi', 'Vojtech Rödl']
On the size-Ramsey number of hypergraphs
408,107
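In standard notation, the definition given above reads (with H → (G)₂ meaning that every 2-edge-coloring of H contains a monochromatic copy of G):

```latex
\hat{r}(G) \;=\; \min\bigl\{\, |E(H)| \;:\; H \rightarrow (G)_2 \,\bigr\}
```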
In this study, we propose a hybrid fruit fly optimization algorithm (HFOA) to solve the hybrid flowshop rescheduling problem with flexible processing time in steelmaking casting systems. First, machine breakdown and processing variation disruptions are considered simultaneously in the rescheduling problem. Second, each solution is represented by a fruit fly with a well-designed solution representation. Third, two novel decoding heuristics considering the problem characteristics, which can significantly improve the solution quality, are developed. Several routing and scheduling neighborhood structures are proposed to balance the exploration and exploitation abilities. Finally, we propose an effective HFOA with well-designed smell and vision search procedures. In addition, an iterated greedy (IG) local search is embedded in the proposed algorithm to further enhance its exploitation ability. The proposed algorithm is tested on sets of instances generated from industrial data. Through comprehensive computational comparisons and statistical analyses, the performance of the proposed HFOA algorithm is favorably compared against several algorithms in terms of both solution quality and efficiency.
['Jun-qing Li', 'Quan-Ke Pan', 'Kun Mao']
A Hybrid Fruit Fly Optimization Algorithm for the Realistic Hybrid Flowshop Rescheduling Problem in Steelmaking Systems
703,574
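The smell/vision loop at the core of fruit fly optimization is compact enough to sketch. Below is a generic continuous toy version, not the paper's HFOA, which operates on discrete schedules, adds problem-specific decoding heuristics and neighborhood structures, and embeds an IG local search; the swarm size, search radius, and iteration count here are illustrative assumptions.

```python
import random

def foa_minimize(f, x0, n_flies=20, radius=0.5, iters=100):
    """Minimal fruit fly optimization sketch (continuous toy version).

    Smell phase: each fly samples a random point near the swarm center.
    Vision phase: the swarm relocates to the best-smelling point found.
    """
    center, best = list(x0), f(x0)
    for _ in range(iters):
        for _ in range(n_flies):
            cand = [c + random.uniform(-radius, radius) for c in center]
            val = f(cand)
            if val < best:              # vision: move toward the best smell
                best, center = val, cand
    return center, best

# Toy usage: minimize the sphere function from a poor starting point.
x, v = foa_minimize(lambda p: sum(c * c for c in p), [3.0, -2.0])
print(x, v)
```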
MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer.
['Marcel Keller', 'Emmanuela Orsini', 'Peter Scholl']
MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer.
985,185
Ontology based Description of Analytic Methods for Electrophysiology.
['Jan Stebeták', 'Roman Moucek']
Ontology based Description of Analytic Methods for Electrophysiology.
720,790
(Nothing else) MATor(s): Monitoring the Anonymity of Tor's Path Selection.
['Michael Backes', 'Aniket Kate', 'Sebastian Meiser', 'Esfandiar Mohammadi']
(Nothing else) MATor(s): Monitoring the Anonymity of Tor's Path Selection.
779,456
The Operations Research model known as the Set Covering Problem has a wide range of applications. See, for example, the survey by Ceria, Nobili and Sassano in the volume edited by Dell'Amico, Maffioli and Martello (Annotated Bibliographies in Combinatorial Optimization, Wiley, New York, 1997). Sometimes, due to the special structure of the constraint matrix, the natural linear programming relaxation yields an optimal solution that is integer, thus solving the problem. Under which conditions do such integrality properties hold? This question is of both theoretical and practical interest. On the theoretical side, polyhedral combinatorics and graph theory come together in this rich area of discrete mathematics. In this tutorial, we present the state of the art and open problems on this question.
['Gérard Cornuéjols', 'Bertrand Guenin']
Ideal clutters
716,810
In this paper, we propose a multilevel cooperative coevolution (MLCC) framework for large scale optimization problems. The motivation is to improve our previous work on grouping based cooperative coevolution (EACC-G), which has a hard-to-determine parameter, group size, in tackling problem decomposition. The problem decomposer takes group size as a parameter to divide the objective vector into low dimensional subcomponents with a random grouping strategy. In the MLCC, a set of problem decomposers is constructed based on the random grouping strategy with different group sizes. The evolution process is divided into a number of cycles, and at the start of each cycle MLCC uses a self-adapted mechanism to select a decomposer according to its historical performance. Since different group sizes capture different interaction levels between the original objective variables, MLCC is able to self-adapt among different levels. The efficacy of the proposed MLCC is evaluated on the set of benchmark functions provided by the CEC'2008 special session.
['Zhenyu Yang', 'Ke Tang', 'Xin Yao']
Multilevel cooperative coevolution for large scale optimization
387,921
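The two moving parts of MLCC described above, random grouping into subcomponents and performance-driven selection of a decomposer, can be sketched as follows. The softmax-style selection rule is an illustrative assumption, not necessarily the paper's exact self-adaptation formula.

```python
import math
import random

def random_grouping(dim, group_size):
    """Randomly partition the variable indices into subcomponents."""
    idx = list(range(dim))
    random.shuffle(idx)
    return [idx[i:i + group_size] for i in range(0, dim, group_size)]

def select_decomposer(perf_history):
    """Choose a decomposer with probability increasing in past performance."""
    weights = [math.exp(p) for p in perf_history]
    r, acc = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# One MLCC cycle for a 1000-dimensional problem with candidate group sizes:
group_sizes = [5, 10, 25, 50]
perf = [0.0] * len(group_sizes)     # historical performance, updated per cycle
choice = select_decomposer(perf)
groups = random_grouping(1000, group_sizes[choice])
# ... evolve each subcomponent under this grouping, then update perf[choice]
```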
Based on experiments and numerical simulation, it has been widely believed that the time average mean square error in the first order sigma-delta modulator with input of bandlimited signals decays like $O(\lambda^{-3})$ as the sampling ratio $\lambda$ goes to infinity. This conjecture had remained an open problem for many years. Combining tools from number theory, harmonic analysis, real analysis and complex analysis, this paper shows that the conjecture holds in some reasonable sense.
['Wen Chen', 'Chintha Tellambura']
Time average MSE analysis for the first order sigma-delta modulator with the inputs of bandlimited signals
393,424
Several multicast protocols such as Protocol Independent Multicast (PIM) (Deering et al., 1996) and Core-Based Trees (CBT) (Ballardie et al., 1993) use the notion of group-shared trees. The reason is that construction of a minimal-cost tree spanning all members of the multicast group is expensive, hence these protocols use a core-based group-shared tree to distribute packets from all the sources. A core-based tree is a shortest-path tree rooted at some core node. The core node is also referred to as a center node or a rendezvous point. Core nodes may be chosen from some preselected set of nodes or some heuristics may be employed to select core nodes. We present distributed core selection and migration protocols for mobile ad hoc networks with dynamically changing network topology. Most protocols for core selection in static networks are not suitable for ad hoc networks, since these algorithms depend on knowledge of the entire network topology, which is not available or is too expensive to maintain in an ad hoc network with dynamic topology. The proposed core location method is based on the notion of the median node of the current multicast tree instead of the median node of the entire network. The rationale is that mobile ad hoc network graphs are in general sparse and, hence, the multicast tree is a good approximation of the entire network for the current purpose. Our adaptive distributed core selection and migration method uses the fact that the median of a tree is equivalent to the centroid of that tree. The significance of this observation is due to the fact that the computation of a tree's centroids does not require any distance information. Mobile ad hoc networks have limited bandwidth which needs to be conserved. Hence, we use the cost of the multicast tree as the sum of weights of all the links in the tree, which signifies the total bandwidth consumed for multicasting a packet. We compare the cost of the shortest-path tree rooted at the tree median, $Cost_{TM}$, with the cost of the shortest-path tree rooted at the median of the graph, $Cost_{GM}$, which requires complete topology information to compute. A network graph model for generating random ad hoc mobile networks is developed to perform this comparison. The simulation results show that for large size networks, the ratio $Cost_{TM}/Cost_{GM}$ lies between 0.8 and 1.2 for different multicast groups. Further, as the size of the multicast group increases the ratio approaches 1.
['Sandeep K. S. Gupta', 'Pradip K. Srimani']
Adaptive core selection and migration method for multicast routing in mobile ad hoc networks
331,954
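The pivotal fact in the abstract above, that the median of a tree coincides with its centroid and can therefore be located without any distance information, takes a single depth-first pass to exploit. A minimal sketch follows; the adjacency-dict format and node names are illustrative.

```python
def tree_centroid(adj):
    """Return a centroid of a tree given as {node: neighbor-list}.

    A centroid minimizes the size of the largest component left after
    removing it; only the tree structure is needed, no edge weights.
    """
    n = len(adj)
    size = {}
    best, best_val = None, n + 1

    def dfs(u, parent):
        nonlocal best, best_val
        size[u], heaviest = 1, 0
        for v in adj[u]:
            if v != parent:
                dfs(v, u)
                size[u] += size[v]
                heaviest = max(heaviest, size[v])
        heaviest = max(heaviest, n - size[u])   # component through the parent
        if heaviest < best_val:
            best, best_val = u, heaviest

    dfs(next(iter(adj)), None)
    return best

# Path a-b-c-d-e: the centroid (and median) is the middle node 'c'.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'e'], 'e': ['d']}
print(tree_centroid(adj))   # c
```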
The number of assistance systems in cars has been increasing in recent years. While these systems are targeted at supporting the individual driver and his or her safety, they may compete for the driver's attention and may demand too much of the driver's cognitive resources. Based on the established multiple resource theory, the use of different multimodal displays that guide the driver's attention without overloading individual sensory channels has been investigated in recent years. In our work, we are looking into peripheral light displays as a means to present safety-relevant information. In this paper, we present the results of an experiment in which a peripheral light display was used to show the distance to a closing vehicle. The display is an LED stripe, seamlessly integrated into the side door and the dashboard of the car. Two different light patterns were tested in an overtaking scenario in a driving simulator. One pattern encodes the expected time to collision with the left rear car by moving a light source towards the front left corner. The other pattern additionally adapts its brightness to a simplified model of the driver's certainty, to get his or her attention in uncertain situations. In contrast to previous works, we did not focus on a warning system but on a decision aid system. We found that using the adaptive pattern led to faster decisions and therefore to a smaller probability of violating safety distances. We believe that this pattern is a good basis for patterns which are fine-tuned to individual drivers as well as better driver models.
['Andreas Löcken', 'Wilko Heuten', 'Susanne Boll']
Supporting lane change decisions with ambient light
328,945
In this paper, we propose an improved MANET gateway selection scheme suitable for disaster recovery applications. Being infrastructure-less and decentralized, a MANET is well suited to restoring a network that has collapsed after a disaster. We focus on improving the throughput performance of a MANET by designing a better gateway selection scheme. The key idea is to eliminate congestion at each MANET gateway to improve the network performance. The main challenge is the mobility of nodes in a disaster recovery environment. Simulation results show that the proposed gateway selection scheme can efficiently manage the traffic distribution at each gateway in order to maximize the network performance.
['Nor Aida Mahiddin', 'Nurul I. Sarkar']
Improving the Performance of MANET Gateway Selection Scheme for Disaster Recovery
999,125
Local Changes in Marching Cubes to Generate Less Degenerated Triangles
['Thiago F. Leal', 'Aruquia Peixoto', 'Cassia I. G. Silva', 'Marcelo Dreux', 'Carlos A. de Moura']
Local Changes in Marching Cubes to Generate Less Degenerated Triangles
785,393
Bring you to the past: Automatic Generation of Topically Relevant Event Chronicles
['Tao Ge', 'Wenzhe Pei', 'Heng Ji', 'Sujian Li', 'Baobao Chang', 'Zhifang Sui']
Bring you to the past: Automatic Generation of Topically Relevant Event Chronicles
614,644
A novel scheme for deformable tracking of curvilinear structures in image sequences is presented. The approach is based on B-spline snakes defined by a set of control points whose optimal configuration is determined through efficient discrete optimization. Each control point is associated with a discrete random variable in a MAP-MRF formulation where a set of labels captures the deformation space. In such a context, generic terms are encoded within this MRF in the form of pairwise potentials. The use of pairwise potentials along with the B-spline representation offers nearly perfect approximation of the continuous domain. Efficient linear programming is considered to recover the approximate optimal solution. The method is successfully applied to the tracking of guide-wires in fluoroscopic X-ray sequences of several hundred frames which requires extremely robust techniques.
['Tim Hauke Heibel', 'Ben Glocker', 'Martin Groher', 'Nikos Paragios', 'Nikos Komodakis', 'Nassir Navab']
Discrete tracking of parametrized curves
278,370
Singapore aims to be the premier teaching and research centre for computer science in the Asia-Pacific region in the 21st century, and the National University of Singapore is taking steps to meet that objective. Excellence in teaching is promoted via continued efforts to secure top quality students and lecturers, promoting teaching quality, and establishing close links with industry to ensure that the graduates are able to meet changing industry needs. Research excellence is promoted by collaborations with top academic and research institutions and ensuring high quality research work by the academic staff.
['C. T. Chong']
Computer science education in the Asia-Pacific region in the 21st century
279,254
A distributed implementation of a parallel system is of interest because it can provide an economical source of concurrency, can be scaled easily to match the needs of particular computations, and can be fault-tolerant. A design is described for such an implementation for the Linda parallel programming system, in which processes share a memory called the tuple space. Fault tolerance is achieved by replication: by having more than one copy of the tuple space, some replicas can provide information when others are not accessible due to failures. The replication technique takes advantage of the semantics of Linda so that processes encounter little delay in accessing the tuple space. In addition to providing an efficient implementation for Linda, the study extends work on replication techniques by showing what can be done when semantics are taken into account.
['Andrew Xu', 'Barbara Liskov']
A design for a fault-tolerant, distributed implementation of Linda
389,673
Alternating-aperture phase shift masking (AAPSM), a form of strong resolution enhancement technology, will be used to image critical features on the polysilicon layer at smaller technology nodes. This technology imposes additional constraints on the layouts beyond traditional design rules. Of particular note is the requirement that all critical features be flanked by opposite-phase shifters while the shifters obey minimum width and spacing requirements. A layout is called phase assignable if it satisfies this requirement. Phase conflicts have to be removed to enable the use of AAPSM for layouts that are not phase assignable. Previous work has sought to detect a suitable set of phase conflicts to be removed as well as correct them. This paper has two key contributions: 1) a new computationally efficient approach to detect a minimal set of phase conflicts, which when corrected will produce a phase-assignable layout, and 2) a novel layout modification scheme for correcting these phase conflicts with small layout area increase. Unlike previous formulations of this problem, the proposed solution for the conflict detection problem does not frame it as a graph bipartization problem. Instead, a simpler and more computationally efficient reduction is proposed. This simplification greatly improves the runtime while maintaining the same improvements in the quality of results obtained in Chiang (Proc. DATE, 2005, p. 908). An average runtime speedup of 5.9× is achieved using the new flow. A new layout modification scheme suited for correcting phase conflicts in large standard-cell blocks is also proposed. The experiments show that the percentage area increase for making standard-cell blocks phase assignable ranges from 1.7% to 9.1%
['Charles Chiang', 'Andrew B. Kahng', 'Subarnarekha Sinha', 'Xu Xu', 'Alexander Zelikovsky']
Fast and Efficient Bright-Field AAPSM Conflict Detection and Correction
175,292
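As background to the conflict-detection step, phase assignability itself is a two-coloring question: critical features whose flanking shifters interact must receive opposite phases, so a layout is phase assignable exactly when the conflict graph is bipartite. The checker below only illustrates that definition; it is not the paper's conflict-removal reduction, which the abstract explicitly distinguishes from graph bipartization.

```python
from collections import deque

def phase_assignment(adj):
    """BFS two-coloring of a conflict graph {feature: neighbor-list}.

    Returns a {feature: 0 or 1} phase map if the layout is phase
    assignable, or None when an odd cycle (a phase conflict) exists.
    """
    phase = {}
    for start in adj:
        if start in phase:
            continue
        phase[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in phase:
                    phase[v] = 1 - phase[u]
                    queue.append(v)
                elif phase[v] == phase[u]:
                    return None       # odd cycle: unresolvable phase conflict
    return phase
```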
With the emergence of bandwidth-intensive online mobile multimedia applications in wireless networks, in order to make mobile users enjoy better Quality of Service (QoS) under the conditions of limited resources, efficient radio spectrum resource allocation schemes are always desirable. This paper addresses the problem of joint Resource Block (RB) allocation and Modulation-and-Coding Scheme (MCS) selection in the LTE femtocell DownLink (DL) for mobile multimedia applications. We first formulate the problem as an Integer Linear Program (ILP) whose objective is to minimize the number of allocated RBs of a closed femtocell, while guaranteeing minimum throughput for each user. In view of the NP-hardness of the ILP, we then propose an intelligent optimization learning algorithm called ACO-HM with reduced polynomial time complexity. The Ant Colony Optimization (ACO) learning algorithm exhibits better performance in machine learning and supports parallel search for the RB allocation, while the Harmonic Mean (HM) method selects a more appropriate MCS than the MINimum/MAXimum MCS selection schemes (MIN/MAX). Simulation results show that compared with the ACO-MIN algorithm and the ACO-MAX algorithm, the proposed ACO-HM learning algorithm achieves better performance with fewer RBs and better QoS guarantees.
['Xin Chen', 'Longfei Li', 'Xudong Xiang']
Ant colony learning method for joint MCS and resource block allocation in LTE Femtocell downlink for multimedia applications with QoS guarantees
593,435
Accurate detection and clustering are two of the main analysis tasks for remotely sensed spectral imagery. Hyper-spectral image (HSI) analysis often involves mathematically transforming the raw data into a new space using Principal Components Analysis (PCA) or similar techniques where a lower dimensional subspace containing most of the image information may be extracted. The results of standard algorithms may perform better in this new, less correlated space. Many of the currently used transformations in HSI analysis are statistical in nature and therefore place Gaussian or similar assumptions on the data distribution. A new, data driven, mathematical transformation is presented as a preprocessing step for HSI analysis. Termed the Nearest Neighbor Transformation, this new transformation does not rely on placing assumptions on the data. Instead, the approach taken is to use the pair-wise Euclidean distance between neighboring pixels in order to characterize the distribution of the data. This approach is introduced here and shown to improve analytical results from standard HSI algorithms, including anomaly detection and clustering.
['Ariel Schlamm', 'David W. Messinger']
Improved detection and clustering of hyperspectral image data by preprocessing with a euclidean distance transformation
56,426
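A sketch of what such neighbor-distance preprocessing can look like on a hyperspectral cube is given below. This is an illustrative rendering of the idea, not necessarily the paper's exact Nearest Neighbor Transformation; averaging the distances to the four adjacent pixels is an assumption.

```python
import numpy as np

def neighbor_distance_map(cube):
    """Mean spectral Euclidean distance of each pixel to its 4-neighbors.

    `cube` has shape (rows, cols, bands). The output characterizes local
    spectral variability directly from the data, with no distributional
    assumptions of the kind PCA-style transforms rely on.
    """
    right = np.linalg.norm(cube[:, 1:, :] - cube[:, :-1, :], axis=2)
    down = np.linalg.norm(cube[1:, :, :] - cube[:-1, :, :], axis=2)
    out = np.zeros(cube.shape[:2])
    cnt = np.zeros(cube.shape[:2])
    out[:, :-1] += right; cnt[:, :-1] += 1     # distance to right neighbor
    out[:, 1:] += right;  cnt[:, 1:] += 1      # ... and to left neighbor
    out[:-1, :] += down;  cnt[:-1, :] += 1     # distance to lower neighbor
    out[1:, :] += down;   cnt[1:, :] += 1      # ... and to upper neighbor
    return out / cnt
```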
Comparative analysis of the threshold SNR and/or sample support values where genuine maximum likelihood DOA estimation starts to produce “outliers” is conducted for unconditional (stochastic) and conditional (deterministic) problem formulations. Theoretical predictions based on recent results from Random Matrix Theory (RMT) are provided and supported by simulation results.
['Yuri I. Abramovich', 'Ben A. Johnson']
Comparative threshold performance study for conditional and unconditional direction-of-arrival estimation
92,641
Traffic signals are essential to provide safe driving that allows all traffic flows to share a road intersection. However, they decrease traffic flow fluency because of the queuing delay at each road intersection. In order to improve traffic efficiency all over the road network, an Intelligent Traffic Light Scheduling (ITLS) algorithm has been proposed. In this work, we introduce an ITLS algorithm based on a Genetic Algorithm (GA) merged with a Machine Learning (ML) algorithm. This algorithm schedules the time phases of each traffic light according to the real-time traffic flow that intends to cross the road intersection, while using ML to anticipate the next time phases of traffic flow at each intersection. To obtain these next time phases of traffic flow, we use the Linear Regression (LR) algorithm as the ML algorithm. The introduced algorithm aims to increase traffic fluency by decreasing the total waiting delay of all traveling vehicles at each road intersection in the road network. We compare the performance of our algorithm with the unimproved one for different simulated data. Results show that our algorithm increases traffic fluency and decreases the waiting delay by 21.5% compared with the unimproved one.
['Biao Zhao', 'Chi Zhang', 'Lichen Zhang']
Real-Time Traffic Light Scheduling Algorithm Based on Genetic Algorithm and Machine Learning
788,064
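The ML half of the scheme, predicting next-phase flow with linear regression and turning predictions into green-time shares, can be sketched compactly. The window size, cycle length, minimum green time, and the proportional split are illustrative assumptions; in the paper a GA searches the phase plan rather than applying a fixed proportional rule.

```python
import numpy as np

def predict_next_flow(history, window=4):
    """Least-squares linear trend over the last `window` flow counts."""
    y = np.asarray(history[-window:], dtype=float)
    a, b = np.polyfit(np.arange(len(y)), y, 1)   # flow ~ a*t + b
    return max(0.0, a * len(y) + b)              # extrapolate one step ahead

def green_times(flows_per_approach, cycle=60.0, min_green=5.0):
    """Split a fixed cycle among approaches proportionally to predicted flow."""
    preds = [predict_next_flow(h) for h in flows_per_approach]
    spare = cycle - min_green * len(preds)       # time left after minimums
    total = sum(preds) or 1.0                    # avoid division by zero
    return [min_green + spare * p / total for p in preds]

# Toy usage: four approaches with recent vehicle counts per phase.
print(green_times([[8, 9, 11, 13], [5, 5, 4, 4], [2, 2, 3, 2], [9, 8, 8, 7]]))
```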
Online personalization services belong to a class of economic goods with a “no free disposal” (NFD) property where consumers do not always prefer more services to less because of the privacy concerns. These concerns arise from the revelation of information necessary for the provision of personalization services. We examine vendor strategies in a market where consumers have heterogeneous concerns about privacy. In successive generalizations, we allow the vendor to offer a fixed level of personalization, variable levels of personalization, and monetary transfers (coupons) to the consumers that depend on the level of personalization chosen. We show that a vendor offering a fixed level of personalization does not offer a coupon unless his marginal value of information (MVI) is sufficiently high, and even when personalization is costless, the vendor does not cover the market. Under a fixed services offering, the vendor serves the same market with or without couponing. Next, we demonstrate that in the absence of couponing, the vendor's optimal variable personalization services contract maximizes surplus for all heterogeneous consumers, which is in contrast to standard results from monopolistic screening. When the vendor can offer coupons that vary according to personalization levels, the optimal contract is not fully revealing unless his MVI is high and he will not offer coupons when this MVI is low. However, a vendor with a moderate MVI (between certain thresholds) offers a bunched contract, wherein consumers with low privacy concerns receive a variable services-coupon contract, those with moderate privacy concerns receive a fixed services-coupon contract, and those with high privacy concerns do not participate in the market. The coupon value is decreasing in privacy sensitivity of consumers.
['Ramnath K. Chellappa', 'Shivendu Shivendu']
Mechanism Design for “Free” but “No Free Disposal” Services: The Economics of Personalization Under Privacy Concerns
240,804
This paper presents a computationally efficient technique for accurate analysis of floating-body partially depleted SOI (PD/SOI) CMOS circuits in steady state operating mode. The basic algorithm and techniques to improve the convergence and reduce simulation time are described. The methodology provides over 2 orders of magnitude improvement in simulation time compared with straightforward circuit simulation for large multiple-input circuit macros and SRAMs, thus allowing accurate analysis/assessment of the history effect in PD/SOI CMOS circuits and body voltage and Vt drifts in sensitive circuits.
['Rajiv V. Joshi', 'K.E. Kroell', 'Ching-Te Chuang']
A novel technique for steady state analysis for VLSI circuits in partially depleted SOI
307,752
Industrial assembly involves sensing the pose (orientation and position) of a part. Efficient and reliable sensing strategies can be developed for an assembly task if the shape of the part is known in advance. In this paper the authors investigate the problem of determining the pose of a convex n-gon from a set of m supporting cones, i.e., cones with both sides supporting the polygon. An algorithm with running time O(nm) which almost always reduces to O(n + m log n) is presented to solve for all possible poses of the polygon. As a consequence, the polygon inscription problem of finding all possible poses for a convex n-gon inscribed in another convex m-gon can be solved within the same asymptotic time bound. The authors prove that the number of possible poses cannot exceed 6n, given m ≥ 2 supporting cones with distinct vertices. Experiments demonstrate that two supporting cones are sufficient to determine the real pose of the n-gon in most cases. The authors' results imply that sensing in practice can be carried out by obtaining viewing angles of a planar part at multiple exterior sites in the plane. As a conclusion, the authors generalize this and other sensing methods into a scheme named sensing by inscription.
['Yan-Bin Jia', 'Michael A. Erdmann']
Sensing polygon poses by inscription
514,858
The present situation of electricity production from renewable energy sources (RES) in Portugal is analyzed, giving particular attention to the wind power sector due to its increasing importance. The evolution of the electricity system is presented along with the strategies for the sector, and future prospects for the RES. Although the interest of private companies in the wind sector is high, the administrative and grid barriers represent major obstacles to wind power development. The problem of wind intermittency and uncertainty is also discussed. The improvement of interconnection capacity and the increase of power reserve are identified as key requirements for ensuring the security of supply. A clear understanding of all these aspects is fundamental for integrated, multidimensional wind power planning.
['Paula M. T. Ferreira', 'Madalena Araújo', "M. E. J. O'Kelly"]
An overview of the Portuguese wind power sector
466,295
Load balancing and data aggregation tree routing algorithm in wireless sensor networks
['Jing Zhang', 'Ting Yang', 'Chengli Zhao']
Load balancing and data aggregation tree routing algorithm in wireless sensor networks
634,051
Policy-based automation is emerging as a viable approach to IT systems management, codifying high-level business goals into executable specifications for governing IT operations. Little is known, however, about how policies are actually made, used, and maintained in practice. Here, we report studies of policy use in IT service delivery. We found that although policies often make explicit statements, much is deliberately left implicit, with correct interpretation and execution depending critically on human judgment.
['Eser Kandogan', 'John H. Bailey', 'Paul P. Maglio', 'Eben M. Haber']
Policy-based IT automation: the role of human judgment
462,163
A Multiobjective Evolutionary Algorithm for Personalized Tours in Street Networks
['Ivanoe De Falco', 'Umberto Scafuri', 'Ernesto Tarantino']
A Multiobjective Evolutionary Algorithm for Personalized Tours in Street Networks
589,145
We prove that every planar poset $P$ of height $h$ has dimension at most $192h + 96$. This improves on previous exponential bounds and is best possible up to a constant factor.
['Gwenaël Joret', 'P Micek', 'Veit Wiechert']
Planar posets have dimension at most linear in their height
963,226
Cooperation plays an important role in distributed reasoning systems in Ambient Environment. In such systems, ambient entities with diverse perception and capabilities have to cooperate during the reasoning process by sharing parts of their local knowledge to achieve a set of common goals. Nevertheless, previous work in distributed reasoning assumed that ambient entities keep their local knowledge private and reason in a top-down manner to reach these goals. Such approaches do not fit with the features of Ambient environment. This paper presents a more tailored distributed peer-to-peer approach for modeling and reasoning with context information using cooperation in Ambient Environment. Ambient entities cooperate by sharing parts of their local context information and reason in a bottom-up manner. We propose an operational model and present a prototype implementation for the proposed approach.
['Amina Jarraya', 'Khedija Arour', 'Amel Borgi', 'Amel Bouzeghoub']
Distributed Cooperative Reasoning in Ambient Environment
763,350
In Kyushu University, a traditional "Student ID" based on the student number assigned by the Student Affairs Department had been used as the user ID of various IT services for a long time. There were some security and usability concerns with using Student ID as a user ID. Since Student ID was used as the e-mail address of the student, it was easy to leak outside. Student ID is constructed from a department code and a serial number, so guessing other ID strings from one ID is easy. Student ID is issued on the day of the entrance ceremony, so it is not usable for pre-entrance education. Student ID changes when the student moves to another department or proceeds from undergraduate to graduate school, so he/she loses personal data when Student ID changes. To solve these problems, Kyushu University decided to introduce another unchanging user ID independent from Student ID. This paper reports the design of the new user ID, the ID management system we are using, and the effects of introducing the new user ID.
['Yoshiaki Kasahara', 'Naomi Fujimura', 'Eisuke Ito', 'Masahiro Obana']
Introduction of Unchanging Student User ID for Intra-Institutional Information Service
685,511
This paper explains some analyses that can be performed on a hierarchical finite state machine to validate that it performs as intended. Such a hierarchical state machine has transitions between states, triggered by conditions over inputs, with outputs determined per state in terms of inputs. Intentions are captured per state as expectations on input values. These expectations are expressed using the same condition language as transition triggers, extended to constrain rates of change as well as ranges. The analyses determine whether the expectations are consistent and whether the state machine conforms to the expectations. For the analyses to find no problems, the explicit expectations on the root state would be at least as strong as the implicit expectations of the state machine. One way of using the analyses is to reveal these implicit expectations. The analyses have been automated for statecharts built with the MathWorks' Stateflow tool.
['Ian Toyn', 'Andy Galloway']
Formal Validation of Hierarchical State Machines against Expectations
345,064
This paper aims to extract lessons from archivists' experience of appraising electronic records that are likely to have wider application in the preservation of other digital materials, including scientific data. It relies mainly on the work of the Appraisal Task Force of the InterPARES project on long-term preservation of authentic electronic records to develop a picture of the process of appraisal. It concludes that the aspects of assessment of authenticity, determination of the feasibility of preservation, and monitoring electronic records as they are maintained in the live environment are likely to find counterparts in attempts to appraise digital objects for long-term preservation in the scientific community. It also argues that the activities performed during appraisal constitute the first vital step in the process of preservation of digital materials.
['Terry Eastwood']
Appraising digital records for long-term preservation
232,542
This paper describes spatial operators in robot dynamics, emphasizing their physical interpretation, while avoiding lengthy mathematical derivations. The spatial operators are rooted in the function space approach to the estimation theory developed in the decades that followed the introduction of the Kalman filter. In the mid 1980's, the authors recognized the analogy between Kalman filtering and robot dynamics, and began to use this approach on a wide range of multi-body systems of increasing complexity. This paper reviews the spatial operator approach to robot dynamics, and outlines current applications to the modeling, simulation and control of space robot dynamics and large molecular structures.
['Abhinandan Jain', 'G. Rodriguez']
Computational robot dynamics using spatial operators
226,218
In this paper, we address multiclass pairwise labeling problems by proposing an alternative approach to continuous relaxation techniques which makes use of a quadratic cost function over the class labels. Here, we relax the discrete labeling problem by abstracting the problem of multiclass semi-supervised labeling to a graph regularisation one. By doing this, we can perform multiclass labeling using a cost function which is convex and related to the target function used in discrete Markov Random Field approaches. Moreover, the Hessian of our cost function is given by the graph Laplacian of the adjacency matrix. Therefore, the optimisation of the cost function is governed by the pairwise interactions between pixels in the local neighbourhood. Since the Hessian is sparse in nature, we can find the global minimum of the continuous relaxation problem efficiently by solving a linear equation using Cholesky factorization. In contrast to other segmentation algorithms elsewhere in the literature, the general nature of the cost function we employ is capable of capturing arbitrary pairwise relations. We provide results on synthetic and real- world imagery and demonstrate the efficacy of our method compared to competing approaches.
['Zhouyu Fu', 'Antonio Robles-Kelly']
Convex Optimisation for Multiclass Image Labeling
452,390
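The computational core claimed above, a convex quadratic labeling cost whose Hessian is the graph Laplacian and whose global minimum is one Cholesky-backed linear solve, fits in a few lines. The dense toy version below is a sketch under assumed notation (affinity matrix W, one-hot label matrix Y, anchor weight lam); the paper exploits the sparsity of the Laplacian, so a sparse factorization would be used at scale.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def label_scores(W, Y, lam=1.0):
    """Minimize  f'Lf + lam * ||f - Y||^2 on labeled rows;  L = D - W.

    W: (n, n) symmetric affinity matrix; Y: (n, k) one-hot rows for
    labeled nodes, zero rows for unlabeled. The stationarity condition
    (L + M) F = M Y is a symmetric positive-definite system, solved
    here with a dense Cholesky factorization.
    """
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    M = np.diag((Y.sum(axis=1) > 0) * lam)      # anchor labeled rows only
    A = L + M + 1e-9 * np.eye(len(W))           # tiny ridge for definiteness
    F = cho_solve(cho_factor(A), M @ Y)
    return F.argmax(axis=1)                     # predicted class per node
```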
We present and test an extension of slow feature analysis as a novel approach to nonlinear blind source separation. The algorithm relies on temporal correlations and iteratively reconstructs a set of statistically independent sources from arbitrary nonlinear instantaneous mixtures. Simulations show that it is able to invert a complicated nonlinear mixture of two audio signals with a high reliability. The algorithm is based on a mathematical analysis of slow feature analysis for the case of input data that are generated from statistically independent sources.
['Henning Sprekeler', 'Tiziano Zito', 'Laurenz Wiskott']
An extension of slow feature analysis for nonlinear blind source separation
515,443
In this report, we describe a novel tandem peptide repeat protein, Eicosapentapeptide repeat (EPR), which occurs notably only in flowering plants. The EPRs are characterized by a 25 amino acid repeat unit, X2CX4CX10CX2HGGG, repeated 10 times tandemly. Sequence search revealed that the repeat motif is highly conserved across its occurrence. EPRs are predicted to exist as quasi-globular stable structures owing to highly conserved amino acid positions and potential disulfide bridges. Proteins containing EPRs are predicted to be located in chloroplasts; non-enzymatic and peptide or DNA-binding in molecular function; and they are possibly involved in transcription regulation. Contact: [email protected] Supplementary information: Architecture, identifiers and annotations of EPRs; search parameters, distribution and sequence alignment; 2D structure prediction and disulfide connectivity are provided as PDF files S1-S8 at Bioinformatics online.
['Sunil Archak', 'Javaregowda Nagaraju']
Eicosapentapeptide repeats (EPRs): novel repeat proteins specific to flowering plants
448,962
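The stated 25-residue unit translates directly into a pattern X2-C-X4-C-X10-C-X2-HGGG, which makes tandem occurrences easy to scan for. A minimal sketch follows; the two-unit minimum is an illustrative threshold, not a criterion from the report.

```python
import re

# The 25-residue EPR unit as a regular expression: only the C, H, and G
# positions named in the abstract are fixed; X (any residue) is a dot.
EPR_UNIT = r".{2}C.{4}C.{10}C.{2}HGGG"

def find_epr_repeats(protein_seq, min_units=2):
    """Return (start, end) spans of at least `min_units` tandem EPR units."""
    pattern = re.compile(rf"(?:{EPR_UNIT}){{{min_units},}}")
    return [m.span() for m in pattern.finditer(protein_seq)]

# Toy usage: a synthetic sequence of ten identical 25-residue units.
unit = "MK" + "C" + "AILV" + "C" + "AILVAILVAI" + "C" + "MK" + "HGGG"
print(find_epr_repeats(unit * 10))   # [(0, 250)]
```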
The distribution of a class of objects, such as images depicting a specific topic, can be studied by observing the best-matching units (BMUs) of the objects' feature vectors on a Self-Organizing Map (SOM). When the BMU "hits" on the map are summed up, the class distribution may be seen as a two-dimensional histogram or discrete probability density. Due to the SOM's topology preserving property, one is motivated to smooth the value field and spread out the values spatially to neighboring units, from where one may expect to find further similar objects. In this paper we study the impact of using more map units than just the single BMU of each feature vector in modeling the class distribution. We demonstrate that by varying the number of units selected in this way and varying the width of the spatial convolution one can find an optimal combination which maximizes the class detection performance.
['Mats Sjöberg', 'Jorma Laaksonen']
Optimal Combination of SOM Search in Best-Matching Units and Map Neighborhood
44,144
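The construction evaluated above, hits accumulated over the n best-matching units and then spatially convolved, is a short routine. In the sketch below the Gaussian kernel is one reasonable smoothing choice; the paper's experiments vary both the number of selected units and the convolution width to find the best combination.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def class_distribution(dists, map_shape, n_best=3, sigma=1.0):
    """BMU hit histogram on a SOM grid, spread over n_best units, smoothed.

    `dists` is an (n_samples, n_units) matrix of distances from each
    feature vector to every map unit, with units indexed row-major on a
    `map_shape` grid. Each sample adds one hit to each of its `n_best`
    closest units; the hit map is then convolved with a Gaussian.
    """
    hits = np.zeros(map_shape).ravel()
    for row in dists:
        for u in np.argsort(row)[:n_best]:
            hits[u] += 1.0
    return gaussian_filter(hits.reshape(map_shape), sigma=sigma)
```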
A methodology for evaluating robotic striking mechanisms for musical contexts.
['Jason Long', 'Jim W. Murphy', 'Ajay Kapur', 'Dale A. Carnegie']
A methodology for evaluating robotic striking mechanisms for musical contexts.
986,062
A priori optimization
['Dimitris Bertsimas', 'Patrick Jaillet', 'Amedeo R. Odoni']
A priori optimization
618,400
Space efficient algorithms play a central role in dealing with large amounts of data. In such settings, one would like to analyse the large data using a small amount of "working space". One of the key steps in many algorithms for analysing large data is to maintain a (or a small number of) random sample from the data points. In this paper, we consider two space restricted settings -- (i) the streaming model, where data arrives over time and one can use only a small amount of storage, and (ii) the query model, where we can structure the data in low space and answer sampling queries. We prove the following results in these two settings: (i) In the streaming setting, we would like to maintain a random sample from the elements seen so far. We prove that one can maintain a random sample using $O(\log n)$ random bits and $O(\log n)$ space, where $n$ is the number of elements seen so far. We can extend this to the case when elements have weights as well. (ii) In the query model, there are $n$ elements with weights $w_1, \ldots, w_n$ (which are $w$-bit integers) and one would like to sample a random element with probability proportional to its weight. Bringmann and Larsen (STOC 2013) showed how to sample such an element using $nw + 1$ space (whereas the information theoretic lower bound is $nw$). We consider the approximate sampling problem, where we are given an error parameter $\varepsilon$, and the sampling probability of an element can be off by an $\varepsilon$ factor. We give matching upper and lower bounds for this problem.
['Anup Bhattacharya', 'Davis Issac', 'Ragesh Jaiswal', 'Amit Kumar']
Sampling in Space Restricted Settings
570,635
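The streaming result above is in the spirit of classical reservoir sampling, which the sketch below illustrates with standard textbook algorithms; these are not the paper's constructions, whose point is the tight accounting of random bits and working space.

```python
import random

def stream_sample(stream):
    """Keep one uniform random element from a stream (reservoir of size 1).

    Storage is the current sample plus a counter, i.e., O(log n) bits
    for the counter -- the regime the abstract is concerned with.
    """
    sample, count = None, 0
    for x in stream:
        count += 1
        if random.randrange(count) == 0:    # replace with probability 1/count
            sample = x
    return sample

def weighted_stream_sample(pairs):
    """Weighted variant: after the whole stream, element i is the sample
    with probability w_i / (total weight) -- a standard reservoir trick."""
    sample, total = None, 0.0
    for x, w in pairs:
        total += w
        if random.random() < w / total:     # replace with probability w/total
            sample = x
    return sample

print(stream_sample(range(100)))
print(weighted_stream_sample([("a", 1.0), ("b", 3.0), ("c", 6.0)]))
```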
Understanding dominant factors for precipitation over the great lakes region
['Soumyadeep Chatterjee', 'Stefan Liess', 'Arindam Banerjee', 'Vipin Kumar']
Understanding dominant factors for precipitation over the great lakes region
992,268
Idea of Impact of ERP-APS-MES Systems Integration on the Effectiveness of Decision Making Process in Manufacturing Companies
['Edyta Kucharska', 'Katarzyna Grobler-Dębska', 'Jarosław Gracel', 'Mieczysław Jagodziński']
Idea of Impact of ERP-APS-MES Systems Integration on the Effectiveness of Decision Making Process in Manufacturing Companies
807,758
Multi-context systems (MCS) presented by Brewka and Eiter can be considered as a promising way to interlink decentralized and heterogeneous knowledge contexts. In this paper, we propose preferential multi-context systems (PMCS), which provide a framework for incorporating a total preorder relation over contexts in a multi-context system. In a given PMCS, its contexts are divided into several parts according to the total preorder relation over them; moreover, only information flows from a context to ones of the same part or less preferred parts are allowed to occur. As such, the first l preferred parts of a PMCS always fully capture the information exchange between contexts of these parts, and then compose another meaningful PMCS, termed the l-section of that PMCS. We generalize the equilibrium semantics for an MCS to the (maximal) l-equilibrium, which represents belief states at least acceptable for the l-section of a PMCS. We also investigate inconsistency analysis in PMCS and related computational complexity issues.
['Kedian Mu', 'Kewen Wang', 'Lian Wen']
Preferential Multi-Context Systems
622,545
In this paper we propose a taxonomy to characterize component-based systems. The criteria of our taxonomy have been selected as a result of constructing a number of component-based software engineering tools within the Adoption-Centric Software Engineering project at the University of Victoria. We have applied the taxonomy in our work to characterize the resulting tools and to define the design space of our project's proposed tool-building methodology. Our taxonomy strives to capture the most important properties of component-based systems, resulting in a taxonomy that is both coarse-grained and lightweight. We believe that it is useful for other researchers in a number of ways, for instance, for component selection and to reason about certain quality attributes of components.
['Holger M. Kienle', 'Hausi A. Müller']
A Lightweight Taxonomy to Characterize Component-Based Systems
317,541
Low Power Wide Area (LPWA) networks are attracting a lot of attention primarily because of their ability to offer affordable connectivity to the low-power devices distributed over very large geographical areas. In realizing the vision of the Internet of Things (IoT), LPWA technologies complement and sometimes supersede the conventional cellular and short range wireless technologies in performance for various emerging smart city and machine-to-machine (M2M) applications. This survey paper presents the design goals and the techniques, which different LPWA technologies exploit to offer wide-area coverage to low-power devices at the expense of low data rates. We survey several emerging LPWA technologies and the standardization activities carried out by different standards development organizations (e.g., IEEE, IETF, 3GPP, ETSI) as well as the industrial consortia built around individual LPWA technologies (e.g., LoRa Alliance, WEIGHTLESS-SIG, and DASH7 Alliance). We further note that LPWA technologies adopt similar approaches, thus sharing the same limitations and challenges. This paper expands on these research challenges and identifies potential directions to address them. While the proprietary LPWA technologies are already hitting the market with large nationwide roll-outs, this paper encourages an active engagement of the research community in solving problems that will shape the connectivity of tens of billions of devices in the next decade.
['Usman Raza', 'Parag Kulkarni', 'Mahesh Sooriyabandara']
Low Power Wide Area Networks: A Survey
826,774
De-Identification Method for Bilingual EMR Free Texts.
['Soo-Yong Shin', 'Yongdon Shin', 'Hyo Joung Choi', 'Jihyun Park', 'Yongman Lyu', 'Woo-Sung Kim', 'Jae Ho Lee']
De-Identification Method for Bilingual EMR Free Texts.
806,346