Columns: abstract (string, length 8–10.1k) · authors (string, length 9–1.96k) · title (string, length 6–367) · __index_level_0__ (int64, 13–1,000k)
Outbound logistics can determine the success or failure of an industry: it accounts for a large share of overall logistics costs and is a decisive factor in the Quality of Service (QoS). Measuring this activity and evaluating possible changes supports its management and can lead to cost reduction and better QoS. This paper proposes the use of predefined Generalized Stochastic Petri Net (GSPN) components that allow the evaluation of vehicle utilization, storage levels and QoS indices. The model representing an outbound logistics scenario is obtained with a bottom-up approach, through the composition of these components, guaranteeing some expected GSPN properties. At the end of this paper, a case study conducted in a Brazilian meat processing industry is presented.
['Gabriel Alves', 'Paulo Romero Martins Maciel', 'Ricardo Massa Ferreira Lima']
A GSPN based approach to evaluate outbound logistics
343,297
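The token-game semantics underlying such GSPN models can be sketched in a few lines. The places, transitions and rates below are illustrative stand-ins, not the paper's predefined components:

```python
import random

# Minimal GSPN token-game sketch. Place and transition names are
# illustrative: tokens flow warehouse -> truck -> delivered through
# exponentially timed transitions that race each other.
places = {"warehouse": 5, "truck": 0, "delivered": 0}
transitions = [
    # (name, input place, output place, firing rate)
    ("load",    "warehouse", "truck",     2.0),
    ("deliver", "truck",     "delivered", 1.0),
]

def step(rng):
    """Fire one transition: enabled transitions race with exponential delays."""
    enabled = [t for t in transitions if places[t[1]] > 0]
    if not enabled:
        return False
    winner = min(enabled, key=lambda t: rng.expovariate(t[3]))
    places[winner[1]] -= 1
    places[winner[2]] += 1
    return True

rng = random.Random(42)
while step(rng):
    pass  # run until no transition is enabled
```

Counting how often the truck place is non-empty over many runs is the kind of utilization index the composed components are meant to evaluate.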
Massive MIMO has rapidly gained popularity as a technology crucial to the capacity advances required for 5G wireless systems. Since its theoretical conception six years ago, research activity has grown exponentially, and there is now developing industrial interest in commercialising the technology. For this to happen effectively, we believe it is crucial that further pragmatic research is conducted with a view to establishing how reality differs from theoretical ideals. This paper presents an overview of the massive MIMO research activities occurring within the Communication Systems & Networks Group at the University of Bristol, centred around our 128-antenna real-time testbed, which has been developed through the BIO programmable city initiative in collaboration with NI and Lund University. Through recent preliminary trials, we achieved a world-first spectral efficiency of 79.4 bits/s/Hz, and subsequently demonstrated that this could be increased to 145.6 bits/s/Hz. We provide a summary of this work here, along with some of our ongoing research directions such as large-scale array wave-front analysis, optimised power control and localisation techniques.
['Paul Harris', 'Wael A G Boukley Hasan', 'Henry G Brice', 'Benny Chitambira', 'Mark A Beach', 'Evangelos Mellios', 'Andrew R Nix', 'Simon M D Armour', 'Angela Doufexi']
An Overview of Massive MIMO Research at the University of Bristol
947,107
The explosive growth in the variety and size of social networks has focused attention on searching these networks for useful structures. Like the Internet or the telephone network, the ability to efficiently search large social networks will play an important role in the extent of their use by individuals and organizations alike. However, unlike these domains, search on social networks is likely to involve measures that require a set of individuals to collectively satisfy some skill requirement or be tightly related to each other via some underlying social property of interest.

The aim of this paper is to highlight---and demonstrate via specific examples---the need for algorithmic results for some fundamental set-based notions on which search in social networks is expected to be prevalent. To this end, we argue that the concepts of an influential set and a central set that highlight, respectively, the specific role and the specific location of a set are likely to be useful in practice. We formulate two specific search problems: the elite group problem (EGP) and the portal problem (PP), that represent these two concepts and provide a variety of algorithmic results. We first demonstrate the relevance of EGP and PP across a variety of social networks reported in the literature. For simple networks (e.g., structured trees and bipartite graphs, cycles, paths), we show that an optimal solution to both EGP and PP is easy to obtain. Next, we show that EGP is polynomially solvable on a general graph, whereas PP is strongly NP-hard. Motivated by practical considerations, we also discuss (i) a size-constrained variant of EGP together with its penalty-based relaxation and (ii) the solution of PP on balanced and full d-trees and general trees.
['Milind Dawande', 'Vijay S. Mookerjee', 'Chelliah Sriskandarajah', 'Yunxia Zhu']
Structural Search and Optimization in Social Networks
292,155
The single stuck-at fault coverage is often seen as a figure of merit also for scan testing according to other fault models such as transition faults, bridging faults, crosstalk faults, etc. This paper analyzes how far this assumption is justified. Since the scan test infrastructure allows reaching states not reachable in the application mode, and since faults only detectable in such unreachable states are not relevant in the application mode, we distinguish those irrelevant faults from relevant faults, i.e. faults detectable in the application mode. We prove that every combinational circuit with exactly 100% stuck-at fault coverage has 100% transition fault test coverage for those faults which are relevant in the application. This does not necessarily imply that combinational circuits with almost 100% single stuck-at coverage automatically have high transition fault coverage. This is shown by an extreme example of a circuit with nearly 100% stuck-at coverage, but 0% transition fault coverage.
['Jan Schat']
On the relationship between stuck-at fault coverage and transition fault coverage
136,425
Visualizing for Success: How Can we Make the User more Efficient in Interactive Evolutionary Algorithms?
['J. J. Merelo', 'Mario García Valdez', 'Carlos Cotta']
Visualizing for Success: How Can we Make the User more Efficient in Interactive Evolutionary Algorithms?
854,369
Recently, many researchers have started to question a long-standing paradox in the engineering practice of digital photography, namely oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this research we take a practical approach of uniform down-sampling in image space, made adaptive by a spatially varying directional low-pass prefilter. Since the down-sampled prefiltered image is a low-resolution image on a conventional square sample grid, it can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upsamples it to the original resolution by least-squares estimation using a 2D piecewise autoregressive model and knowledge of the directional low-pass filter. The proposed joint adaptive down-sampling and up-sampling technique outperforms JPEG 2000 (the state of the art in lossy image coding) in PSNR at low to modest bit rates, and achieves superior visual quality at all bit rates. This work shows that oversampling not only increases cost and energy consumption, but can, even when coupled with a sophisticated rate-distortion optimized compression scheme, cause inferior image quality at certain bit rates.
['Xiangjun Zhang', 'Xiaolin Wu']
Can Lower Resolution Be Better
458,892
Visual Servoing in an Optimization Framework for the Whole-Body Control of Humanoid Robots
['Don Joven Agravante', 'Giovanni Claudio', 'Fabien Spindler', 'François Chaumette']
Visual Servoing in an Optimization Framework for the Whole-Body Control of Humanoid Robots
964,481
This study is intended to improve the photoresponse of an optical actuator with a bimorph PLZT element. Known optical actuators have multiple merits for converting light energy to driving energy, but suffer from a slow response speed and large hysteresis. Our method of exposing both sides of the bimorph element to light has been proved through a series of experiments to increase the response speed and reduce the hysteresis, thus improving the photoresponse. More particularly, an optical servo system was fabricated for implementing the method and subjected to optical servo tests with PI control. As a result, PI control was performed with much greater ease when both sides of the bimorph PLZT were irradiated, indicating an advantage of the method.
['Toshio Fukuda', 'Shinobu Hattori', 'Fumihito Arai', 'Hirofumi Nakamura']
Performance improvement of optical actuator by double side irradiation
309,072
Biological neuronal networks can be embodied in closed-loop robotic systems, with electromechanical hardware providing the neurons with the ability to interact with a real environment. Due to the difficulties of maintaining biological networks, it is useful to have a simulation environment in which pilot experiments can be run and new software can be tested. A simulator for cultured mouse neurons is described, and used to simulate neurons in a closed-loop robotic system. The results are compared to results from a similar experiment using biological neurons.
['Abraham Shultz', 'Sangmook Lee', 'Thomas B. Shea', 'Holly A. Yanco']
Biological and simulated neuronal networks show similar competence on a visual tracking task
565,788
Some wearable computing applications require sensing devices that detect the deflection of joints during human motion. Presented here is a novel non-invasive technique for measuring joint motion using pressure sensors. Described is a specific example of this; an easy to assemble glove that can be used as a low cost, high resolution gesture input device for wearable computers.
['Aaron Toney']
A novel method for joint motion sensing on a wearable computer
913,051
A new application of a novel neural network structure to robot control is presented, employing the following concepts: 1) combination of input and output activation functions, 2) input time-varying signal distribution, 3) time-discrete domain synthesis, and 4) a one-step learning iteration approach. The proposed NN synthesis procedures are useful for identification and control of dynamical systems. In this sense, a feedforward neural network for adaptive nonlinear robot control is proposed. This neural network is trained to imitate an adaptive nonlinear robot control algorithm based on the dynamics of the full robot model of RRTR structure. Thus, the neural network can compute both the nominal and feedback robot control by parallel processing.
['Branko Novakovic']
Feedforward neural networks for adaptive nonlinear robot control
97,692
This paper presents an algorithm for the most prominent component of active vehicle safety applications, namely the detection and recognition of traffic signs. In the detection stage, HOG feature descriptors combined with SVM classifiers are used to determine the locations of points that are highly likely to be potential traffic signs in the scene. Once the search space for traffic sign recognition has been reduced by the first stage, the SURF, FAST and Harris algorithms are used to extract keypoints in these potential traffic sign regions, and BRIEF feature descriptors are used to describe the neighbourhood around these keypoints. Model traffic signs are then compared to the regions detected as potential traffic signs in the current traffic scene to determine the type of traffic sign. For keypoint extraction, the performance of a variety of feature descriptors is analyzed. The proposed method is tested on video sequences acquired by a camera mounted on a vehicle cruising inner-city traffic. With a 90% success rate, experimental results suggest that the SURF algorithm outperforms the other algorithms in recognizing traffic signs.
['Can Erhan', 'Amin Ahmadi Tazehkandi', 'Hülya Yalçın', 'Ilker Bayram']
Traffic sign detection and recognition fusing feature descriptors
285,469
This note studies an issue relating to essential smoothness that can arise when the theory of large deviations is applied to a certain option pricing formula in the Heston model. The note identifies a gap, based on this issue, in the proof of Corollary 2.4 in \cite{FordeJacquier10} and describes how to circumvent it. This completes the proof of Corollary 2.4 in \cite{FordeJacquier10} and hence of the main result in \cite{FordeJacquier10}, which describes the limiting behaviour of the implied volatility smile in the Heston model far from maturity.
['Martin Forde', 'Antoine Jacquier', 'Aleksandar Mijatovic']
A note on essential smoothness in the Heston model
596,366
On the Regularity of Binoid Languages: A Comparative Approach.
['Zoltán Németh']
On the Regularity of Binoid Languages: A Comparative Approach.
737,392
Introduction to Plane Algebraic Curves by Ernst Kunz; Richard G. Belshoff.
['Susan Jane Colley']
Introduction to Plane Algebraic Curves by Ernst Kunz; Richard G. Belshoff.
788,988
New blind carrier frequency offset (CFO) estimation methods based on the correlation function of the subchannel signals are presented for OFDM/OQAM systems. The proposed estimators are robust to multipath effects due to the narrowband property of the subchannels in OFDM systems. The performance of the estimators is evaluated by asymptotic analysis and simulation results. Our results show that methods based on estimation in subchannels perform better than the method based on the signal before demodulation.
['Gang Lin', 'Lars Lundheim', 'Nils Holte']
SPC08-4: Blind Carrier Frequency Offset Estimation for OFDM/OQAM Systems Based on Subchannel Signals
183,664
Combining Visual and Textual Features for Information Extraction from Online Flyers
['Emilia Apostolova', 'Noriko Tomuro']
Combining Visual and Textual Features for Information Extraction from Online Flyers
616,747
Recently, sparse algorithms for signal enhancement have become an increasingly popular topic. In this paper, we apply them to speech enhancement. The sparse processing pipeline is divided into two parts: dictionary training and signal reconstruction. We address filtering of both white Gaussian noise and colored noise based on sparse representation. The orthogonal matching pursuit (OMP) algorithm is used to optimize the sparse coefficients X over a clean-speech dictionary, where the dictionary is trained by the K-SVD algorithm. We then multiply the two matrices D' and X' to reconstruct the clean speech signal. Denoising experiments show that our proposed method is superior to other state-of-the-art methods on four objective quality measures: SNR, LLR, SNRseg and PESQ.
['Ching-Tang Hsieh', 'Piao-Yu Huang', 'Ting-Wen Chen', 'Yan-heng Chen']
Speech enhancement based on sparse representation under color noisy environment
674,009
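The OMP step described above admits a compact sketch. The dictionary and sparsity level below are illustrative, and the K-SVD training of the clean-speech dictionary is omitted:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit sketch: greedily pick k atoms of the
    dictionary D to approximate y, re-fitting by least squares each step."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares coefficients on the chosen support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

With an orthonormal dictionary and a truly k-sparse signal, this recovers the coefficients exactly; with a trained overcomplete dictionary it gives the sparse code X used for reconstruction.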
Two novel cascode circuits, the differential cross-cascode and differential cross-follower, are proposed and investigated. Their fundamental distinction from the ordinary differential cascode is that the input signal voltage is applied simultaneously to the inputs of the common emitter/source and common base/gate stages, and in addition the inputs of the CE/CS and CB/CG stages are cross-coupled. We show that the input signal is amplified in the input circuit, and furthermore that the input impedance and the current gain increase considerably and the bandwidth is substantially expanded. Simulation results of such a cascode designed with IBM BJT and FET transistors are presented. The obtained bandwidths (BJT: 18.7 GHz, FET: 7.8 GHz) proved, as predicted, to be more than twice as wide as the bandwidth of the ordinary cascode (BJT: 8.6 GHz, FET: 3.4 GHz).
['Yuri Bruck', 'Michael Zelikson']
Two novel cross-cascode differential amplifiers
926,175
Wavelet image coding exhibits a robust error resilience performance by utilizing a naturally layered bitstream construction over a band-limited channel. In this letter, a new measure that appears to provide a better assessment of visual entropy for comparing and evaluating progressive image coders is defined based on a visual weight over the wavelet domain. This visual weight is characterized by the human visual system (HVS) over the frequency and spatial domains and is then utilized as a criterion for determining the coding order of wavelet coefficients, resulting in improved visual quality. A transmission gain, which is expressed by visual entropy, of up to about 23% can be obtained at a normalized channel throughput of about 0.3. In accordance with the subjective visual quality, a relatively high gain can be obtained at a low channel capacity
['Hyungkeuk Lee', 'Sanghoon Lee']
Visual Entropy Gain for Wavelet Image Coding
502,323
K-Means is an important clustering algorithm that is widely applied in different applications, including color clustering and image segmentation. To handle large cluster numbers in embedded systems, a hardware architecture of hierarchical K-Means (HK-Means) is proposed that supports a maximum cluster number of 1024. It adopts 10 processing elements for the Euclidean distance computations and the level-order binary-tree traversal. In addition, a hierarchical memory structure is integrated to offer a maximum bandwidth of 1280 bit/cycle to the processing elements. The experiments show that applications such as video segmentation and color quantization can be implemented with the proposed HK-Means hardware. Moreover, the gate count of the hardware is 414 K, and the maximum frequency reaches 333 MHz. It supports the highest cluster number and has the most flexible specifications among our works and related works.
['Tse-Wei Chen', 'Shao-Yi Chien']
Flexible Hardware Architecture of Hierarchical K-Means Clustering for Large Cluster Number
406,302
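The binary-tree structure behind hierarchical K-Means can be sketched in software. This is a plain recursive 2-means split, not the paper's hardware datapath; the Lloyd iteration count and splitting depth are assumptions (depth 10 would give the 1024 clusters the hardware supports):

```python
import numpy as np

def hk_means(points, depth, rng):
    """Hierarchical K-Means sketch: recursive 2-means splits down to
    at most 2**depth leaf clusters."""
    if depth == 0 or len(points) < 2:
        return [points]
    # Plain 2-means (a few Lloyd iterations) at this tree level
    centers = points[rng.choice(len(points), size=2, replace=False)].astype(float)
    for _ in range(10):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    # Recurse into both children, mirroring the level-order traversal
    return (hk_means(points[labels == 0], depth - 1, rng)
            + hk_means(points[labels == 1], depth - 1, rng))
```

The hardware parallelizes exactly the distance computations in the inner loop across its 10 processing elements.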
Compressed sensing (CS) offers joint compression and sensing processes, based on the existence of a sparse representation of the treated signal and a set of projected measurements. Work on CS thus far typically assumes that the projections are drawn at random. In this paper, we consider the optimization of these projections. Since such a direct optimization is prohibitive, we target an average measure of the mutual coherence of the effective dictionary, and demonstrate that this leads to better CS reconstruction performance. Both basis pursuit (BP) and orthogonal matching pursuit (OMP) are shown to benefit from the newly designed projections, with a reduction of the error rate by a factor of 10 and beyond.
['Michael Elad']
Optimized Projections for Compressed Sensing
161,329
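The average measure targeted by this optimization builds on the plain (worst-case) mutual coherence of the effective dictionary, which is straightforward to compute. The dimensions below are arbitrary, and the projections shown are the random baseline that the paper's designed projections improve on:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute off-diagonal entry of the Gram matrix of the
    column-normalized dictionary (the paper optimizes an averaged variant)."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
n, m, k = 64, 128, 16                 # signal dim, atoms, measurements (arbitrary)
Psi = rng.standard_normal((n, m))     # sparsifying dictionary
P = rng.standard_normal((k, n))       # random projection matrix (the baseline)
mu = mutual_coherence(P @ Psi)        # coherence of the effective dictionary
```

Lower coherence of `P @ Psi` is what makes BP and OMP reconstructions more reliable, which is why optimizing P pays off.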
In this paper, two blind digital audio watermarking techniques using the discrete cosine transform (DCT) are proposed according to the payload requirement. In our method the watermark is embedded into selected mid-band coefficients of the DCT-transformed audio. The selected mid-band DCT coefficients are modified and quantized through the average energy of the selected sub-frame for watermark embedding. The mid-band frequency components are chosen through experiments such that mp3 compression and common signal processing operations have a minimum effect on these coefficients. The original audio is not required for extraction of the watermark. To adapt to the mp3 attack, a preprocessing step is added that performs an mp3 conversion of the audio and converts it back to the original format prior to embedding. Embedding is also restricted to the selected blocks that satisfy a minimum energy threshold. Experiments show that our scheme produces imperceptible audio and that the watermarked audio is robust against common signal processing attacks.
['Tribhuwan Kumar Tewari', 'Vikas Saxena', 'J. P. Gupta']
A digital audio watermarking scheme using selective mid band DCT coefficients and energy threshold
11,777
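The quantize-by-average-energy embedding can be illustrated with a generic quantization-index-modulation sketch. The fixed step DELTA is an assumption (the paper derives the quantizer from the sub-frame energy), and this sketch acts on a bare coefficient vector rather than actual mid-band DCT coefficients:

```python
import numpy as np

DELTA = 0.5  # quantization step; an assumption, the paper derives it
             # from the average energy of the selected sub-frame

def embed_bit(coeffs, bit):
    """QIM-style sketch: scale the coefficients so their mean magnitude
    lands on an even (bit=0) or odd (bit=1) multiple of DELTA."""
    m = np.mean(np.abs(coeffs))          # assumes m > 0
    q = int(np.round(m / DELTA))
    if q % 2 != bit:
        q += 1
    return coeffs * (q * DELTA / m)

def extract_bit(coeffs):
    """Blind extraction: parity of the quantized mean magnitude."""
    m = np.mean(np.abs(coeffs))
    return int(np.round(m / DELTA)) % 2
```

Because extraction needs only the received coefficients and DELTA, the scheme is blind, matching the abstract's claim that the original audio is not required.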
Identifying multiple categories of cybersecurity skills that affect user acceptance of protective information technologies
['Dinesh Reddy', 'Glenn B. Dietrich']
Identifying multiple categories of cybersecurity skills that affect user acceptance of protective information technologies
896,157
In this paper a novel development of a testing technique for analogue integrated circuits based on sweeping the power supply voltage is described. It is shown that by using a simple floating gate fault model together with the proposed scheme it is possible to achieve a high fault coverage. The scope of work discussed in this paper is focused on exposing floating gate defects which, using other methods, usually requires careful and accurate knowledge of the elements, including parasitic components, of the equivalent circuit of the devices.
["A.K.B. A'ain", 'A.H. Bratt', 'A.P. Dorey']
Exposing floating gate defects in analogue CMOS circuits by power supply voltage control testing technique
474,481
The so-called chief executive officer problem suggests that the source message can be recovered at the destination by merging a set of corrupted replicas forwarded by multiple relays, as long as these replicas are sufficiently correlated with the original message. In this paper, we build on Slepian-Wolf’s correlated source coding theorem to design a simple, yet efficient power allocation scheme for a multirelay system, in which the direct link is unavailable to convey information. In such a system, the replicas forwarded by the relays are allowed to contain intra-link errors due to previous unreliable hops, and the destination is supposed to retrieve the source message by jointly decoding all received replicas. Importantly, the proposed power allocation is asymptotically optimal at high signal-to-noise ratio.
['Diana Cristina González', 'Albrecht Wolf', 'Luciano Leonel Mendes', 'Jose Candido Silveira Santos Filho', 'Gerhard Fettweis']
An Efficient Power Allocation Scheme for Multirelay Systems With Lossy Intra-Links
999,534
Web services solve the problem of inter-organization business integration and operate in a distributed, dynamic, autonomic and heterogeneous environment. The correctness and verification of Web service orchestration is therefore important, and formalization is a valid method. This paper gives a model of Web service orchestration based on concurrent transaction logic. An introduction to Web service orchestration and concurrent transaction logic is given, and then the translation rules from WS-BPEL to concurrent transaction logic are presented. The verification problem of Web service orchestration based on concurrent transaction logic is discussed. Finally, an actual Web service orchestration example based on concurrent transaction logic is illustrated.
['Yong Wang', 'Li Wang', 'Guiping Dai']
A Web Service Orchestration Model Based on Concurrent Transaction Logic
183,158
Ontology in the Core of Information Management - Information Management in Infrastructure Building
['Irina Peltomaa', 'Esa Viljamaa']
Ontology in the Core of Information Management - Information Management in Infrastructure Building
779,435
In a distributed computing environment, the autonomous decentralized database system (ADDS) has been proposed to avoid a single point of failure. In ADDS, the update operations at each node are performed autonomously, without communicating with other nodes, if the amount of update requests is within the allowance volume (AV); thus a short transaction response time can be realized. Each node starts AV adjustment when its AV amount reaches a threshold value, and obtains AV from relatively surplus nodes. To communicate with other nodes, two types of communication are available: point-to-point (P2P) communication and mobile agent (MA) circulation. This paper discusses the relationship among the message amount, the configuration check frequency and the transaction response time in P2P communication. We have defined the configuration checking frequency (f), which expresses how frequently a node confirms the configuration per AV request. It is expected that a value of f giving the minimum transaction response time exists. A simulator has been developed to evaluate this relationship, and simulations have been performed with various parameters.
['Isao Kaji', 'Kenji Kano']
Considerations on transaction response time and configuration checking in autonomous decentralized DB (ADDS)
396,985
Advances in information technology bring changes to the nature of work by facilitating companies to go beyond the wisdom of their workforce and tap into the “wisdom of the crowd” via online crowdsourcing contests. In these contests, active and motivated individuals collaborate in the form of self-organized teams that compete for rewards. Using a rich data set of 732 teams in 52 contests collected from the crowdsourcing platform Kaggle.com, from its launch in April 2010 to July 2012, we studied how the allocation of members’ social and intellectual capital within a virtual team affects team performance in online crowdsourcing contests. Our econometric analysis uses a rank-ordered logistic regression model, and suggests that the effect of a member’s social and intellectual capital on team performance varies depending on his or her roles. Though a team leader’s social capital and a team expert’s intellectual capital significantly influence team performance, a team leader’s intellectual capital and a...
['Indika Dissanayake', 'Jie Zhang', 'Bin Gu']
Task Division for Team Success in Crowdsourcing Contests: Resource Allocation and Alignment Effects
640,966
Automated gender estimation has numerous applications, including video surveillance, human–computer interaction, anonymous customized advertisement, and image retrieval. Most commonly, the underlying algorithms analyze the facial appearance for clues of gender. In this paper, we propose a novel method for gender estimation, which exploits dynamic features gleaned from smiles and we proceed to show that: a) facial dynamics incorporate clues for gender dimorphism and b) while for adult individuals appearance features are more accurate than dynamic features, for subjects under 18 years facial dynamics can outperform appearance features. In addition, we fuse proposed dynamics-based approach with state-of-the-art appearance-based algorithms, predominantly improving performance of the latter. Results show that smile-dynamics include pertinent and complementary to appearance gender information.
['Antitza Dantcheva', 'François Brémond']
Gender Estimation Based on Smile-Dynamics
939,489
Measuring the heel strike and toe contact time during walking provides valuable insight into the spatiotemporal parameters of human gait. The authors developed a sensor mechanism using force sensing resistors (FSRs). Results show that the FSR sensor exhibits a high degree of accuracy and repeatability for measuring heel strike and toe contact time. The objective of the research was to develop a rugged and robust sensor mechanism for prosthetic shoes that helps determine the gait parameters needed for precise control of intelligent prosthetic devices.
['Neelesh Kumar', 'Davinder Pal Singh', 'Amod Kumar', 'B.S. Sohi']
Spatiotemporal parameters measurement of human gait using developed FSR for prosthetic knee joint
207,078
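Turning raw FSR readings into heel-strike and toe-off times essentially reduces to threshold-crossing detection. The threshold value and the sample trace below are illustrative, not measured data:

```python
THRESHOLD = 0.5  # normalized FSR reading treated as foot contact (assumed)

def contact_events(samples, threshold=THRESHOLD):
    """Pair each heel strike (rising crossing) with the following
    toe-off (falling crossing) in a sampled FSR trace."""
    events, strike = [], None
    prev = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if prev < threshold <= s:                           # rising edge: heel strike
            strike = i
        elif strike is not None and prev >= threshold > s:  # falling edge: toe-off
            events.append((strike, i))
            strike = None
        prev = s
    return events
```

Dividing the index differences of each pair by the sampling rate yields the stance times from which the spatiotemporal gait parameters are derived.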
This paper proposes the Disaster Information Transmission Common Infrastructure System (DITCIS), intended for rapid sharing of information in a time of mega disaster. This system consists of the Relief Supplies Distribution Management System, the IC Card Authorization Safety Confirmation System, the Web-GIS Disaster Management System, the Disaster Information Registration System, the Disaster Information Sharing System, and the Disaster Information Transmission Platform. In this paper, we introduce the Disaster Information Registration System, the Disaster Information Sharing System, and the Disaster Information Transmission Platform. The Disaster Information Registration System enables registration of disaster information provided by the related institutions. The Disaster Information Sharing System enables sharing of accurate disaster information in the disaster countermeasure headquarters. Moreover, the Disaster Information Transmission Platform enables automatic upload and delivery of disaster information to various communication tools.
['Kazuhiro Takahagi', 'Tomoyuki Ishida', 'Akira Sakuraba', 'Kaoru Sugita', 'Noriki Uchida', 'Yoshitaka Shibat']
Proposal of the Disaster Information Transmission Common Infrastructure System Intended to Rapid Sharing of Information in a Time of Mega Disaster
677,132
The Sonic Enhancement of Graphical Buttons
['Stephen A. Brewster', 'Peter C. Wright', 'Alan Dix', 'Alistair D. N. Edwards']
The Sonic Enhancement of Graphical Buttons
732,644
A certificateless public key cryptosystem (CL-PKC) was proposed for the first time in 2003, and has since attracted considerable attention. In practice, however, most CL-PKC schemes are implemented using pairing techniques, so it has been difficult to apply CL-PKC to existing encryption systems that use conventional methods such as RSA, and even to systems with smart cards. In this paper we propose a signature scheme with an RSA-based smart card in a simple CL-PKC. Like ID-based cryptography, our scheme requires no certificate for the user's public key. It also has the advantages that the key-escrow problem of ID-based cryptography does not occur, and that even the manager cannot generate a falsified user signature without changing the user's public key.
['Kazumasa Omote', 'Atsuko Miyaji', 'Kazuhiko Kato']
Simple Certificateless Signature with Smart Cards
70,981
A new definition of immersion with respect to virtual environment (VE) systems has been proposed in earlier work, based on the concept of simulation. One system (A) is said to be more immersive than another (B) if A can be used to simulate an application as if it were running on B. Here we show how this concept can be used as the basis for a psychophysics of presence in VEs, the sensation of being in the place depicted by the virtual environment displays (Place Illusion, PI), and also the illusion that events occurring in the virtual environment are real (Plausibility Illusion, Psi). The new methodology involves matching experiments akin to those in color science. Twenty participants first experienced PI or Psi in the initial highest level immersive system, and then in 5 different trials chose transitions from lower to higher order systems and declared a match whenever they felt the same level of PI or Psi as they had in the initial system. In each transition they could change the type of illumination model used, or the field-of-view, or the display type (powerwall or HMD) or the extent of self-representation by an avatar. The results showed that the 10 participants instructed to choose transitions to attain a level of PI corresponding to that in the initial system tended to first choose a wide field-of-view and head-mounted display, and then ensure that they had a virtual body that moved as they did. The other 10 in the Psi group concentrated far more on achieving a higher level of illumination realism, although having a virtual body representation was important for both groups. This methodology is offered as a way forward in the evaluation of the responses of people to immersive virtual environments, a unified theory and methodology for psychophysical measurement.
['Mel Slater', 'Bernhard Spanlang', 'David Corominas']
Simulating virtual environments within virtual environments as the basis for a psychophysics of presence
458,292
Outage probability and outage capacity analysis of cooperative OFDM system with subcarrier mapping
['Raza Ali Shah', 'Nandana Rajatheva', 'Yusheng Ji']
Outage probability and outage capacity analysis of cooperative OFDM system with subcarrier mapping
14,854
In our previous work, we proposed a distributed buffer (DB) scheme to tackle the problem of transporting layered video over erroneous multi-hop wireless networks. In the DB scheme, some intermediate nodes are selected as DB nodes, which are used to pre-buffer video packets before the video streaming starts. In this paper, a novel hierarchical queueing model for DB nodes is proposed. Video playback quality, including the buffer overflow probability and the video quality throughput, can be computed based on the proposed queueing model. The simulation results match the queueing performance of the video streaming application.
['Hao Wang']
A Hierarchical Queueing Model for Streaming Video over Multi-Hop Wireless Networks
224,559
Many billions of documents are stored in the Portable Document Format (PDF). These documents contain a wealth of information and yet PDF is often seen as an inaccessible format and, for that reason, often gets a very bad press. In this tutorial, we get under the hood of PDF and analyze the poor practices that cause PDF files to be inaccessible. We discuss how to access the text and graphics within a PDF and we identify those features of PDF that can be used to make the information much more accessible. We also discuss some of the new ISO standards that provide profiles for producing Accessible PDF files.
['Steven R. Bagley', 'Matthew R. B. Hardy']
DOCENG 2014: PDF tutorial
382,611
In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue, in particular, that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of social agent (i.e., an agent where obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of social agent either requires more complex computations or has some philosophical drawbacks.
['Guido Governatori', 'Antonino Rotolo']
BIO logical agents: Norms, beliefs, intentions in defeasible logic
300,949
The recent advancements of technology in robotics and wireless communication have enabled the low-cost and large-scale deployment of mobile sensor nodes for target tracking, which is a critical application scenario of wireless sensor networks. Due to the constraints of limited sensing range, it is of great importance to design node coordination mechanism for reliable tracking so that at least the target can always be detected with a high probability, while the total network energy cost can be reduced for longer network lifetime. In this paper, we deal with this problem considering both the unreliable wireless channel and the network energy constraint. We transfer the original problem into a dynamic coverage problem and decompose it into two subproblems. By exploiting the online estimate of target location, we first decide the locations where the mobile nodes should move into so that the reliable tracking can be guaranteed. Then, we assign different nodes to each location in order that the total energy cost in terms of moving distance can be minimized. Extensive simulations under various system settings are employed to evaluate the effectiveness of our solution.
['Yifei Qi', 'Peng Cheng', 'Jing Bai', 'Jiming Chen', 'Adrien Guenard', 'Ye-Qiong Song', 'Zhiguo Shi']
Energy-Efficient Target Tracking by Mobile Sensors With Limited Sensing Range
827,693
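The second subproblem in the tracking paper above — assigning mobile nodes to the chosen coverage locations so that the total moving distance is minimized — can be illustrated with a small brute-force sketch. This is not the authors' algorithm: the function names are invented, and exhaustive permutation search is viable only for a handful of nodes.

```python
from itertools import permutations
from math import hypot

def assign_nodes(nodes, spots):
    """Toy minimum-total-distance assignment of mobile nodes to
    target coverage spots. nodes, spots: equal-length lists of
    (x, y) tuples; returns (total_distance, tuple mapping node i
    to spot index)."""
    best_cost, best_map = float("inf"), None
    for perm in permutations(range(len(spots))):
        cost = sum(hypot(nodes[i][0] - spots[j][0],
                         nodes[i][1] - spots[j][1])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_map = cost, perm
    return best_cost, best_map
```

For example, two nodes at (0, 0) and (4, 0) facing spots (4, 1) and (0, 1) are each sent to the nearer spot, for a total distance of 2 rather than roughly 8.25 for the crossed assignment.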
Despite significant recent progress on approximating graph spanners (subgraphs which approximately preserve distances), there are still several large gaps in our understanding. We give new results for two of them: approximating basic k -spanner (particularly for small k ), and the dependence on f when approximating f -fault tolerant spanners. We first design an O( n 1/3 )-approximation for 4-spanner (both basic and directed). This was the last value of k for which only an O ([EQUATION])-approximation was known for basic k -spanner, and thus implies that for any k the approximation ratio is at most O ( n 1/3 ). For basic k -spanner, we also show an integrality gap for the natural flow-based LP (the main tool in almost all nontrivial spanner approximations) which nearly matches the trivial approximation of n [EQUATION]. For f -fault tolerant spanners, we show that in the small-stretch setting ( k ∈ {3, 4}) it is possible to entirely remove the dependence on f from the approximation ratio, at the cost of moving to bicriteria guarantees. The previous best dependence on f was either almost-linear (in the undirected setting) or exponential (in the directed setting for stretch 4).
['Michael Dinitz', 'Z. Zhang']
Approximating low-stretch spanners
627,765
A probabilistic model is developed to study the process of automatic global wiring for LSI and VLSI chips. The probability parameter for this model is related to the local utilization rate and the channel supply on each global cell boundary. This theoretical relationship is compared with a real example, which agrees well with the theoretical prediction. Using Monte Carlo methods to obtain numerical solutions from the model, the effects of search region size on global routing probability are studied. There seems to be little gain in going more than one or two global cells beyond the minimum rectangle to find a path, regardless of the length of the connection. This conclusion is supported by the observation that the routing probability does not "scale" very accurately as the dimensions of the problem are increased.
['D. Wallace', 'Lane A. Hemachandra']
Some Properties of a Probabilistic Model for Global Wiring
539,745
Language model (LM) adaptation is often achieved by combining a generic LM with a topic-specific model that is more relevant to the target document. Unlike previous work on unsupervised LM adaptation, in this paper we propose to leverage named entity (NE) information for topic analysis and LM adaptation. We investigate two topic modeling approaches, latent Dirichlet allocation (LDA) and clustering, and propose a new mixture topic model for LDA-based LM adaptation. Our experiments on N-best list rescoring have shown that this new adaptation framework using NE information and topic analysis outperforms the baseline generic N-gram LM on a state-of-the-art Mandarin recognition system.
['Yang Liu', 'Feifan Liu']
Unsupervised language model adaptation via topic modeling based on named entity hypotheses
127,916
This paper presents conceptual navigation and NavCon, an architecture that implements this navigation in World Wide Web pages. The NavCon architecture uses an ontology as metadata to contextualise the user's search for information. Conceptual navigation is a technique for browsing websites within a context. The context filters relevant retrieved information and drives the user's navigation through paths that meet their needs. Based on ontologies, NavCon automatically inserts conceptual links in web pages. These links let users access a graph representing concepts and their relationships. By browsing this graph, it is possible to reach documents associated with the user's desired ontology concept.
['Jose Renato Villela Dantas', 'Pedro Porfirio Muniz Farias']
Using NavCon for conceptual navigation in web documents
320,246
In this paper, we present a generative model based approach to solve the multi-view stereo problem. The input images are considered to be generated by either one of two processes: (i) an inlier process, which generates the pixels which are visible from the reference camera and which obey the constant brightness assumption, and (ii) an outlier process which generates all other pixels. Depth and visibility are jointly modelled as a hidden Markov Random Field, and the spatial correlations of both are explicitly accounted for. Inference is made tractable by an EM-algorithm, which alternates between estimation of visibility and depth, and optimisation of model parameters. We describe and compare two implementations of the E-step of the algorithm, which correspond to the Mean Field and Bethe approximations of the free energy. The approach is validated by experiments on challenging real-world scenes, of which two are contaminated by independently moving objects.
['Christoph Strecha', 'Rik Fransens', 'L. Van Gool']
Combined Depth and Outlier Estimation in Multi-View Stereo
91,911
This paper presents a multiagent systems model for constructing nurse rosters. Over the past decade self-rostering has become more favorable in nursing personnel scheduling, due to its empowerment and motivational benefits. However, the labor intensive negotiation procedure among participants has limited its application to medium sized and large wards. To overcome this limitation, we propose an automated negotiation tool utilizing economic-based negotiation mechanisms. We model the environment as a multiagent system. Nurses can indicate their preferences by configuring the preference profiles of the Nurse Agents, and the Nurse Agents collectively construct rosters through the negotiation with the management agents and among themselves. To support the design and implementation of automated self-rostering systems, we present a multiagent systems architecture, a structure of preference information, and a negotiation mechanism.
['Zhiguo Wang', 'Chun Wang']
Automating nurse self-rostering: A multiagent systems model
309,060
In this paper, the impact of increased distributed photovoltaic (PV) generation is analyzed on distribution test systems. The analyses show that voltage variation problems occur at different nodes of the distribution networks as the penetration level increases. However, proper selection of the dispersion level can improve the voltage profile and decrease losses in the distribution systems. Depending on the percentage of PV penetration and its degree of concentration, variations in irradiance may cause undesirable voltage fluctuations and may affect the operation of voltage regulating equipment.
['Naidji Mourad', 'Boudour Mohamed']
Impact of increased distributed photovoltaic generation on radial distribution networks
913,968
Modeling Uncertainty in Support Vector Surrogates of Distributed Energy Resources - Enabling Robust Smart Grid Scheduling
['Jörg Bremer', 'Sebastian Lehnhoff']
Modeling Uncertainty in Support Vector Surrogates of Distributed Energy Resources - Enabling Robust Smart Grid Scheduling
720,992
This paper describes a spectral method for rotating matrix form digital image data in a cartesian-coordinate organized array processor structure. The major elements of the method include: 1) an initial three step number-theoretic-transform (NTT) procedure to implement appropriate column and row data translations, and 2) a final non-spectral translation step within adjacent pixel groups. Digital array image data may be rotated about a selected pivot point through an arbitrary angle within the range ±45°. A comparison of the spectral method execution time with a non-spectral implementation indicates that the new method can provide a performance improvement factor of up to 5 depending upon the magnitude of the rotation angle and pixel word size.
['Thomas A. Kriz', 'Dale F. Bachman']
A number theoretic transform approach to image rotation in parallel array processors
42,341
Minimalist analyses typically treat quantifier scope interactions as being due to movement, thereby bringing constraints thereupon into the purview of the grammar. Here we adapt De Groote's continuation-based presentation of dynamic semantics to minimalist grammars. This allows for a simple and simply typed compositional interpretation scheme for minimalism.
['Gregory M. Kobele']
Importing montagovian dynamics into minimalism
592,776
A new error-resilient JPEG2000 wireless transmission scheme is proposed. The proposed scheme exploits the 'progressive by quality' structure of the JPEG2000 code-stream and takes into account the effect of channel errors at different quality layers in order to protect the coded bit-stream according to channel conditions using multi-rate low-density parity-check (LDPC) codes, leading to a flexible joint source-channel coding design. The novelty of this adaptive technique lies in its ability to truncate the less important source layers to accommodate optimal channel protection to more important ones to maximize received image quality. Results show that the proposed scheme facilitates considerable gains in terms of subjective and objective quality as well as decoding probability of the retrieved images.
['Abdullah Al Muhit', 'Teong Chee Chuah']
Robust Quality-Scalable Transmission of JPEG2000 Images over Wireless Channels Using LDPC Codes
888,161
A new soft decoding algorithm for linear block codes is proposed. The decoding algorithm works with any algebraic decoder and its performance is strictly the same as that of maximum-likelihood decoding (MLD). Since our decoding algorithm generates sets of different candidate codewords corresponding to the received sequence, its decoding complexity depends on the received sequence. We compare our decoding algorithm with Chase (1972) algorithm 2 and the Tanaka-Kakigahara (1983) algorithm, in which a similar method for generating candidate codewords is used. Computer simulation results indicate, for some signal-to-noise ratios (SNR), that our decoding algorithm requires less average complexity than the other two algorithms, while its performance is always superior to theirs.
['Toshimitsu Kaneko', 'Toshihisa Nishijima', 'Hiroshige Inazumi', 'Shigeichi Hirasawa']
An efficient maximum-likelihood-decoding algorithm for linear block codes with algebraic decoder
375,309
This paper challenges the popular notions of tacit and explicit organizational knowledge and argues that its philosophical underpinnings derived from Gilbert Ryle are problematic due to their logical behaviourist perspective. The paper articulates the philosophical problem as the neglect of any role for the mind in organizational activity and the representation of mental activity as purely a set of behaviours. An alternative realist philosophy is advanced taking into account the potential of adopting a number of competing philosophical perspectives. The paper forwards a realist theory of organizational knowledge that moves beyond the surface behaviours of tacit and explicit knowledge and argues that collective consciousness and organizational memory play primary and deeper roles as knowledge processes and structures. Consciousness is not a Hegelian world spirit but rather a real process embedded in people's brains and mental activity. Further, the paper argues that organizational routines provide the contingent condition or `spark' to activate organizational knowledge processes. The implications of this model are explored in relation to the measurement of intellectual capital. The theory developed in this paper represents the first attempt to provide a coherent philosophically grounded framework of organizational knowledge that moves organizational theory beyond neat conversion processes of tacit and explicit knowledge.
['Ashok Jashapara']
Moving beyond tacit and explicit distinctions: a realist theory of organizational knowledge
67,919
A Toolkit for Automatic Generation of Polygonal Maps - Las Vegas Reconstruction
['Thomas Wiemann', 'Andreas Nuechter', 'Joachim Hertzberg']
A Toolkit for Automatic Generation of Polygonal Maps - Las Vegas Reconstruction
738,811
A novel simple stator resistance estimation technique for high-performance induction motor drives is proposed. It makes use of a synchronously revolving reference frame aligned with the stator current vector, so that the resistance can be straightforwardly derived from the mathematical model of the induction motor. A sensorless direct field orientation scheme is employed to validate the proposed solution, with the drive operating in the critical area of low speeds. A combination of two observers is used: a Kalman filter observer to estimate the rotor flux, and a MRAS observer for speed estimation. The stator resistance estimator alleviates the usual performance degradation of MRAS-based drives at low speeds, caused by the thermal drift of stator resistance. Computer simulations, including realistic disturbances, show high effectiveness of the described approach.
['Rachid Beguenane', 'M. Ouhrouche', 'Andrzej M. Trzynadlowski']
A new scheme for sensorless induction motor control drives operating in low speed region
126,337
This paper deals with the problem of globally delay-dependent robust stabilization for Takagi–Sugeno (T–S) fuzzy neural networks with time delays and uncertain parameters. The time delays comprise discrete and distributed interval time-varying delays, and the uncertain parameters are norm-bounded. Based on the Lyapunov–Krasovskii functional approach and the linear matrix inequality technique, delay-dependent sufficient conditions are derived for ensuring the exponential stability of the closed-loop fuzzy control system. An important feature of the result is that all the stability conditions depend on the upper and lower bounds of the delays, which is made possible by the proposed techniques for achieving delay dependence. Another feature is that the results involve fewer matrix variables. Two illustrative examples demonstrate the effectiveness of the proposed design methods.
['Kuan-Hsuan Tseng', 'Jason Sheng Hong Tsai', 'Chien-Yu Lu']
A DELAY-DEPENDENT APPROACH TO ROBUST TAKAGI–SUGENO FUZZY CONTROL OF UNCERTAIN RECURRENT NEURAL NETWORKS WITH MIXED INTERVAL TIME-VARYING DELAYS
48,780
Background: The large number of completely sequenced genomes allows genomic context analysis to predict reliable functional associations between prokaryotic proteins. Major methods rely on the fact that genes encoding physically interacting partners or members of shared metabolic pathways tend to be proximate on the genome, to evolve in a correlated manner and to be fused as a single sequence in another organism.
['François Enault', 'Karsten Suhre', 'Jean-Michel Claverie']
Phydbac "Gene Function Predictor" : a gene annotation tool based on genomic context analysis
29,492
From its start using supercomputers, scientific computing constantly evolved to the next levels such as cluster computing, meta-computing, or computational Grids. Today, Cloud Computing is emerging as the paradigm for the next generation of large-scale scientific computing, eliminating the need of hosting expensive computing hardware. Scientists still have their Grid environments in place and can benefit from extending them by leased Cloud resources whenever needed. This paradigm shift opens new problems that need to be analyzed, such as integration of this new resource class into existing environments, applications on the resources and security. The virtualization overheads for deployment and starting of a virtual machine image are new factors which will need to be considered when choosing scheduling mechanisms. In this paper we investigate the usability of compute Clouds to extend a Grid workflow middleware and show on a real implementation that this can speed up executions of scientific workflows.
['Simon Ostermann', 'Radu Prodan', 'Thomas Fahringer']
Extending Grids with cloud resource management for scientific computing
514,723
We characterize the achievable three-dimensional tradeoff between diversity, multiplexing, and delay of the single antenna Automatic Retransmission reQuest (ARQ) Z-interference channel. Non-cooperative and cooperative ARQ protocols are adopted under these assumptions. Considering no cooperation exists, we study the achievable tradeoff of the fixed-power split Han-Kobayashi (HK) approach. Interestingly, we demonstrate that if the second user transmits the common part only of its message in the event of its successful decoding and a decoding failure at the first user, communication is improved over that achieved by keeping or stopping the transmission of both the common and private messages. Under cooperation, two special cases of the HK are considered for static and dynamic decoders. The difference between the two decoders lies in the ability of the latter to dynamically choose which HK special-case decoding to apply. Cooperation is shown to dramatically increase the achievable first user diversity.
['Mohamed S. Nafea', 'Doha Hamza', 'Karim G. Seddik', 'Mohammed Nafie', 'Hesham El Gamal']
On the ARQ protocols over the Z-interference channels: Diversity-multiplexing-delay tradeoff
318,280
The development and reuse of software engineering processes within an organization can be impeded by the lack of a solid process framework. An open process architecture provides a framework through the identification of architectural elements and the specification of element interfaces. This paper introduces one open process architecture and examines some architectural element interfaces.
['Barry W. Boehm', 'Steven Wolf']
An open architecture for software process asset reuse
58,188
Power Grid Design.
['Haihua Su', 'Sani R. Nassif']
Power Grid Design.
748,852
This paper presents an analysis of the extended Kalman filter formulation of simultaneous localisation and mapping (EKF-SLAM). We show that the algorithm produces very optimistic estimates once the "true" uncertainty in vehicle heading exceeds a limit. This failure is subtle and cannot, in general, be detected without ground-truth, although a very inconsistent filter may exhibit observable symptoms, such as disproportionately large jumps in the vehicle pose update. Conventional solutions - adding stabilising noise, using an iterated EKF or unscented filter, etc., - do not improve the situation. However, if "small" heading uncertainty is maintained, EKF-SLAM exhibits consistent behaviour over an extended time-period. Although the uncertainty estimate slowly becomes optimistic, inconsistency can be mitigated indefinitely by applying tactics such as batch updates or stabilising noise. The manageable degradation of small heading variance SLAM indicates the efficacy of submap methods for large-scale maps
['Tim Bailey', 'Juan I. Nieto', 'José E. Guivant', 'Michael E. Stevens', 'Eduardo Mario Nebot']
Consistency of the EKF-SLAM Algorithm
476,970
Computer networks exist to provide a communication medium for social networks, and information from social networks can help in estimating their communication needs. Despite this, current network management ignores the information from social networks. On the other hand, due to their limited and fluctuating bandwidth, mobile ad hoc networks are inherently resource-constrained. As traffic load increases, we need to decide when and how to throttle the traffic to maximize user satisfaction while keeping the network operational. The state-of-the-art for making these decisions is based on network measurements and so employs a reactive approach to deteriorating network state by reducing the amount of traffic admitted into the network. However, a better approach is to avoid congestion before it occurs, by (a) monitoring the network for early onset signals of congestive phase transition, and (b) predicting future network traffic using user and application information from the overlaying social network. We use machine learning methods to predict the amount of traffic load that can be admitted without transitioning the network to a congestive phase and to predict the source and destination of near future traffic load. These two predictions when fed into an admission control component ensure better management of constrained network resources while maximizing the quality of user experience.
['Akshay Vashist', 'Siun-Chuon Mau', 'Alexander Poylisher', 'Ritu Chadha', 'Abhrajit Ghosh']
Leveraging social network for predicting demand and estimating available resources for communication network management
509,252
A joint network coding and superposition coding (JNSC) scheme is proposed for information exchange between more than two users in a wireless relaying network. In this paper we consider two scenarios in a relaying network with four nodes: single and multiple information exchange loops between three source nodes, and two alternative transmission schemes, i.e. pure time division (PTD) and pure network coding (PNC), are also considered in order to compare with JNSC. The achievable rate regions of the PTD, PNC and JNSC schemes are all characterized, indicating the JNSC scheme is not always superior to the other two schemes. The sum rate optimization problem with a certain traffic pattern is also solved. We showed that the maximum coding gains of the JNSC and PNC schemes compared with the PTD scheme is achieved as the transmission rate of each node is one third of the sum rate in the network. Simulation results also reveal this phenomenon.
['Chun-Hung Liu', 'Ari Arapostathis']
Joint Network Coding and Superposition Coding for Multi-User Information Exchange in Wireless Relaying Networks
367,549
Sensing coverage is an important issue in wireless mobile sensor networks. The strategy of how to deploy sensor nodes in an environment, particularly in an unknown expanse, will affect the utility of the network just like the quality of communication. In this paper, using the concept of molecule spreading from physics, we present an efficient method for sensor deployment, assuming that global information is not available. Our algorithm, i.e., Self-Deployment by Density Control (SDDC), uses density control by each node to concurrently deploy sensor nodes. We make the nodes form clusters to achieve area density balance. The characteristics in SDDC are concurrent multisensor moving, distributed operation, localized calculation, and self-deployment. Simulations show its good performances compared to the incremental self-deployment algorithm.
['Ruay-Shiung Chang', 'Shuo-Hung Wang']
Self-Deployment by Density Control in Sensor Networks
52,850
This paper proposes an interactive attraction system, "Groveling on the Wall," which provides an experience of groveling on the walls/ceilings of a virtual building with a gravity illusion (Figure 1). To create a gravity illusion in a system that is safer than the real experience, we developed two physical mechanisms: the groveling interface and gradient controls of the user's body. We prepared 1) gradient controls of both the user's back and head, and 2) sudden fall control of the user's head. The groveling interface detects the rotations of four caterpillar belts for both legs and hands. The virtual world visualized in an HMD (head-mounted display) corresponds to the user's state. The proposed system gives the user a special sense of groveling on walls/ceilings, with direct fear of falling and preliminary fear of sliding down when the user tries to move forward.
['Kaede Ueno', 'Naoto Yoshida', 'Tomoko Yonezawa']
Groveling on the wall: interactive VR attraction using gravity illusion
951,126
For optimizations like physical synthesis and static timing analysis, efficient interconnect delay and slew computation is critical. Since one cannot often afford to run asymptotic waveform evaluation (Pillage and Rohrer, 1990), constant-time solutions are required. This work presents the first complete set of closed-form formulas for both delay and slew. Our metrics are derived by matching circuit moments to the lognormal distribution. From a single table, one can easily implement the metrics for delay and slew for both step and ramp inputs. Experiments validate the effectiveness of the metrics on nets from a real industrial design.
['Charles J. Alpert', 'Frank Liu', 'Chandramouli V. Kashyap', 'Anirudh Devgan']
Closed-form delay and slew metrics made easy
101,313
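The moment-matching idea in the delay/slew paper above can be sketched as follows. This is a hypothetical illustration, not the paper's published table: it fits a lognormal to the first two circuit moments and reads the 50% delay and 10-90% slew off the lognormal's quantiles, under assumed moment conventions.

```python
from math import log, exp, sqrt
from statistics import NormalDist

def lognormal_delay_slew(m1, m2):
    """Illustrative lognormal moment matching. Conventions assumed
    here: |m1| is the mean of the fitted impulse response and the
    second moment satisfies integral(t^2 h(t) dt) = 2*m2, so
    exp(mu + sigma^2/2) = |m1| and exp(2*mu + 2*sigma^2) = 2*m2."""
    sigma2 = log(2.0 * m2 / (m1 * m1))   # exp(sigma^2) = 2*m2 / m1^2
    mu = log(abs(m1)) - sigma2 / 2.0
    sigma = sqrt(sigma2)
    z90 = NormalDist().inv_cdf(0.9)      # ~1.2816
    delay_50 = exp(mu)                   # median of the lognormal
    slew_10_90 = exp(mu + z90 * sigma) - exp(mu - z90 * sigma)
    return delay_50, slew_10_90
```

The attraction of this family of metrics is visible in the code: once the two moments are available, delay and slew are a handful of closed-form arithmetic operations, with no waveform simulation.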
Histone proteins are often noted for their high degree of sequence conservation. It is less often recognized that the histones are a heterogeneous protein family. Furthermore, several classes of non-histone proteins containing the histone fold motif exist. Novel histone and histone fold protein sequences continue to be added to public databases every year. The Histone Database (http://genome.nhgri.nih.gov/histones/) is a searchable, periodically updated collection of histone fold-containing sequences derived from sequence-similarity searches of public databases. Sequence sets are presented in redundant and non-redundant FASTA form, hotlinked to GenBank sequence files. Partial sequences are also now included in the database, which has considerably augmented its taxonomic coverage. Annotated alignments of full-length non-redundant sets of sequences are now available in both web-viewable (HTML) and downloadable (PDF) formats. The database also provides summaries of current information on solved histone fold structures, post-translational modifications of histones, and the human histone gene complement.
['Steven A. Sullivan', 'Daniel W. Sink', 'Kenneth L. Trout', 'Izabela Makalowska', 'Patrick M. Taylor', 'Andreas D. Baxevanis', 'David Landsman']
The Histone Database
386,025
Between 2011 and 2013, the convenience store retail business grew dramatically in Thailand. As a result, most companies have increasingly adopted performance measurement systems, which often measure future business lagging measures poorly. To solve this problem, this research presents a hybrid predictive performance measurement system (PPMS) using a neuro-fuzzy approach based on particle swarm optimization (ANFIS-PSO). It is constructed from many leading aspects of convenience store performance measures and projects the competitive level of the future business lagging measure. To do so, monthly store performance measures were first congregated from the case study value chains. Second, data cleaning and preparation with headquarters accounting verification were carried out before constructing the proposed model. Third, these results were used as the learning dataset to derive a predictive performance measurement system based on ANFIS-PSO. The fuzzy value of each leading input was optimized by parallel-processing PSO before being fed to the neuro-fuzzy system. Finally, the model provides managers with next month's sales and expense performance via a desirability function (D_i). It boosted sales growth in 2012 by ten percent using the single PPMS, and the composite PPMS achieved the same growth rate for the store in the blind test (July 2013-February 2014). From the experimental results, it can be concluded that ANFIS-PSO delivers high-accuracy modeling, with much smaller error and computational time than an artificial neural network model and support vector regression, although component searching times differ significantly because of the complexity of each model.
['Pongsak Holimchayachotikul', 'Komgrit Leksakul']
Predictive performance measurement system for retail industry using neuro-fuzzy system based on swarm intelligence
650,943
Keyphrases for a document concisely describe the document using a small set of phrases. Keyphrases were previously shown to improve several document processing and retrieval tasks. In this work, we study keyphrase extraction from research papers by leveraging citation networks. We propose CiteTextRank, a graph-based algorithm for keyphrase extraction from research articles that incorporates evidence from both a document's content and the contexts in which the document is referenced within a citation network. Our model obtains significant improvements over the state-of-the-art models for this task. Specifically, on several datasets of research papers, CiteTextRank improves precision at rank 1 by as much as 9-20% over state-of-the-art baselines.
['Sujatha Das Gollapalli', 'Cornelia Caragea']
Extracting keyphrases from research papers using citation networks
761,913
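The graph-based core of the keyphrase work above can be conveyed with a generic TextRank-style scorer on a word co-occurrence graph. CiteTextRank additionally weights edges with evidence from citation contexts, which this content-only sketch omits; the window size and damping factor are conventional defaults, not the paper's settings.

```python
from collections import defaultdict

def textrank_scores(tokens, window=2, d=0.85, iters=50):
    """Score candidate keywords by iterating a PageRank-style update
    over an unweighted co-occurrence graph built from `tokens`."""
    nbrs = defaultdict(set)
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + window + 1]:
            if u != w:                     # no self-loops
                nbrs[w].add(u)
                nbrs[u].add(w)
    score = {w: 1.0 for w in nbrs}
    for _ in range(iters):                 # damped PageRank iterations
        score = {w: (1 - d) + d * sum(score[u] / len(nbrs[u])
                                      for u in nbrs[w])
                 for w in nbrs}
    return score
```

Words that co-occur with many well-connected words accumulate higher scores; top-scoring adjacent words are then merged into multi-word keyphrases in a post-processing step not shown here.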
Protein complexes play a key role in many biological processes. Various computational approaches have been developed to identify complexes from protein-protein interaction (PPI) networks. However, high false-positive rate of PPIs makes the identification challenging. In this paper, we propose a protein semantic similarity measure based on the ontology structure of Gene Ontology (GO) terms and GO annotations to estimate the reliability of interactions in PPI networks. Interaction pairs with low GO semantic similarity are removed from the network as unreliable interactions. Then, a cluster-expanding algorithm is applied to identify complexes with core-attachment structure on the filtered network. We have applied our method on three different yeast PPI networks. The effectiveness of our method is examined on two benchmark complex datasets. Experimental results show that our method outperforms other state-of-the-art approaches in most evaluation metrics. Removing interactions with low similarity significantly improves the performance of complex identification.
['Jian Wang', 'Dong Xie', 'Hongfei Lin', 'Zhihao Yang', 'Yijia Zhang']
Identifying Protein Complexes from PPI Networks Using GO Semantic Similarity
156,012
This article proposes a useful summary of tuning rules for controllers that have been developed for electric drives. The paper gives practical information about applying the tuning rules and discusses controller architecture and performance indices. These strategies are used to compare the performance and robustness of controller techniques, which are analyzed and designed to meet certain performance specifications.
['R. C. Dixon', 'O. P. Maltseva', 'Nikolay. V. Koyain', 'L.S. Udut']
Optimum controller design for electric drives technology
218,127
A broad range of applications has led to various wireless sensor networks (WSNs) with different design considerations. Limited battery power is one of the most challenging aspects of WSN protocol design, and, therefore, energy efficiency has long been the focus of research. One of the most common approaches for energy conservation is to alternate each sensor node between sleep and wake-up states. In this paper, we propose ADP, an adaptive energy efficient approach that meets the requirement of low energy consumption and, at the same time, considers the underlying dynamic traffic load. ADP enhances energy efficiency by dynamically adjusting sensor nodes' sleep and wake-up cycles. ADP utilizes a cost function intended to strike a balance between the conflicting goals of conserving energy (waking up as rarely as possible) and at the same time minimizing sensed events' reporting latency (waking up as frequently as possible). It also incorporates a feedback mechanism that constantly monitors residual energy level and the importance of the event to be reported, as well as predicts the next sensing event occurrence time. Simulation experiments with different traffic loads have shown that ADP improves energy efficiency while keeping latency low.
['Afraa Attiah', 'Muhammad Faisal Amjad', 'Omar Nakhila', 'Cliff C. Zou']
ADP: An adaptive feedback approach for energy-efficient wireless sensor networks
692,375
We consider different transmission options on the reverse link of cellular systems for packet data. The different transmission options are classified based on the nature of in-cell and out-of-cell interference power statistics. The categories are: (a) no in-cell interference, averaged out-of-cell interference; (b) no in-cell interference, bursty out-of-cell interference; and (c) averaged in-cell interference, averaged out-of-cell interference. Depending on whether the reverse link transmission is time-multiplexed, one-user-at-a-time transmission, or simultaneous transmission by multiple users with or without in-cell orthogonality, the interference structure falls into one of the above three categories. We analyze the throughput performance of the system in each of these cases when incremental redundancy is employed to combat uncertainty in the interference power. We compare the different options under an in-cell rise-over-thermal (IROT) constraint and provide some insights for reverse link design for next-generation cellular systems. Our results show that transmission option (a) with an optimal choice of the number of simultaneous transmissions within the cell has the best performance over several different scenarios. Time-multiplexed transmissions, despite the bursty out-of-cell interference power structure, have throughput comparable to that of a multiple-user orthogonal transmission system for small cells where mobiles have sufficient transmit power to meet the target IROT.
['Suman Das', 'Harish Viswanathan']
A comparison of reverse link access schemes for next-generation cellular systems
337,047
In this paper, we propose a novel approach based on the signal power gradient, by which a robot adaptively searches for a location-unknown sensor. While moving, the robot measures signal strength and estimates the direction of the power gradient, along which the robot moves in the next step. The correctness of the estimated direction is analyzed and the probability of correct direction is obtained. Since the robot continuously measures signal strength while moving, it can effectively overcome motion errors. Simulation results demonstrate that the robot can successfully reach the location-unknown sensor with probability close to one when the signal-to-noise ratio at the initial location is as low as 0 dB and the standard deviation of the motion error is 10% of the step size.
['Yi Sun', 'Jizhong Xiao', 'Xiaohai Li', 'Flavio Cabrera-Mora']
Adaptive Source Localization by a Mobile Robot Using Signal Power Gradient in Sensor Networks
502,037
Un regard lexico-scientométrique sur le défi EGC 2016.
['Guillaume Cabanac', 'Gilles Hubert', 'Hong Diep Tran', 'Cécile Favre', 'Cyril Labbé']
Un regard lexico-scientométrique sur le défi EGC 2016.
758,740
Pairwise clustering methods are able to handle relational data, in which a set of objects is described via a matrix of pairwise (dis)similarities. Here, we consider a cost function for pairwise clustering which maximizes model entropy under the constraint that the error for reconstructing objects from class information is fixed to a small value. Based on the analysis of structural transitions, we derive a new incremental pairwise clustering method which increases the number of clusters until a certain value of a Lagrange multiplier is reached. In addition, the calculation of phase transitions is used for speed-up. The incremental duplication of clusters helps to avoid local optima, and the stopping criterion automatically determines the number of clusters. The performance of the method is assessed on artificial and real-world data.
['Sambu Seo', 'Johannes Mohr', 'Klaus Obermayer']
A New Incremental Pairwise Clustering Algorithm
232,710
In this paper, the outage performance of downlink non-orthogonal multiple access (NOMA) is investigated for the case where each user feeds back only one bit of its channel state information (CSI) to the base station. Conventionally, opportunistic one-bit feedback has been used in fading broadcast channels to select only one user for transmission. In contrast, the considered NOMA scheme adopts superposition coding to serve all users simultaneously in order to improve user fairness. A closed-form expression for the common outage probability (COP) is derived, along with the optimal diversity gains under two types of power constraints. Particularly, it is demonstrated that the diversity gain under a long-term power constraint is twice as large as that under a short-term power constraint. Furthermore, we study dynamic power allocation optimization for minimizing the COP, based on one-bit CSI feedback. This problem is challenging, since the objective function is non-convex; however, under the short-term power constraint, we demonstrate that the original problem can be transformed into a set of convex problems. Under the long-term power constraint, an asymptotically optimal solution is obtained for high signal-to-noise ratio.
['Peng Xu', 'Yi Yuan', 'Zhiguo Ding', 'Xuchu Dai', 'Robert Schober']
On the Outage Performance of Non-Orthogonal Multiple Access With 1-bit Feedback
739,499
This paper focuses on the coordination of multiple robots with kinodynamic constraints along specified paths. The presented approach generates continuous velocity profiles that avoid collisions and minimize the completion time for the robots. The approach identifies collision segments along each robot's path and then optimizes the motions of the robots along their collision and collision-free segments. For each path segment for each robot, the minimum and maximum possible traversal times that satisfy the dynamics constraints are computed by solving the corresponding two-point boundary value problems. Then the collision avoidance constraints for pairs of robots can be combined to formulate a mixed integer nonlinear programming (MINLP) problem. Since this nonconvex MINLP model is difficult to solve, we describe two related mixed integer linear programming (MILP) formulations that provide schedules that are lower and upper bounds on the optimum; the upper bound schedule is a continuous velocity schedule. The approach is illustrated with robots modeled as double integrators subject to velocity and acceleration constraints. An implementation that coordinates 12 nonholonomic car-like robots is described.
['Jufeng Peng', 'Srinivas Akella']
Coordinating the motions of multiple robots with kinodynamic constraints
47,534
Demonstration of an Optical Chip-to-Chip Link in a 3D Integrated Electronic-Photonic Platform
['Krishna T. Settaluri', 'Sen Lin', 'Sajjad Moazeni', 'Erman Timurdogan', 'Chen Sun', 'Michele Moresco', 'Zhan Su', 'Yu-Hsin Chen', 'Gerald Leake', 'Douglas LaTulipe', 'Colin McDonough', 'Jeremiah Hebding', 'Douglas Coolbaugh', 'Michael R. Watts', 'Vladimir Stojanovic']
Demonstration of an Optical Chip-to-Chip Link in a 3D Integrated Electronic-Photonic Platform
664,672
The purpose of this track is to provide a forum to discuss the challenges posed by these technologies in the teaching profession and to offer practical answers for teachers to better make use of them (i.e., not only the technical mastery but also didactical strategies to promote students' learning). Nowadays, it is widely assumed that the teaching profession moves from teacher-centered instruction to student based learning through the use of interactive environments. It is therefore crucial for teachers (both pre-service and in-service) to be able to efficiently use these new tools. Teacher education programmes must then provide the means, models and frameworks to cope with the needs for the new forms of teaching and learning.
['Juanjo Mena', 'Maria Assunção Flores']
Teacher education research and the use of information and communication technologies
954,898
T-cell epitopes play vital roles in the immune response. Their recognition by T-cell receptors is a precondition for the activation of a T-cell clone. This recognition is antigen-specific. Therefore, identifying the pattern of MHC-restricted T-cell epitopes is of great importance for immunotherapy and vaccine design. In this paper, we designed a new kernel based on weighted cross-correlation coefficients for support vector machines and applied it to the direct prediction of T-cell epitopes. The experiment was carried out on an MHC class I restricted T-cell clone, LAU203-1.5. The results showed that this approach is efficient and promising.
['Jing Huang']
A New Kernel Based on Weighted Cross-Correlation Coefficient for SVMs and Its Application on Prediction of T-cell Epitopes
56,908
We have implemented sample sort and a parallel version of Quicksort on a cache-coherent shared address space multiprocessor: the SUN ENTERPRISE 10000. Our computational experiments show that parallel Quicksort outperforms sample sort. Sample sort has long been thought to be the best general parallel sorting algorithm, especially for larger data sets. On 32 processors of the ENTERPRISE 10000, the speedup of parallel Quicksort is more than six units higher than the speedup of sample sort, resulting in execution times that were more than 50% faster than those of sample sort. On one processor, parallel Quicksort achieved 15% faster execution times than sample sort. Moreover, because of its low memory requirements, parallel Quicksort could sort data sets twice the size that sample sort could under the same system memory restrictions.
['Philippas Tsigas', 'Yi Zhang']
A simple, fast parallel implementation of Quicksort and its performance evaluation on SUN Enterprise 10000
339,630
Peer-to-peer live streaming offers plenty of live television programs for users, and has become one of the most popular Internet applications. However, some ubiquitous problems such as long startup delay and unsmooth playback seriously restrict the quality of service of live streaming, whereas deploying dedicated servers immoderately suffers from excessive costs. In this paper, we introduce the economical-underloaded-emergent (EUE) principle to guide resource scheduling for live streaming systems based on a CDN-P2P hybrid architecture. Complying with this principle, we differentiate peers' chunk requests according to their playback deadlines and propose a set of mechanisms to provide distinct service for diverse requests. The results of simulation experiments demonstrate that the EUE principle effectively optimizes system performance, achieving a remarkable reduction in startup delay and an increase in chunk arrival ratio.
['Chao Hu', 'Ming Chen', 'Changyou Xing', 'Bo Xu']
EUE principle of resource scheduling for live streaming systems underlying CDN-P2P hybrid architecture
19,768
Hyperspectral imaging usually lacks spatial resolution due to limitations in the hardware design of imaging sensors. In contrast, the latest imaging sensors capture an RGB image with a resolution many times larger than that of a hyperspectral image. In this paper, we present an algorithm to enhance and upsample the resolution of hyperspectral images. Our algorithm consists of two stages: a spatial upsampling stage and a spectrum substitution stage. The spatial upsampling stage is guided by a high-resolution RGB image of the same scene, and the spectrum substitution stage utilizes sparse coding to locally refine the upsampled hyperspectral image through dictionary substitution. Experiments show that our algorithm is highly effective and outperforms state-of-the-art matrix factorization based approaches.
['Hyeokhyen Kwon', 'Yu-Wing Tai']
RGB-Guided Hyperspectral Image Upsampling
573,925
We present a unifying approach to the efficient evaluation of propositional answer-set programs. Our approach is based on backdoors, which are small sets of atoms that represent "clever reasoning shortcuts" through the search space. The concept of backdoors is widely used in the areas of propositional satisfiability and constraint satisfaction. We show how this concept can be adapted to the nonmonotonic setting and how it allows us to augment various known tractable subproblems, such as the evaluation of Horn and acyclic programs. In order to use backdoors we need to find them first. We utilize recent advances in fixed-parameter algorithmics to detect small backdoors. This implies fixed-parameter tractability of the evaluation of propositional answer-set programs, parameterized by the size of backdoors. Hence backdoor size provides a structural parameter similar to the treewidth parameter previously considered. We show that backdoor size and treewidth are incomparable, hence there are instances that are hard for one and easy for the other parameter. We complement our theoretical results with first empirical results.
['Johannes Klaus Fichte', 'Stefan Szeider']
Backdoors to tractable answer-set programming
390,911
Environmental risk assessment is often affected by severe uncertainty. The frequently invoked precautionary principle helps to guide risk assessment and decision-making in the face of scientific uncertainty. In many contexts, however, uncertainties play a role not only in the application of scientific models but also in their development. Building on recent literature in the philosophy of science, this paper argues that precaution should be exercised at the stage when tools for risk assessment are developed as well as when they are used to inform decision-making. The relevance and consequences of this claim are discussed in the context of the threshold of the toxicological concern approach in food toxicology. I conclude that the approach does not meet the standards of an epistemic version of the precautionary principle.
['Karim Bschir']
Risk, Uncertainty and Precaution in Science: The Threshold of the Toxicological Concern Approach in Food Toxicology
791,544
We present GUEB, a static tool for detecting use-after-free vulnerabilities in disassembled code. This tool has been evaluated on a real vulnerability in the ProFTPD application (CVE-2011-4130).
['Josselin Feist', 'Laurent Mounier', 'Marie-Laure Potet']
Statically Detecting Use After Free on Binary Code
268,867
An interesting and challenging problem in digital image forensics is the identification of the device used to acquire an image. Although the source imaging device can be retrieved by exploiting the file's header, e.g., EXIF, this information can be easily tampered with. This leads to the necessity of blind techniques to infer the acquisition device by processing the content of a given image. Recent studies concentrate on exploiting sensor pattern noise, or extracting a signature from a set of pictures. In this paper we compare two popular algorithms for blind camera identification. The first approach extracts a fingerprint from a training set of images by exploiting the camera sensor's defects. The second one is based on image feature extraction and assumes that images can be affected by color processing and transformations operated by the camera prior to storage. For the comparison we used two representative datasets of images acquired using consumer and mobile cameras, respectively. Considering both types of cameras, this study is useful to understand whether the theories designed for classic consumer cameras maintain their performance in the mobile domain.
['Giovanni Maria Farinella', 'Mario Valerio Giuffrida', 'Victor Digiacomo', 'Sebastiano Battiato']
On Blind Source Camera Identification
791,787
We present two decentralized iterative power control algorithms (IPCAs) for the uplink and downlink communication in multi-service CDMA wireless environments, in order to expand the system capacity while at the same time satisfying the various QoS requirements. The proposed dynamic IPCAs are decentralized in the sense that they can be implemented in a distributed mode at the individual cell sites and mobile stations based on local measurements and information, and without the need for coordination between the different cells. The main feature of these two power control algorithms is that they combine the allocation as well as the correction of power in a multi-service wireless system. The formulation of the power control problem as presented in this paper, as well as the proposed algorithms, are especially applicable in multi-service wireless environments that support several classes of applications, where each class has different quality of service requirements. Finally, we present some numerical results showing that the convergence of the algorithm is very fast and the powers approach the optimal values within a small number of iterations.
['G.V. Kotsakis', 'Symeon Papavassiliou', 'Panagiotis Demestichas']
Decentralized power control algorithms for multi-service CDMA-based cellular systems
317,257
Cluster servers are widely used in different types of systems. However, different types of applications require different types of resources from the cluster. Multimedia services heavily depend on the available communication bandwidth between server and clients. On the other hand, some scientific applications rely on CPU computation power. If a cluster server can support both types of application, resource utilization can be enhanced substantially. Unfortunately, when both types of applications compete for the network bandwidth, the QoS of the multimedia services decreases, since the data have a temporal dependency. The authors propose cooperative scheduling between these two types of application in a cluster server system such that, when the applications compete for communication bandwidth, the influence on the QoS of multimedia video services is minimized.
['Hon-Hing Wan', 'Xia Lin']
Cooperative scheduling for multimedia services and computation intensive applications for cluster server
202,015
Hybrid systems such as Cyber Physical Systems (CPS) are becoming more important with time. Apart from CPS, there are many hybrid systems in nature. To perform a simulation-based analysis of a hybrid system, a simulation framework is presented, named SAHISim. It is based on the most popular simulation interoperability standards, i.e., the High Level Architecture (HLA) and the Functional Mock-up Interface (FMI). Being a distributed architecture, it is able to execute on cluster, cloud and other distributed topologies. Moreover, as it is based on standards, it allows many different simulation packages to interoperate, making it a flexible and robust solution for simulation-based analysis. The underlying algorithm which enables the synchronization of different simulation components is discussed in detail. A test example is presented, whose results are compared to a monolithic simulation of the same model for verification of results.
['Muhammad Usman Awais', 'Wolfgang Gawlik', 'Gregor De-Cillia', 'Peter Palensky']
Hybrid simulation using SAHISim framework: a hybrid distributed simulation framework using waveform relaxation method implemented over the HLA and the functional mock-up interface
692,144
Object recognition may be a hard computer vision task under severe occlusion circumstances. This problem is efficiently solved in this paper through a new 3D recognition method for free-form objects. The technique uses the Depth Gradient Image Based on Silhouette representation (DGI-BS) and settles the identification-pose problem under occlusion and noise requirements. DGI-BS synthesizes both surface and contour information of an object, avoiding restrictions concerning layout and visibility. Object recognition is carried out by means of a simple matching algorithm in the DGI-BS space which yields a point-to-point correspondence between scene and model. The method has been successfully tested in real scenes under occlusion and noise circumstances.
['Pilar Merchán', 'Antonio Adán', 'Santiago Salamanca']
Identification and pose under severe occlusion in range images
219,941
Cancer is a complex disease in which a variety of phenomena interact over a wide range of spatial and temporal scales. In this article a theoretical framework will be introduced that is capable of linking together such processes to produce a detailed model of vascular tumour growth. The model is formulated as a hybrid cellular automaton and contains submodels that describe subcellular, cellular and tissue level features. Model simulations will be presented to illustrate the effect that coupling between these different elements has on the tumour's evolution and its response to chemotherapy.
['Helen M. Byrne', 'Markus R. Owen', 'Tomás Alarcón', 'Philip K. Maini']
Cancer disease: integrative modelling approaches
244,722
Intellectual property theft by reverse engineering leads to loss of revenue and threatens the security of integrated circuits. A number of design obfuscation techniques have been proposed to counter IC reverse engineering. Most such techniques rely on functional misdirection at the gate or module level but do not prevent leakage of visual information about design characteristics from the physical layout. For example, capitalizing on the relative difference in sizes among various functional units and their pin counts, an attacker may still be able to hypothesize about the function of a unit, irrespective of any obfuscation performed on them. Hence, we propose meta-obfuscation techniques to harden other obfuscation mechanisms against physical information leakage. Our aim is to add complexity in the visual domain so that the physical characteristics of a design do not have notable correlation to its functionality. We explore various metrics to quantify the quality of the proposed meta-obfuscation and apply them in design automation. Experimental results show improvement in the metrics used and also highlight the relevant overheads of meta-obfuscation.
['Vinay C. Patil', 'Arunkumar Vijayakumar', 'Sandip Kundu']
On meta-obfuscation of physical layouts to conceal design characteristics
926,138
Recently, people have paid more and more attention to how to effectively and efficiently analyze the results of regular physical examinations to provide the most helpful information for individual health management. In this paper, we design and develop an interactive virtual healthcare assistant system to help people, especially those who suffer from chronic diseases (e.g., metabolic syndrome), easily understand their health conditions and then manage them well. This system analyzes the results of regular physical examinations to evaluate health risk and provide personalized healthcare services for users in terms of diet and exercise guideline recommendations. We developed some interactive ways for users to easily feed their vital signs back to the system and quickly get suggestions for health management from the system. Besides the browser-based system, we also developed a mobile app that can regularly remind users to carry out the recommendations provided by the system. To prove the system is feasible in a real-world clinical environment, we also applied to the Institutional Review Board (IRB) for a human-subjects research study to validate this system. Other than the functional features, there are also several important non-functional features concerning extensibility and convenience of use. First, we use the physical examination results as the raw data to be analyzed, which is very convenient for users and has very low cost. Second, the system design is extensible, so it can be easily adjusted to work for any chronic illness, or even other kinds of diseases. Moreover, it can be extended to provide other kinds of healthcare guideline recommendations as well. These features constitute the main contributions of this work.
['Jerry C. C. Tseng', 'Bo-Hau Lin', 'Yu-Feng Lin', 'Vincent S. Tseng', 'Miin-Luen Day', 'Shyh-Chyi Wang', 'Kuen-Rong Lo', 'Yi-Ching Yang']
An interactive healthcare system with personalized diet and exercise guideline recommendation
655,462
Link scheduling is a fundamental design issue in multihop wireless networks. All existing link scheduling algorithms require precise information on the positions and/or communication/interference radii of all nodes. For practical networks, it is not only difficult or expensive to obtain these parameters, but also often impossible to get their precise values. The link scheduling determined by imprecise values of these parameters may fail to guarantee the same approximation bounds as the link scheduling determined by precise values. Therefore, the existing link scheduling algorithms lack performance robustness. In this paper, we propose a robust link scheduling, which can be easily computed with only the information on whether a given pair of links conflict or not, and is therefore robust. In addition, our link scheduling does not compromise the approximation bound and indeed sometimes achieves a better approximation bound. Particularly, under the 802.11 interference model, its approximation bound is 16 in general and 6 with uniform interference radii, an improvement over the respective best-known approximation bounds of 23 and 7.
['Peng-Jun Wan', 'Chao Ma', 'Zhu Wang', 'Boliu Xu', 'Minming Li', 'Xiaohua Jia']
Weighted wireless link scheduling without information of positions and interference/communication radii
163,709
We describe a new model of collaborative production called Commons-based peer production (CBPP). This model is frequently supported by digital platforms characterized by peer-to-peer relationships, resulting in the provision of common resources. Traditionally, it is associated with cases such as Wikipedia or Free Software, but we have recently observed an expansion into other areas. On the basis of extensive empirical work, we enquired: How does CBPP apply value? And how does value creation function in CBPP? We present an updated version of the meaning of value and sustain the relevance of this debate. After that, we propose how to measure value. We formulate what we call internal and external indicators of value. The first are linked to the internal performance of the CBPP and the second relate to its social value and reputation. Finally, we highlight the main features of value that we identified and discuss the limits that we found in developing and implementing the proposed diversity indicators.
['Mayo Fuster Morell', 'Jorge Salcedo', 'Marco Berlinguer']
Debate About the Concept of Value in Commons-Based Peer Production
870,760
LatticeRnn: Recurrent Neural Networks Over Lattices.
['Faisal Ladhak', 'Ankur Gandhe', 'Markus Dreyer', 'Lambert Mathias', 'Ariya Rastrow', 'Bjorn Hoffmeister']
LatticeRnn: Recurrent Neural Networks Over Lattices.
872,052