Columns: abstract (string, lengths 8 to 10.1k); authors (string, lengths 9 to 1.96k); title (string, lengths 6 to 367); __index_level_0__ (int64, 13 to 1,000k).
This paper introduces design fragments as a fundamental component of a design process for strong traceability in the design of concurrent systems. Design fragments represent reusable alternatives for the independent design of the communication requirements in a concurrent system. They are defined with formal relations to segments of communicating state machines, and are applied such that they satisfy the semantics of the communications defined by the specification. This paper introduces the concept of design fragments, the approach to developing and using them, along with an illustrative example.
['Joanne L. Boyd', 'Gerald M. Karam']
Using design fragments to modularize the design of communications in concurrent systems
20,402
SemaPlorer is an easy to use application that allows end users to interactively explore and visualize a very large, mixed-quality and semantically heterogeneous distributed semantic data set in real-time. Its purpose is to acquaint oneself with a city, touristic area, or other area a user is interested in. By visualizing the data using a map, media, and different context views, SemaPlorer advances beyond simple storage and retrieval of large numbers of triples, as the interaction with the large data set is driven by the user. SemaPlorer leverages different semantic data sources such as DBpedia, GeoNames, WordNet, and personal FOAF files. These make up a significant portion of the data provided for the Billion Triple Challenge. SemaPlorer intriguingly connects with a large Flickr data set converted to RDF. The storage infrastructure is based on Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service. We apply NetworkedGraphs as a conceptual layer on top of EC2, realizing a large, federated data infrastructure for semantically heterogeneous data sources from within and outside of the cloud. Therefore, the application is scalable with respect to the number of distributed components working together as well as the number of triples managed overall. Hence, SemaPlorer is flexible enough to be leveraged for exploring almost arbitrary additional data sources that might be added in the future. We conducted a formative evaluation of the SemaPlorer application with 20 test subjects. The results of this evaluation are analyzed and their implications for future work discussed. SemaPlorer won the first prize at the Billion Triple Challenge of the International Semantic Web Conference in Karlsruhe, 2008.
['Simon Schenk', 'Carsten Saathoff', 'Steffen Staab', 'Ansgar Scherp']
SemaPlorer-Interactive semantic exploration of data and media based on a federated cloud infrastructure
140,961
The paper deals with the rotational stability of a rigid body under constant internal forces. For this problem, first, the stiffness tensor is constructed and its basic properties are analyzed. The internal force parameterization is done with the use of the virtual linkage/spring model. Within this parameterization, necessary and sufficient conditions of stability are obtained in analytical form. In the space of the internal forces they form a region given by the intersection of a plane and a singular quadric. Since the stability conditions guarantee only positive definiteness of the stiffness tensor, contact friction is taken into account separately. The unilateral constraints are analyzed in a case study: an analytical example in which a stable grasp of a convex object is achieved with stretching internal forces created by friction.
['Mikhail M. Svinin', 'Makoto Kaneko', 'Toshio Tsuji']
Multi-arm/finger grasping: one view to the stability problem
405,762
In this paper we present a boundary search based ACO algorithm for solving nonlinear constrained optimization problems. The aim of this work is twofold. Firstly, we present a modified search engine which implements a boundary search approach based on a recently proposed ACO metaheuristic for continuous problems. Secondly, we propose the incorporation of the stochastic ranking technique to deal with feasible and infeasible solutions during the search, which focuses on the boundary region. In our experimental study we compare the overall performance of the proposed ACO algorithm when including two different complementary constraint-handling techniques: a penalty function and stochastic ranking. In addition, we include in our comparison the stochastic ranking algorithm, which was originally implemented using an evolution strategy as its search engine.
['Guillermo Leguizamón', 'Carlos A. Coello Coello']
A boundary search based ACO algorithm coupled with stochastic ranking
211,593
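The stochastic ranking technique referred to in the ACO abstract above is, in its original form (Runarsson and Yao), a bubble-sort-like ranking in which adjacent solutions are compared by objective value with probability Pf (or whenever both are feasible) and by constraint violation otherwise. The sketch below is a minimal, generic illustration of that ranking; the toy objective, violation function, and parameter values are assumptions, and it is not the paper's ACO search engine.

```python
import random

def stochastic_ranking(population, objective, violation, p_f=0.45, sweeps=None):
    """Rank candidate solutions with the stochastic ranking bubble-sort.
    With probability p_f, or when both candidates are feasible, adjacent
    solutions are compared by objective value; otherwise by violation."""
    ranked = list(population)
    n = len(ranked)
    sweeps = sweeps if sweeps is not None else n
    for _ in range(sweeps):
        swapped = False
        for i in range(n - 1):
            a, b = ranked[i], ranked[i + 1]
            both_feasible = violation(a) == 0 and violation(b) == 0
            if both_feasible or random.random() < p_f:
                do_swap = objective(a) > objective(b)   # minimize objective
            else:
                do_swap = violation(a) > violation(b)   # minimize violation
            if do_swap:
                ranked[i], ranked[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return ranked

# Toy usage: minimize f(x) = x**2 subject to x >= 1 (violation = max(0, 1 - x)).
candidates = [-0.5, 0.2, 0.9, 1.1, 1.6, 2.4]
order = stochastic_ranking(candidates,
                           objective=lambda x: x ** 2,
                           violation=lambda x: max(0.0, 1.0 - x))
print(order)  # feasible near-optimal values (close to 1) tend to rank first
```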
We study the connection capacity of a class of rearrangeable nonblocking (RNB) and strictly nonblocking (SNB) networks with/without crosstalk-free constraint, model their routing problems as weak or strong edge-colorings of bipartite graphs, and propose efficient routing algorithms for these networks using parallel processing techniques. This class of networks includes networks constructed from banyan networks by horizontal concatenation of extra stages and/or vertical stacking of multiple planes. We present a parallel algorithm that runs in O(lg^2 N) time for the RNB networks of complexities ranging from O(N lg N) to O(N^1.5 lg N) crosspoints and parallel algorithms that run in O(min{d* lg N, √N}) time for the SNB networks of O(N^1.5 lg N) crosspoints, using a completely connected multiprocessor system of N processing elements. Our algorithms can be translated into algorithms with an O(lg N lg lg N) slowdown factor for the class of N-processor hypercubic networks, whose structures are no more complex than a single plane in the RNB and SNB networks considered.
['Enyue Lu', 'S. Q. Zheng']
Parallel routing algorithms for nonblocking electronic and photonic switching networks
329,155
One of the most formidable issues in applying RL to real robot tasks is how to find a suitable state space, and this has become much more serious as recent robots tend to have more sensors and the environment, including other robots, becomes more complicated. In order to cope with this issue, this paper presents a method of self task decomposition for a modular learning system based on self-interpretation of instructions given by a coach. The proposed method is applied to a simple soccer situation in the context of RoboCup.
['Yasutake Takahashi', 'Tomoki Nishi', 'Minoru Asada']
Self task decomposition for modular learning system through interpretation of instruction by coach
887,390
Robot phase entrainment on quadruped CPG controller
['Vítor Matos', 'Cristina P. Santos']
Robot phase entrainment on quadruped CPG controller
629,504
A method for visualizing the function computed by a feedforward neural network is presented. It is most suitable for models with continuous inputs and a small number of outputs, where the output function is reasonably smooth, as in regression and probabilistic classification tasks. The visualization makes readily apparent the effects of each input and the way in which the functions deviate from a linear function. The visualization can also assist in identifying interactions in the fitted model. The method uses only the input-output relationship and thus can be applied to any predictive statistical model, including bagged and committee models, which are otherwise difficult to interpret. The visualization method is demonstrated on a neural network model of how the risk of lung cancer is affected by smoking and drinking.
['Tony A. Plate', 'J. Bert', 'John A. Grace', 'P. Band']
Visualizing the Function Computed by a Feedforward Neural Network
408,549
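The visualization described in the abstract above relies only on a model's input-output relationship. A minimal, hypothetical illustration of that idea is to sweep each input over its range while the remaining inputs are held at a reference point and to plot the resulting one-dimensional output profiles; the quadratic black_box_model, the reference point, and the input ranges below are made-up stand-ins for a fitted network.

```python
import numpy as np
import matplotlib.pyplot as plt

def black_box_model(x):
    """Stand-in for a fitted predictor (e.g. a neural network): takes an
    array of shape (n_samples, 2) and returns predicted risk scores."""
    smoking, drinking = x[:, 0], x[:, 1]
    return 0.4 * smoking + 0.1 * drinking + 0.05 * smoking * drinking

def input_sweeps(model, reference, ranges, n_points=50):
    """For each input, vary it over its range with the other inputs fixed
    at the reference point, and return the 1-D output profiles."""
    profiles = {}
    for j, (lo, hi) in enumerate(ranges):
        grid = np.linspace(lo, hi, n_points)
        X = np.tile(reference, (n_points, 1))
        X[:, j] = grid
        profiles[j] = (grid, model(X))
    return profiles

reference = np.array([1.0, 1.0])   # hypothetical average smoking / drinking levels
profiles = input_sweeps(black_box_model, reference, ranges=[(0, 4), (0, 4)])
for j, (grid, y) in profiles.items():
    plt.plot(grid, y, label=f"input {j}")
plt.xlabel("input value"); plt.ylabel("model output"); plt.legend()
plt.show()
```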
Mobile phone data can provide rich information on human activities and their social relationships, which are dynamic in nature. Analysis of such social networks emerging from phone calls of mobile users can be useful in many aspects. In this paper we report the methods and results from a case study on the analysis of a social network from mobile phone data. The analysis involves tracking the dynamics of the network, identifying key individuals and their close associates, and identifying individuals having communication patterns similar to the key individuals. We introduce novel measures to quantify the evolution of the network, the significance of an individual, and the social association of an individual. In order to group individuals having similar communication patterns, we applied the recently proposed online clustering approach called eClustering (evolving clustering) due to its adaptive nature and low computational overhead. The results show the pertinence of the proposed quantification measures to the analysis of evolving social networks.
['Rashmi Dutta Baruah', 'Plamen Angelov']
Evolving social network analysis: A case study on mobile phone data
390,105
Haptic interfaces enable us to interact with a virtual world using our sense of touch. This paper presents a method for realizing haptic interaction with water. Our method displays forces acting on rigid objects due to water at a high frame rate (500 Hz). To achieve this, we present a fast method for simulating the dynamics of water. We decompose the dynamics into two parts. One is a linear flow expressed by a wave equation used to compute water waves. The other is a more complex and non-linear flow around the object. The fluid forces due to the non-linear flow are precomputed by solving the Navier-Stokes equations, and stored in a database, named the Fluid Resistance Map. The precomputed non-linear flow and the linear flow are combined to compute the forces due to water.
['Yoshinori Dobashi', 'Makoto Sato', 'Shoichi Hasegawa', 'Tsuyoshi Yamamoto', 'Mitsuaki Kato', 'Tomoyuki Nishita']
A fluid resistance map method for real-time haptic interaction with fluids
366,836
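The decomposition in the fluid-haptics abstract above (a fast linear wave simulation plus precomputed nonlinear fluid forces looked up from a database) can be sketched roughly as follows. The 1-D wave update, the drag-like force table, and all constants are illustrative assumptions, not the authors' 3-D solver or their Fluid Resistance Map data.

```python
import numpy as np

# --- Linear part: explicit finite-difference update of the 1-D wave equation ---
N, dx, dt, c = 200, 0.01, 0.001, 1.0        # grid size, spacing, step, wave speed
h_prev = np.zeros(N)                        # wave height at t - dt
h_curr = np.zeros(N)                        # wave height at t
h_curr[N // 2] = 0.01                       # small initial disturbance

def wave_step(h_curr, h_prev):
    lap = np.zeros_like(h_curr)
    lap[1:-1] = h_curr[2:] - 2 * h_curr[1:-1] + h_curr[:-2]
    h_next = 2 * h_curr - h_prev + (c * dt / dx) ** 2 * lap
    h_next[0] = h_next[-1] = 0.0            # fixed boundaries
    return h_next

# --- Nonlinear part: precomputed force lookup (placeholder "resistance map") ---
speed_samples = np.linspace(0.0, 2.0, 32)        # relative object speed (m/s)
force_samples = 0.5 * speed_samples ** 2          # hypothetical drag-like forces (N)

def lookup_force(relative_speed):
    """Interpolate the precomputed nonlinear force for the current state."""
    return np.interp(relative_speed, speed_samples, force_samples)

# --- Combined per-frame force on the object (target: high haptic frame rate) ---
object_speed = 0.8
for step in range(500):
    h_next = wave_step(h_curr, h_prev)
    h_prev, h_curr = h_curr, h_next
    linear_force = 1000.0 * h_curr[N // 2]   # buoyancy-like term (hypothetical scaling)
    total_force = linear_force + lookup_force(object_speed)
print(total_force)
```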
Ontologies and historical archives: A way to tell new stories
['Annamaria Goy', 'Diego Magro', 'Marco Rovera']
Ontologies and historical archives: A way to tell new stories
583,997
Managing the evolution of the Enterprise Architecture (EA) is a key challenge for modern enterprises. Current approaches to address this challenge focus on EA plans, indicating projected future states of the architecture. Nevertheless, these plans neglect the role of the information technology (IT) project, which actually performs the transformation from the current to a planned EA. In this paper, we account for the importance of IT projects as drivers of EA transformation by providing a viewpoint for roadmapping the development of the EA over time. Complementing this, we further introduce a conceptual model, which explicates the information demands for such roadmap plans.
['Sabine Buckl', 'Alexander M. Ernst', 'Florian Matthes', 'Christian M. Schweda']
Visual Roadmaps for Managed Enterprise Architecture Evolution
382,086
Empirical evidence suggests that most nodes in BitTorrent (BT) participate in multiple torrents, but surprisingly little research exists on this topic. Here, we focus on a multi-torrent system, and specifically on (a) what incentives could be provided for nodes to contribute resources as seeds in a multi-torrent environment, and (b) what are the resulting performance consequences. We show why the current BT lacks incentives for nodes to stay as seeds. Motivated by that, we propose a cross torrent based method to encourage nodes to stay as seeds and present an extensive performance study to illustrate the benefits of our approach.
['Yan Yang', 'Alix L. H. Chow', 'Leana Golubchik']
Multi-torrent: a performance study and applications
243,052
Manufacturing systems are getting more and more distributed, mainly because of the growing interconnection through networks of machines, manufacturing lines, cells, plants or enterprises. Agent technology has proven to be a solution to manage distributed systems, reinforcing the importance of information and communication, and focusing attention on the interoperability between the different parts. In this paper, the interoperability of a distributed manufacturing system is addressed, in an application employing software agents faced with heterogeneous resources. The approach consists of unifying all the data exchanged by the agents in a common framework, and follows first the design of all the information in a meta-model, then the construction of an ontology, and finally the specification of a common description language.
['Daniel Diep', 'Christos E. Alexakos', 'Thomas Wagner']
An ontology-based interoperability framework for distributed manufacturing control
244,900
Klassifikation von Standardebenen in der 2D-Echokardiographie mittels 2D-3D-Bildregistrierung
['Christoph Bergmeir', 'Navneeth Subramanian']
Klassifikation von Standardebenen in der 2D-Echokardiographie mittels 2D-3D-Bildregistrierung
239,113
In this letter, we investigated an inversion technique to estimate snow water equivalence (SWE) under Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) sensor configurations. Through our numerical simulations by the advanced integral equation model (AIEM), we found that the ground surface emission signals at 18.7 and 36.5 GHz were highly correlated regardless of the ground surface properties (dielectric and roughness properties) and can be well described by a linear function. It leads to a new development for describing the relationship between snow emission signals observed at 18.7 and 36.5 GHz as a linear function. The intercept (A) and slope (B) of this linear equation depend only on snow properties and can be estimated from the observations directly. This development provides a new technique that separates the snowpack and ground surface emission signals. With the parameterized snow emission model from a simulated database that was derived using a multiscattering microwave emission model (dense medium radiative transfer model-AIEM-matrix doubling) over dry snow covers, we developed an algorithm to estimate the SWE using the microwave radiometer measurements. Evaluations on this technique using both the model simulated data and the field experimental data with the airborne Polarimetric Scanning Radiometer data from National Aeronautics and Space Administration Cold Land Processes Experiment 2003 showed promising results, with root-mean-square errors of 32.8 and 31.85 mm, respectively. This newly developed inversion method has the advantages over the AMSR-E SWE baseline algorithm when applied to high-resolution airborne observations.
['Lingmei Jiang', 'Jiancheng Shi', 'Saibun Tjuatja', 'Kun Shan Chen', 'Jinyang Du', 'Lixin Zhang']
Estimation of Snow Water Equivalence Using the Polarimetric Scanning Radiometer From the Cold Land Processes Experiments (CLPX03)
273,991
Smartphone users might be interrupted while interacting with an application, either by intended or unintended circumstances. In this paper, we report on a large-scale observational study that investigated mobile application interruptions in two scenarios: (1) intended back and forth switching between applications and (2) unintended interruptions caused by incoming phone calls. Our findings reveal that these interruptions rarely happen (at most 10% of the daily application usage), but when they do, they may introduce a significant overhead (can delay completion of a task by up to 4 times). We conclude with a discussion of the results, their limitations, and a series of implications for the design of mobile phones.
['Luis A. Leiva', 'Matthias Böhmer', 'Sven Gehring', 'Antonio Krüger']
Back to the app: the costs of mobile application interruptions
267,084
Tschüss, Hans! (Ein etwas anderer Nachruf)
['Alois Potton']
Tschüss, Hans! (Ein etwas anderer Nachruf)
77,418
We study the randomness needed for approximating the output distribution of a multiple-access channel, where the original input processes are independent of each other. The approximation is achieved by simulating (possibly alternative) input processes at each of the entries, where the sources of randomness available for the simulators are independent of each other, and the simulators do not cooperate. The resolvability region of a multiple-access channel is defined as the set of all random-bit rate pairs at which accurate output approximation is possible, where the simulation accuracy is measured by the variational distance between finite-dimensional output distributions. Inner and outer bounds on the resolvability region are derived, and close relations between the concepts of resolvability region and capacity region are demonstrated.
['Yossef Steinberg']
Resolvability theory for the multiple-access channel
228,183
This paper reports a cell system that fabricates hepatic lobule-like microtissue (HLLM) based on alginate-calcium electrodeposition and a cell microcapsule method. The properties of the fabricated HLLM include: 1) a shape similar to native hepatic lobule tissue; 2) high cell density (∼10^8); 3) it can easily be moved and retrieved for further 3D assembly. Therefore, we propose a repetitive single-step manipulation method to assemble these single HLLMs into a 3D multilayer hepatic lobule model. The pick-up success rate was approximately 90%. The assembly speed was about 0.4 microtissue/min and the cell viability was higher than 80%. Unlike other assembly methods, our method is sufficiently repeatable and easy to use for assembling many single micro modules. The fabricated 3D hepatic lobule model with high cell density is essential for tissue engineering applications.
['Zeyang Liu', 'Masaru Takeuchi', 'Masahiro Nakajima', 'Toshio Fukuda', 'Yasuhisa Hasegawa', 'Qiang Huang']
Assembly of hepatic lobule-like microtissue with repetitive single-step contact manipulation
988,319
Reasoning about Knowledge in Philosophy: The Paradigm of Epistemic Logic.
['William J. Rapaport', 'Jaakko Hintikka']
Reasoning about Knowledge in Philosophy: The Paradigm of Epistemic Logic.
698,200
We present a finite-volume scheme for compressible Euler flows where the grid is Cartesian and does not fit the body. The scheme, based on the definition of an ad hoc Riemann problem at solid boundaries, is simple to implement and formally second order accurate. Error convergence rates with respect to several exact test cases are investigated and examples of flow solutions in one, two and three dimensions are presented.
['Yannick Gorsse', 'Angelo Iollo', 'Haysam Telib', 'Lisl Weynans']
A simple second order cartesian scheme for compressible Euler flows
529,081
A scalable bit matrix machine model was proposed by us previously (G. Vesztergombi et al., 1997). Now we extend this IRAM type model for numerical calculations. The k digit numbers are represented in a bit parallel way, thus the number of rows in the memory is scaled up correspondingly, which means that the number of 1 bit CPUs is also increased proportionally. Still relying on the simple string communication interprocessor network, we prove that loading and multiplication of n×n matrices are executable in O(n) time. Speed estimates are calculated using emulation on the 8192 processor CERN-ASTRA machine.
['G. Vesztergombi', 'Géza Ódor', 'F. Rohrbach', 'Geza Varga']
Scalable matrix multiplication algorithm for IRAM architecture machine
285,932
We have developed a Web based integrated platform for the identification of genomic islands in which various measures that capture bias in nucleotide compositions have been implemented, viz., GC content (both at the whole genome and at three codon positions in genes), genomic signature, k-mer distribution (k=2-6), codon usage bias and amino acid usage bias. The analysis carried out in sliding windows (default size 10 Kb) is compared with the genomic average for each measure. The output is displayed in a tabular format for each window which may be filtered if the values of the measures differ by 1.5 s (standard deviations) from the genomic average.
['Ruchi Jain', 'Sandeep K. Ramineni', 'Nita Parekh']
Integrated Genomic Island Prediction Tool (IGIPT)
305,407
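A minimal sketch of the sliding-window screening described in the IGIPT abstract above, restricted to the GC-content measure: windows whose GC content deviates from the genome-wide average by more than 1.5 standard deviations are flagged. The toy genome and the flag_windows helper are assumptions for illustration only.

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def flag_windows(genome, window=10_000, step=10_000, threshold_sd=1.5):
    """Return (start, end, gc) for windows whose GC content deviates from the
    genomic mean by more than threshold_sd standard deviations."""
    windows = [(i, min(i + window, len(genome)))
               for i in range(0, len(genome), step)]
    gcs = [gc_content(genome[s:e]) for s, e in windows]
    mean = sum(gcs) / len(gcs)
    sd = (sum((g - mean) ** 2 for g in gcs) / len(gcs)) ** 0.5
    return [(s, e, g) for (s, e), g in zip(windows, gcs)
            if sd > 0 and abs(g - mean) > threshold_sd * sd]

# Toy usage: an AT-rich genome with one GC-rich island.
genome = "AT" * 50_000 + "GC" * 5_000 + "AT" * 50_000
for start, end, gc in flag_windows(genome):
    print(f"candidate island {start}-{end}: GC = {gc:.2f}")
```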
Stream Volume Prediction in Twitter with Artificial Neural Networks.
['Gabriela Dominguez', 'Juan Zamora', 'Miguel Guevara', 'Héctor Allende', 'Rodrigo Salas']
Stream Volume Prediction in Twitter with Artificial Neural Networks.
788,457
Diamond STING is a new version of the STING suite of programs for a comprehensive analysis of a relationship between protein sequence, structure, function and stability. We have added a number of new functionalities by both providing more structure parameters to the STING Database and by improving/expanding the interface for enhanced data handling. The integration among the STING components has also been improved. A new key feature is the ability of the STING server to handle local files containing protein structures (either modeled or not yet deposited to the Protein Data Bank) so that they can be used by the principal STING components: JavaProtein Dossier (JPD) and STING Report. The current capabilities of the new STING version and a couple of biologically relevant applications are described here. We have provided an example where Diamond STING identifies the active site amino acids and folding essential amino acids (both previously determined by experiments) by filtering out all but those residues by selecting the numerical values/ranges for a set of corresponding parameters. This is the fundamental step toward a more interesting endeavor—the prediction of such residues. Diamond STING is freely accessible at http://sms.cbi.cnptia.embrapa.br and http://trantor.bioc.columbia.edu/SMS.
['Goran Neshich', 'Luiz C. Borro', 'Roberto H. Higa', 'Paula R. Kuser', 'Michel Eduardo Beleza Yamagishi', 'Eduardo Franco', 'João N. Krauchenco', 'Renato Fileto', 'André Afonso Ribeiro', 'George Barreto Bezerra', 'Thiago M. Velludo', 'Tomás S. Jimenez', 'Noboru Furukawa', 'Hirofumi Teshima', 'Koji Kitajima', 'K. Abdulla Bava', 'Akinori Sarai', 'Roberto C. Togawa', 'Adauto L. Mancini']
The Diamond STING server
135,549
This work presents a new deformation algorithm for computer graphics and animation applications. The algorithm is motivated by the mass-spring systems and the ChainMail technique and possesses advantages of both methods. It uses a one-step approach to find the deformation, and therefore is very fast. Simulation parameters can easily be chosen in order to represent deformation characteristics. The algorithm is applicable to various simulation applications and produces very promising results.
['Alpaslan Duysak', 'Jian J. Zhang']
Fast simulation of deformable objects
329,482
In this paper, we present a blackout and brownout insensitive energy harvesting ASIC for ultra-low power autonomous sensor applications. It utilizes either RF or DC power inputs to deliver a regulated voltage for on-chip and off-chip devices. A voltage limiter has been integrated for overvoltage protection. As the ASIC utilizes two regulators alternately, a kick-start method has been used for switching the operation mode. The ASIC is implemented in a 0.18 μm CMOS process. It is able to use a wide range of input DC voltages (0.8…2 V at −40…85 °C) or input AC powers down to −14 dBm.
['Jarno Salomaa', 'Mika Pulkkinen', 'Tuomas Haapala', 'Shailesh Singh Chouhan', 'Kari Halonen']
Energy harvesting ASIC for autonomous sensors
880,218
Scaling Entity Resolution to Large, Heterogeneous Data with Enhanced Meta-blocking.
['George Papadakis', 'George Papastefanatos', 'Themis Palpanas', 'Manolis Koubarakis']
Scaling Entity Resolution to Large, Heterogeneous Data with Enhanced Meta-blocking.
751,734
One of TCP's critical tasks is to determine which packets are lost in the network, as a basis for control actions (flow control and packet retransmission). Modern TCP implementations use two mechanisms: timeout, and fast retransmit. Detection via timeout is necessarily a time-consuming operation; fast retransmit, while much quicker, is only effective for a small fraction of packet losses. In this paper we consider the problem of packet loss detection in TCP more generally. We concentrate on the fact that TCP's control actions are necessarily triggered by inference of packet loss, rather than conclusive knowledge. This suggests that one might analyze TCP's packet loss detection in a standard inferencing framework based on probability of detection and probability of false alarm. This paper makes two contributions to that end: first, we study an example of more general packet loss inference, namely optimal Bayesian packet loss detection based on round trip time. We show that for long-lived flows, it is frequently possible to achieve high detection probability and low false alarm probability based on measured round trip time. Second, we construct an analytic performance model that incorporates general packet loss inference into TCP. We show that for realistic detection and false alarm probabilities (as are achievable via our Bayesian detector) and for moderate packet loss rates, the use of more general packet loss inference in TCP can improve throughput by as much as 25%.
['Nahur Fonseca', 'Mark Crovella']
Bayesian packet loss detection for TCP
274,278
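The Bayesian round-trip-time detector studied in the abstract above can be illustrated with a minimal sketch, assuming (hypothetically) Gaussian RTT models under "no loss" and "loss" conditions and a fixed prior loss rate; the paper's actual likelihood models and decision rule may differ. Raising the decision threshold trades detection probability for a lower false-alarm probability.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_loss(rtt, prior_loss=0.02,
                   mu_ok=100.0, sd_ok=10.0,      # hypothetical RTT model, no loss (ms)
                   mu_loss=180.0, sd_loss=40.0): # hypothetical RTT model, loss (ms)
    """Posterior probability that the packet was lost, given one RTT sample."""
    like_loss = gaussian_pdf(rtt, mu_loss, sd_loss) * prior_loss
    like_ok = gaussian_pdf(rtt, mu_ok, sd_ok) * (1 - prior_loss)
    return like_loss / (like_loss + like_ok)

def decide(rtt, threshold=0.5):
    """Declare 'lost' when the posterior exceeds the threshold."""
    return posterior_loss(rtt) > threshold

for sample in (95.0, 130.0, 210.0):
    print(sample, decide(sample), round(posterior_loss(sample), 3))
```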
In the paper a Lyapunov matrices approach to the parametric optimization problem of a time delay system with a PI-controller is presented. The value of a quadratic performance index of quality is equal to the value of the Lyapunov functional at the initial state of a time delay system. The Lyapunov functional is determined by means of the Lyapunov matrix.
['Józef Duda']
Lyapunov matrices approach to the parametric optimization of a time delay system with a PI controller
896,931
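As a hedged illustration of the identity stated above (the quadratic performance index equals a Lyapunov functional evaluated at the initial state), the delay-free analogue is the classical result below; the time-delay case treated in the paper replaces the matrix P by a Lyapunov-Krasovskii functional built from the Lyapunov matrix, which is not reproduced here.

```latex
% Delay-free analogue: for an asymptotically stable system \dot{x}(t) = A x(t),
% the quadratic performance index equals a Lyapunov function at the initial state.
\[
  J \;=\; \int_{0}^{\infty} x^{\top}(t)\, Q\, x(t)\,\mathrm{d}t
    \;=\; x^{\top}(0)\, P\, x(0),
  \qquad
  A^{\top} P + P A = -Q, \quad Q = Q^{\top} \succ 0 .
\]
```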
A simple method for fabricating a constant phase element (CPE) is discussed. The dependence of the phase angle on several physical parameters has also been elaborated. Finally, a fractional-order differentiator circuit has been constructed using the CPE, and its performance has been compared with simulated results.
['Karabi Biswas', 'Siddhartha Sen', 'Pranab K. Dutta']
Realization of a Constant Phase Element and Its Performance Study in a Differentiator Circuit
457,491
Hierarchical data structures are common in modern applications. Tree integration is one of the tools that is not fully researched in this scope. Therefore in this paper we define a complex tree to model common hierarchical structures. The aim of complex tree integration is determined by specific integration criteria. In this paper we define and analyze a criterion measuring generalization of knowledge --- upper semantic precision. We analyze the criterion in terms of simpler syntactic criteria and describe an extended example of an information retrieval system using this criterion.
['Marcin Maleszka']
Knowledge Generalization during Hierarchical Structures Integration
352,809
This paper demonstrates the design and implementation of a real-time wireless physiological monitoring system for nursing centers, whose function is to monitor online the physiological status of aged patients via a wireless communication channel and a wired local area network. The collected data, such as body temperature, blood pressure, and heart rate, can then be stored in the computer of a network management center to facilitate the medical staff in a nursing center to monitor in real time or analyze in batch mode the physiological changes of the patients under observation. Our proposed system is bidirectional, has low power consumption, is cost effective, has a modular design, has the capability of operating independently, and can be used to improve the service quality and reduce the workload of the staff in a nursing center.
['Bor-Shing Lin', 'Bor-Shyh Lin', 'Nai-Kuan Chou', 'Fok-Ching Chong', 'Sao-Jie Chen']
RTWPMS: A Real-Time Wireless Physiological Monitoring System
15,006
This paper deals with the interpretation and processing of database queries with preference conditions of the form “attribute is low (resp. medium, high)” in the situation where the user is not aware of the actual content of the database but still wants to retrieve the best possible answers (relatively to that content). An approach to the definition of the terms “low”, “medium” and “high” in a contextual and relative manner is introduced. A processing algorithm aimed at efficiently retrieving the top-k answers to such a query is also outlined.
['Patrick Bosc', 'O. Pivert', 'Amine Mokhtari']
On fuzzy queries with contextual predicates
377,302
Sensitivity to spatial details drops across the visual periphery, and hence video streaming systems that gracefully degrade quality away from the viewpoint of the observer provide an optimal viewing experience with potentially large bitrate savings. As reaction latency is an important performance parameter of such systems, good prediction of future gaze locations at the transmission end is very important. A major research question here is whether a gaze prediction model designed using a pristine undistorted video is also able to predict the gaze pattern of users when they watch a distorted/adaptively distorted version of the same video. With several improvements to existing gaze prediction schemes, in combination with a controlled subjective experiment, we confirm not only that HEVC coding distortions have no significant impact on the predictability of gaze patterns, but also that gaze prediction errors can be restricted to 1.5 degrees of viewing angle for a round trip delay of up to 200 ms.
['Yashas Rai', 'Patrick Le Callet', 'Gene Cheung']
Role of HEVC coding artifacts on gaze prediction in interactive video streaming systems
868,220
The Generalized Sidelobe Canceler (GSC) is a beamforming scheme which is applied in many fields such as audio, RADAR, SONAR and telecommunications. Recently, the adaptive Reduced Rank GSC (RR-GSC) has been proposed for applications with a large number of sensors. Due to its dimensionality reduction step, the adaptive RR-GSC achieves an enhanced performance in comparison with the standard GSC. However, both the standard GSC and the RR-GSC have their performance drastically degraded in the presence of colored noise. In this paper, we propose to extend further the GSC and the RR-GSC for colored noise scenarios. As shown in this paper, such improvement in colored noise scenarios can be obtained by incorporating a stochastic or a deterministic prewhitening step in the GSC and RR-GSC algorithms. Since the prewhitening increases the computational complexity, a block-wise reduced rank stochastic gradient GSC beamformer is also proposed. The block-wise step allows only one prewhitening step per block, while in the previous schemes one per sample was needed. Another proposed advance in colored noise scenarios is the incorporation of the Vandermonde Invariance Transform (VIT). The VIT works as a pre-beamformer which reduces the interfering power of the undesired sources and the colored noise effect. We show by means of
['Ricardo Kehrle Miranda', 'João Paulo Carvalho Lustosa da Costa', 'Felix Antreich']
High accuracy and low complexity adaptive Generalized Sidelobe Cancelers for colored noise scenarios
171,383
The steel pipe rolling forming process is an elastic-plastic deformation process with very complicated and nonlinear boundary conditions, so it is difficult to obtain an exact solution using analytical methods or the traditional implicit static finite element algorithm. The contact-collision algorithm based on the explicit dynamic program LS-DYNA provides an effective way to accurately study the elastic-plastic deformation of the hot-rolled seamless pipe rolling process. This paper takes the SMS MEER rolling mill of the Ф177 PQF steel tube rolling unit in the Ansteel seamless steel pipe mill as the research object, performs a coupled thermo-mechanical finite element computation of the steel pipe rolling process, and obtains the residual stress and strain evolution during rolling. Parametric programs for the rolling process were written in APDL, and the influence of roller spacing and velocity parameters on residual stress and strain was analyzed, which provides a reliable theoretical basis for improving the performance of hot-rolled seamless steel pipe and optimizing the rolling process parameters.
['Chang Li', 'Guangbing. Zhao', 'Xing Han']
Finite element analysis of hot-rolled seamless pipe rolling process elastic-plastic deformation
97,822
Interactive systems based on Augmented Reality (AR) and Tangible User Interfaces (TUI) hold great promise for enhancing the learning and understanding of abstract phenomena. In particular, they make it possible to take advantage of numerical simulation and pedagogical supports, while keeping the learner involved in true physical experimentation. In this paper, we present three examples based on AR and TUI, where the concepts to be learnt are difficult to perceive. The first one, Helios, targets K-12 learners in the field of astronomy. The second one, Hobit, is dedicated to experiments in wave optics. Finally, the third one, Teegi, allows one to get to know more about brain activity. These three hybrid interfaces have emerged from a common basis that jointly combines research and development work in the fields of Instructional Design and Human-Computer Interaction, from theoretical to practical aspects. On the basis of investigations carried out in real contexts of use and on the grounding works in education and HCI which corroborate the design choices that were made, we formalize how and why the hybridization of the real and the virtual leverages the way learners understand intangible phenomena in science education.
['Stéphanie Fleck', 'Martin Hachet']
Making Tangible the Intangible: Hybridization of the Real and the Virtual to Enhance Learning of Abstract Phenomena
965,642
Researchers have proposed the core-based trees (CBT) and protocol independent multicasting (PIM) protocols to route multicast data in internetworks. We compare the simulated performance of CBT and PIM using the OPNET network simulation tool. Performance metrics include end-to-end delay, network resource usage, join time, the size of the tables containing multicast routing information, and the impact of the timers introduced by the protocols. We also offer suggestions to improve PIM sparse mode while retaining the ability to offer both shared tree and source-based tree routing.
['Thomas J. Billhartz', 'J.B. Cain', 'E. Farrey-Goudreau', 'D. Fieg', 'Stephen Gordon Batsell']
Performance and resource cost comparisons for the CBT and PIM multicast routing protocols
453,016
The use of digital technologies is now widespread and increasing, but is not always optimized for effective learning. Teachers in higher education have little time or support to work on innovation and improvement of their teaching, which often means they simply replicate their current practice in a digital medium. This paper makes the case for a learning design support environment to support and scaffold teachers' engagement with and development of technology-enhanced learning, based on user requirements and on pedagogic theory. To be able to adopt, adapt, and experiment with learning designs, teachers need a theory-informed way of representing the critical characteristics of good pedagogy as they discover how to optimize learning technologies. This paper explains the design approach of the Learning Design Support Environment project, and how it aims to support teachers in achieving this goal.
['Diana Laurillard', 'Patricia Charlton', 'Brock Craft', 'Dionisios Dimakopoulos', 'D. Ljubojevic', 'George D. Magoulas', 'Elizabeth Masterman', 'Rosa Trobajo Pujadas', 'Edgar A. Whitley', 'Kim David Whittlestone']
A constructionist learning environment for teachers to model learning designs
192,031
We present a study that explores the willingness of students in the Systems Engineering and Telecommunications Engineering programs and the Information Security Specialization at the Universidad Piloto de Colombia to participate in activities related to computer crime or a hacker culture, taking mainly into account their academic environment and their relationship with their instructors.
['Sandra Lorena Manchola', 'Gloria Hazlady Cornejo Suarez', 'B. O. E. Herrera']
Investigación sobre el hacker y sus posibles comienzos en la comunidad estudiantil. Caso Universidad Piloto de Colombia
861,747
Archaeologists are currently producing huge numbers of digitized photographs to record and preserve artefact finds. These images are used to identify and categorize artefacts and reason about connections between artefacts and perform outreach to the public. However, finding specific types of images within collections remains a major challenge. Often, the metadata associated with images is sparse or is inconsistent. This makes keyword-based exploratory search difficult, leaving researchers to rely on serendipity and slowing down the research process. We present an image-based retrieval system that addresses this problem for biface artefacts. In order to identify artefact characteristics that need to be captured by image features, we conducted a contextual inquiry study with experts in bifaces. We then devised several descriptors for matching images of bifaces with similar artefacts. We evaluated the performance of these descriptors using measures that specifically look at the differences between the sets of images returned by the search system using different descriptors. Through this nuanced approach, we have provided a comprehensive analysis of the strengths and weaknesses of the different descriptors and identified implications for design in the search systems for archaeology.
['Mark G. Eramian', 'Ekta Walia', 'Christopher Power', 'Paul A. Cairns', 'Andrew Lewis']
Image-based search and retrieval for biface artefacts using features capturing archaeologically significant characteristics
955,284
The proliferation of mobile devices and popularity of applications like Facebook and Twitter has allowed people to stay connected to their far-spread networks. However, little attention has been spent on connections in the local, physical community. These collocated connections are important for building social capital, sharing resources, and providing physical support. Movement is a visualization that uses location data generated automatically by mobile devices to increase community awareness following a new standard of privacy preservation. Movement also consists of an app that allows for direct connection to people with shared location histories, again in a secure and private manner. An integrated demo at CSCW will display the popular venues visited by conference attendees and allow users to connect with others who visited the same locations.
['Xiao Ma', 'Ross McLachlan', 'Donghun Lee', 'Mor Naaman', 'Emily Sun']
Movement: A Secure Community Awareness Application and Display
671,744
As the number of website users in Asia grows, there is an increasing need to gain an overview of human–computer interaction (HCI) research about users and websites in that context. This article presents an overview of HCI research on website usability in Asia “from within,” which outlines the articles written by researchers with affiliations to universities in that part of the world. Based on a key word approach to major HCI research outlets, 60 articles from 2001 to 2011 were identified and analyzed. Results indicate that academic websites, e-commerce websites, and tourism websites were the most studied website domains in Asia. Typically, university graduates were used as participants in a laboratory setup and asked to navigate and find information on a website. No systematic use of cultural variables or theories to code, analyze, and interpret data and findings was found. The article discusses the results and the need for a greater sensitivity to what is “local” and “from within” in HCI research and wha...
['Ather Nawaz', 'Torkil Clemmensen']
Website Usability in Asia “From Within”: An Overview of a Decade of Literature
370,079
Visually induced motion sickness, or "cybersickness," has been well documented in all kinds of vehicular simulators and in many virtual environments. It probably occurs in all virtual environments. Cybersickness has many known determinants, including (a short list) field-of-view, flicker, transport delays, duration of exposure, gender, and susceptibility to motion sickness. Since many of these determinants can be controlled, a major objective in designing virtual environments is to hold cybersickness below a specified level a specified proportion of the time. More than 20 years ago C. W. Simon presented a research strategy based on fractional factorial experiments that was capable in principle of realizing this objective. With one notable exception, however, this strategy was not adopted by the human factors community. The main reason was that implementing Simon's strategy was a major undertaking, very time-consuming, and very costly. In addition, many investigators were not satisfied that Simon had adequately addressed issues of statistical reliability. The present paper proposes a modified Simonian approach to the same objective (holding cybersickness below specified standards) with some loss in the range of application but a greatly reduced commitment of resources.
['Marshall B. Jones', 'Robert S. Kennedy', 'Kay M. Stanney']
Toward systematic control of cybersickness
275,069
In this paper, we investigate a novel real-time pricing scheme, which considers both renewable energy resources and traditional power resources and could effectively guide the participants to achieve individual welfare maximization in the system. To be specific, we develop a Lagrangian-based approach to transform the global optimization conducted by the power company into distributed optimization problems to obtain explicit energy consumption, supply, and price decisions for individual participants. Also, we show that these distributed problems derived from the global optimization by the power company are consistent with individual welfare maximization problems for end-users and traditional power plants. We also investigate and formalize the vulnerabilities of the real-time pricing scheme by considering two types of data integrity attacks: Ex-ante attacks and Ex-post attacks, which are launched by the adversary before or after the decision-making process. We systematically analyze the welfare impacts of these attacks on the real-time pricing scheme. Through a combination of theoretical analysis and performance evaluation, our data shows that the real-time pricing scheme could effectively guide the participants to achieve welfare maximization, while cyber-attacks could significantly disrupt the results of real-time pricing decisions, imposing welfare reduction on the participants.
['Xialei Zhang', 'Xinyu Yang', 'Jie Lin', 'Guobin Xu', 'Wei Yu']
On Data Integrity Attacks Against Real-Time Pricing in Energy-Based Cyber-Physical Systems
702,069
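The Lagrangian-based decomposition described in the pricing abstract above can be illustrated with a minimal single-period sketch: each participant solves a local welfare problem for a posted price, and the price follows a subgradient step on the supply-demand imbalance. The logarithmic utilities, quadratic cost, and step size below are illustrative assumptions, not the paper's models.

```python
# Minimal dual-decomposition / real-time-pricing sketch (illustrative functions).
# User i maximizes  a_i*log(1 + x_i) - price*x_i   -> x_i = max(0, a_i/price - 1)
# Supplier maximizes price*s - c*s^2               -> s   = price / (2c)
# The price follows a subgradient step on (demand - supply).

a = [2.0, 3.0, 5.0]      # users' utility parameters (hypothetical)
c = 0.1                   # supplier's quadratic cost coefficient (hypothetical)
price, step = 1.0, 0.05

for _ in range(200):
    demand = sum(max(0.0, ai / price - 1.0) for ai in a)
    supply = price / (2.0 * c)
    price = max(1e-6, price + step * (demand - supply))

demand = sum(max(0.0, ai / price - 1.0) for ai in a)
supply = price / (2.0 * c)
print(f"clearing price ~ {price:.3f}, demand ~ {demand:.2f}, supply ~ {supply:.2f}")
```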
We continue the line of work initiated by Katz (Eurocrypt 2007) on using tamper-proof hardware for universally composable secure computation. As our main result, we show an efficient oblivious-transfer (OT) protocol in which two parties each create and exchange a single, stateless token and can then run an unbounded number of OTs. Our result yields what we believe is the most practical and efficient known approach for oblivious transfer based on tamper-proof tokens, and implies that the parties can perform (repeated) secure computation of arbitrary functions without exchanging additional tokens. Motivated by this result, we investigate the minimal number of stateless tokens needed for universally composable OT/secure computation. We prove that our protocol is optimal in this regard for constructions making black-box use of the tokens (in a sense we define). We also show that non-black-box techniques can be used to obtain a construction using only a single stateless token.
['Seung Geol Choi', 'Jonathan Katz', 'Dominique Schröder', 'Arkady Yerukhimovich', 'Hong-Sheng Zhou']
(Efficient) Universally Composable Oblivious Transfer Using a Minimal Number of Stateless Tokens
518,210
In this letter, we study quickest spectrum sensing for cognitive radios with multiple receive antennas in Gaussian and Rayleigh channels. We derive the probability density function for the fading case and analytically compute the upper bound and asymptotic worst-case detection delay for both of the cases. The extension into multiple antennas allows us to gain insights into the reduction in detection delay that multiple antennas can provide. Although sensing in a Rayleigh channel is more challenging, good sensing performance is still demonstrated.
['Effariza Hanafi', 'Philippa A. Martin', 'Peter J. Smith', 'Alan J. Coulson']
Extension of Quickest Spectrum Sensing to Multiple Antennas and Rayleigh Channels
245,380
The present contribution aims at creating color images printed with fluorescent inks that are only visible under UV light. The considered fluorescent inks absorb light in the UV wavelength range and reemit part of it in the visible wavelength range. In contrast to normal color printing which relies on the spectral absorption of light by the inks, at low concentration fluorescent inks behave additively, i.e. their light emission spectra sum up. We first analyze to which extent different fluorescent inks can be superposed. Due to the quenching effect, at high concentrations of the fluorescent molecules, the fluorescent effect diminishes. With an ink-jet printer capable of printing pixels at reduced dot sizes, we reduce the concentration of the individual fluorescent inks and are able to create from the blue, red and greenish-yellow inks the new colorants white and magenta. In order to avoid quenching effects, we propose a color halftoning method relying on diagonally oriented pre-computed screen dots, which are printed side by side. For gamut mapping and color separation, we create a 3D representation of the fluorescent ink gamut in CIELAB space by predicting halftone fluorescent emission spectra according to the spectral Neugebauer model. Thanks to gamut mapping and juxtaposed halftoning, we create color images, which are invisible under daylight and have, under UV light, a high resemblance with the original images.
['Roger D. Hersch', 'Philipp Donzé', 'Sylvain Chosson']
Color images visible under UV light
315,919
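A rough sketch of the additive spectral prediction mentioned in the abstract above: under a random-superposition (Demichel) assumption, the area coverages of the ink-overprint primaries weight their emission spectra, which are summed. The Gaussian emission bands and the sum-based overprint spectra below are made-up placeholders rather than measured fluorescent-ink data, and the paper's spectral Neugebauer calibration is not reproduced.

```python
import itertools
import numpy as np

wavelengths = np.linspace(400, 700, 31)                  # nm, visible range

# Hypothetical emission spectra of the 8 Neugebauer primaries for three
# fluorescent inks (blue, red, greenish-yellow) and their overprints.
def gaussian_band(center, width, height):
    return height * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

primary_emission = {
    frozenset(): np.zeros_like(wavelengths),             # bare paper under UV (assumed dark)
    frozenset({"blue"}): gaussian_band(450, 25, 1.0),
    frozenset({"red"}): gaussian_band(620, 30, 1.0),
    frozenset({"yellow"}): gaussian_band(560, 35, 1.0),
}
# Overprint primaries modeled (hypothetically) as sums of single-ink emissions.
for combo in itertools.chain(itertools.combinations(("blue", "red", "yellow"), 2),
                             [("blue", "red", "yellow")]):
    primary_emission[frozenset(combo)] = sum(primary_emission[frozenset({i})]
                                             for i in combo)

def predicted_emission(coverages):
    """Demichel area coverages of the 8 primaries weight their emission spectra."""
    inks = ("blue", "red", "yellow")
    spectrum = np.zeros_like(wavelengths)
    for subset_size in range(len(inks) + 1):
        for subset in itertools.combinations(inks, subset_size):
            area = 1.0
            for ink in inks:
                area *= coverages[ink] if ink in subset else (1.0 - coverages[ink])
            spectrum += area * primary_emission[frozenset(subset)]
    return spectrum

halftone = {"blue": 0.3, "red": 0.5, "yellow": 0.2}       # nominal ink coverages
print(predicted_emission(halftone).round(3))
```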
Complex event patterns involving Kleene closure are finding application in a variety of stream environments for tracking and monitoring purposes. In this paper, we propose a compact language, SASE+, that can be used to define a wide variety of Kleene closure patterns, analyze the expressive power of the language, and outline an automata-based implementation for efficient Kleene closure evaluation over event streams.
['Daniel Gyllstrom', 'Jagrati Agrawal', 'Yanlei Diao', 'Neil Immerman']
On Supporting Kleene Closure over Event Streams
64,958
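As a hedged illustration of automata-based Kleene-closure evaluation over an event stream (not the SASE+ language, its selection strategies, or its optimized runtime), the sketch below matches the pattern A B+ C and collects the B events accumulated under the closure.

```python
def match_a_bplus_c(stream, is_a, is_b, is_c):
    """Scan an event stream with a small automaton for the pattern A B+ C.
    Each run (partial match) is (state, a_event, [b_events]); completed
    matches are returned as (a_event, [b_events], c_event) tuples."""
    runs, matches = [], []
    for event in stream:
        next_runs = []
        for state, a_ev, b_evs in runs:
            if state == "saw_A":
                if is_b(event):
                    next_runs.append(("in_B", a_ev, [event]))          # first B of the closure
                else:
                    next_runs.append((state, a_ev, b_evs))             # still waiting for a B
            elif state == "in_B":
                if is_b(event):
                    next_runs.append(("in_B", a_ev, b_evs + [event]))  # extend B+
                elif is_c(event):
                    matches.append((a_ev, b_evs, event))               # close the pattern
                else:
                    next_runs.append((state, a_ev, b_evs))             # skip irrelevant event
        if is_a(event):
            next_runs.append(("saw_A", event, []))                     # start a new run
        runs = next_runs
    return matches

# Toy usage over (type, value) events.
stream = [("A", 10), ("B", 11), ("B", 12), ("X", 0), ("B", 13), ("C", 14)]
print(match_a_bplus_c(stream,
                      is_a=lambda e: e[0] == "A",
                      is_b=lambda e: e[0] == "B",
                      is_c=lambda e: e[0] == "C"))
# -> [(('A', 10), [('B', 11), ('B', 12), ('B', 13)], ('C', 14))]
```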
This paper addresses the efficient high-compression coding of palettized color images. The most common methods for the lossy compression of color images rely on independent block oriented transform coding of the three or four color components. These techniques do not make use of the high redundancy of the color components and introduce some very undesirable errors at high compression, in particular block distortion. We present an efficient and original technique to code color images with a small number of colors. This is an important class of images in multimedia applications. The technique codes the corresponding luminance image using an existing segmented image coding method for monochrome images. The color information is independently represented in a bit map. The method does not rely on the commonly used color separation and shows a far better subjective image quality than JPEG at high compression.
['J. van Overloop', 'Wilfried Philips', 'Dimitri Torfs', 'Ignace Lemahieu']
Segmented image coding of palettized images
127,821
Background and Objective: In order to facilitate clinical research across multiple institutions, data harmonization is a critical requirement. Common data elements (CDEs) collect data uniformly, allowing data interoperability between research studies. However, structural limitations have hindered the application of CDEs. An advanced modeling structure is needed to rectify such limitations. The openEHR 2-level modeling approach has been widely implemented in the medical informatics domain. The aim of our study is to explore the feasibility of applying an openEHR approach to model the CDE concept. Materials and Methods: Using the National Institute of Neurological Disorders and Stroke General CDEs as material, we developed a semiautomatic mapping tool to assist domain experts mapping CDEs to existing openEHR archetypes in order to evaluate their coverage and to allow further analysis. In addition, we modeled a set of CDEs using the openEHR approach to evaluate the ability of archetypes to structurally represent any type of CDE content. Results: Among 184 CDEs, 28% (51) of the archetypes could be directly used to represent CDEs, while 53% (98) of the archetypes required further development (extension or specialization). A comprehensive comparison between CDEs and openEHR archetypes was conducted based on the lessons learnt from the practical modeling. Discussion: CDEs and archetypes have dissimilar modeling approaches, but the data structures of both models are essentially similar. This study proposes to develop a comprehensive structure to model CDE concepts instead of improving the structure of the CDE. Conclusion: The findings from this research show that the openEHR archetype has structural coverage for the CDEs, namely the openEHR archetype is able to represent the CDEs and meet the functional expectations of the CDEs. This work can be used as a reference when improving CDE structure using an advanced modeling approach.
['Ching-Heng Lin', 'Yang-Cheng Fann', 'Der-Ming Liou']
An exploratory study using an openEHR 2-level modeling approach to represent common data elements
693,264
On the Appropriateness of Complex-Valued Neural Networks for Speech Enhancement.
['Lukas Drude', 'Bhiksha Raj', 'Reinhold Haeb-Umbach']
On the Appropriateness of Complex-Valued Neural Networks for Speech Enhancement.
872,025
We introduce a class of nonstationary covariance functions for Gaussian process (GP) regression. Nonstationary covariance functions allow the model to adapt to functions whose smoothness varies with the inputs. The class includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in advance. In experiments, the nonstationary GP regression model performs well when the input space is two or three dimensions, outperforming a neural network model and Bayesian free-knot spline models, and competitive with a Bayesian neural network, but is outperformed in one dimension by a state-of-the-art Bayesian free-knot spline model. The model readily generalizes to non-Gaussian data. Use of computational methods for speeding GP fitting may allow for implementation of the method on larger datasets.
['Christopher J. Paciorek', 'Mark J. Schervish']
Nonstationary Covariance Functions for Gaussian Process Regression
355,391
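For background on the covariance class described above, the sketch below computes the stationary Matérn covariance, whose parameter ν controls the differentiability of the regression function; the paper's nonstationary construction, which lets kernel parameters vary with the inputs, is not reproduced here.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, lengthscale=1.0, nu=1.5, variance=1.0):
    """Stationary Matérn covariance k(r) for distances r >= 0.
    nu controls smoothness: nu = 0.5 gives the exponential (Ornstein-Uhlenbeck)
    covariance, and nu -> infinity recovers the squared exponential."""
    r = np.asarray(r, dtype=float)
    k = np.empty_like(r)
    zero = (r == 0)
    scaled = np.sqrt(2.0 * nu) * r[~zero] / lengthscale
    k[~zero] = (variance * (2.0 ** (1.0 - nu) / gamma(nu))
                * scaled ** nu * kv(nu, scaled))
    k[zero] = variance                      # k(0) = variance (limit as r -> 0)
    return k

distances = np.array([0.0, 0.1, 0.5, 1.0, 2.0])
for nu in (0.5, 1.5, 2.5):
    print(nu, matern(distances, lengthscale=1.0, nu=nu).round(4))
```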
Today more and more users on C2C marketplaces are turning into professional sellers and their online stores become more like B2C websites. Because of this, many C2C marketplaces introduced trust evaluation models based on B2C to evaluate the credibility of each online store; for example, Taobao proposed the dynamic score in 2009. But the dynamic score had limitations in some respects and aroused much criticism. In this paper we analyze the developing trend of the C2C marketplace in China and discuss the problems the present trust evaluation systems face. Based on that, we present a dynamic model for evaluating the credibility of each store on C2C marketplaces. This model considers all the transaction factors, and AHP is adopted to calculate the trust of a transaction. Meanwhile, since previous research indicates that current transaction records matter more than historical records, a penalty/compensation function is introduced to describe how the trust score changes with time. This dynamic model will help to establish a credible environment on C2C marketplaces.
['Yun Yang', 'Juhua Chen']
A Dynamic Trust Evaluation Model on C2C Marketplaces
111,168
This paper presents research on a Vivaldi antenna with the ability to change its operating frequency between narrowband and wideband. In total, five different modes are supported: a wideband and four narrowband modes. The wideband mode extends from a frequency of 1–3.7 GHz. The narrowband modes operate at 2.3, 2.6, 3, and 3.2 GHz, respectively. The wideband to narrowband reconfiguration is achieved by switching from a coplanar waveguide feed to a slot-line feed. We also compare simulated and measured results for the antenna, including magnitudes of reflection coefficient, radiation patterns and gain.
['Sahar Chagharvand', 'M. R. Hamid', 'Muhammad Ramlee Kamarudin', 'James R. Kelly']
Wide-to-narrowband reconfigurable Vivaldi antenna using switched-feed technique
697,057
We consider an n × n matrix whose elements are fuzzy numbers (hereinafter a fuzzy matrix) and we introduce notions of regularity of a fuzzy matrix and the inverse matrix of a fuzzy matrix (hereinafter the fuzzy inverse) in this paper. It is shown that the fuzzy inverse is a fuzzy matrix as well. Also we pay attention to the calculation of the fuzzy inverse in a special case. Main results are based on Rohn's results in the field of linear problems with inexact data.
['Julija Lebedinska']
On another view of an inverse of an interval matrix
531,018
This paper presents online learning with a regularized kernel based one-class extreme learning machine (ELM) classifier, referred to as online RK-OC-ELM. The baseline kernel hyperplane model considers the whole data in a single chunk with the regularized ELM approach for offline learning in the case of one-class classification (OCC). Further, the basic hyperplane model is adapted in an online fashion from a stream of training samples in this paper. Two frameworks, viz., boundary and reconstruction, are presented to detect the target class in online RK-OC-ELM. The boundary framework based one-class classifier consists of a single-output-node architecture, and the classifier endeavors to approximate all data to any real number. However, the one-class classifier based on the reconstruction framework is an autoencoder architecture, where output nodes are identical to input nodes and the classifier endeavors to reconstruct the input layer at the output layer. Both these frameworks employ regularized kernel ELM based online learning, and consistency based model selection has been employed to select the learning algorithm parameters. The performance of online RK-OC-ELM has been evaluated on standard benchmark datasets as well as on artificial datasets, and the results are compared with existing state-of-the-art one-class classifiers. The results indicate that the online learning one-class classifier is slightly better than or comparable to batch learning based approaches. As the base classifier used for the proposed classifiers is based on the ELM, the proposed classifiers also inherit the benefit of the base classifier, i.e., faster computation compared to traditional autoencoder based one-class classifiers.
['Chandan Gautam', 'Aruna Tiwari', 'Sundaram Suresh', 'Kapil Ahuja']
Online Learning with Regularized Kernel for One-class Classification
985,499
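A minimal numpy sketch of the boundary framework described above, assuming a regularized kernel least-squares fit that maps all target-class training samples to the value 1 and rejects test points whose output deviates by more than a data-derived threshold. The RBF kernel, the quantile threshold rule, and the parameter values are assumptions, and the paper's online/chunk-wise update and model selection are not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelOneClass:
    """Boundary-framework one-class classifier: fit f(x) ~ 1 on target data
    with a regularized kernel least-squares solution, then reject points
    whose |f(x) - 1| exceeds a quantile of the training deviations."""
    def __init__(self, gamma=1.0, C=10.0, quantile=0.95):
        self.gamma, self.C, self.quantile = gamma, C, quantile

    def fit(self, X):
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        n = len(self.X)
        # beta = (K + I/C)^{-1} * t  with all targets t = 1
        self.beta = np.linalg.solve(K + np.eye(n) / self.C, np.ones(n))
        train_dev = np.abs(K @ self.beta - 1.0)
        self.threshold = np.quantile(train_dev, self.quantile)
        return self

    def predict(self, X):
        K_test = rbf_kernel(np.asarray(X, float), self.X, self.gamma)
        deviation = np.abs(K_test @ self.beta - 1.0)
        return deviation <= self.threshold        # True = accepted as target class

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(200, 2))          # target-class training data
clf = KernelOneClass(gamma=0.5, C=10.0).fit(target)
print(clf.predict(np.array([[0.0, 0.0], [5.0, 5.0]])))  # expect [ True False ]
```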
Satellite communication networks have the advantages of global coverage and inherent broadcast capability, and offer a solution for providing broadband access to end-users. The regular mesh topologies typical of satellite networks comprise Inter-Satellite Links (ISLs). But due to dynamically changing traffic load and inter-plane ISL distance variation, adaptive routing is an absolute requirement for optimizing network utilization. In this paper, we present a low-complexity probabilistic routing (LCPR) algorithm for polar-orbit satellite constellation networks. Traditional algorithms choose the path with minimum hops according to routing tables stored in the on-board equipment. Different from them, we make each satellite choose the next hop with minimum propagation delay according to the longitude and latitude of the current node and the destination. Without routing tables stored in the satellites, the algorithm reduces the space complexity effectively. The whole algorithm has no iteration process, thus decreasing the time complexity to some degree. Additionally, in the proposed algorithm, each satellite periodically informs the neighbouring satellites of its queue utilization, so that a packet may be sent adaptively to a relatively free node according to the probability. Results from simulations show that LCPR has better performance than other routing algorithms in terms of throughput and packet loss rate, and is especially well suited to conditions with large numbers of users.
['Xinmeng Liu', 'Zhuqing Jiang', 'Chonghua Liu', 'Shanbao He', 'Chao Li', 'Yuying Yang', 'Aidong Men']
A low-complexity probabilistic routing algorithm for polar orbits satellite constellation networks
698,060
The improvement of communication systems is conducted through communication protocol engineering and optimization processes. This paper presents an effort to upgrade the IEEE 802.16 and .16e protocol performance regarding the delay during subscriber’s network entry or base station handover. Our communication protocol engineering and optimization process resulted in a new uplink channel descriptor (UCD)-aware initial ranging transmission opportunity slots distribution. Using an analytical performance evaluation, we prove the relevance of the new algorithm and the increase of the WiMax network performance.
['Pero Latkoski', 'Borislav Popovski']
Analysis and Optimization of Network Entry Delay in WiMax Networks
371,906
Metro maps are schematic diagrams of public transport networks that serve as visual aids for route planning and navigation tasks. It is a challenging problem in network visualization to automatically draw appealing metro maps. There are two aspects to this problem that depend on each other: the layout problem of finding station and link coordinates and the labeling problem of placing nonoverlapping station labels. In this paper, we present a new integral approach that solves the combined layout and labeling problem (each of which, independently, is known to be NP-hard) using mixed-integer programming (MIP). We identify seven design rules used in most real-world metro maps. We split these rules into hard and soft constraints and translate them into an MIP model. Our MIP formulation finds a metro map that satisfies all hard constraints (if such a drawing exists) and minimizes a weighted sum of costs that correspond to the soft constraints. We have implemented the MIP model and present a case study and the results of an expert assessment to evaluate the performance of our approach in comparison to both manually designed official maps and results of previous layout methods.
['Martin Nöllenburg', 'Alexander Wolff']
Drawing and Labeling High-Quality Metro Maps by Mixed-Integer Programming
65,787
Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers. Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, testbeds, industrial practice, and current standard activities related to WMNs are highlighted.
['Ian F. Akyildiz', 'Xudong Wang', 'Weilin Wang']
Wireless mesh networks: a survey
329,923
Active contour models, colloquially known as snakes, are quite popular for several applications such as object boundary detection, image segmentation, object tracking, and classification via energy minimization. While energy minimization may be accomplished using traditional optimization methods, approaches based on nature-inspired evolutionary algorithms have been developed in recent years. One such evolutionary algorithm that has been used extensively in active contours is particle swarm optimization (PSO). However, conventional PSO converges slowly and easily gets trapped in local minima, which results in inaccurate detection of concavities in the object boundary. This is addressed by the proposed multiswarm PSO, in which a swarm is assigned to every control point in the snake and all the swarms then search for their best points simultaneously through information sharing among them. The performance of the multiswarm-PSO-based search process is further enhanced by dynamic adaptation of the inertia factor. In this paper, we propose using a set of fuzzy rules to adjust the inertia weight on the basis of the current normalized snake energy and the current value of inertia. Experimental results demonstrate the effectiveness of the proposed method compared to conventional approaches.
['Ajay Khunteta', 'Debashis Ghosh']
Object Boundary Detection Using Active Contour Model via Multiswarm PSO with Fuzzy-Rule Based Adaptation of Inertia Factor
882,565
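To illustrate the kind of inertia adaptation described in the record above, here is a generic single-swarm PSO sketch in which the inertia weight is adjusted by a simple improvement-based rule rather than the paper's fuzzy rules over snake energy; the objective, coefficients, and bounds are hypothetical placeholders.

```python
# Generic PSO sketch with a simple rule-based adaptation of the inertia weight.
# Stand-in illustration only, not the paper's multiswarm, fuzzy-rule snake optimizer.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w = 0.9                                   # inertia weight
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        new_g = pbest[pbest_f.argmin()]
        # Rule-based inertia update: shrink inertia while the global best improves
        # (exploitation), enlarge it when progress stalls (exploration).
        if objective(new_g) < objective(g):
            w = max(0.4, w * 0.98)
        else:
            w = min(0.9, w * 1.02)
        g = new_g.copy()
    return g, objective(g)

print(pso(lambda p: np.sum(p ** 2), dim=5))   # best position should approach the origin
```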
Connecting Customer Relationship Management Systems to Social Networks
['Hanno Zwikstra', 'Frederik Hogenboom', 'Damir Vandic', 'Flavius Frasincar']
Connecting Customer Relationship Management Systems to Social Networks
590,286
Model-driven Engineering (MDE) is a paradigm that promotes the use of models and automatic model transformations to handle complex software developments. Model transformations promise to reduce the effort of manipulating models. However, building transformations themselves is not easy. Higher-order Transformations (HOTs) are a means for automatically building model transformations. Building HOTs is in itself a complex task, mainly because there are no standard languages for implementing them, and there are not many HOTs available in the literature to learn from. This situation is even worse when more sophisticated HOTs that take two input models are required. We consider a real application that generates transformations for tailoring software processes, where the generated transformation needs two input models: the organizational process model and the project context model. In this paper, we show three different techniques for implementing this HOT and discuss their benefits and limitations.
['Luis Silvestre', 'María Cecilia Bastarrica', 'Sergio F. Ochoa']
Implementing HOTs that Generate Transformations with Two Input Models
977,423
In the last few years there has been a growing need for organizations to build cooperative information systems. These organizations and their processes, which manipulate vast quantities of information, are based on heterogeneous, distributed and autonomous data sources. However, without appropriate negotiation techniques, any execution of organizational information systems would yield disjoint and error-prone behavior, while requiring excessive effort to build and maintain. In this paper, we present a flexible negotiation framework, which is based on social constraints and conversation plans. The infrastructure of this framework may well fit the negotiation needs of organizational information systems in a highly dynamic and unpredictable environment. This framework has been implemented as a negotiation service in our cooperative application environment. Some examples are given from manufacturing applications.
['Nacereddine Zarour', 'Mahmoud Boufaida', 'Lionel Seinturier']
A Negotiation Framework for Organizational Information Systems
370,531
Robust Control for Asynchronous Switched Nonlinear Systems with Time Varying Delays
['Ahmad Taher Azar', 'Fernando E. Serrano']
Robust Control for Asynchronous Switched Nonlinear Systems with Time Varying Delays
947,783
In recent years, considerable advances have been made in the field of storing and processing moving data points. Data structures that discretely update moving objects or points are called Kinetic Data Structures. A particular type of data structure that constructs a geometric spanner for moving point sets is called a Deformable Spanner. This paper explores the underlying philosophy of the deformable spanner and describes its rudiments. Deformable spanners create a multi-resolution hierarchical subgraph that reflects a great deal about the original graph while maintaining all the existing proximity information in the graph. This proximity information plays a crucial role in a wide variety of application areas, such as fast similarity queries in metric spaces, clustering in both dynamic and static multidimensional data, and cartographic representation of complex networks. The motivation behind our work is twofold: first, to provide a comprehensive understanding of the deformable spanner with sample illustrations without sacrificing depth of coverage; second, to analyze and improve the deformable spanner’s covering property for the 2D case in Euclidean space.
['Sinan Kockara']
Balls Hierarchy: A Proximity Query for Metric Spaces and Covering Theorem
465,092
Cooperative wireless network medium access schemes can achieve high throughput through collision resolution. By using a multi-beam adaptive array (MBAA) at a base station or access point, it can concurrently communicate with multiple nodes/users and thus the network performance can be further enhanced. In this paper, we provide an efficient packet resolution method and analyze the throughput of cooperative wireless medium access scheme exploiting MBAAs.
['Xin Li', 'Yimin Zhang']
Throughput analysis of cooperative wireless medium access scheme exploiting multi-beam adaptive arrays
464,718
Distributed SPARQL throughput increase: on the effectiveness of workload-driven RDF partitioning
['Cosmin Basca', 'Abraham Bernstein']
Distributed SPARQL throughput increase: on the effectiveness of workload-driven RDF partitioning
638,902
Recently, the capacity region of the Gaussian broadcast channel has been characterized. For a given transmit power constraint, those points on the boundary of the capacity region can be regarded as the set of optimal operational points. The present work addresses the problem of selecting the point within this set that satisfies given constraints on the ratios between rates achieved by the different users in the network. This problem is usually known as rate balancing. To this end, the optimum iterative approach for general MIMO channels is revisited and adapted to an OFDM transmission scheme. Specifically, an algorithm is proposed that exploits the structure of the OFDM channel and whose convergence speed is essentially insensitive to the number of subcarriers. This is in contrast to a straightforward extension of the general MIMO algorithm to an OFDM scheme. Still, relatively high complexity and the need of a time-sharing policy to reach certain rates are at least two obstacles for a practical implementation of the optimum solution. Based on a novel decomposition technique for broadcast channels a suboptimum non-iterative algorithm is introduced that does not require time-sharing and very closely approaches the optimum solution.
['Pedro Tejera', 'Wolfgang Utschick', 'Josef A. Nossek', 'Gerhard Bauch']
Rate Balancing in Multiuser MIMO OFDM Systems
69,905
The performance of a mobile wireless network depends on the time-varying connectivity of the network as nodes move around. Hence, there has been a growing interest in the distribution of intermeeting times between two nodes in mobile wireless networks. We study the distribution of intermeeting times under the generalized Hybrid Random Walk mobility model. We show that when 1) the (conditional) probability that two nodes can communicate directly with each other given that they are in the same cell is small and 2) node's transitions in locations are independent over time, the distribution of intermeeting times can be well approximated using an exponential distribution. Moreover, the mean of intermeeting times can be estimated using the number of cells in the network and the aforementioned conditional probability of having a communication link when the two nodes are in the same cell. We also offer some insight behind the emergence of an exponential distribution, borrowing well-known results in existing literature on rare events.
['Richard J. La']
Distributional Convergence of Intermeeting Times under the Generalized Hybrid Random Walk Mobility Model
519,368
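A toy simulation, under assumptions of my own (lazy random walks on a small torus of cells, a link formed with probability p whenever the two nodes share a cell), that mirrors the claim in the record above: the empirical mean intermeeting time lands near (number of cells)/p and the gap distribution looks roughly exponential. The parameters m, p, and the step count are arbitrary.

```python
# Toy illustration, not the paper's model: two independent lazy random walkers
# on an m x m torus of cells; when they share a cell they form a link with small
# probability p. The empirical mean intermeeting time should be close to
# (number of cells) / p under the approximation discussed above.
import numpy as np

def simulate(m=5, p=0.1, steps=500_000, seed=0):
    rng = np.random.default_rng(seed)
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0], [0, 0]])
    a, b = np.array([0, 0]), np.array([m // 2, m // 2])
    gaps, last = [], 0
    for t in range(steps):
        a = (a + moves[rng.integers(5)]) % m
        b = (b + moves[rng.integers(5)]) % m
        if np.array_equal(a, b) and rng.random() < p:
            gaps.append(t - last)
            last = t
    return np.array(gaps)

gaps = simulate()
print("empirical mean:", gaps.mean(), " approx C/p:", 5 * 5 / 0.1)
```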
High-level, directive-based solutions are becoming the programming models (PMs) of the multi/many-core architectures. Several solutions relying on operating system (OS) threads work perfectly well with a moderate number of cores. However, exascale systems will spawn hundreds of thousands of threads in order to exploit their massive parallel architectures, and thus conventional OS threads are too heavy for that purpose. Several lightweight thread (LWT) libraries have recently appeared, offering lighter mechanisms to tackle massive concurrency. In order to examine the suitability of LWTs in high-level runtimes, we develop a set of microbenchmarks consisting of commonly-found patterns in current parallel codes. Moreover, we study the semantics offered by some LWT libraries in order to expose the similarities between different LWT application programming interfaces. This study reveals that a reduced set of LWT functions can be sufficient to cover the common parallel code patterns, and that those LWT libraries perform better than OS-thread-based solutions for task and nested parallelism, which are becoming more popular with new architectures.
['Adrián Castelló', 'Antonio J. Peña', 'Sangmin Seo', 'Rafael Mayo', 'Pavan Balaji', 'Enrique S. Quintana-Ortí']
A Review of Lightweight Thread Approaches for High Performance Computing
954,838
In this paper, as part of the adaptive resource allocation and management (ARAM) system (Alagoz, 2001), we propose an adaptive admission control strategy, which is aimed at combating link congestion and compromised channel conditions inherent in multimedia satellite networks. We present the performance comparisons of a traditional (fixed) admission control strategy versus the new adaptive admission control strategy for a direct broadcast satellite (DBS) network with return channel system (DBS-RCS). Performance comparisons are done using the ARAM simulator. The traffic mix in the simulator includes both available bit rate (ABR) traffic and variable bit rate (VBR) traffic. The dynamic channel conditions in the simulator reflect time-variant error rates due to external effects such as rain. In order to maximize the resource utilization, both for the fixed and adaptive approaches, the assignment of the VBR services is determined based on the estimated statistical multiplexing and other system attributes, namely, video source, data transmission, and channel coding rates. In this paper, we focus on the admission control algorithms and assess their impact on quality-of-service (QoS) and forward link utilization of DBS-RCS. We show that the proposed adaptive admission control strategy is profoundly superior to the traditional admission control strategy with only a marginal decrease in QoS. Since the ARAM system has several parameters and strategies that play key roles in terms of the performance measures, their sensitivity analyses are also studied to verify the above findings.
['Fatih Alagoz', 'Branimir R. Vojcic', 'David Walters', 'Amina AlRustamani', 'Raymond L. Pickholtz']
Fixed versus adaptive admission control in direct broadcast Satellite networks with return channel systems
527,975
Industrial Wireless Sensor-Actuator Networks (WSANs) enable Internet of Things (IoT) to be incorporated in industrial plants. The dynamics of industrial environments and stringent reliability requirements necessitate high degrees of fault tolerance. WirelessHART is an important industrial standard for WSANs that have seen world-wide deployments. WirelessHART employs graph routing to enhance network reliability through multiple paths. Since many industrial devices operate on batteries in harsh environments where changing batteries is prohibitively labor-intensive, WirelessHART networks need to achieve a long network lifetime. To meet industrial demand for long-term reliable communication, this paper studies the problem of maximizing network lifetime for WirelessHART networks under graph routing. We first formulate the network lifetime maximization problem and prove it is NP-hard. Then, we propose an optimal algorithm based on integer programming, a linear programming relaxation algorithm and a greedy heuristic algorithm to prolong the network lifetime of WirelessHART networks. Experiments in a physical testbed and simulations show our algorithms can improve the network lifetime by up to 60% while preserving the reliability benefits of graph routing.
['Chengjie Wu', 'Dolvara Gunatilaka', 'Abusayeed Saifullah', 'Mo Sha', 'Paras Babu Tiwari', 'Chenyang Lu', 'Yixin Chen']
Maximizing Network Lifetime of WirelessHART Networks under Graph Routing
763,678
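The record above combines an exact integer program, an LP relaxation, and a greedy heuristic. As a loose illustration of the greedy flavour only (not the paper's algorithm or its graph-routing model), the sketch below picks, for each flow, the candidate path that keeps the most-stressed node's normalized energy drain lowest; all names (flows, battery, tx_cost) and the toy topology are hypothetical.

```python
# Simple greedy sketch for lifetime-aware path selection: assign each flow the
# candidate path that minimizes the worst per-node (load / battery) ratio.
# Generic illustration; the paper's IP formulation and LP relaxation are not shown.
def greedy_lifetime(flows, battery, tx_cost=1.0):
    load = {n: 0.0 for n in battery}            # per-period energy drain per node
    chosen = {}
    for flow, candidate_paths in flows.items():
        def worst_drain(path):
            return max((load[n] + tx_cost) / battery[n] for n in path)
        best = min(candidate_paths, key=worst_drain)
        for n in best:
            load[n] += tx_cost
        chosen[flow] = best
    # Network lifetime ~ periods until the most-stressed node depletes its battery.
    lifetime = min(battery[n] / load[n] for n in battery if load[n] > 0)
    return chosen, lifetime

flows = {
    "f1": [["A", "B", "GW"], ["A", "C", "GW"]],
    "f2": [["D", "B", "GW"], ["D", "C", "GW"]],
}
battery = {"A": 100.0, "B": 100.0, "C": 100.0, "D": 100.0, "GW": 1e9}
print(greedy_lifetime(flows, battery))   # the two flows spread over B and C
```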
In this paper, we show a compiler that generates high-speed pipeline circuits for loop and recursive programs written in the C programming language, which are the most time-consuming parts of many application problems. The compiler has the following features. First, all operations (except for memory accesses) are divided into cascades of 8-bit-wide (at maximum) operations in order to achieve a high clock speed. Second, in order to keep the pipeline full, variables that have data feedback dependencies between loop cycles are specially scheduled based on several kinds of optimization techniques. Furthermore, computations of each loop cycle are speculatively started in every clock cycle, even if an array on the same memory bank may be accessed more than once in a loop cycle and there may be data feedback dependencies caused by the array accesses. When the array is accessed more than once, the pipeline is stalled while the array access operations are executed sequentially, and when feedback dependencies are detected, the speculative computations are cancelled and restarted after the updates of the array are finished. Experiments on simple combinatorial programs showed that the pipeline circuits generated by the compiler run at about 39-47 MHz on the ALTERA EPF10KA series (which is as fast as hand-optimized circuits), and that the speedup from speculative execution is more than a factor of two.
['Tsutomu Maruyama', 'Tsutomu Hoshino']
A C to HDL compiler for pipeline processing on FPGAs
50,011
Correlation filters have recently made significant improvements in visual object tracking on both efficiency and accuracy. In this paper, we propose a sparse correlation filter, which combines the effectiveness of sparse representation and the computational efficiency of correlation filters. The sparse representation is achieved through solving an l0-regularized least squares problem. The obtained sparse correlation filters are able to represent the essential information of the tracked target while being insensitive to noise. During tracking, the appearance of the target is modeled by a sparse correlation filter, and the filter is re-trained after tracking on each frame to adapt to the appearance changes of the target. The experimental results on the CVPR2013 Online Object Tracking Benchmark (OOTB) show the effectiveness of our sparse correlation filter-based tracker.
['Yanmei Dong', 'Min Yang', 'Mingtao Pei']
Visual tracking with sparse correlation filters
883,737
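For orientation, a basic single-channel, ridge-regularized correlation filter trained in the Fourier domain is sketched below; it shows only the correlation-filter building block, not the l0-sparse variant or the tracking loop of the paper, and the patch, target response, and regularization value are made up for the demo.

```python
# Basic correlation filter sketch: solve for H elementwise in the Fourier domain
# so that H * F approximates the desired response spectrum G (ridge-regularized).
# Illustration of the correlation-filter ingredient only.
import numpy as np

def train_filter(patch, target, lam=1e-2):
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# Desired response: a Gaussian peak centred on the target position.
size = 64
yy, xx = np.mgrid[:size, :size]
target = np.exp(-(((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 4.0 ** 2)))

rng = np.random.default_rng(0)
patch = rng.normal(size=(size, size))
H = train_filter(patch, target)
resp = respond(H, patch)
print(np.unravel_index(resp.argmax(), resp.shape))   # peak near (32, 32)
```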
In this article a radial basis function network (RBFN) approach for fast inverse kinematics computation and effective geometrically bounded singularity prevention of redundant manipulators is presented. The approach is based on establishing some characterizing matrices, representing some bounded geometrical concepts, in order to yield a simple performance index and a null-space vector for singularity avoidance/prevention and safe path generation. Here, this null-space vector is computed using a properly trained RBFN and is included in the computation of the inverse kinematics, which is also performed by another properly trained RBFN.
['René V. Mayorga', 'Pronnapa Sanongboon']
A radial basis function network approach for geometrically bounded manipulator inverse kinematics computation
136,205
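A generic RBFN regression sketch (fixed Gaussian centres, output weights solved by least squares) fitted to a toy two-link planar-arm inverse-kinematics sample, just to illustrate the network type named in the record above; the paper's characterizing matrices, performance index, and null-space vector are not reproduced, and the link lengths, centre count, and kernel width are arbitrary choices.

```python
# Toy RBFN regression for a 2-link planar arm: learn joint angles as a function
# of end-effector position on one (elbow-up) branch, so the inverse map is
# single-valued. Purely illustrative of an RBFN-based inverse kinematics fit.
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8

# Sample joint angles and compute forward kinematics to build training data.
q = np.column_stack([rng.uniform(0.2, 1.4, 2000), rng.uniform(0.3, 2.5, 2000)])
X = np.column_stack([L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1]),
                     L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])])

centres = X[rng.choice(len(X), 100, replace=False)]
gamma = 4.0
def design(P):
    d2 = ((P[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

W, *_ = np.linalg.lstsq(design(X), q, rcond=None)   # output-layer weights

# Query: predict joint angles for a reachable end-effector position (true q = [0.8, 1.2]).
p = np.array([[L1 * np.cos(0.8) + L2 * np.cos(2.0),
               L1 * np.sin(0.8) + L2 * np.sin(2.0)]])
print(design(p) @ W, "vs", [0.8, 1.2])
```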
In this paper, we study the vehicle-to-vehicle communication and destination discovery problems that are often seen in VANETs. We then present a novel routing protocol called COoperative Destination discovery Scheme with ADaptive routing (CODS-AD). CODS-AD not only enables vehicles to cooperatively assist the source node in discovering the position of the destination node without the support of location services, but also enhances the ability to dynamically adjust the forwarding strategy according to the traffic density of the road segment. This improves the delivery ratio and reduces unnecessary waste of bandwidth in the case of lower traffic density. Based on these studies, we modeled real vehicle mobility with vehicle-to-vehicle wireless transmissions using TraNS and NS2, which enables us to simulate in a much more realistic environment. Finally, the simulation results show that CODS-AD can efficiently locate the position of destination nodes; hence it decreases the control overhead and effectively improves the delivery ratio.
['Chun-Chih Lo', 'Jeng-Wei Lee', 'Che-Hung Lin', 'Mong-Fong Horng', 'Yau-Hwang Kuo']
A Cooperative destination discovery scheme to support adaptive routing in VANETs
507,857
The paper describes the MOLA Tool, which supports the model transformation language MOLA. MOLA Tool consists of two parts: MOLA definition environment and MOLA execution environment. MOLA definition environment is based on the GMF (Generic Modeling Framework) and contains graphical editors for metamodels and MOLA diagrams, as well as the MOLA compiler. The main component of MOLA execution environment is a MOLA virtual machine, which performs model transformations, using an SQL database as a repository. The execution environment may be used as a plug-in for Eclipse based modeling tools (e.g., IBM Rational RSA). The current status of the tool is truly academic.
['Audris Kalnins', 'Edgars Celms', 'Agris Sostaks']
Tool support for MOLA
95,811
Recognition of voiced sounds with a continuous state HMM.
['S. M. Houghton', 'Colin J. Champion', 'Philip Weber']
Recognition of voiced sounds with a continuous state HMM.
734,243
In this paper, we use crossing number and wire area arguments to find lower bounds on the layout area and maximum edge length of a variety of computationally useful networks. In particular, we describe 1) an N-node planar graph which has layout area Θ(N log N) and maximum edge length Θ(N^{1/2}/log^{1/2} N), 2) an N-node graph with an O(N^{1/2})-separator which has layout area Θ(N log^2 N) and maximum edge length Θ(N^{1/2} log N / log log N), and 3) an N-node graph with an O(N^{1-1/r})-separator which has maximum edge length Θ(N^{1-1/r}) for any r ≥ 3.
['Frank Thomson Leighton']
New lower bound techniques for VLSI
139,449
Apoptosis in Cancer Cells
['Eva Blahovcová', 'Henrieta Škovierová', 'Ján Strnádel', 'Dušan Mištuna', 'Erika Halasova']
Apoptosis in Cancer Cells
855,170
We have implemented an ontology-based text-mining tool for predicting disease outbreaks. This tool is designed to be used as a free and open-source plug-in for InSTEDD’s interactive biosurveillance system Riff. Availability. This tool, in its source code, is freely available from http://code.google.com/p/e-dop/.
['Nicolae Dragu', 'Fouad Elkhoury', 'Takunari Miyazaki', 'Ralph Morelli', 'Nicolás di Tada']
Ontology-Based Text Mining for Predicting Disease Outbreaks
431,499
The research on hand gestures has attracted many image processing-related studies, as it intuitively conveys the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor’s stability against luminance and the visual sensor’s textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness of visual sensors against illumination. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.
['Seongwan Kim', 'Yuseok Ban', 'Sangyoun Lee']
Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter
992,781
A new methodology is presented for the automated recognition-identification of musical recordings that have suffered from a high degree of playing speed and frequency band distortion. The procedure of recognition is essentially based on the comparison between an unknown musical recording and a set of model ones, according to some predefined specific characteristics of the signals. In order to extract these characteristics from a musical recording, novel feature extraction algorithms are employed. This procedure is applied to the whole set of model musical recordings, thus creating a model characteristic database. Each time we want an unknown musical recording to be identified, the same procedure is applied to it, and subsequently, the derived characteristics are compared with the database contents via an introduced set of criteria. The proposed methodology led to the development of a system whose performance was extensively tested with various types of broadcasted musical recordings. The system performed successful recognition for the 94% of the tested recordings. It should be noted that the presented system is parallelizable and can operate in real time.
['D. Fragoulis', 'George Rousopoulos', 'Thanasis Panagopoulos', 'Constantin Alexiou', 'Constantin Papaodysseus']
On the automated recognition of seriously distorted musical recordings
159,695
Evaluation of Vehicular Camera Performance through ISO-based Image Quality Quantification.
['Kyung-Woo Ko', 'Kee-Hyon Park', 'Yeong-Ho Ha', 'Cheol-Hee Lee']
Evaluation of Vehicular Camera Performance through ISO-based Image Quality Quantification.
784,517
One of the most representative and studied queries in Spatial Databases is the (K) Nearest-Neighbor Query (NNQ), which discovers the (K) nearest neighbor(s) to a query point. An extension that is important for practical applications is the (K) Group Nearest Neighbor Query (GNNQ), which discovers the (K) nearest neighbor(s) to a group of query points (considering the sum of distances to all the members of the query group). This query has been studied during recent years, considering data sets indexed by efficient spatial data structures. We study (K) GNNQs considering non-indexed data sets, since this case is frequent in practical applications, and we present two (RAM-based) plane-sweep algorithms that apply optimizations emerging from the geometric properties of the problem. By extensive experimentation, using real and synthetic data sets, we highlight the most efficient algorithm.
['George Roumelis', 'Michael Vassilakopoulos', 'Antonio Corral', 'Yannis Manolopoulos']
Plane-sweep algorithms for the K group nearest-neighbor query
662,161
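As a correctness reference for the query defined in the record above (and not the paper's plane-sweep algorithms), a brute-force K Group Nearest Neighbor baseline simply ranks all data points by the sum of Euclidean distances to the query group; the data sizes below are arbitrary.

```python
# Brute-force K Group Nearest Neighbor baseline: return the K data points that
# minimize the sum of Euclidean distances to all query points.
import numpy as np

def k_gnn_brute_force(points, queries, k):
    # points: (n, d) data set, queries: (m, d) query group
    dists = np.linalg.norm(points[:, None, :] - queries[None, :, :], axis=-1)
    total = dists.sum(axis=1)                 # sum of distances to the whole group
    order = np.argsort(total)[:k]
    return points[order], total[order]

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(10_000, 2))
grp = np.array([[10.0, 10.0], [20.0, 15.0], [15.0, 30.0]])
best, cost = k_gnn_brute_force(pts, grp, k=3)
print(best, cost)
```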
Data-centric storage is an effective and important technique in sensor networks; however, most data-centric storage schemes may not be energy efficient and load balanced due to non-uniform event and query distributions. This paper proposes EEBASS, which utilizes an approximation algorithm to solve the optimal storage placement problem according to the variance of event and query distributions, aiming to minimize the total energy consumption of the data-centric storage scheme. It further leverages a ring-based replication structure to achieve the load-balance goal. Simulation results show that EEBASS is more energy efficient and better balanced than traditional data-centric storage mechanisms in sensor networks.
['Lei Xie', 'Lijun Chen', 'Daoxu Chen', 'Li Xie']
EEBASS: Energy-Efficient Balanced Storage Scheme for Sensor Networks
290,424
Mobile devices have been augmented by Multi-path TCP (MPTCP), enabling them to exploit path diversity. Signal-aware MPTCP (SA-MPTCP) takes into account the signal quality, which plays a significant role in energy waste, in order to improve MPTCP's energy consumption. The problem is formulated as a decision-making problem under uncertainty, endeavoring to optimize energy efficiency by selecting the best policy. The simulation results for bulk data transfer show that 42% and 17% energy have been saved in uploading and downloading, respectively, compared to the base MPTCP.
['Mohammad Javad Shamani', 'Weiping Zhu', 'Saeid Rezaie', 'Vahid Naghshin']
Signal aware multi-path TCP
664,065
The paper presents a classification of mathematical problems encountered during partitioning of data when designing parallel algorithms on networks of heterogeneous computers. We specify problems with known efficient solutions and open problems. Based on this classification, we suggest an API for partitioning mathematical objects commonly used in scientific and engineering domains for solving problems on networks of heterogeneous computers. These interfaces allow the application programmers to specify simple and basic partitioning criteria in the form of parameters and functions to partition their mathematical objects. These partitioning interfaces are designed to be used along with various programming tools for parallel and distributed computing on heterogeneous networks.
['Alexey L. Lastovetsky', 'Ravi Reddy']
Classification of Partitioning Problems for Networks of Heterogeneous Computers
243,670
To deliver quality of service (QoS) in high-speed local networks requires providing router solutions to deal with the specific challenges and constraints present in these networks. Two basic mechanisms for providing QoS are resource reservation and prioritization. These mechanisms are not mutually exclusive. Several methods for combining prioritization and resource reservation in a connection-oriented local network environment are evaluated. Switch organizations based on these combinations are presented to illustrate their properties and allow a qualitative comparison. Through simulation, these combinations are evaluated in terms of their ability to provide predictable overall communication performance. Connection properties such as latency, utilization and jitter are plotted for mixtures of high, medium, and low bandwidth connections consisting of VBR and CBR traffic.
['Douglas H. Summerville', 'Lynwald Edmunds']
An analysis of resource scheduling with prioritization for QoS in LANs
231,396
Assembly code analysis is one of the critical processes for detecting and proving software plagiarism and software patent infringements when the source code is unavailable. It is also a common practice to discover exploits and vulnerabilities in existing software. However, it is a manually intensive and time-consuming process even for experienced reverse engineers. An effective and efficient assembly code clone search engine can greatly reduce the effort of this process, since it can identify the cloned parts that have been previously analyzed. The assembly code clone search problem belongs to the field of software engineering. However, it strongly depends on practical nearest neighbor search techniques in data mining and databases. By closely collaborating with reverse engineers and Defence Research and Development Canada (DRDC), we study the concerns and challenges that make existing assembly code clone approaches not practically applicable from the perspective of data mining. We propose a new variant of LSH scheme and incorporate it with graph matching to address these challenges. We implement an integrated assembly clone search engine called Kam1n0. It is the first clone search engine that can efficiently identify the given query assembly function's subgraph clones from a large assembly code repository. Kam1n0 is built upon the Apache Spark computation framework and Cassandra-like key-value distributed storage. A deployed demo system is publicly available. Extensive experimental results suggest that Kam1n0 is accurate, efficient, and scalable for handling large volume of assembly code.
['Steven H. H. Ding', 'Benjamin C. M. Fung', 'Philippe Charland']
Kam1n0: MapReduce-based Assembly Clone Search for Reverse Engineering
880,489
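To make the LSH ingredient in the record above concrete, here is a generic MinHash-plus-banding sketch that groups functions with similar instruction-token sets into candidate clone buckets; it is not Kam1n0's LSH variant, does no graph matching, and the token sets, band/row counts, and function names are invented for the demo.

```python
# Generic MinHash + banding LSH sketch: functions whose token sets have high
# Jaccard similarity are likely to collide in at least one band. Illustration
# of the LSH idea only; no subgraph matching is performed.
import random
from collections import defaultdict

def make_hashers(k, seed=0):
    rnd = random.Random(seed)
    seeds = [rnd.getrandbits(32) for _ in range(k)]
    return [lambda t, s=s: hash((s, t)) for s in seeds]

def minhash_signature(tokens, hashers):
    return tuple(min(h(t) for t in tokens) for h in hashers)

def lsh_buckets(functions, n_hashes=24, bands=6):
    hashers = make_hashers(n_hashes)
    rows = n_hashes // bands
    buckets = defaultdict(set)
    for name, tokens in functions.items():
        sig = minhash_signature(tokens, hashers)
        for b in range(bands):
            buckets[(b, sig[b * rows:(b + 1) * rows])].add(name)
    return {key: grp for key, grp in buckets.items() if len(grp) > 1}

functions = {
    "f1": {"push", "mov", "xor", "call", "ret"},
    "f1_clone": {"push", "mov", "xor", "call", "ret", "nop"},
    "g": {"lea", "cmp", "jne", "add", "ret"},
}
print(lsh_buckets(functions))   # likely groups f1 with f1_clone as candidates
```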
Rapid urbanization can cause many serious social, environmental and ecological problems, so it is important to monitor urbanization in terms of spatial distribution and dynamic change using RS and GIS techniques. We use the 1:100000 resource and environmental dynamic vector database of China to derive a 1 km grid urban land change data collection for the decade from the end of the 1980s to 2000, which captures the high-resolution urban land use information by calculating the area percentage of the urban land use category within every 1 km grid cell. Based on these data, we analyzed the spatial relationship between urban development and population, social economy, and the natural environment. As a result, some development features and trends of Chinese urbanization are presented.
['Jianfeng He']
Using RS and GIS technique to monitor China urbanization development
488,358
This paper proposes a ship handling simulator that uses an actual training ship for third-grade pilot trainees. The ship handling simulator was achieved by developing a control system that reproduces a large vessel's maneuverability on the actual training ship. Moreover, a visual system was developed that uses Augmented Reality to give trainees the view from a large vessel's bridge. In order to evaluate the influence of the developed system on trainees, the trainees' mental workload while using the system was measured in an experiment on the actual ship. The experimental result showed that the developed system elicits high trainee motivation for maneuvering training.
['Rei Takaseki', 'Tadatsugi Okazaki']
Evaluation of Override Ship Maneuvering Simulator Using Augmented Reality
607,814
Most computational engineering based loosely on biology uses continuous variables to represent neural activity. Yet most neurons communicate with action potentials. The engineering view is equivalent to using a rate-code for representing information and for computing. An increasing number of examples are being discovered in which biology may not be using rate codes. Information can be represented using the timing of action potentials, and efficiently computed within this representation. The "analog match" problem of odour identification is a simple problem which can be efficiently solved using action potential timing and an underlying rhythm. By using adapting units to effect a fundamental change of representation of a problem, we map the recognition of words (having uniform time-warp) in connected speech into the same analog match problem. We describe the architecture and preliminary results of such a recognition system. Using the fast events of biology in conjunction with an underlying rhythm is one way to overcome the limits of an event-driven view of computation. When the intrinsic hardware is much faster than the time scale of change of inputs, this approach can greatly increase the effective computation per unit time on a given quantity of hardware.
['J. J. Hopfield', 'Carlos D. Brody', 'Sam T. Roweis']
Computing with Action Potentials
509,689
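One way to read the "analog match" idea above, sketched under my own assumptions: if analog components are encoded as spike times through a logarithm, scaling the whole input only shifts every spike by the same amount, so a stored pattern can be expressed as a set of delays and a match shows up as coincident delayed spikes. This is an illustration of the timing-based representation, not the paper's network or its speech recognizer; the patterns and scoring are invented for the demo.

```python
# Toy "analog match" with spike timing: log-encode intensities as spike times,
# delay them by the stored pattern's log values, and measure coincidence as the
# spread of arrival times (near-zero spread => proportional patterns, a match).
import numpy as np

def spike_times(x):
    return -np.log(x)                  # stronger input -> earlier spike

def match_score(x, stored):
    arrival = spike_times(x) + np.log(stored)
    return arrival.std()               # ~0 when x is proportional to stored

stored = np.array([0.2, 0.5, 0.1, 0.9, 0.3])
same_pattern_weaker = 0.4 * stored                 # scaled version of the pattern
different = np.array([0.3, 0.2, 0.8, 0.1, 0.6])

print(match_score(same_pattern_weaker, stored))    # ~0: coincidence, i.e. a match
print(match_score(different, stored))              # clearly larger spread
```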
Semantic tags of points of interest (POIs) are a crucial prerequisite for location search, recommendation services, and data cleaning. However, most POIs in location-based social networks (LBSNs) are either tag-missing or tag-incomplete. This article aims to develop semantic annotation techniques to automatically infer tags for POIs. We first analyze two LBSN datasets and observe that there are two types of tags, category-related ones and sentimental ones, which have unique characteristics. Category-related tags are hierarchical, whereas sentimental ones are category-aware. All existing related work has adopted classification methods to predict high-level category-related tags in the hierarchy, but they cannot apply to infer either low-level category tags or sentimental ones. In light of this, we propose a latent-class probabilistic generative model, namely the spatial-temporal topic model (STM), to infer personal interests, the temporal and spatial patterns of topics/semantics embedded in users’ check-in activities, the interdependence between category-topic and sentiment-topic, and the correlation between sentimental tags and rating scores from users’ check-in and rating behaviors. Then, this learned knowledge is utilized to automatically annotate all POIs with both category-related and sentimental tags in a unified way. We conduct extensive experiments to evaluate the performance of the proposed STM on a real large-scale dataset. The experimental results show the superiority of our proposed STM, and we also observe that the real challenge of inferring category-related tags for POIs lies in the low-level ones of the hierarchy and that the challenge of predicting sentimental tags are those with neutral ratings.
['Tieke He', 'Hongzhi Yin', 'Zhenyu Chen', 'Xiaofang Zhou', 'Shazia Wasim Sadiq', 'Bin Luo']
A Spatial-Temporal Topic Model for the Semantic Annotation of POIs in LBSNs
857,424
An Integrated and Iterative Research Direction for Interactive Digital Narrative
['Hartmut Koenitz', 'Teun Dubbelman', 'Noam Knoller', 'Christian Roth']
An Integrated and Iterative Research Direction for Interactive Digital Narrative
916,395
Fast Fourier Transformation Algorithm for Single-Chip Cloud Computers Using RCCE.
['Wasuwee Sodsong', 'Bernd Burgstaller']
Fast Fourier Transformation Algorithm for Single-Chip Cloud Computers Using RCCE.
782,288