Columns: abstract (string, length 0–11.1k), authors (string, length 9–1.96k), title (string, length 4–353), __index_level_0__ (int64, 3–1,000k)
Motivation: Most secondary structure prediction programs target only alpha helix and beta sheet structures and lump all other structures into the random coil pseudo-class. However, such an assignment often ignores existing local ordering in so-called random coil regions. Signatures of such ordering are distinct dihedral angle patterns. For this reason, we propose as an alternative approach to predict dihedral angle regions directly for each residue, as this yields a greater amount of structural information.

Results: We propose a multi-step support vector machine (SVM) procedure, dihedral prediction (DHPRED), to predict the dihedral angle state of residues from sequence. Trained on 20,000 residues, our approach leads to dihedral region predictions whose accuracy in regions without alpha helices or beta sheets is higher than that of secondary structure prediction programs.

Availability: DHPRED has been implemented as a web service, which academic researchers can access from our webpage http://www.fz-juelich.de/nic/cbb

Contact: [email protected]
['Olav Zimmermann', 'Ulrich H. E. Hansmann']
Support vector machines for prediction of dihedral angle regions
106,284
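To make the multi-step SVM idea above concrete, here is a minimal sketch of window-based dihedral-state classification in Python with scikit-learn. The window width, one-hot encoding, and label alphabet are illustrative assumptions, not the DHPRED implementation.

# Minimal sketch of window-based dihedral-state SVM classification in the
# spirit of DHPRED; encoding and window width are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO)}

def encode_window(seq, center, half_width=6):
    """One-hot encode a sliding sequence window around one residue."""
    vec = np.zeros((2 * half_width + 1, len(AMINO)))
    for k in range(-half_width, half_width + 1):
        pos = center + k
        if 0 <= pos < len(seq) and seq[pos] in AA_INDEX:
            vec[k + half_width, AA_INDEX[seq[pos]]] = 1.0
    return vec.ravel()

def train_dihedral_svm(sequences, state_labels):
    """sequences: list of residue strings; state_labels: per-residue
    dihedral-region labels (e.g. integers 0..3)."""
    X = np.array([encode_window(s, i)
                  for s in sequences for i in range(len(s))])
    y = np.array([l for labels in state_labels for l in labels])
    return SVC(kernel="rbf", C=1.0).fit(X, y)  # multi-class via one-vs-one

A second stage could feed the first stage's predicted states over a window back in as extra features, which is the sense in which such procedures are multi-step.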
Reliable evaluation of Information Retrieval systems requires large amounts of relevance judgments. Making these annotations is quite complex and tedious for many Music Information Retrieval tasks, so performing such evaluations requires too much effort. A low-cost alternative is the application of Minimal Test Collection algorithms, which offer quite reliable results while significantly reducing the annotation effort. The idea is to incrementally select what documents to judge so that we can compute estimates of the effectiveness differences between systems with a certain degree of confidence. In this paper we show a first approach towards its application to the evaluation of the Audio Music Similarity and Retrieval task, run by the annual MIREX evaluation campaign. An analysis with the MIREX 2011 data shows that the judging effort can be reduced to about 35% to obtain results with 95% confidence.
['Julián Urbano', 'Markus Schedl']
Towards minimal test collections for evaluation of audio music similarity and retrieval
493,336
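The following toy sketch illustrates the Minimal Test Collection loop described above for comparing two ranked systems: unjudged documents are modeled as Bernoulli relevance, the next judgment is spent where it most affects the estimated difference, and judging stops once the sign of the difference is known with the desired confidence. Precision@k stands in for the AP-style measures used in the paper; all names are illustrative.

import math

def mtc_compare(run_a, run_b, judge, k=10, p=0.5, target_conf=0.95):
    """run_a, run_b: ranked doc-id lists; judge(doc) -> 0/1 relevance oracle.
    Returns (estimated P@k difference, confidence in its sign)."""
    pool = set(run_a[:k]) ^ set(run_b[:k])      # docs that can move the difference
    judged = {}
    def weight(d):                              # contribution of d to delta P@k
        return ((d in run_a[:k]) - (d in run_b[:k])) / k
    while True:
        est = sum(weight(d) * judged.get(d, p) for d in pool)
        var = sum(weight(d) ** 2 * p * (1 - p) for d in pool if d not in judged)
        if var == 0:
            return est, 1.0
        z = abs(est) / math.sqrt(var)           # Gaussian approximation
        conf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        if conf >= target_conf:
            return est, conf
        # spend the next judgment where it affects the estimate the most
        nxt = max((d for d in pool if d not in judged),
                  key=lambda d: abs(weight(d)))
        judged[nxt] = judge(nxt)

For P@k all per-document weights are equal; for AP-style measures they vary with rank, which is where the document-selection step earns its keep.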
['Max Van Kleek', 'Brennan Moore', 'David R. Karger', 'Paul André', 'm.c. schraefel']
Atomate it! End-user context-sensitive automation using heterogeneous information sources on the web
604,386
This paper proposes a scenario-based stochastic optimal control strategy, considering stochastic driver behaviors, to deal with the energy management issue for parallel HEVs. First, after modelling the dynamic system of a parallel HEV, including both the mechanical and electrical systems, a stochastic model predictive control (SMPC) problem with average constraints is formulated for the energy management issue, treating the demanded torque in the prediction horizon as a stochastic variable. Moreover, to make the proposed problem solvable, two scenarios are chosen and weighted based on the known conditioned transition probability distribution of the demanded torque, transforming the original problem into an equivalent deterministic nonlinear model predictive control (NMPC) problem. The formulated equivalent problem is solved by employing the Continuation/GMRES algorithm. Afterwards, an on-line learning algorithm for updating the conditioned transition probabilities of the demanded torque is developed, since driver behavior varies as the route and environment change. Finally, a validation simulation is carried out with an HEV simulator built in the GT-Suite software.
['Xun Shen', 'Jiangyan Zhang', 'Tielong Shen']
Real-time scenario-based stochastic optimal energy management strategy for HEVs
978,003
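A minimal sketch of the on-line learning step described above: quantize the demanded torque into bins, keep transition counts, and re-derive the conditioned transition probabilities (and the two weighted scenarios) as new driving data arrive. Bin handling and the Dirichlet-style prior are illustrative assumptions.

import numpy as np

class TorqueMarkovModel:
    def __init__(self, n_bins, prior=1.0):
        # Dirichlet-style prior keeps probabilities well-defined early on
        self.counts = np.full((n_bins, n_bins), prior)

    def update(self, prev_bin, next_bin):
        """On-line learning: one observed torque transition."""
        self.counts[prev_bin, next_bin] += 1.0

    def transition_probs(self, prev_bin):
        row = self.counts[prev_bin]
        return row / row.sum()

    def top_two_scenarios(self, prev_bin):
        """Two most probable next states and their weights, as used to turn
        the stochastic problem into a weighted deterministic NMPC problem."""
        p = self.transition_probs(prev_bin)
        idx = np.argsort(p)[-2:][::-1]
        w = p[idx] / p[idx].sum()            # renormalized scenario weights
        return idx, w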
We describe our experiences in building a city-wide infrastructure for wide-area wireless experimentation. Our infrastructure has two components: (i) a vehicular testbed consisting of wireless nodes, each equipped with both cellular (EV-DO) and WiFi interfaces and mounted on city buses operating in Madison, Wisconsin, and (ii) a software platform that utilizes these testbed nodes to continuously monitor and characterize the performance of large-scale wireless networks, such as city-wide mesh networks, unplanned deployments of WiFi hotspots, and cellular networks. Beyond our initial efforts in building and deploying this infrastructure, we have also utilized it to gain an initial understanding of the diversity of user experience in large-scale wireless networks, especially under various mobility scenarios. Since our vehicle-mounted testbed nodes have fairly deterministic mobility patterns, they provide us with much-needed performance data on parameters such as RF coverage and available bandwidth, and allow us to quantify the impact of mobility on performance. We use our initial measurements from this testbed to showcase its ability to provide an efficient, low-cost, and robust method to monitor our target wireless networks. These initial measurements also highlight the challenges we face as we continue to expand this infrastructure. We discuss what these challenges are and how we intend to address them.
['Justin Ormont', 'Jordan Walker', 'Suman Banerjee', 'Ashwin Sridharan', 'Mukund Seshadri', 'Sridhar Machiraju']
A city-wide vehicular infrastructure for wide-area wireless experimentation
544,824
One of the most serious problems when using IIR filters is the settling behaviour, which depends on the incoming signal and on the values held in the state vector, or memory, of the filter. The description of the state vector is given in the z-domain for the second direct form and for the cascaded structure. If the incoming signal is monochromatic, and when its magnitude, phase and frequency are given or can be calculated, it is possible to determine the state vector (memory) of the filter such that the settling time of the response to a step change of the monochromatic signal is zero. Various results are shown for one case, in which the preinitialization is done exactly with no noise present, and for other cases, in which the monochromatic signal is overlaid by white noise.
['Dieter Nagel', 'Günter Wolf']
Letter: Shortening of the settling time of digital IIR-filters
438,062
For many real-world optimization problems, the robustness of a solution is of great importance in addition to the solution's quality. By robustness, we mean that small deviations from the original design, e.g., due to manufacturing tolerances, should be tolerated without a severe loss of quality. One way to achieve that goal is to evaluate each solution under a number of different scenarios and use the average solution quality as fitness. However, this approach is often impractical, because the cost of evaluating each individual several times is unacceptable. In this paper, we present a new and efficient approach to estimating a solution's expected quality and variance. We propose to construct local approximate models of the fitness function and then use these approximate models to estimate expected fitness and variance. Based on a variety of test functions, we demonstrate empirically that our approach significantly outperforms the implicit averaging approach, as well as the explicit averaging approaches using existing estimation techniques reported in the literature.
['Ingo Paenke', 'Jürgen Branke', 'Yaochu Jin']
Efficient search for robust solutions by means of evolutionary algorithms and fitness approximation
366,056
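The core idea above, estimating expected fitness and variance from a cheap local approximation rather than repeated true evaluations, can be sketched as follows. The diagonal quadratic model and the perturbation scale are illustrative assumptions, not the authors' exact model.

import numpy as np

def robust_estimate(x, archive_X, archive_f, sigma=0.05, n_mc=1000, rng=None):
    """Estimate expected fitness and variance of design x under perturbation,
    using a local (diagonal) quadratic model fitted to archived evaluations."""
    rng = rng or np.random.default_rng(0)
    # features: [1, x_i, x_i^2] -- a cheap quadratic surrogate
    Phi = np.hstack([np.ones((len(archive_X), 1)), archive_X, archive_X**2])
    coef, *_ = np.linalg.lstsq(Phi, archive_f, rcond=None)
    def model(Z):
        phi = np.hstack([np.ones((len(Z), 1)), Z, Z**2])
        return phi @ coef
    # Monte Carlo over tolerance-style perturbations; only the model is sampled
    Z = x + sigma * rng.standard_normal((n_mc, len(x)))
    fz = model(Z)
    return fz.mean(), fz.var()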
Many modern embedded systems use networks to communicate. This increases the attack surface: the adversary does not need physical access to the system and can launch remote attacks. By exploiting software bugs, the attacker might be able to change the behavior of a program. Security violations in safety-critical systems are particularly dangerous since they might lead to catastrophic results. Hence, safety-critical software requires additional protection. We present an approach to detect and prevent control-flow attacks. Such attacks maliciously modify a program's control flow to achieve the attacker's desired behavior. We develop ControlFreak, a hardware watchdog that monitors program execution and prevents illegal control-flow transitions. The watchdog employs chained signatures to detect any modification of the instruction stream and any illegal jump in the program, even if the signatures themselves are maliciously modified.
['Sergei Arnautov', 'Christof Fetzer']
ControlFreak: Signature Chaining to Counter Control Flow Attacks
578,375
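A minimal software sketch of signature chaining as described above: each instruction extends a running signature, so a modified or skipped instruction desynchronizes every later checkpoint. The hash choice and checkpoint granularity are illustrative assumptions, not the ControlFreak hardware design.

import hashlib

def chain(sig, instruction_word):
    """Extend the running signature with one instruction."""
    return hashlib.sha256(sig + instruction_word).digest()

def precompute_checkpoints(basic_block, seed=b"\x00" * 32):
    """Reference signatures at each instruction boundary of a block."""
    sigs, s = [], seed
    for ins in basic_block:
        s = chain(s, ins)
        sigs.append(s)
    return sigs

def verify(stream, reference_sigs, seed=b"\x00" * 32):
    """Watchdog side: recompute the chain over the observed instruction
    stream; any mismatch flags an illegal control-flow transition."""
    s = seed
    for ins, ref in zip(stream, reference_sigs):
        s = chain(s, ins)
        if s != ref:
            return False
    return True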
['Guangzhao Cui', 'Yan Wang', 'Dong Han', 'Yanfeng Wang', 'Zicheng Wang', 'Yanmin Wu']
An Encryption Scheme Based on DNA Microdots Technology
704,370
To speed up fetching Web pages, we present an intelligent technology for Web pre-fetching. We use a simplified WWW data model to represent the data in the cache of a Web browser and mine association rules from it. We store these rules in a knowledge base so as to predict the user's actions. Intelligent agents are responsible for mining the users' interests and pre-fetching Web pages based on the interest association repository. In this way, user browsing time is reduced transparently.
['Baowen Xu', 'Weifeng Zhang', 'William Song', 'Hongji Yang', 'Chih-Hung Chang']
Application of data mining in Web pre-fetching
365,044
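A toy sketch of the scheme above: mine pairwise "page A is often followed by page B" association rules from cached sessions, store them as a rule base, and let an agent prefetch the predicted next page. The thresholds and the single-consequent rule form are simplifying assumptions.

from collections import Counter, defaultdict

def mine_rules(sessions, min_support=3, min_confidence=0.6):
    """sessions: lists of page URLs in visit order; returns rules a -> b."""
    follows = defaultdict(Counter)
    for s in sessions:
        for a, b in zip(s, s[1:]):
            follows[a][b] += 1
    rules = {}
    for a, nexts in follows.items():
        total = sum(nexts.values())
        b, cnt = nexts.most_common(1)[0]
        if cnt >= min_support and cnt / total >= min_confidence:
            rules[a] = b          # rule stored in the knowledge base
    return rules

def prefetch(current_page, rules, fetch):
    """Agent hook: fetch the predicted next page in the background."""
    if current_page in rules:
        fetch(rules[current_page])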
This paper describes the dynamic modeling of linear object deformation based on differential geometry coordinates. Deformable linear objects such as cables and strings are widely used in daily life, in the electrical industry, and in medical operations. Modeling, control, and manipulation of deformable linear objects are key to many applications. We have proposed differential geometry coordinates to describe the 2D/3D deformation of a linear object with the minimum number of parameters. Based on this description, we have formulated the static deformation of a linear object using the differential geometry coordinates, but the dynamic deformation has not been investigated yet. In this paper, we apply differential geometry coordinates to the dynamic modeling of linear objects. First, we formulate the dynamic 2D deformation of an inextensible linear object based on a differential geometry coordinate system. Second, we show simulation results using the proposed modeling technique. Finally, we apply the proposed dynamic modeling to the control of a flexible link.
['Hidefumi Wakamatsu', 'Kousaku Takahashi', 'Shinichi Hirai']
Dynamic Modeling of Linear Object Deformation based on Differential Geometry Coordinates
134,667
Satellite ocean radar data are used to assess the flat surface reflectivity for seawater at 36 GHz by comparison to an existing model for dielectric constant variation. Sea surface temperature (SST) is the dominant control, and results indicate a 14% variation in the normalized radar cross section (NRCS) at Ka-band (35.75 GHz) that is in close agreement with model prediction. Consistent results are obtained globally using near-nadir incidence data from both the SARAL AltiKa radar altimeter and Global Precipitation Measurement mission rain radar. The observations affirm that small but systematic SST-dependent corrections at Ka-band may require consideration prior to NRCS use in ocean surface wave investigations and applications. As an example, we demonstrate a systematic improvement in AltiKa ocean wind speed inversions after such an SST adjustment. Lower frequency C- and Ku-band results are also assessed to confirm the general agreement with prediction and a much smaller variation due to SST.
['Douglas Vandemark', 'Bertrand Chapron', 'Hui Feng', 'Alexis Mouche']
Sea Surface Reflectivity Variation With Ocean Temperature at Ka-Band Observed Using Near-Nadir Satellite Radar Data
695,310
Novel implementation techniques, such as the use of direct-charge-transfer stage, noise coupling, and dynamic element matching can improve the performance of wideband ΔΣ ADCs. However, they introduce extra loop delay, which compromises the low-distortion property and even the loop stability. This paper shows how the addition of independent feedback and feed-forward branches to the loop filter can compensate the extra loop delay, and restore the desired signal and noise transfer functions. The design methodology is then generalized for different kinds of ΔΣ ADCs, and the low-distortion property is analyzed. Two wideband delta-sigma ADCs have been designed and simulated to verify the theory.
['Yan Wang', 'Pavan Kumar Hanumolu', 'Gabor C. Temes']
Design Techniques for Wideband Discrete-Time Delta-Sigma ADCs With Extra Loop Delay
357,552
For every major sport, analysts can and often do extract large amounts of data, which can be leveraged by media and fans, athletes, and organizations. These efforts are often done in collaboration with leading technology vendors, who also have recognized the tremendous value of sports analytics. The ubiquity, diversity, and relative accessibility of sports data makes it a particularly attractive domain for a range of visualization researchers. Motivated by the significant growth and popularity as well as overall potential of sports data visualization, this special issue gathers state-of-the-art research on this emerging topic.
['Rahul C. Basole', 'Dietmar Saupe']
Sports Data Visualization [Guest editors' introduction]
898,390
The speed of today's worms demands automated detection, but the risk of false positives poses a difficult problem. In prior work, we proposed a host-based intrusion-detection system for worms that leveraged collaboration among peers to lower its risk of false positives, and we simulated this approach for a system with two peers. In this paper, we build upon that work and evaluate our ideas "in the wild." We implement Wormboy 2.0, a prototype of our vision that allows us to quantify and compare worms' and non-worms' temporal consistency: similarity over time in their invocations of system calls. We deploy our prototype to a network of 30 hosts running Windows XP with Service Pack 2 to monitor and analyze 10,776 processes, including 511 unique non-worms (873 if we consider unique versions to be unique non-worms). We identify properties with which we can distinguish non-worms from worms 99% of the time. We find that our collaborative architecture, using patterns of system calls and simple heuristics, can detect worms running on multiple peers. And we find that collaboration among peers significantly reduces our probability of false positives, because non-worm processes with worm-like properties are unlikely to appear on many peers simultaneously.
['David J. Malan', 'Michael D. Smith']
Exploiting temporal consistency to reduce false positives in host-based, collaborative detection of worms
108,807
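A simplified sketch of the temporal-consistency measurement described above: compare a process's system-call histogram across consecutive time windows, then require agreement across several peers before raising an alarm. The window granularity, cosine measure, and quorum are illustrative assumptions.

import numpy as np

def call_histogram(calls, n_syscalls=512):
    """Histogram of system-call IDs observed in one time window."""
    h = np.zeros(n_syscalls)
    for c in calls:
        h[c] += 1
    return h

def temporal_consistency(windows, n_syscalls=512):
    """Mean cosine similarity between successive windows of syscall IDs;
    worms tend to repeat near-identical call patterns over time."""
    hists = [call_histogram(w, n_syscalls) for w in windows]
    sims = []
    for a, b in zip(hists, hists[1:]):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na and nb:
            sims.append(float(a @ b) / (na * nb))
    return sum(sims) / len(sims) if sims else 0.0

def collaborative_alarm(peer_scores, threshold=0.95, quorum=5):
    """Flag only if many peers see the same worm-like process, which is what
    drives down false positives from locally anomalous non-worms."""
    return sum(s >= threshold for s in peer_scores) >= quorum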
['Matthias Baaz', 'Stefan Hetzl', 'Daniel Weller']
On the complexity of proof deskolemization
112,302
This paper studies the basic design challenges associated with multirate sensor arrays. A multirate sensor array is a sensor array in which each sensor node communicates a low-resolution measurement to a central processing unit. The objective is to design the individual sensor nodes and the central processing unit such that, at the end, a unified high-resolution measurement is reconstructed. A multirate sensor array can be modeled as an analysis filterbank in discrete-time. Using this model, the design problem is reduced to solving the following two problems: a) how to design the sensor nodes such that the time-delay of arrival (TDOA) between the sensors can be estimated and b) how to design a synthesis filterbank to fuse the low-rate data sent by the sensor nodes given the TDOA? In this paper, we consider a basic two-channel sensor array. We show that it is possible to estimate the TDOA between the sensors if the analysis filters incorporated in the array satisfy specific phase-response requirements. We then provide practical sample designs that satisfy these requirements. We prove, however, that a fixed synthesis filterbank cannot reconstruct the desired high-resolution measurement for all TDOA values. As a result, we suggest a fusion system that uses different sets of synthesis filters for even and odd TDOAs. Finally, we use the H∞ optimality theory to design optimal synthesis filters.
['Omid S. Jahromi', 'Parham Aarabi']
Theory and design of multirate sensor arrays
306,257
Motivated by a physical phenomenon, the diffusion process, this paper develops a diffusion-based algorithm for workspace generation of highly articulated manipulators. This algorithm makes the workspace generation problem as simple as solving a diffusion-type equation which has an explicit solution. This equation is a partial differential equation defined on the motion group and describes the evolution of the workspace density function depending on the manipulator length and kinematic properties. Numerical simulations using this algorithm are also presented.
['Yunfeng Wang', 'Gregory S. Chirikjian']
A diffusion-based algorithm for workspace generation of highly articulated manipulators
328,881
This paper addresses the problem of the blind source separation which consists of recovering a set of signals of which only instantaneous linear mixtures are observed. A blind source separation approach exploiting the difference in the time-frequency (t-f) signatures of the sources is considered. The approach is based on the diagonalization of a combined set of 'spatial time-frequency distributions'. Asymptotic performance analysis of the proposed method is performed. Numerical simulations are provided to demonstrate the effectiveness of our approach and to validate the theoretical expression of the asymptotic performance.
['Adel Belouchrani', 'Moeness G. Amin']
Blind source separation using time-frequency distributions: algorithm and asymptotic performance
60,344
['Benoit Combemale', 'Xavier Crégut', 'Alain Caplain', 'Bernard Coulette']
Rigorous modeling of a development process in SPEM.
781,584
We describe a set of image editing and viewing tools that explicitly take into account the resolution of the display on which the image is viewed. Our approach is twofold. First, we design editing tools that process only the visible data, which is useful for images larger than the display. This encompasses cases such as multi-image panoramas and high-resolution medical data. Second, we propose an adaptive way to set viewing parameters such as brightness and contrast. Because we deal with very large images, different locations and scales often require different viewing parameters. We let users set these parameters at a few places and interpolate satisfying values everywhere else. We demonstrate the efficiency of our approach on different display and image sizes. Since the computational complexity of rendering a view depends on the display resolution and not on the actual input image resolution, we achieve interactive image editing even on a 16-gigapixel image.
['Won-Ki Jeong', 'Micah K. Johnson', 'Insu Yu', 'Jan Kautz', 'Hanspeter Pfister', 'Sylvain Paris']
Display-aware image editing
419,191
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed novel tunable-kernel model to effectively construct a very compact density estimate accurately.
['Sheng Chen', 'Xia Hong', 'Chris J. Harris']
Probability Density Estimation With Tunable Kernels Using Orthogonal Forward Regression
94,415
['Rajkumar Jain', 'Narendra S. Chaudhari']
Formulation of 3-Clustering as a 3-SAT Problem.
772,675
We present a new approach to integrated motion estimation and segmentation by combining methods from discrete and continuous optimization. The velocity of each of a set of regions is modeled as a Gaussian-distributed random variable and motion models and segmentation are obtained by alternated maximization of a Bayesian a-posteriori probability. We show that for fixed segmentation the model parameters are given by a closed-form solution. Given the velocities, the segmentation is in turn determined using graph cuts which allows a globally optimal solution in the case of two regions. Consequently, there is no contour evolution based on differential increments as for example in level set methods. Experimental results on synthetic and real data show that good segmentations are obtained at speeds close to real-time.
['Thomas Schoenemann', 'Daniel Cremers']
Near real-time motion segmentation using graph cuts
518,421
Ullman and Basri (1991) have shown that a 3D scene can be implicitly represented as a linear combination of two or more basis-images. We outline a simple visualisation system which uses this result to provide interactive viewpoint control. Moreover, we show that each novel view can be rendered without significant departure from the standard graphics pipeline. Essentially, only texture-map, z-buffer and accumulation-buffer facilities are required. The need for dense feature-correspondence is also avoided. It is desirable for image-based rendering to follow the standard pipeline, in order to exploit hardware-support for existing graphics APIs. As an example, we present the results of an OpenGL implementation of the system. Our observations are relevant to augmented reality, visualisation, animation and video coding applications. We also consider the possible use of image-based representations in network-graphics.
['Miles E. Hansard', 'Bernard E. Buxton']
Image-based rendering via the standard graphics pipeline
461,834
The efficiency of light-emitting-diode (LED) lights approaches that of fluorescent lamps. LED light sources find more applications than conventional light bulbs due to their compactness, lower heat dissipation, and real-time color-changing capability. Stabilizing the colors of red-green-blue (RGB) LED lights is a challenging task, which includes color light intensity control using switching-mode power converters, color point maintenance against LED junction temperature change, and limiting LED device temperature to prolong the LED lifetime. In this paper, we present an LED junction temperature measurement technique for a pulsewidth-modulation diode-forward-current-controlled RGB LED lighting system. The technique has been automated and can effectively stabilize the color without the need for expensive feedback systems that involve light sensors. Performance in terms of chromaticity and luminance stability for a temperature-compensated RGB LED system is presented.
['Xiaohui Qu', 'Siu-Chung Wong', 'Chi K. Tse']
Temperature Measurement Technique for Stabilizing the Light Output of RGB LED Lamps
475,003
With escalating technological advances and increased computing demands, the director of a university based computing facility finds greater professional responsibilities to perform with somewhat diminishing resources. To partially resolve this imbalance of task and resources, the leaders of computing organizations must seek to utilize their own time as efficaciously as possible. Planning to achieve maximum efficiency in a given time frame is a complex and individualized process. Even though people react differently to time constraints, each can seek to improve individual productivity within a constant time parameter. Practical ways to manage time to improve performance in a changing university computing environment is the theme of this paper.
['Darleen V. Pigford']
Improving personal efficiency: Time management in today's changing university computing environment
328,555
The growing trend of using spatial information in various domains has increased the need for spatial data analysis. As spatial data analysis involves the study of interactions between spatial objects, Probabilistic Relational Models (PRMs) can be a good choice for modeling probabilistic dependencies between such objects. However, standard PRMs do not support spatial objects. Here, we present a general solution for incorporating spatial information into PRMs. We also explain how our model can be learned from data and discuss the possibility of extending it to support spatial autocorrelation.
['Rajani Chulyadyo', 'Philippe Leray']
Integrating spatial information into probabilistic relational models
598,491
Skeleton-driven animation is a widespread technique, frequently used in film and video game productions to animate 3D characters. The process of preparing characters for skeletal animation is referred to as character rigging. Commercial applications such as Maya or 3DS Max provide many tools that support this process, including the 'joint tool' and the 'paint skin weights tool'. Most of these tools are difficult for novice users to use, and even for professional artists, rigging requires many hours of intensive effort.
['Seungbae Bang', 'Byungkuk Choi', 'Roger Blanco i Ribera', 'Meekyoung Kim', 'Sung-Hee Lee', 'Junyong Noh']
Interactive rigging
684,309
Objective: This article provides a critical review of research pertaining to the measurement of human factors (HF) issues in current and future air traffic control (ATC). Background: Growing worldwide air traffic demands call for a radical departure from current ATC systems. Future systems will have a fundamental impact on the roles and responsibilities of ATC officers (ATCOs). Valid and reliable methods of assessing HF issues associated with these changes, such as a potential increase (or decrease) in workload, are of utmost importance for advancing theory and for designing systems, procedures, and training. Method: We outline major aviation changes and how these relate to five key HF issues in ATC. Measures are outlined, compared, and evaluated and are followed by guidelines for assessing these issues in the ATC domain. Recommendations for future research are presented. Results: A review of the literature suggests that situational awareness and workload have been widely researched and assessed using a variety of measures, but researchers have neglected the areas of trust, stress, and boredom. We make recommendations for use of particular measures and the construction of new measures. Conclusion: It is predicted that, given the changing role of ATCOs and profound future airspace requirements and configurations, issues of stress, trust, and boredom will become more significant. Researchers should develop and/or refine existing measures of all five key HF issues to assess their impact on ATCO performance. Furthermore, these issues should be considered in a holistic manner. Application: The current article provides an evaluation of research and measures used in HF research on ATC that will aid research and ATC measurement.
['Janice Langan-Fox', 'Michael J. Sankey', 'James M. Canty']
Human Factors Measurement for Future Air Traffic Control Systems
249,097
This paper presents a new approach to Automatic Test Pattern Generation for sequential circuits. Traditional topological algorithms nowadays are able to deal with very large circuits, but often fail when highly sequential subnetworks are found. On the other hand, symbolic techniques based on Binary Decision Diagrams proved themselves very efficient on small or medium circuits, no matter their sequential complexity. A state-of-the-art structural ATPG is extended by identifying some critical areas in the circuit and resorting to symbolic techniques when such areas need to be considered. Experimental results prove that the combined approach considerably enhances fault coverage while reducing CPU time when compared to a purely topological approach.
['Fulvio Corno', 'Paolo Ernesto Prinetto', 'M. Sonza Reorda', 'Uwe Gläser', 'Heinrich Theodor Vierhaus']
Improving topological ATPG with symbolic techniques
247,937
Crowdsourcing environments have shown promise in solving diverse tasks at limited cost and time. This type of business model involves both expert and non-expert workers. Interestingly, the success of such models depends on the total number of workers, but the survival of the fittest controls the stability of this workforce. Here, we show that crowd workers who repeatedly fail to win jobs lose interest and might drop out over time. Therefore, dropout prediction in such environments is a promising task. In this paper, we establish that it is possible to predict dropouts in a crowdsourcing market from the success rate based on the arrival pattern of workers.
['Malay Bhattacharyya']
Dropout Prediction in Crowdsourcing Markets
889,941
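As a rough illustration of the prediction task above, a worker's dropout risk could be modeled from success-rate features with a logistic classifier. The feature choice here is an assumption for illustration; the paper derives success rates from the workers' arrival patterns.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_dropout_model(success_rates, losing_streaks, dropped):
    """success_rates, losing_streaks: per-worker features derived from the
    arrival pattern (illustrative); dropped: 1 if the worker left the market."""
    X = np.column_stack([success_rates, losing_streaks])
    return LogisticRegression().fit(X, dropped)

# fit_dropout_model(...).predict_proba(X_active)[:, 1] then scores the
# dropout risk of currently active workers.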
Separation performance is improved in frequency-domain blind source separation (BSS) of speech with independent component analysis (ICA) by applying a parametric Pearson distribution system. ICA adaptation rules include a score function determined by the approximated source distribution, and better approximation improves separation performance. Previously, the conventional hyperbolic tangent (tanh) or generalized Gaussian distribution (GGD) was uniformly applied to the score function for all frequency bins, despite the fact that a wideband speech signal has different distributions at different frequencies. To obtain better score functions, we propose the integration of a parametric Pearson distribution system with the ICA learning rules. The score function is estimated by using appropriate Pearson distribution parameters for each frequency bin. We consider three estimation methods for the Pearson distribution parameters and conduct separation experiments with real speech signals convolved with actual room impulse responses. Consequently, the signal-to-interference ratio (SIR) of the proposed methods improves significantly, by over 3 dB, compared to conventional methods.
['Hiroko Kato', 'Yuichi Nagahara', 'Shoko Araki', 'Hiroshi Sawada', 'Shoji Makino']
Parametric-Pearson-based independent component analysis for frequency-domain blind speech separation
358,665
Large-scale deployment of electric vehicles (EVs) adds load to the current power system. Without control of the temporal dispersion of EV charging, a new rapid peak load may occur and cause severe problems in the power system. Therefore, we propose two algorithms for load leveling by decentralized autonomous control. The first algorithm uses an off-peak rate period, and the second changes the charging start time according to the charging duration. We evaluated the second algorithm in two cases: a linear case, where the start time varies linearly with the charging duration, and a quadratic case, where the start time varies non-linearly, that is, quadratically with the charging duration. In the linear case, a gradually sloped peak occurs in the morning. On the other hand, in the quadratic case, the charging hours are dispersed appropriately, and the daily load curve is almost flat in the night.
['Masaaki Takagi', 'Naoto Tagashira', 'Hiroshi Asano']
EV charging schedule for load leveling by non-linearly-distributed start time
933,904
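The two dispersion policies above can be sketched as simple mappings from charging duration to start time within an off-peak window. The window bounds and the particular quadratic mapping below are illustrative assumptions, not the paper's formulas.

def start_time_linear(duration_h, window_start=22.0, window_len=8.0):
    """Start time (hours, may exceed 24) varies linearly with duration,
    chosen so every vehicle finishes exactly at the end of the window."""
    u = duration_h / window_len              # fraction of the window needed
    return window_start + (1.0 - u) * window_len

def start_time_quadratic(duration_h, window_start=22.0, window_len=8.0):
    """Start offset varies quadratically with duration, dispersing the
    charging loads non-linearly across the night; since
    (1-u)**2 + u <= 1 for u in [0, 1], charging still ends in-window."""
    u = duration_h / window_len
    return window_start + (1.0 - u) ** 2 * window_len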
We study the size-cost of Boolean operations on constant height deterministic pushdown automata, i.e., on the traditional pushdown automata with a built-in constant limit on the height of the pushdown. We design a simulation showing that a complement can be obtained with a polynomial tradeoff. For intersection and union, we show an exponential simulation, and prove that the exponential blow-up cannot be avoided.
['Zuzana Bednárová', 'Viliam Geffert', 'Carlo Mereghetti', 'Beatrice Palano']
The size-cost of Boolean operations on constant height deterministic pushdown automata
552,273
['William Benjamin St. Clair', 'David C. Noelle']
Asymmetric Intercortical Projections Support The Learning Of Temporal Associations.
995,740
In this paper, we propose an adaptive beamforming scheme that generates the beam weight using dominant eigenvectors of the spatial covariance matrix. The number of eigenvectors used for the generation of beam weight is determined to maximize the signal-to-noise ratio (SNR) for given feedback constraints (e.g., the amount of feedback information and feedback delay). It is shown that the conventional limited feedback beamforming and eigen-beamforming are special cases of the proposed scheme. Simulation results show that the proposed beamforming scheme can effectively be applied to spatially correlated channels.
['Jae-Yun Ko', 'Yong-Hwan Lee']
Adaptive beamforming with dimension reduction in spatially correlated MISO channels
399,748
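A minimal numpy sketch of the weight construction described above: take the k dominant eigenvectors of the spatial covariance matrix, with k fixed by the feedback budget, and combine them; k = 1 recovers classical eigen-beamforming. The square-root power weighting is an illustrative choice.

import numpy as np

def beam_weight(R, k):
    """R: (N, N) Hermitian spatial covariance; k: number of dominant
    eigenvectors allowed by the feedback constraints. Returns a unit-norm
    beamforming weight combining the k strongest eigen-directions."""
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]           # k dominant directions
    w = eigvecs[:, order] @ np.sqrt(np.maximum(eigvals[order], 0.0))
    return w / np.linalg.norm(w)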
['Abdullah Al-Dujaili', 'François Merciol', 'Sébastien Lefèvre']
GraphBPT: An Efficient Hierarchical Data Structure for Image Representation and Probabilistic Inference
627,464
In this work we approach the analysis and segmentation of natural textured images by combining ideas from image analysis and probabilistic modeling. We rely on AM-FM texture models and specifically on the Dominant Component Analysis (DCA) paradigm for feature extraction. This method provides a low-dimensional, dense and smooth descriptor, capturing essential aspects of texture, namely scale, orientation, and contrast. Our contributions are at three levels of the texture analysis and segmentation problems: First, at the feature extraction stage we propose a regularized demodulation algorithm that provides more robust texture features and explore the merits of modifying the channel selection criterion of DCA. Second, we propose a probabilistic interpretation of DCA and Gabor filtering in general, in terms of Local Generative Models. Extending this point of view to edge detection facilitates the estimation of posterior probabilities for the edge and texture classes. Third, we propose the weighted curve evolution scheme that enhances the Region Competition/ Geodesic Active Regions methods by allowing for the locally adaptive fusion of heterogeneous cues. Our segmentation results are evaluated on the Berkeley Segmentation Benchmark, and compare favorably to current state-of-the-art methods.
['Iasonas Kokkinos', 'Georgios Evangelopoulos', 'Petros Maragos']
Texture Analysis and Segmentation Using Modulation Features, Generative Models, and Weighted Curve Evolution
19,792
This thesis proposal aims to provide a new approach to the study of complex adaptive systems in the social sciences through a methodological framework for modeling and simulating these systems as artificial societies. Agent-based modeling (ABM) is well suited to the study of social systems as it focuses on how local interactions among agents generate larger, emergent global social structures and patterns of behavior. The issues addressed by our framework are presented, as well as its most important components.
['Candelaria E. Sansores', 'Juan Pavón']
Agent-based modeling of social complex systems
858,395
In previous work we have discussed a semantic anchoring framework that enables the semantic specification of Domain-Specific Modeling languages by specifying semantic anchoring rules to predefined semantic units. This framework is further extended to support heterogeneous systems by developing a method for the composition of semantic units. In this paper, we explain the semantic unit composition through a case study.
['Kai Chen', 'Janos Sztipanovits', 'Sandeep Neema']
A Case Study on Semantic Unit Composition
58,961
Word clusters improve performance in many NLP tasks including training neural network language models, but current increases in datasets are outpacing the ability of word clusterers to handle them. In this paper we present a novel bidirectional, interpolated, refining, and alternating (BIRA) predictive exchange algorithm and introduce ClusterCat, a clusterer based on this algorithm. We show that ClusterCat is 3–85 times faster than four other well-known clusterers, while also improving upon the predictive exchange algorithm's perplexity by up to 18%. Notably, ClusterCat clusters a 2.5 billion token English News Crawl corpus in 3 hours. We also evaluate in a machine translation setting, resulting in shorter training times while achieving the same translation quality measured in BLEU scores. ClusterCat is portable and freely available.
['Jon Dehdari', 'Liling Tan', 'Josef van Genabith']
Scaling Up Word Clustering
822,246
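For orientation, here is a deliberately naive sketch of the plain exchange-clustering family that BIRA refines: each sweep tries to move every word to the class that most improves a class-bigram likelihood. Real clusterers such as ClusterCat compute the likelihood deltas incrementally; this toy recomputes the objective from scratch and is only suitable for tiny corpora.

import math
from collections import Counter

def class_bigram_ll(tokens, assign):
    """Class-bigram log-likelihood (up to constants): the class-transition
    terms N(c1,c2)*log(N(c1,c2)/N(c1)) plus the word-emission terms."""
    cls = [assign[w] for w in tokens]
    big = Counter(zip(cls, cls[1:]))
    uni = Counter(cls)
    ll = sum(n * math.log(n / uni[c1]) for (c1, c2), n in big.items())
    ll += sum(n * math.log(n / uni[assign[w]])
              for w, n in Counter(tokens).items())
    return ll

def exchange_cluster(tokens, n_classes=10, sweeps=3):
    vocab = sorted(set(tokens))
    assign = {w: i % n_classes for i, w in enumerate(vocab)}
    for _ in range(sweeps):
        for w in vocab:      # full re-evaluation per candidate: toy only
            assign[w] = max(range(n_classes),
                            key=lambda c: class_bigram_ll(tokens, {**assign, w: c}))
    return assign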
Research into digitally mediated networks is important as these are becoming increasingly intertwined with other aspects of our everyday lives as we invest as much effort in the relationships developed there as elsewhere. Over the past few years we have witnessed the rise of digital media usage (at least in the developed world) as exemplified by such Web 2.0 enabled networks as Facebook, YouTube and the like. It appears that Wittel's (2001) hypothesis that 'network sociality' will become ever more important has come to fruition. Socialization for many has become deeply embedded in technology and is characterised by an assimilation of work and play (Wittel 2001). As Judith Donath states: "information that was once local is becoming global. The dramas of high school friends, blind date traumas, and mundane job irritations, once hot gossip known only to the immediate circle of the people involved, are now published for worldwide consumption on blogs and network sites." (Donath 2007).
['Ben Light']
Digitally mediated social networking practices: A focus on connectedness and disconnectedness
222,259
Peer-to-peer (P2P) network systems have an open and dynamic nature for sharing files and real-time data transmission. While P2P systems already have many existing and envisioned applications, the security of P2P systems is still worth researching deeply. Traditional approaches mainly rely on cryptography to ensure data authentication and integrity. These approaches, however, address only part of the security issues in P2P systems. In this paper, we propose a trust management model for P2P file-sharing systems. We use incomplete experience to obtain trust ratings in P2P systems, and use an aggregation mechanism to indirectly combine and obtain other nodes' trust ratings. Simulation results and analysis show that our proposed trust management model can quickly detect misbehaving peers and limit their impact in a P2P file-sharing system.
['Huafeng Wu', 'Chaojian Shi', 'Haiguang Chen', 'Chuanshan Gao']
A Trust Management Model for P2P File Sharing System
476,121
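A toy sketch of the two ingredients of such a trust model: a direct rating from a peer's own, possibly incomplete, transaction experience, and an indirect rating aggregated from other peers' reports weighted by trust in the reporters. The smoothing and blending constants are illustrative assumptions.

def direct_trust(successes, failures, prior=0.5, prior_weight=2.0):
    """Beta-style smoothing handles peers with little shared history."""
    return (successes + prior * prior_weight) / \
           (successes + failures + prior_weight)

def indirect_trust(reports):
    """reports: list of (reporter_trust, reported_rating) pairs; trusted
    reporters count for more in the aggregate."""
    total = sum(t for t, _ in reports)
    if total == 0:
        return 0.5                      # no evidence: neutral rating
    return sum(t * r for t, r in reports) / total

def trust(successes, failures, reports, alpha=0.7):
    """Blend direct experience with the aggregated indirect ratings."""
    return alpha * direct_trust(successes, failures) + \
           (1 - alpha) * indirect_trust(reports)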
Currently, software engineering is becoming even more complex due to distributed computing. In this new context, portability while providing the programmer with the single system image of a classical Java Virtual Machine (JVM) is one of the key issues. Hence a cluster-aware JVM, which can transparently execute Java applications in a distributed fashion on the nodes of a cluster, is highly desirable. This way, multi-threaded server applications can take advantage of cluster resources without increasing their programming complexity. However, such a JVM is not easy to design, and one of the most challenging tasks is the development of an efficient, scalable and automatic dynamic memory manager. Inside this manager, one important module is the automatic recycling mechanism, or Garbage Collector (GC). It is a module with very intensive processing demands that must run concurrently with the user's application. Hence, it consumes a very critical portion of the total execution time spent inside the JVM in uniprocessor systems, and its overhead increases in distributed GCs because of the need to update changing references in different nodes. In this work we propose an object placement strategy based on the connectivity graph and executed by the GC. Our results show that the choice of an efficient technique produces significant differences in both performance and inter-node messaging overhead. Moreover, the presented strategy improves performance with respect to state-of-the-art distributed JVM proposals.
['José Manuel Velasco', 'David Atienza', 'Katzalin Olcoz', 'Francisco Tirado']
Efficient Object Placement Including Node Selection in a Distributed Virtual Machine
609,153
We investigate a new approach for the problem of source separation/classification of non-orthogonal, non-stationary noisy signals impinging on an array of sensors. We propose a solution to the problem when the contaminating noise is temporally and spatially correlated. The observations are projected onto a nested set of multiresolution spaces prior to classical eigendecomposition. An inherent invariance property of the signal subspace is observed in a subset of the multiresolution spaces that depends on the level of approximation expressed by the orthogonal basis. This feature, among others revealed by the algorithm, is eventually used to separate the correlated signal sources in the context of `best basis' selection. The technique shows robustness to source non-stationarity as well as anisotropic properties of the channel characteristics under no constraints on the array design. We illustrate the high performance of the technique on simulated and experimental multichannel neurophysiological data measurements.
['Karim G. Oweiss', 'David J. Anderson']
Exploiting Signal Subspace Invariance to Resolve Non-Stationary, Non-Orthogonal Sensor Array Signals in Correlated Noise Fields
437,945
This paper presents a blending principle of tail fins and reaction jets to achieve a fast response for a dual-controlled missile under a slew-rate limit. The blending principle can be categorized as controlling either the net force or the net moment according to how the two actuators cooperate with each other. When compared with controlling the net moment, controlling the net force of aerodynamic lift and jet thrust allows direct control of the acceleration, thus enabling a much faster response but at the expense of large control effort. In this work, for the initial transient period a force controller is designed to achieve fast response under the slew-rate limit, then the transition control is proposed, which begins with force control and makes a transition to moment control to reduce the control usage. This transition does not involve switching from one controller to the other. Rather, the angle of attack is properly shaped corresponding to the desired moment, allowing a smooth and stable transition from force control to moment control. The smooth transition by the proposed strategy from force control to moment control is demonstrated with nonlinear missile dynamics. The proposed approach shows a very fast response, while its input usage is almost same as the conventional moment control.
['Seung-Hyun Kim', 'Dongsoo Cho', 'H. Jin Kim']
Force and moment blending control for fast response of agile dual missiles
810,749
The testbed allows experimenting with high-level distributed information fusion, dynamic resource management and configuration management given multiple constraints on the resources and their communication networks. The testbed provides general services that are useful for testing many information fusion applications. Services include a multi-layer plug-and-play architecture and a general multi-agent framework based on John Boyd's OODA loop.
['Pierre Valin', 'Eloi Bosse', 'Adel Guitouni', 'Hans Wehn', 'Jens Happe']
Testbed for distributed high-level information fusion and dynamic resource management
290,617
Analytically studying the outage performance of channel-state-information (CSI)-assisted amplify-and-forward (AF) cooperative networks with maximal ratio combining (MRC) receivers at the destination has always been a difficult task. In this paper, we will derive upper and lower performance bounds - including outage probability, ergodic achievable rate, and average symbol error rate - for such a network that selects the “best relay” out of multiple relays to carry out the relay transmission. Our results show that the derived analytical bounds are able to accurately predict the simulated performance. The analytical technique used here can further be applied to investigate multiple-relay systems under dissimilar fading channel conditions.
['Qing F. Zhou', 'Francis C. M. Lau']
Performance Bounds of Opportunistic Cooperative Communications With CSI-Assisted Amplify-and-Forward Relaying and MRC Reception
532,838
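Analytical bounds like these are typically sanity-checked by Monte Carlo simulation. The sketch below simulates Rayleigh-faded links, selects the best relay by the exact end-to-end AF SNR, MRC-combines it with the direct path, and counts outages; all channel parameters are illustrative.

import numpy as np

def outage_prob(n_relays=4, snr_db=10.0, rate=1.0, trials=200_000, seed=1):
    """Empirical outage probability of opportunistic CSI-assisted AF
    relaying with MRC at the destination, under unit-mean Rayleigh fading."""
    rng = np.random.default_rng(seed)
    g = 10 ** (snr_db / 10)
    h_sd = rng.exponential(1.0, trials)                  # source-destination
    h_sr = rng.exponential(1.0, (trials, n_relays))      # source-relay
    h_rd = rng.exponential(1.0, (trials, n_relays))      # relay-destination
    g1, g2 = g * h_sr, g * h_rd
    snr_relay = (g1 * g2) / (g1 + g2 + 1.0)   # exact AF end-to-end SNR
    best = snr_relay.max(axis=1)              # opportunistic relay selection
    snr_mrc = g * h_sd + best                 # MRC of direct + relayed paths
    threshold = 2 ** (2 * rate) - 1           # two time slots per symbol
    return float(np.mean(snr_mrc < threshold))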
This paper presents a fast approach for detecting coherent signals encountered in multipath propagation environments. The developed approach utilizes a Toeplitz matrix reconstruction method along with DOA estimation using Root-MUSIC. The proposed approach exhibits several features such as robustness, high accuracy, and computational efficiency. Nevertheless, the primary advantage of the proposed Toeplitz-based de-correlation scheme is the fast detection of the maximum possible number of coherent signals, without the need for iteration and re-estimation.
['A. Goian', 'M. I. AlHajri', 'Raed M. Shubair', 'Luis Weruaga', 'A. R. Kulaib', 'R. AlMemari', 'M. Darweesh']
Fast detection of coherent signals using pre-conditioned root-MUSIC based on Toeplitz matrix reconstruction
558,790
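A numpy sketch of the two stages described above: rebuild a Hermitian Toeplitz covariance from one row of the sample covariance to de-correlate the coherent arrivals, then run root-MUSIC on it. A uniform linear array with half-wavelength spacing is assumed.

import numpy as np
from scipy.linalg import toeplitz

def toeplitz_root_music(X, n_sources):
    """X: (n_antennas, n_snapshots) array data; returns DOAs in degrees."""
    N = X.shape[0]
    R = X @ X.conj().T / X.shape[1]       # sample covariance
    r = R[0, :]                           # first row defines the Toeplitz model
    Rt = toeplitz(r.conj(), r)            # Hermitian Toeplitz reconstruction
    _, eigvecs = np.linalg.eigh(Rt)       # ascending eigenvalues
    En = eigvecs[:, : N - n_sources]      # noise subspace
    C = En @ En.conj().T
    # polynomial coefficients are the diagonal sums of En*En^H (root-MUSIC)
    coeffs = np.array([np.trace(C, offset=k) for k in range(N - 1, -N, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]    # keep roots inside the unit circle
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]
    return np.degrees(np.arcsin(np.angle(roots) / np.pi))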
Performance-based contracting (PBC) is envisioned to lower the asset ownership cost while ensuring desired system performance. System availability, widely used as a performance metric in such contracts, is affected by multiple factors such as equipment reliability, spares stock, fleet size, and service capacity. Prior studies have either focussed on ensuring parts availability or advocating the reliability allocation during design. This paper investigates a single echelon repairable inventory model in PBC. We focus on reliability improvement and its interaction with decisions affecting service time, taking into account the operating fleet size. The study shows that component reliability in a repairable inventory system is a function of the operating fleet size and service rate. A principal-agent model is further developed to evaluate the impact of the fleet size on the incentive mechanism design. The numerical study confirms that the fleet size plays a critical role in determining the penalty and cost sharing rates when the number of backorders is used as the negative incentive scheme.
['H. Mirzahosseinian', 'Rajesh Piplani', 'Tongdan Jin']
The Impact of Fleet Size on Performance-Based Incentive Management
622,403
Predicting the occurrence of a particular event of interest at future time points is the primary goal of survival analysis. The presence of incomplete observations due to time limitations or loss of data traces is known as censoring which brings unique challenges in this domain and differentiates survival analysis from other standard regression methods. The popularly used survival analysis methods such as Cox proportional hazard model and parametric survival regression suffer from some strict assumptions and hypotheses that are not realistic in most of the real-world applications. To overcome the weaknesses of these two types of methods, in this paper, we reformulate the survival analysis problem as a multi-task learning problem and propose a new multi-task learning based formulation to predict the survival time by estimating the survival status at each time interval during the study duration. We propose an indicator matrix to enable the multi-task learning algorithm to handle censored instances and incorporate some of the important characteristics of survival problems such as non-negative non-increasing list structure into our model through max-heap projection. We employ the L2,1-norm penalty which enables the model to learn a shared representation across related tasks and hence select important features and alleviate over-fitting in high-dimensional feature spaces; thus, reducing the prediction error of each task. To efficiently handle the two non-smooth constraints, in this paper, we propose an optimization method which employs Alternating Direction Method of Multipliers (ADMM) algorithm to solve the proposed multi-task learning problem. We demonstrate the performance of the proposed method using real-world microarray gene expression high-dimensional benchmark datasets and show that our method outperforms state-of-the-art methods.
['Yan Li', 'Jie Wang', 'Jieping Ye', 'Chandan K. Reddy']
A Multi-Task Learning Formulation for Survival Analysis
727,483
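The indicator-matrix construction described above can be sketched as follows: each row marks which time intervals of an instance carry known survival status, so censored instances contribute only up to their censoring time. The interval edges and mask convention are illustrative.

import numpy as np

def survival_targets(times, events, n_intervals, max_time):
    """times: observed time per instance; events: 1 = event, 0 = censored.
    Returns Y (survival status per interval) and the indicator matrix W
    masking intervals whose status is unknown due to censoring."""
    edges = np.linspace(0, max_time, n_intervals + 1)[1:]
    n = len(times)
    Y = np.zeros((n, n_intervals))
    W = np.zeros((n, n_intervals))
    for i, (t, e) in enumerate(zip(times, events)):
        alive = edges <= t          # event-free through these intervals
        Y[i, alive] = 1.0           # non-increasing 1...1,0...0 structure
        if e == 1:
            W[i, :] = 1.0           # event observed: status known everywhere
        else:
            W[i, alive] = 1.0       # censored: later intervals masked out
    return Y, W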
Recently, cloud services have fast emerged as a major form of IT services. They have changed the way of operating global or local services. Notwithstanding these promising features of cloud services, we can see many business sectors that are not very positive for adopting cloud services. One of its major reasons is controllability on security. It is widely believed that a purchasing organization loses control over operation, especially over security when adopting SaaS for their critical businesses. However, the global competition forces business sectors to change their style of thinking. Academia is never an exception. In 2012, Japan AXIES (Academic eXchange for Information Environment and Strategy) released the criteria for adopting cloud services for academic environments. Its scope includes outsourcing major functions of a univeristy management to cloud service providers. In this paper, we analyze the AXIES report in terms of risk analysis. Its security model is analyzed in terms of TPM and SLA. The criticality levels for services are defined. The assurance levels and cloud security levels are required to match a given criticality level. Cloud security model such as CCM and identity trust model such as NIST 800-63 are integrated into a cloudbased trust model.
['Kazutshuna Yamaji', 'Motonori Nakamura', 'Yasuhiro Nagai', 'Tomohiro Ito', 'Hiroyuki Sato']
Specifying a Trust Model for Academic Cloud Services
782,376
['Phil D. Green', 'Martin Cooke', 'H.H. Lafferty', 'Anthony J. H. Simons']
A speech recognition strategy based on making acoustic evidence and phonetic knowledge explicit.
503,199
The location of distribution facilities and the routing of vehicles from these facilities are interdependent in many distribution systems. Although this interdependence is recognized, attempts to integrate the two decisions have been limited. The multi-objective location-routing problem (MLRP) combines the facility location and vehicle routing decisions while satisfying several objectives. Due to the problem's complexity, simultaneous solution methods are limited, since the different objective functions conflict with one another. Two kinds of optimal mathematical models are proposed for the solution of MLRP, and three methods have been developed for it. A multi-objective genetic algorithm (MGA) based on Pareto optimality is introduced for solving MLRP. The MGA architecture makes it possible to search the solution space efficiently, providing a path for finding solutions to the two-objective LRP. Finally, practical proof is given by a random analysis of regional distribution with nine cities.
['Zhang Qian']
Research on multi-objective location-routing problem (MLRP) with random analysis for regional distribution
410,164
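For reference, the Pareto machinery at the heart of such an MGA reduces to a dominance test and a non-dominated filter. The sketch below assumes both objectives (e.g., facility location cost and routing cost) are minimized.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objective tuples are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """population: list of objective tuples, e.g. (location_cost, route_cost);
    returns the non-dominated subset."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]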
This paper discusses the architectural challenges of transitioning from services to experiences. In particular, it focuses on the evolution from traditional IPTV services to more social scenarios, in which groups of people in different locations watch synchronized multimedia content together. In addition to the multimedia content, the shared experiences envisioned in this article provide a real-time communication channel between the participants. Based on an implemented architecture, this paper identifies a number of challenges and analyzes them. The most important challenges highlighted in this article include shared experience modeling, universal session handling, synchronization, and quality of service. This article is a first stone paving the way for a truly interoperable ecosystem, which can offer cross-domain experiences to users.
['Ishan Vaishnavi', 'Pablo Cesar', 'Dick C. A. Bulterman', 'Oliver Friedrich']
From IPTV services to shared experiences: Challenges in architecture design
391,963
QoS routing, which satisfies diverse application requirements and optimizes network resource utilization, needs accurate link states to compute paths. Suitable link state update (LSU) algorithms which ensure timely propagation of link state information are thus critical. Since traffic fluctuation is one of the key reasons for link state uncertainty and existing approaches cannot effectively describe its statistical characteristics, we propose a novel stability-based (SB) LSU mechanism which consists of a second-moment-based triggering policy and a corresponding stability-based routing algorithm. They incorporate knowledge of link state stability in computing a stability measure for link metrics. With extensive simulations, we investigate the performance of the SB LSU mechanism and evaluate its effectiveness compared with existing approaches. Simulation results show that SB LSU can achieve good performance in terms of traffic rejection ratio, successful transmission ratio, efficient throughput and link state stability while maintaining a moderate volume of update traffic.
['Miao Zhao', 'Huiling Zhu', 'Victor O. K. Li', 'Zhengxin Ma']
A stability-based link state updating mechanism for QoS routing
236,787
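A sketch of a second-moment-based triggering policy in the spirit described above: track the running mean and variance of the advertised link metric and trigger an update only when the current value leaves a stability band. The EWMA moments and band width are illustrative choices.

class StabilityTrigger:
    def __init__(self, k=2.0, alpha=0.1):
        self.k = k                  # band half-width in standard deviations
        self.alpha = alpha          # EWMA gain for the running moments
        self.mean = self.var = None
        self.advertised = None

    def observe(self, bw):
        """Returns True when a link state update should be flooded."""
        if self.mean is None:
            self.mean, self.var, self.advertised = bw, 0.0, bw
            return True                          # first sample: advertise
        d = bw - self.mean                       # EWMA mean/variance update
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        band = self.k * self.var ** 0.5
        if abs(bw - self.advertised) > band:     # drifted out of the band
            self.advertised = bw
            return True
        return False                             # stable: suppress update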
We present propagation analysis results for so-called typical and bad urban macrocellular scenarios measured at 5.3 GHz carrier frequency and 100 MHz chip rate in Helsinki. Propagation characteristics of these two scenario types have been compared, and small- and large-scale channel parameters have been extracted for stochastic-geometry-based channel models.
['Terhi Rautiainen', 'Juha O. Juntunen', 'Kimmo Kalliola']
Propagation Analysis at 5.3 GHz in Typical and Bad Urban Macrocellular Environments
172,840
The performance of a fingerprint matching system is affected by the nonlinear deformation introduced in the fingerprint impression during image acquisition. This nonlinear deformation causes fingerprint features such as minutiae points and ridge curves to be distorted in a complex manner. A technique is presented to estimate the nonlinear distortion in fingerprint pairs based on ridge curve correspondences. The nonlinear distortion, represented using the thin-plate spline (TPS) function, aids in the estimation of an "average" deformation model for a specific finger when several impressions of that finger are available. The estimated average deformation is then utilized to distort the template fingerprint prior to matching it with an input fingerprint. The proposed deformation model based on ridge curves leads to a better alignment of two fingerprint images compared to a deformation model based on minutiae patterns. An index of deformation is proposed for selecting the "optimal" deformation model arising from multiple impressions associated with a finger. Results based on experimental data consisting of 1,600 fingerprints corresponding to 50 different fingers collected over a period of two weeks show that incorporating the proposed deformation model results in an improvement in the matching performance.
['Arun Ross', 'Sarat C. Dass', 'Anil K. Jain']
Fingerprint warping using ridge curve correspondences
72,001
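A compact sketch of the warping step, with scipy's RBF machinery standing in for a hand-rolled TPS: fit a 2D thin-plate spline from corresponding ridge-curve points, warp the template's minutiae through it, and average several pairwise warps to approximate an "average deformation" for a finger.

import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_tps(template_pts, input_pts):
    """Both arguments: (n, 2) arrays of corresponding ridge-curve points.
    Returns a callable mapping template coordinates to input coordinates;
    the thin-plate-spline kernel includes the affine part by default."""
    return RBFInterpolator(template_pts, input_pts,
                           kernel="thin_plate_spline")

def warp_minutiae(minutiae_xy, warp):
    """Distort template minutiae prior to matching."""
    return warp(np.asarray(minutiae_xy))

def average_deformation(template_pts, impressions_pts, query_pts):
    """Approximate the average deformation over several impressions of the
    same finger by averaging the pairwise TPS warps at the query points."""
    warps = [fit_tps(template_pts, p) for p in impressions_pts]
    return np.mean([w(query_pts) for w in warps], axis=0)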
IT managers are under constant pressure to deliver high-quality IT services at low cost to their internal customers. Although this task seems virtually impossible, it is the daily life of many IT professionals and as such of high practical importance. However, research on IT service delivery (ITSD) is rare, and little analysis is devoted to the question of how organizational settings and specific capabilities impact the performance of ITSD. Addressing this gap, this paper identifies critical success factors for ITSD management. A research framework was developed and tested using a multiple case study approach. The analysis was conducted using qualitative comparative analysis (QCA). Findings show that a central organizational unit responsible for service delivery (a so-called "retained organization") is not necessarily a condition for high-performing ITSD. Outstanding performance was found only in firms where adequate organizational structures are in place and ITSD-specific competencies, such as knowledge integration and measurement capabilities, are cultivated.
['Anna Wiedemann', 'Andy Weeger', 'Heiko Gewald']
Organizational Structure vs. Capabilities: Examining Critical Success Factors for Managing IT Service Delivery
497,109
Topological data analysis (TDA) has rapidly grown in popularity in recent years. One of the emerging tools is persistent local homology, which can be used to extract local structure from a dataset. In this paper, we provide a survey that explores this new tool, emphasizing its use in data analysis.
['Brittany Terese Fasy', 'Bei Wang']
Exploring persistent local homology in topological data analysis
742,822
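A minimal sketch of the flavor of persistent local homology, using the common simplification of computing ordinary persistent homology on an annular neighborhood of a point; the ripser package, the radii, and the noisy-circle data are assumptions for illustration, not the survey's prescription.

```python
import numpy as np
from ripser import ripser   # pip install ripser

def local_persistence(points, center, r_inner, r_outer, maxdim=1):
    """Surrogate for persistent local homology: restrict the cloud to an
    annular neighborhood of `center` and compute ordinary persistent
    homology there with ripser."""
    d = np.linalg.norm(points - center, axis=1)
    local = points[(d >= r_inner) & (d <= r_outer)]
    return ripser(local, maxdim=maxdim)['dgms']

# Noisy circle: each local annulus looks like one or two short arcs.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
cloud = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(200, 2))
dgms = local_persistence(cloud, cloud[0], 0.0, 0.5)
print(len(dgms[0]), len(dgms[1]))   # number of H0 and H1 intervals
```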
Material transport within a semiconductor factory (fab) can be broken into two categories: interbay (bay-to-bay) movement and intrabay (within-bay) movement. Due to the fragile properties of semiconductors, there are risks associated with any movement of material. These risks range from misprocessed or damaged wafers to throughput loss. These risks are increased during manual intrabay handling, where several operations may be taking place in a populated, confined area. Intrabay automated material handling is a means by which the above-mentioned risks can be reduced or eliminated. Simulation can play a key role in the design of intrabay automation systems, verifying that the system will not negatively affect the throughput of an area. A simulation model has been developed to analyze the effects of intrabay AMHS on a manufacturing bay. The model architecture is discussed, and three case studies demonstrate applications of the model.
['Thomas Jefferson', 'Madhav Rangaswami', 'Gregory Stoner']
Simulation in the design of ground-based intrabay automation systems
442,103
This paper focuses on urban environmental system identification. Wind-flow modeling is a challenging task because of simplifications of complex processes as well as geometry simplifications. Even a more detailed model may not be accurate because of uncertainties in parameter values. Measurement data can be employed in order to improve our understanding of the system. This paper underlines the challenges related to identification of input-parameter values of physics-based models and proposes a system-identification approach that is appropriate for wind-flow modeling. The approach has been tested with the case study of an experimental facility in the Future Cities Laboratory in Singapore, called the “BubbleZERO”.
['Didier Vernay', 'Benny Raphael', 'I.F.C. Smith']
Augmenting simulations with measurements
202,093
We present a novel cost-effective multicast capable optical cross connect (MC-OXC) node architecture which improves the efficiency of optical power by constraining splitting to only two output ports, in order to reduce the power losses incurred by splitting into more than two output ports. This node would manage the following actions when necessary: (a) tap and binary-splitting, which consists of tapping a small percentage of the signal power to the local node (4-8%) and an n-splitting action (n=2); and (b) tap-and-continue. We call this type of node a 2-STC node (binary-split-tap-continue). We compare it with other well-known state-of-the-art proposals and analyze its benefits in terms of number of devices and power losses. An evaluation of applicability is given, showing that the binary-split restriction offers a good trade-off between power losses, bandwidth consumption and architectural simplicity. We conclude that the 2-STC node improves power efficiency and contributes to a good trade-off between use of resources and optical power.
['Gonzalo M. Fernandez', 'David Larrabeiti', 'C. Vázquez', 'P. C. Lallana']
Power-Cost-Effective Node Architecture for Light-Tree Routing in WDM Networks
26,885
In a distributed multimedia integration environment, there is a need for transmission control methods based on real-time communication protocols that provide a relative quality of service (QOS), to allow for the variety of receiving capacities of group members and their requests, and a dynamic QOS, to cope with temporary CPU overload and network congestion. Therefore, this study proposes a cooperative control method for end-to-end QOS control at the transport layer and point-to-point control at the network layer. The former is mainly a set of flow-monitoring and flow-adjusting functions, and the latter a set of flow-control functions. The evaluation results show that dynamic QOS control with two-level monitoring is preferable to the conventional approach in terms of stabilizing the average packet-loss rate and jitter.
['Yuko Onoe', 'Hideyuki Tokuda']
Evaluation of media scaling applied multicast protocol
372,129
Towards establishing electrical interfaces with patterned in vitro neurons, we have previously described the fabrication of hybrid elastomer-glass devices (polymer-on-multielectrode array technology) and obtained single-electrode recordings of extracellular potentials from confined neurons (Claverol-Tinturé, 2005). Here, we demonstrate the feasibility of spatially localized multisite recordings from individual microchannel-guided neurites extending from microwell-confined somas, with good signal-to-noise ratios (20 dB) and spike magnitudes of up to 300 µV. Single-cell current source density (scCSD) analysis of the spatio-temporal patterns of membrane currents along individual processes is illustrated.
['Enrique Claverol-Tinturé', 'Joan Cabestany', 'Xavier Rosell']
Multisite Recording of Extracellular Potentials Produced by Microchannel-Confined Neurons In-Vitro
176,943
The lack of proper support for multicast services in the Internet has hindered the widespread use of applications that rely on group communication services, such as mobile software agents. Although they do not require high bandwidth or heavy traffic, these types of applications need to cooperate in a scalable, fair and decentralized way. This paper presents GMAC, an overlay network that implements all multicast-related functionality, including membership management and packet forwarding, in the end systems. GMAC introduces a new approach for providing multicast services for mobile agent platforms in a decentralized way, where group members cooperate fairly and minimize the protocol overhead, thus achieving great scalability. Simulations comparing GMAC with other approaches, in aspects such as end-to-end group propagation delay, group latency, group bandwidth, protocol overhead, resource utilization and failure recovery, show that GMAC is a scalable and robust solution for providing multicast services in a decentralized way to mobile software agent platforms with requirements similar to MoviLog.
['Pablo Gotthelf', 'Alejandro Zunino', 'Cristian Mateos', 'Marcelo Campo']
GMAC: An overlay multicast network for mobile agent platforms
456,849
Learning mathematics seems not to be an easy task for many students. One of the reasons why mathematics may be difficult to learn is that mathematical concepts (e.g. numbers and functions) are not intuitive or accessed through everyday experience (Chiappini and Bottino, 1999). One way of trying to facilitate learning mathematics is through the use of interactive visualizations. The aim of this study is to draw attention to the importance of user experience and information design principles in order to design effective interactive visualization in learning mathematics. This article reviews some studies on visualization in learning mathematics, describes some principles both for information design and for user experience, and discusses their relevance in creating effective interactive visualization in learning mathematics.
['Virginia Tiradentes Souto']
Interactive Visualizations in Learning Mathematics: Implications for Information Design and User Experience
30,179
In wireless mesh networks, it is desirable to utilize overlapping coverage of multiple parallel relays to assist the source-destination transmission. In this letter, we consider selection diversity (SD) to select the strongest link amongst the direct and N amplify-and-forward (AF) relay links. We derive new closed-form expressions for the symbol error rate (SER) in independent but not necessarily identically distributed (i.n.d.) Rayleigh fading relay channels. Our results are given as both lower bound and asymptotic expressions based on an accurate upper bound on the signal-to-noise ratio (SNR) of the relay links. Our asymptotic results provide key performance parameters such as the array gain and diversity order, which prove that a full N+1 diversity order is achieved. We show that SD can offer an array gain advantage over maximal-ratio combining which entails all the relays to transmit. Numerical results are shown to validate the analysis.
['Phee Lep Yeoh', 'Maged Elkashlan', 'Zhuo Chen', 'Iain B. Collings']
SER of Multiple Amplify-and-Forward Relays with Selection Diversity
203,609
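The closed-form SER expressions are not reproduced here, but the setting is easy to probe by Monte Carlo under the abstract's own bound: the end-to-end AF relay SNR is upper-bounded by the minimum of its two hop SNRs, selection diversity takes the strongest of the direct and relay links, and BPSK (conditional error probability Q(sqrt(2*snr))) is an assumed modulation.

```python
import numpy as np
from scipy.stats import norm

def ser_sd_af(n_relays, snr_db, trials=200_000, seed=0):
    """Monte Carlo SER of BPSK with selection diversity over a direct
    link and N amplify-and-forward relay links in Rayleigh fading; the
    relay SNR uses the standard upper bound min(hop1, hop2)."""
    rng = np.random.default_rng(seed)
    g = 10 ** (snr_db / 10)                     # mean SNR per link
    direct = rng.exponential(g, trials)
    hop1 = rng.exponential(g, (trials, n_relays))
    hop2 = rng.exponential(g, (trials, n_relays))
    best_relay = np.minimum(hop1, hop2).max(axis=1)
    snr = np.maximum(direct, best_relay)        # selection diversity
    return norm.sf(np.sqrt(2 * snr)).mean()     # E[Q(sqrt(2*snr))]

for db in (0, 5, 10, 15):
    print(db, ser_sd_af(n_relays=2, snr_db=db))
```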
The neural correlates of memory formation in humans have long been investigated by exposing subjects to diverse material and comparing responses to items later remembered to those forgotten. Tasks requiring memorization of sensory sequences afford unique possibilities for linking neural memorization processes to behavior, because, rather than comparing across different items of varying content, each individual item can be examined across the successive learning states of being initially unknown, newly learned, and eventually, fully known. Sequence learning paradigms have not yet been exploited in this way, however. Here, we analyze the event-related potentials of subjects attempting to memorize sequences of visual locations over several blocks of repeated observation, with respect to pre- and post-block recall tests. Over centro-parietal regions, we observed a rapid P300 component superimposed on a broader positivity, which exhibited distinct modulations across learning states that were replicated in two separate experiments. Consistent with its well-known encoding of surprise, the P300 deflection monotonically decreased over blocks as locations became better learned and hence more expected. In contrast, the broader positivity was especially elevated at the point when a given item was newly learned, i.e., started being successfully recalled. These results implicate the Broad Positivity in endogenously-driven, intentional memory formation, whereas the P300, in processing the current stimulus to the degree that it was previously uncertain, indexes the cumulative knowledge thereby gained. The decreasing surprise/P300 effect significantly predicted learning success both across blocks and across subjects. This presents a new, neural-based means to evaluate learning capabilities independent of verbal reports, which could have considerable value in distinguishing genuine learning disabilities from difficulties to communicate the outcomes of learning, or perceptual impairments, in a range of clinical brain disorders.
['Natalie Anna Steinemann', 'Clara Moisello', 'M. Felice Ghilardi', 'Simon P. Kelly']
Tracking neural correlates of successful learning over repeated sequence observations.
724,253
Analysis of many critical systems is usually based on the simulation of numerical models. This solution is suitable for analyzing systems with continuous and deterministic behaviors that evolve over time. However, real critical systems are more complex and can exhibit non-deterministic behavior due to unexpected events. Furthermore, critical systems present both discrete and continuous behaviors, which interact regularly. Both features can be modeled with hybrid formal methods, taking advantage of exploration techniques like model checking. We have selected dam management as a case study. A dam is a critical system that has a hybrid behavior, there are continuous variables such as the water level, and discrete states such as the opening degrees of the spillways. At present, Decision Support Systems, based on numerical models, are used to manage complete river basins. Dams are modelled as black boxes which store and release water. A Decision Support Tool (DST) for dam management provides information about the possible consequences of dam operator actions, which can help to ensure the safety of the dam, as well as the efficient use of the water. In this work we have used formal methods to model a dam as a hybrid system, and we have obtained decision support information from the analysis performed with model checking.
['María-del-Mar Gallardo', 'Pedro Merino', 'Laura Panizo', 'Antonio Linares']
Developing a Decision Support Tool for Dam Management with SPIN
593,797
The advent of electronic publication creates strong interest in converting existing printed documents into electronic formats. During this process, image reproduction problems can occur due to the formation of Moiré patterns in the screened halftone areas. Therefore, the optimal quality of a scanned document is achieved if halftone regions are first identified and processed separately. We propose a complete algorithm to achieve this objective. A wavelet based halftone segmentation algorithm is first designed to locate possible halftone regions using a decision function. We then introduce a suboptimal FIR descreening filter to efficiently handle various screening frequencies and angles. Experimental results are offered to illustrate the performance of our algorithm.
['Chung-Hui Kuo', 'A. Ravishankar Rao', 'Gerhard Robert Thompson']
Wavelet based halftone segmentation and descreening filter design
412,787
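A simplified stand-in for the descreening stage: a separable low-pass FIR whose cutoff sits just below the detected screen frequency, so that halftone-screen energy is attenuated while image content below the cutoff passes. The 0.8 safety factor, tap count, and scipy design routine are illustrative choices, not the paper's suboptimal filter design.

```python
import numpy as np
from scipy.signal import firwin, convolve2d

def descreen(image, screen_freq, taps=15):
    """Attenuate halftone-screen energy with a separable low-pass FIR.
    `screen_freq` is the screen frequency in cycles/pixel (< 0.5); the
    cutoff is placed at 0.8x the screen frequency. Illustrative only."""
    # firwin's cutoff is normalized so that Nyquist (0.5 cycles/pixel) = 1.0.
    h = firwin(taps, cutoff=0.8 * screen_freq / 0.5)
    kernel = np.outer(h, h)           # separable 2-D low-pass
    return convolve2d(image, kernel, mode='same', boundary='symm')

# Example: a 150 lpi screen scanned at 600 dpi is 0.25 cycles/pixel.
img = np.random.rand(64, 64)          # stand-in for a scanned halftone
print(descreen(img, screen_freq=0.25).shape)
```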
['Christian Hansen', 'Stephan Zidowitz', 'Bernhard Preim', 'Karl J. Oldhafer', 'Horst K. Hahn']
Impact of Model-based Risk Analyses for Liver Surgery Planning.
805,302
This paper describes the improvements which can be made to a spark-ignition engine by extensive use of automatic control. Particular emphasis is placed on fast transient phases produced by simultaneous action on the throttle and the electronic fuel injection device. The aim is to achieve better performance for the fuel/air ratio regulation system, thereby improving engine efficiency and exhaust emission during these transient phases. The authors begin by presenting an average dynamic model of the intake manifold validated on an engine test bench and go on to develop a closed-loop system controlling average pressure in the intake manifold using the reference tracking model method. The air supply control system is combined with a predictor to compensate for delays in the injection procedure. The paper concludes with a comparison between the results obtained using simulation and those obtained experimentally from the engine.
['P Bidan', 'S. Boverie', 'V. Chaumerliac']
Nonlinear control of a spark-ignition engine
364,265
This paper proposes an optimized H.264 intra prediction algorithm that reduces the time required to complete intra prediction by saving ninety percent of the wasted time during the prediction process. Firstly, the 16×16 luminance prediction can be decomposed equally into 16 parts and inserted into the same macro-block's 4×4 luminance prediction to save the 16×16 luminance prediction time in one macro-block's luminance prediction. Then, by changing the computation sequence of the 4×4 luminance prediction, a large amount of waiting time between the 4×4 blocks' luminance predictions can be further removed. Almost all the wasted time can be reused or eliminated by these optimizations. In the last part of this paper, an optimized hardware implementation designed to fit into the whole H.264 video coding system and a method to realize 16×16 luminance DC prediction on it are proposed.
['Shaobo Wang', 'Xiaolin Zhang', 'Yuan Yao', 'Zhe Wang']
H.264 Intra Prediction Architecture Optimization
90,309
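For reference, the 16×16 luminance DC mode whose hardware realization the paper discusses computes a single mean of the neighboring reconstructed samples and fills the block with it; the sketch below follows the standard H.264 rule (with the usual fallbacks at picture borders) in Python rather than hardware.

```python
import numpy as np

def intra16_dc(left, top):
    """Standard H.264 16x16 luma DC prediction: fill the macroblock with
    the rounded mean of the 16 left and 16 top reconstructed neighbors,
    with the usual fallbacks when a border is unavailable."""
    if left is not None and top is not None:
        dc = (int(np.sum(left)) + int(np.sum(top)) + 16) >> 5
    elif left is not None:
        dc = (int(np.sum(left)) + 8) >> 4
    elif top is not None:
        dc = (int(np.sum(top)) + 8) >> 4
    else:
        dc = 128                      # no neighbors: mid-gray
    return np.full((16, 16), dc, dtype=np.uint8)

left, top = np.full(16, 120), np.full(16, 140)
print(intra16_dc(left, top)[0, 0])    # (16*120 + 16*140 + 16) >> 5 = 130
```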
This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. This task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name.
['Javier Artiles', 'Julio Gonzalo', 'Satoshi Sekine']
The SemEval-2007 WePS Evaluation: Establishing a benchmark for the Web People Search Task
439,142
Predicting the age of a person through face image analysis holds the potential to drive an extensive array of real-world applications, from human-computer interaction and security to advertising and multimedia. In this paper the first application of the random forest to age regression is proposed. This method offers the advantage of few parameters that are relatively easy to initialize. Our method learns salient anthropometric quantities without a prior model. Significant implications include a dramatic reduction in training time while maintaining high regression accuracy throughout human development.
['Albert Montillo', 'Haibin Ling']
Age regression from faces using random forests
336,361
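A minimal sketch of the core idea, random forest regression over face feature vectors, using scikit-learn; the synthetic features and ages below are stand-ins for the real image descriptors the paper derives from face images.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Stand-in data: rows are face feature vectors, targets are ages.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = rng.uniform(0, 70, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_tr, y_tr)
print('MAE (years):', mean_absolute_error(y_te, forest.predict(X_te)))
```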
In this paper a 6-RRCRR parallel robot assisted minimally invasive surgery/microsurgery system (PRAMiSS) is introduced. Remote centre-of-motion (RCM) control algorithms of PRAMiSS suitable for minimally invasive surgery and microsurgery are also presented. The programmable RCM approach is implemented in order to achieve manipulation under the constraint of moving through the fixed penetration point. Having minimised the displacements of the mobile platform of the parallel micropositioning robot, the algorithms also apply orientation constraint to the instrument and prevent the tool tip to orient due to the robot movements during the manipulation. Experimental results are provided to verify accuracy and effectiveness of the proposed RCM control algorithms for minimally invasive surgery.
['Mohsen Moradi Dalvand', 'Bijan Shirinzadeh']
Remote centre-of-motion control algorithms of 6-RRCRR parallel robot assisted surgery system (PRAMiSS)
214,339
Recent advances in technology have caused a significant growth in wireless communications, which has resulted in a strong demand for reliable transmission of video data. The challenge of robust video transmission is to protect the compressed data against hostile channel conditions while having little impact on bandwidth efficiency. In this paper, using results from a simplified macroblock-based segmentation algorithm, we propose a framework called content-based resynchronization for the effective positioning of resynchronization markers such that the image quality of the foreground can be improved at the expense of sacrificing unimportant background. We do this because, in applications such as video telephony and video conferencing, the foreground is typically the most important image region for viewers. Experimental results demonstrate that this scheme significantly improves the perceptual quality of video sequences for robust video transmission.
['Tao Fang', 'Lap-Pui Chau']
Efficient content-based resynchronization approach for wireless video
336,342
Automatic event classification is an essential requirement for constructing an effective sports video summary. It has become a well-known theory that the high-level semantics in sports video can be "computationally interpreted" based on the occurrences of specific audio and visual features which can be extracted automatically. State-of-the-art solutions for feature-based event classification have relied on either manually crafted knowledge-based heuristics or machine learning. To bridge the gap, we have successfully combined the two approaches by using learned heuristics. The heuristics are constructed automatically using a decision tree, while manual supervision is only required to check the features and highlights contained in each training segment. Thus, fully automated construction of a classification system for sports video events has been achieved. A comprehensive experiment on a 10-hour video dataset, with five full-match soccer and five full-match basketball videos, has demonstrated the effectiveness and robustness of our algorithms.
['Dian Tjondronegoro', 'Yi-Ping Phoebe Chen']
Using Decision-Tree to Automatically Construct Learned-Heuristics for Events Classification in Sports Video
238,454
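The learned heuristics amount to a decision tree induced over automatically extracted audio-visual features of training segments; the printed rules are the machine-made counterpart of hand-written heuristics. The feature names, values, and labels below are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Segments described by automatically extracted audio-visual features
# (all names and rows are hypothetical).
features = ['crowd_excitement', 'whistle', 'close_up_ratio', 'replay']
X = [[0.9, 1, 0.7, 1],
     [0.2, 0, 0.1, 0],
     [0.8, 0, 0.6, 1],
     [0.1, 1, 0.2, 0]]
y = ['goal', 'ordinary play', 'goal', 'foul']

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The induced rules are the learned heuristics, readable like hand-made ones:
print(export_text(tree, feature_names=features))
```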
In causal inference, all methods of model learning rely on testable implications, namely, properties of the joint distribution that are dictated by the model structure. These constraints, if not satisfied in the data, allow us to reject or modify the model. Most common methods of testing a linear structural equation model (SEM) rely on the likelihood ratio or chi-square test which simultaneously tests all of the restrictions implied by the model. Local constraints, on the other hand, offer increased power (Bollen and Pearl 2013; McDonald 2002) and, in the case of failure, provide the modeler with insight for revising the model specification. One strategy of uncovering local constraints in linear SEMs is to search for overidentified path coefficients. While these overidentifying constraints are well known, no method has been given for systematically discovering them. In this paper, we extend the half-trek criterion of Foygel, Draisma, and Drton (2012) to identify a larger set of structural coefficients and use it to systematically discover overidentifying constraints. Still open is the question of whether our algorithm is complete.
['Bryant Chen', 'Jin Tian', 'Judea Pearl']
Testable implications of linear structural equation models
582,283
Handling the evolving permanent contact of deformable objects leads to a collision detection problem of high computing cost. Situations in which this type of contact happens are becoming more and more present with the increasing complexity of virtual human models, especially for the emerging medical applications. In this context, we propose a novel collision detection approach to deal with situations in which soft structures are in constant but dynamic contact, which is typical of 3D biological elements. Our method proceeds in two stages: First, in a preprocessing stage, a mesh is chosen under certain conditions as a reference mesh and is spherically sampled. In the collision detection stage, the resulting table is exploited for each vertex of the other mesh to obtain, in constant time, its signed distance to the fixed mesh. The two working hypotheses for this approach to succeed are typical of the deforming anatomical systems we target: First, the two meshes retain a layered configuration with respect to a central point and, second, the fixed mesh tangential deformation is bounded by the spherical sampling resolution. Within this context, the proposed approach can handle large relative displacements, reorientations, and deformations of the mobile mesh. We illustrate our method in comparison with other techniques on a biomechanical model of the human hip joint.
['Anderson Maciel', 'Ronan Boulic', 'Daniel Thalmann']
Efficient collision detection within deforming spherical sliding contact
199,579
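A simplified sketch of the two stages under the paper's layered-mesh assumption: preprocess the reference mesh into a spherical table of radial distances, then answer signed-distance queries in constant time by an angular-cell lookup. Grid resolution, the outer-layer convention, and empty-cell handling are illustrative choices.

```python
import numpy as np

class SphericalDistanceMap:
    """Preprocessing: sample the reference mesh's radial distance from a
    central point over a (theta, phi) grid. Query: approximate a point's
    signed distance to the surface in O(1) by comparing its radius with
    the stored radius of its angular cell."""

    def __init__(self, center, vertices, n_theta=64, n_phi=128):
        self.center = np.asarray(center, float)
        v = np.asarray(vertices, float) - self.center
        r = np.linalg.norm(v, axis=1)
        theta = np.arccos(np.clip(v[:, 2] / r, -1, 1))       # [0, pi]
        phi = np.arctan2(v[:, 1], v[:, 0]) % (2 * np.pi)     # [0, 2*pi)
        self.table = np.zeros((n_theta, n_phi))
        i = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
        j = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
        # Keep the largest radius per cell (outer layer of the mesh);
        # cells never hit by a vertex stay 0 and report "no surface".
        np.maximum.at(self.table, (i, j), r)

    def signed_distance(self, p):
        d = np.asarray(p, float) - self.center
        r = np.linalg.norm(d)
        theta = np.arccos(np.clip(d[2] / r, -1, 1))
        phi = np.arctan2(d[1], d[0]) % (2 * np.pi)
        i = min(int(theta / np.pi * self.table.shape[0]), self.table.shape[0] - 1)
        j = min(int(phi / (2 * np.pi) * self.table.shape[1]), self.table.shape[1] - 1)
        return r - self.table[i, j]   # negative => inside (collision)
```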
['Qinglai Wei', 'Derong Liu']
Optimal self-learning control scheme for discrete-time nonlinear systems using local value iteration
937,632
['António Barreto', 'Cláudia Antunes']
Finding Cyclic Patterns on Sequential Data
247,683
Efficient algorithmic implementations of wait-free queue classes in the Real-time Specification for Java are presented in this paper. The algorithms are designed to exploit the unidirectional nature of these queues and the priority-based scheduling in the specification. The proposed implementations support multiple real-time threads to access the queue in a wait-free manner and at the same time keep the "Write Once, Run Anywhere" principle of Java. Experiments show our implementations outperform the reference implementations, especially with high priority tasks. In the implementations, we introduce a new solution to the "enabled late-write" problem discussed in [9]. The problem is caused by using only memory read/write operations. The new solution is more efficient, with respect to space complexity, compared to previous wait-free implementations, without losing in time complexity.
['Philippas Tsigas', 'Yi Zhang', 'Daniel Cederman', 'Tord Dellsen']
Wait-Free Queue Algorithms for the Real-time Java Specification
436,813
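For intuition about wait-freedom from plain memory reads and writes, here is the classic single-producer/single-consumer ring buffer due to Lamport, a much simpler relative of the multi-thread, priority-aware RTSJ queues the paper implements; it is not the paper's algorithm.

```python
class SPSCQueue:
    """Classic single-producer/single-consumer wait-free ring buffer
    (Lamport): each side completes in a bounded number of steps using
    only reads and writes of `head`/`tail`."""

    def __init__(self, capacity: int):
        self.buf = [None] * (capacity + 1)  # one slot kept empty
        self.head = 0   # written only by the consumer
        self.tail = 0   # written only by the producer

    def enqueue(self, item) -> bool:
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:      # full: fail rather than wait
            return False
        self.buf[self.tail] = item
        self.tail = nxt           # publish only after the write
        return True

    def dequeue(self):
        if self.head == self.tail:
            return None           # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```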
The existence of tight reductions in cryptographic security proofs is an important question, motivated by the theoretical search for cryptosystems whose security guarantees are truly independent of adversarial behavior and the practical necessity of concrete security bounds for the theoretically-sound selection of cryptographic parameters. At Eurocrypt 2002, Coron described a meta-reduction technique that allows to prove the impossibility of tight reductions for certain digital signature schemes. This seminal result has found many further interesting applications. However, due to a technical subtlety in the argument, the applicability of this technique beyond digital signatures in the single-user setting has turned out to be rather limited. We describe a new meta-reduction technique for proving such impossibility results, which improves on known ones in several ways. It enables interesting novel applications, including a formal proof that for certain cryptographic primitives including public-key encryption/key encapsulation mechanisms and digital signatures, the security loss incurred when the primitive is transferred from an idealized single-user setting to the more realistic multi-user setting is impossible to avoid, and a lower tightness bound for non-interactive key exchange protocols. Moreover, the technique allows to rule out tight reductions from a very general class of non-interactive complexity assumptions. Furthermore, the proofs and bounds are simpler than in Coron's technique and its extensions.
['Christoph Bader', 'Tibor Jager', 'Yong Li', 'Sven Schäge']
On the Impossibility of Tight Cryptographic Reductions
809,272
Given a digraph $D=(V,A)$ and a positive integer $p$, the $p$-competition graph of $D$, denoted $C_p(D)$, is defined to have vertex set $V$ and, for $x,y \in V$, $xy \in E(C_p(D))$ if and only if there are at least $p$ distinct vertices $v_1, v_2, \ldots, v_p \in V(D)$ such that $xv_i$ and $yv_i \in A(D)$ for $i = 1, 2, \ldots, p$. This paper furthers the study of complete bipartite graphs that are $p$-competition graphs. The primary technique used is the concept of the $p$-edge clique cover ($p$-ECC) number. General results are given for $K_3$-free graphs, as well as the result that $K_{n,n}$ is not a 2-competition graph for $n \geq 4$.
['Michael S. Jacobson']
On the p -edge clique cover number of complete bipartite graphs
697,165
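The definition translates directly into code; the sketch below builds C_p(D) from a digraph with networkx and checks a tiny example.

```python
import itertools
import networkx as nx

def p_competition_graph(D: nx.DiGraph, p: int) -> nx.Graph:
    """Build C_p(D): join x and y iff they have at least p common
    out-neighbors in the digraph D (straight from the definition)."""
    G = nx.Graph()
    G.add_nodes_from(D.nodes)
    for x, y in itertools.combinations(D.nodes, 2):
        common = set(D.successors(x)) & set(D.successors(y))
        if len(common) >= p:
            G.add_edge(x, y)
    return G

D = nx.DiGraph([(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)])
print(sorted(p_competition_graph(D, 2).edges))   # [(1, 2)]
```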
Let $G$ be a graph with maximum degree $\Delta$ whose vertex set is partitioned into parts $V(G) = V_1 \cup \dots \cup V_r$. A transversal is a subset of $V(G)$ containing exactly one vertex from each part $V_i$. If it is also an independent set, then we call it an independent transversal. The local degree of $G$ is the maximum number of neighbors of a vertex $v$ in a part $V_i$, taken over all choices of $V_i$ and $v \in V_i$. We prove that for every fixed $\epsilon > 0$, if all part sizes $|V_i| \geq (1+\epsilon)\Delta$ and the local degree of $G$ is $o(\Delta)$, then $G$ has an independent transversal for sufficiently large $\Delta$. This extends several previous results and settles (in a stronger form) a conjecture of Aharoni and Holzman. We then generalize this result to transversals that induce no cliques of size $s$. (Note that independent transversals correspond to $s=2$.) In that context, we prove that parts of size $|V_i| \geq (1+\epsilon)\frac{\Delta}{s-1}$ and local degree $o(\Delta)$ guarantee the existence of such a transversal, and we provide a construction that shows this is asymptotically tight.
['Po-Shen Loh', 'Benny Sudakov']
Independent transversals in locally sparse graphs
153,250
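The theorem is existential and gives no algorithm; for tiny instances one can simply search for an independent transversal by brute force, as in the illustrative (exponential-time) sketch below.

```python
import itertools
import networkx as nx

def independent_transversal(G: nx.Graph, parts):
    """Exhaustively search for one vertex per part with no two adjacent.
    Exponential time: purely illustrative for tiny instances."""
    for choice in itertools.product(*parts):
        if not any(G.has_edge(u, v)
                   for u, v in itertools.combinations(choice, 2)):
            return choice
    return None

G = nx.cycle_graph(6)               # maximum degree 2
parts = [(0, 3), (1, 4), (2, 5)]    # parts of size 2
print(independent_transversal(G, parts))   # e.g. (0, 4, 2)
```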
There are two major challenges to overcome when developing a classifier to perform automatic disease diagnosis. First, the amount of labeled medical data is typically very limited, and a classifier cannot be effectively trained to attain high disease-detection accuracy. Second, medical domain knowledge is required to identify representative features in data for detecting a target disease. Most computer scientists and statisticians do not have such domain knowledge. In this work, we show that employing transfer learning can remedy both problems. We use Otitis Media (OM) to conduct our case study. Instead of using domain knowledge to extract features from labeled OM images, we construct features based on a dataset entirely OM-irrelevant. More specifically, we first learn a codebook in an unsupervised way from 15 million images collected from ImageNet. The codebook gives us what the encoders consider being the fundamental elements of those 15 million images. We then encode OM images using the codebook and obtain a weighting vector for each OM image. Using the resulting weighting vectors as the feature vectors of the OM images, we employ a traditional supervised learning algorithm to train an OM classifier. The achieved detection accuracy is 88.5% (89.63% in sensitivity and 86.9% in specificity), markedly higher than all previous attempts, which relied on domain experts to help extract features.
['Chuen-Kai Shie', 'Chung-Hisang Chuang', 'Chun-Nan Chou', 'Meng-hsi Wu', 'Edward Y. Chang']
Transfer representation learning for medical image analysis
673,889
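A toy version of the transfer pipeline: learn a codebook from unlabeled, task-irrelevant data, encode each image as a histogram of codeword hits, and train a supervised classifier on those histograms. K-means over random "patches" stands in for the paper's encoder trained on 15 million ImageNet images, and the labels are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 1) Unsupervised codebook from *task-irrelevant* patches (stand-in data).
unrelated_patches = rng.normal(size=(5000, 48))
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(unrelated_patches)

def encode(image_patches):
    """Represent an image by the normalized histogram of codeword hits."""
    hist = np.bincount(codebook.predict(image_patches), minlength=64)
    return hist / hist.sum()

# 2) Supervised classifier on the transferred representation
#    (synthetic stand-ins for OM images and labels).
X = np.array([encode(rng.normal(size=(200, 48))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
print(SVC().fit(X, y).score(X, y))
```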
['Lara Raad', 'Bruno Galerne']
Efros and Freeman Image Quilting Algorithm for Texture Synthesis
993,323
This paper presents the prototype design and development of a miniature MR-compatible fiber optic force sensor suitable for the detection of force during MR-guided cardiac catheterization. The working principle is based on light intensity modulation, where a fiber optic cable interrogates a reflective surface at a predefined distance inside a catheter shaft. When a force is applied to the tip of the catheter, a force-sensitive structure varies the distance and the orientation of the reflective surface with respect to the optical fiber. The visual feedback from the MRI scanner can be used to determine whether the catheter tip is normal or tangential to the tissue surface. In both cases the light is modulated accordingly and the axial or lateral force can be estimated. The sensor exhibits an adequately linear response, with a good working range, very good resolution and good sensitivity in both axial and lateral force directions. In addition, the use of low-cost and MR-compatible materials in its development makes the sensor safe for use inside MRI environments.
['Panagiotis Polygerinos', 'Pinyo Puangmali', 'Tobias Schaeffter', 'Reza Razavi', 'Lakmal D. Seneviratne', 'Kaspar Althoefer']
Novel miniature MRI-compatible fiber-optic force sensor for cardiac catheterization procedures
413,660
Tools for quantifying teleoperation system performance and stability when communication delays are present are provided. A general multivariable system architecture is utilized which includes all four types of data transmission between master and slave: force and velocity in both directions. It is shown that proper use of all four channels is of critical importance in achieving high-performance telepresence, in the sense of accurate transmission of task impedances to the operator. It is also shown that transparency and robust stability (passivity) are conflicting design goals in teleoperation systems. The analysis is illustrated by comparing transparency and stability in two common architectures, as well as a recent passivated approach and a new transparency-optimized architecture, using simplified one-degree-of-freedom examples.
['Dale A. Lawrence']
Stability and transparency in bilateral teleoperation
175,783
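For readers unfamiliar with the transparency notion used here, the standard hybrid-matrix formulation of a bilateral teleoperator is sketched below; sign conventions vary across the literature, so this should be read as a generic statement rather than a transcription of the paper's equations.

```latex
% Hybrid two-port model of a bilateral teleoperator:
% operator side (F_h, V_h), environment side (F_e, V_e).
\begin{aligned}
\begin{pmatrix} F_h \\ -V_e \end{pmatrix}
  &= \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}
     \begin{pmatrix} V_h \\ F_e \end{pmatrix},\\[4pt]
Z_t &= \left.\frac{F_h}{V_h}\right|_{\text{coupled to } Z_e}
  \quad \text{(impedance transmitted to the operator)},\\[4pt]
\text{perfect transparency: } Z_t = Z_e
  &\;\Longleftrightarrow\;
  H = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\end{aligned}
```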
IEEE VNC 2010 attracted a tremendous amount of attention from car manufacturers and academia, with close to 100 submissions in this fast growing area. This level of interest was a clear manifestation of this exponentially growing research area that could revolutionize our daily lives. Indeed, the emerging and future applications of vehicular ad hoc networks (VANETs) are quite impressive. While the car manufacturers have embraced this area of research as one of their own mainly due to safety applications, the current research efforts have gone far beyond the initial premise and currently include traffic information systems that could mitigate congestion and improve traffic flow in urban areas, smart grid applications via electric vehicles and vehicular networks, entertainment, and mobile advertising, among others. Some of these applications are also tightly coupled with "greener environments" through the reduction of the carbon footprint of cars, alternative energy, "smart cities," and so on, whereby the vehicular networks (or "talking cars") are at the heart of these major initiatives. We believe that the current level of interest we are experiencing from industry and academia is based on these exciting opportunities and will therefore continue to grow exponentially in the next decade. It is widely believed that cars manufactured and sold in the United States will be equipped with dedicated short-range communications (DSRC) radios by 2013. This, coupled with expected rollout of fourth-generation (4G) Long Term Evolution (LTE) access by wireless carriers, will in turn increase the safety of cars, reduce accidents and fatalities in urban areas and on highways, and make driving a much more pleasant experience than it is today. As such, vehicular networking is one of those unique areas where communications and networking technology can create new opportunities and solutions for major problems and bottlenecks in transportation, energy, and greener environment. Thus, the future potential of vehicular networks seems huge.
['Wai Chen', 'Luca Delgrossi', 'Timo Kosch', 'Tadao Saito']
Topics in automotive networking - [series editorial]
91,755
Informally, a first-past-the-post game is a (probabilistic) game where the winner is the person who predicts the event that occurs first among a set of events. Examples of first-past-the-post games include so-called block and hidden patterns and the Penney-Ante game invented by Walter Penney. We formalise the abstract notion of a first-past-the-post game, and the process of extending a probability distribution on symbols of an alphabet to the plays of a game. Analysis of first-past-the-post games depends on a collection of simultaneous (non-linear) equations in languages. Essentially, the equations are due to Guibas and Odlyzko but they did not formulate them as equations in languages but as equations in generating functions detailing lengths of words. Penney-Ante games are two-player games characterised by a collection of regular, prefix-free languages. For such two-player games, we show how to use the equations in languages to calculate the probability of winning. The formula generalises a formula due to John H. Conway for the original Penney-Ante game. At no point in our analysis do we use generating functions. Even so, we are able to calculate probabilities and expected values. Generating functions do appear to become necessary when higher-order cumulatives (for example, the standard deviation) are also required.
['Roland Carl Backhouse']
First-Past-the-Post games
277,803
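The winning probabilities that the language equations deliver analytically can be sanity-checked by direct simulation of the first-past-the-post race; in the classic Penney-Ante example below, THH beats HHT about three times in four.

```python
import random

def penney(p1, p2, trials=100_000, seed=0):
    """Estimate P(pattern p1 occurs before p2) in a fair coin stream by
    direct simulation of the first-past-the-post race."""
    rng = random.Random(seed)
    k = max(len(p1), len(p2))
    wins = 0
    for _ in range(trials):
        s = ''
        while True:
            s = (s + rng.choice('HT'))[-k:]   # keep only the last k flips
            if s.endswith(p1):
                wins += 1
                break
            if s.endswith(p2):
                break
    return wins / trials

print(penney('THH', 'HHT'))   # ~0.75: the odds here are 3:1
```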
The advent of wideband systems, e.g., software defined radios, cognitive radios and UWB technology, motivates research for new transceiver architectures and circuit topologies to arrive at compact and low-power solutions. Reference frequency generation in wideband CMOS receivers is usually power and area hungry. In this paper a wideband quadrature demodulator, based on mixers reconfigurable between fundamental and sub-harmonic operation modes, is presented. The technique allows covering an RF bandwidth three times larger than the frequency range covered by the synthesizer. Multiple local oscillator phases are required for the proposed architecture. For low phase noise and fast settling time, they are generated by means of a multi-stage injection-locked ring oscillator. This solution proves very accurate and power efficient and may find applications in other communication systems requiring multiple phase references. A demodulator test chip tailored to WiMedia UWB groups 1, 3, 4 (3.1-9.5 GHz), comprising mixers and frequency synthesizer, has been realized in a 65 nm CMOS technology. Experimental results show 10 dB of conversion gain with 2.3 nV/√Hz equivalent input noise voltage spectral density. IIP2 and IIP3, with interferers in the GSM and WLAN bands, are 40 dBm and 11 dBm respectively. The synthesizer displays a maximum spur level of -43 dBc, a state-of-the-art phase noise of -128 dBc/Hz at 10 MHz offset and a settling time of less than 6 ns, with only 43 mW.
['Andrea Mazzanti', 'Mohammad B. Vahidfar', 'Marco Sosio', 'Francesco Svelto']
A Low Phase-Noise Multi-Phase LO Generator for Wideband Demodulators Based on Reconfigurable Sub-Harmonic Mixers
64,855
['Zhijian Zhang', 'Hong Wu', 'Kun Yue', 'Jin Li', 'Weiyi Liu']
Influence Maximization for Cascade Model with Diffusion Decay in Social Networks
848,392
High accuracy and high generalization capability are two conflicting objectives in the design of adaptive neuro-fuzzy inference systems (ANFIS). Motivated by previous studies on handling similar conflicting situations in model selection and autoregressive order estimation, this paper investigates information criteria for the optimization of ANFIS models, with applications in machine defect severity classification. The studied criteria include the Akaike Information Criterion (AIC), the corrected AIC (AICc), and the Generalized Information Criterion (GIC). By introducing a novel model complexity function and replacing the variances in the original criteria with a weighted mean square error, the criteria extended for ANFIS are defined. Based on these criteria, the optimized ANFIS model is chosen to be the one which minimizes the criterion value. The performance of these criteria is experimentally studied using bearing defect severity classification as an example.
['Shuangwen Sheng', 'Robert X. Gao']
Optimization of ANFIS with Applications in Machine Defect Severity Classification
46,647
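For reference, the classical criteria the paper starts from, in their usual Gaussian-error form AIC = n*ln(MSE) + 2k with the small-sample correction AICc = AIC + 2k(k+1)/(n-k-1); the paper's weighted-MSE and custom model-complexity substitutions are not reproduced here.

```python
import numpy as np

def aic_family(mse, n, k):
    """Classical information criteria for a model with k effective
    parameters fitted to n samples (Gaussian-error form)."""
    aic = n * np.log(mse) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    return aic, aicc

# Smaller criterion value => preferred model configuration.
for k, mse in [(4, 0.20), (9, 0.12), (16, 0.11)]:
    print(k, aic_family(mse, n=100, k=k))
```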
This article contains a retrospective overview of connected work performed for the European Space Agency (ESA) over a span of 10 years. We have been creating and refining an AI approach to problem solving and injected a series of deployed planning and scheduling systems which have innovated the agency's mission planning practice. The goal of the paper is to identify general lessons learned and guidelines for work practice of the future. Specifically, the work dwells on issues related to some key points that have contributed to strengthening the effectiveness of our approach: the attention to domain modeling, the constraint-based algorithm synthesis, and the development of an end-to-end methodology to field applications. Desirable features of space applications useful for profitable and successful deployment on different ground segment operations are also discussed.
['Amedeo Cesta', 'Gabriella Cortellessa', 'Simone Fratini', 'Angelo Oddi', 'Giulio Bernardi']
Deploying Interactive Mission Planning Tools Experiences and Lessons Learned
565,818
Semantic interpretation and understanding of images is an important goal of visual recognition research and offers a large variety of possible applications. One step towards this goal is semantic segmentation, which aims for automatic labeling of image regions and pixels with category names. Since a typical image contains several million pixels, the use of kernel-based methods for semantic segmentation is limited by the computation times involved. In this paper, we overcome this drawback by exploiting efficient kernel calculations using the histogram intersection kernel for fast and exact Gaussian process classification. Our results show that non-parametric Bayesian methods can be utilized for semantic segmentation without sparse approximation techniques. Furthermore, in experiments, we show a significant benefit in terms of classification accuracy compared to state-of-the-art methods.
['Alexander Freytag', 'Björn Fröhlich', 'Erik Rodner', 'Joachim Denzler']
Efficient semantic segmentation with Gaussian processes and histogram intersection kernels
262,962
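A compact sketch of the two ingredients: the histogram intersection kernel, and a label-regression shortcut to GP classification (regress on plus/minus-one labels with a noise term), which is a common simplification rather than the paper's exact fast-HIK machinery; the data and noise level are synthetic.

```python
import numpy as np

def hik(A, B):
    """Histogram intersection kernel: K[i, j] = sum_d min(A[i, d], B[j, d])."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
X = rng.random((60, 32)); X /= X.sum(axis=1, keepdims=True)    # histograms
y = np.where(X[:, :16].sum(axis=1) > 0.5, 1.0, -1.0)           # toy labels

# Label regression as a GP-classification surrogate: predictive score
# k(x*, X) @ (K + sigma^2 I)^{-1} y, thresholded at zero.
K = hik(X, X) + 0.1 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

X_new = rng.random((5, 32)); X_new /= X_new.sum(axis=1, keepdims=True)
print(np.sign(hik(X_new, X) @ alpha))
```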
Motivated by the goal of hardening operating system kernels against rootkits and related malware, we survey the common interfaces and methods which can be used to modify (either legitimately or maliciously) the kernel that runs on a commodity desktop computer. We also survey how these interfaces can be restricted or disabled. While we concentrate mainly on Linux, many of the methods for modifying kernel code also exist on other operating systems, some of which are discussed.
['Trent Jaeger', 'Paul C. van Oorschot', 'Glenn Wurster']
Countering unauthorized code execution on commodity kernels: A survey of common interfaces allowing kernel code modification
349,195