Dataset columns: abstract (string, length 7-10.1k), authors (string, length 9-1.96k), title (string, length 6-367), __index_level_0__ (int64, range 5-1,000k)
IT-gestützte Ermittlung von Akzeptanzfaktoren für Biogasanlagen. [IT-supported identification of acceptance factors for biogas plants.]
['Sören Henke', 'Welf Guenther-Lübbers', 'Ludwig Theuvsen']
IT-gestützte Ermittlung von Akzeptanzfaktoren für Biogasanlagen. [IT-supported identification of acceptance factors for biogas plants.]
735,779
Scientific workloads running on current extreme-scale systems routinely generate tremendous volumes of data for postprocessing. This data movement has become a serious issue due to its energy cost and the fact that I/O bandwidths have not kept pace with data generation rates. In situ analytics is an increasingly popular alternative in which post-simulation processing is embedded into an application, running as part of the same MPI job. This can reduce data movement costs but introduces a new potential source of interference for the application. Using a validated simulation-based approach, we investigate how best to mitigate the interference from time-shared in situ tasks for a number of key extreme-scale workloads. This paper makes a number of contributions. First, we show that the independent scheduling of in situ analytics tasks can significantly degrade application performance, with slowdowns exceeding 1000%. Second, we demonstrate that the degree of synchronization found in many modern collective algorithms is sufficient to significantly reduce the overheads of this interference to less than 10% in most cases. Finally, we show that many applications already frequently invoke collective operations that use these synchronizing MPI algorithms. Therefore, the synchronization introduced by these MPI collective algorithms can be leveraged to efficiently schedule analytics tasks with minimal changes to existing applications. This paper provides critical analysis and guidance for MPI users and developers on the importance of scheduling in situ analytics tasks. It shows the degree of synchronization needed to mitigate the performance impacts of these time-shared coupled codes and demonstrates how that synchronization can be realized in an extreme-scale environment using modern collective algorithms.
['Scott Levy', 'Kurt B. Ferreira', 'Patrick M. Widener', 'Patrick G. Bridges', 'Oscar H. Mondragon']
How I Learned to Stop Worrying and Love In Situ Analytics: Leveraging Latent Synchronization in MPI Collective Algorithms
918,868
This paper presents a tool that supports the learner in creating lesson notes from resources available in an e-learning platform. To make the resulting material appealing, the lesson notes are represented as cartoons. In the context of foreign language learning, sound files can be attached to the text of the cartoon so that both channels (vision and audio) can be trained together and linked. The resulting lesson notes can also be used for lesson memorization, because the cartoon can be played repeatedly.
['Laure France']
Lesson' Toon: Conception of Lesson Notes as Cartoon in an E-Learning Platform
390,377
The widespread integration of cameras in hand-held and head-worn devices and the ability to share content online enables a large and diverse visual capture of the world that millions of users build up collectively every day. We envision these images as well as associated meta information, such as GPS coordinates and timestamps, to form a collective visual memory that can be queried while automatically taking the ever-changing context of mobile users into account. As a first step towards this vision, in this work we present Xplore-M-Ego: a novel media retrieval system that allows users to query a dynamic database of images using spatio-temporal natural language queries. We evaluate our system using a new dataset of real image queries as well as through a usability study. One key finding is that there is a considerable amount of inter-user variability in the resolution of spatial relations in natural language utterances. We show that our system can cope with this variability using personalisation through an online learning-based retrieval formulation.
['Sreyasi Nag Chowdhury', 'Mateusz Malinowski', 'Andreas Bulling', 'Mario Fritz']
Xplore-M-Ego: Contextual Media Retrieval Using Natural Language Queries
813,113
Asynchronous circuits with conditional behavior often have distinct modes of operation each of which can be modeled as a marked graph with its own performance target. This paper derives performance bounds for such conditional circuits based on the cycle times of successively larger collections of these underlying modes. Our bounds prove the somewhat intuitive result that treating a conditional circuit as unconditional for slack matching guarantees the circuit performance requirement conservatively. We also prove the somewhat counter-intuitive result that the average cycle time of a conditional circuit may be worse than the weighted average of the cycle time of its underlying collection of modes. Finally, the paper outlines the potential application of these bounds to future improvements in slack matching of such conditional circuits.
['Mehrdad Najibi', 'Peter A. Beerel']
Performance Bounds of Asynchronous Circuits with Mode-Based Conditional Behavior
198,499
Control-flow machines are sequential in nature, executing instructions in sequence through control of program counters, whereas data-flow machines execute instructions only as input operands are made available, a process directed at the parallelism inherent within programs. At the architecture level, data-flow machines execute instructions asynchronously. In contrast, at the implementation level, the synchronous design framework of computer systems which employs globally clocked timing discipline has reached its design limits owing to problems of clock distribution. Therefore, renewed interest has been expressed in the design of computer systems based upon an asynchronous (or self-timed) approach free of the discipline imposed by the global clock. Thus, the design of a static MIMD data-flow processor using micropipelines is presented. The implemented processor, or the micro data-flow processor, differs from processors previously reported insofar as the micro data-flow processor is wholly asynchronous at both the architectural and the implementation levels.
['Chih-Ming Chang', 'Shih-Lien Lu']
Design of a static MIMD data flow processor using micropipelines
272,982
This work deals with textural segmentation of high resolution sidescan sonar images using active contours and Gabor filters. The method is a modification of the Chan and Vese active contour model that makes it suitable for textural segmentation of such images. First, the image is passed through a symmetric bank of Gabor filters. Then, the filtered images that carry a significant component of the original image are subjected to a morphological closing operator. Finally, we use the multi-channel C-V active contour model to segment areas with different textures. Results of the proposed method are presented for several real and simulated sidescan sonar images to demonstrate its robustness.
['Kaveh Samiee', 'G. A. Rezai Rad']
Textural Segmentation of Sidescan Sonar Images Based on Gabor Filters Bank and Active Contours without Edges
333,291
Probing interactivity in open data for General Practice. An evidence-based approach
['Federico Cabitza', 'Francesco Del Zotti', 'Angela Locoro']
Probing interactivity in open data for General Practice. An evidence-based approach
985,244
OpenFARM: an open framework for the analysis of rich media
['Filipe Martins', 'James Orwell', 'Sergio A. Velastin']
OpenFARM: an open framework for the analysis of rich media
128,251
We address the problem of denoising for image patches. The approach taken is based on Bayesian modeling of sparse representations, which takes into account dependencies between the dictionary atoms. Following recent work, we use a Boltzmann machine to model the sparsity pattern. In this work we focus on the special case of a unitary dictionary and obtain the exact MAP estimate for the sparse representation using an efficient message passing algorithm. We present an adaptive model-based scheme for sparse signal recovery, which is based on sparse coding via message passing and on learning the model parameters from the data. This adaptive approach is applied on noisy image patches in order to recover their sparse representations over a fixed unitary dictionary. We compare the denoising performance to that of previous sparse recovery methods, which do not exploit the statistical dependencies, and show the effectiveness of our approach.
['Tomer Faktor', 'Yonina C. Eldar', 'Michael Elad']
Denoising of image patches via sparse representations with learned statistical dependencies
476,166
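For context on the unitary-dictionary special case in the record above: when the dictionary D is unitary, the analysis coefficients D.T @ y decouple, so MAP sparse coding under an independent prior reduces to elementwise thresholding. The minimal Python sketch below shows that independent-prior baseline only; the paper's contribution replaces it with a Boltzmann machine prior over the support solved by message passing, which is not reproduced here. The threshold value and toy data are illustrative assumptions.

```python
import numpy as np

def denoise_patch_unitary(y, D, thresh):
    """Denoise a vectorized patch y over a unitary dictionary D
    (D @ D.T == I). For unitary D the analysis coefficients D.T @ y
    decouple, so an independent sparsity prior reduces MAP sparse coding
    to elementwise thresholding (the baseline the paper improves upon)."""
    a = D.T @ y                        # analysis coefficients
    a_hat = a * (np.abs(a) > thresh)   # hard threshold, independent prior
    return D @ a_hat                   # synthesize the denoised patch

# Toy usage with the identity as a trivially unitary dictionary.
rng = np.random.default_rng(0)
clean = np.zeros(64)
clean[rng.choice(64, 5, replace=False)] = rng.normal(0, 3, 5)
noisy = clean + rng.normal(0, 0.5, 64)
denoised = denoise_patch_unitary(noisy, np.eye(64), thresh=1.5)
```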
On the Use of UML Stereotypes in Creating Higher-order Domain-specific Languages and Tools.
['Edgars Rencis', 'Janis Barzdins']
On the Use of UML Stereotypes in Creating Higher-order Domain-specific Languages and Tools.
798,325
This paper describes a simulation tool developed to aid the analysis of air traffic management domains and potential air traffic control strategies. The simulation represents aircraft and automated air traffic control support tool functionality based on a computational model of aircraft flight management system trajectories. It simulates the effects of environmental disturbances and air traffic controller intervention on air traffic in fast time. The paper outlines a methodology that leverages these capabilities to help with tool design and evaluation, and discusses the results of preliminary simulations aimed toward using the proposed methodology for characterizing the effectiveness of particular strategy-tool combinations for supporting more efficient terminal-area air traffic management. This research is supported by NASA's Vehicle Systems Program Quiet Aircraft Technology project and Airspace Systems Program Advanced Air Transportation Technology project.
['Todd J. Callantine']
Air traffic management system domain and control strategy analysis
209,034
The authors derive a simple, recursive, closed-form algorithm for estimating the parameters of a moving-average (MA) model of known order, using only the autocorrelation and the 1-D diagonal slice of the third-order cumulant of its response to excitation by an unobservable, non-Gaussian, IID process. The output may be corrupted by zero-mean, nonskewed white noise of unknown variance. The autoregressive moving-average (ARMA) case is briefly discussed.
['Ananthram Swami', 'Jerry M. Mendel']
Closed-form recursive estimation of MA coefficients using autocorrelations and third-order cumulants
537,305
The Open Three Consortium: An Open-Source Initiative at the Service of Healthcare and Inclusion.
['P. Inchingolo']
The Open Three Consortium: An Open-Source Initiative at the Service of Healthcare and Inclusion.
545,602
Adaptive Recovery of Signals by Convex Optimization.
['Zaid Harchaoui', 'Anatoli Juditsky', 'Arkadi Nemirovski', 'Dmitry Ostrovsky']
Adaptive Recovery of Signals by Convex Optimization.
594,049
Bringing Zero-Knowledge Proofs of Knowledge to Practice.
['Endre Bangerter', 'Stefania Barzan', 'Stephan Krenn', 'Ahmad-Reza Sadeghi', 'Thomas Schneider', 'Joe-Kai Tsay']
Bringing Zero-Knowledge Proofs of Knowledge to Practice.
787,559
An algebraic characterization of nonuniform perfect reconstruction (PR) filterbanks with integer decimation factors is presented. The PR property is formulated in the z domain based on the response of the linear multirate systems to the delayed unit-step signals. This leads to a unique class of characterizing formulas that are necessary and sufficient conditions for the PR property and free from the complex roots of unity. Two related characterizations of nonuniform PR systems, in the form of necessary conditions, are also developed based on these formulas. As a concrete example, the results are then used to derive necessary and sufficient conditions for PR nonuniform delay chain filterbanks. The conditions show that nonuniform delay chain filterbanks are the signal processing realizations of the mathematical notion of the exact covering systems of congruence relations. Important results from the mathematics literature on the exact covering systems are introduced. The results elucidate the admissible factors of decimation for the nonuniform PR delay chain systems in settings with maximally distinct decimation factors. A simple test of the PR property for delay chain systems is also presented. The test is based on the divisibility of certain polynomials by the cyclotomic polynomials. Finally, multirate systems based on the Beatty sequences, which are the irrational generalization of the exact covering systems, are briefly discussed.
['Saed Samadi', 'M.O. Ahmad', 'M.N.S. Swamy']
Characterization of nonuniform perfect-reconstruction filterbanks using unit-step signal
528,458
This paper describes an application of Structured Sparsity for denoising. The denoising targets roughly one-hundred-year-old wax cylinders which have been discovered recently and are to be analysed by music scholars. The digitized cylinders contain high-level nonstationary background noise that has to be attenuated very carefully to avoid undesirable artifacts. The Structured Sparsity approach is compared to professional audio restoration software, with a link to the audio examples presented online.
['Vaclav Mach']
Denoising phonogram cylinders recordings using Structured Sparsity
608,637
This paper considers the problem of sliding mode control for stochastic Markovian jumping systems by means of fuzzy method. The Takagi–Sugeno (T–S) fuzzy stochastic model subject to state-dependent noise is presented. A key feature in this work is to remove the restricted condition that each local system model had to share the same input channel, which is usually assumed in some existing results. The integral sliding surface is constructed for every mode and the connections among various sliding surfaces are established via a set of coupled matrices. Moreover, the present sliding mode controller including the transition rates of modes can cope with the effect of Markovian switching. It is shown that both the reachability of sliding surfaces and the stability of sliding mode dynamics can be ensured. Finally, numerical simulation results are given.
['Bei Chen', 'Tinggang Jia', 'Yugang Niu']
Robust fuzzy control for stochastic Markovian jumping systems via sliding mode method
719,638
Handling duplicated tasks in process discovery by refining event labels
['Xixi Lu', 'Dirk Fahland', 'Frank J. H. M. van den Biggelaar', 'Wil M. P. van der Aalst']
Handling duplicated tasks in process discovery by refining event labels
877,507
Spatial and temporal contextual information plays a key role in analyzing user behaviors, and is helpful for predicting where a user will go next. With the growing ability to collect information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. The Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. The Recurrent Neural Network (RNN) model shows promising performance compared with FPMC and TF, but all these methods have problems modeling continuous time intervals and geographical distances. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., the Global Terrorism Database (GTD) and the Gowalla dataset.
['Qiang Liu', 'Shu Wu', 'Liang Wang', 'Tieniu Tan']
Predicting the next location: a recurrent model with spatial and temporal contexts
919,099
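A minimal sketch of the core ST-RNN idea from the record above: the recurrent update applies transition matrices chosen by the elapsed time interval and geographic distance since the previous event. This simplified version picks one matrix per discretized bin, whereas the paper interpolates between bin endpoints and trains everything end to end; all dimensions, bin edges, and class names below are assumptions for illustration.

```python
import numpy as np

class STRNNCellSketch:
    """Sketch of an ST-RNN-style update: the input transform depends on
    the elapsed time and geographic distance since the previous check-in,
    via matrices selected per discretized bin."""

    def __init__(self, dim, time_bins, dist_bins, rng):
        self.time_bins, self.dist_bins = time_bins, dist_bins
        self.T = rng.normal(0, 0.1, (len(time_bins), dim, dim))  # time-specific
        self.S = rng.normal(0, 0.1, (len(dist_bins), dim, dim))  # distance-specific
        self.C = rng.normal(0, 0.1, (dim, dim))                  # recurrent weights

    def step(self, h_prev, x, dt, dd):
        # Pick the transition matrices for this interval/distance bin.
        ti = min(np.searchsorted(self.time_bins, dt), len(self.time_bins) - 1)
        di = min(np.searchsorted(self.dist_bins, dd), len(self.dist_bins) - 1)
        return np.tanh(self.T[ti] @ x + self.S[di] @ x + self.C @ h_prev)

rng = np.random.default_rng(1)
cell = STRNNCellSketch(dim=8, time_bins=[1.0, 6.0, 24.0],  # hours
                       dist_bins=[1.0, 10.0], rng=rng)     # kilometres
h = np.zeros(8)
h = cell.step(h, x=rng.normal(size=8), dt=3.5, dd=2.0)  # one check-in update
```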
Recent advances in Brain Computer Interfaces (BCIs) have created hope that one day paralyzed patients will be able to regain control of their paralyzed limbs. As part of an ongoing clinical study, we have implanted a 96-electrode Utah array in the motor cortex of a paralyzed human. The array generates almost 3 million data points from the brain every second. This presents several big data challenges towards developing algorithms that should not only process the data in real-time (for the BCI to be responsive) but are also robust to temporal variations and non-stationarities in the sensor data. We demonstrate an algorithmic approach to analyze such data and present a novel method to evaluate such algorithms. We present our methodology with examples of decoding human brain data in real-time to inform a BCI.
['David A. Friedenberg', 'Chad E. Bouton', 'Nicholas V. Annetta', 'Nicholas Skomrock', 'Mingming Zhang', 'Michael Schwemmer', 'Marcia Bockbrader', 'W. Jerry Mysiw', 'Ali R. Rezai', 'Herbert S. Bresler', 'Gaurav Sharma']
Big data challenges in decoding cortical activity in a human with quadriplegia to inform a brain computer interface
913,553
In this paper, we present a framework for adapting a writer independent system to a user from samples of the user's writing. The writer independent system is modeled using hidden Markov models. Training for a writer involves recomputing the topology and parameters of the hidden Markov models using the writer's data. The framework uses the writer independent system to get an initial alignment of the writer's data. The system described reduces the error rate by an average of 65%. For the results presented, no language model was used.
['Jayashree Subrahmonia', 'Krishna S. Nathan', 'Michael P. Perrone']
Writer dependent recognition of on-line unconstrained handwriting
359,407
A simple voltage reference source in weak inversion CMOS for ultra low-voltage and ultra low-power applications is presented. Its voltage reference is provided by the threshold voltage of an nMOS transistor, as it can be verified in the theoretical deduction presented and by the BSIM3v3 simulations. The circuit simulated in a standard 0.35 μm CMOS process provided a voltage reference of 512.03 mV with a variation of just 33 ppm/°C for the temperature range of -30°C to 100°C, and ±0.42%/V variation for the power supply ranging from 750 mV to 3.60 V. The circuit takes a total area of 0.06 mm² and requires only 800 nA of biasing current.
['Luis H. C. Ferreira', 'Tales Cleber Pimenta', 'Robson L. Moreno', 'Wilhelmus Van Noije']
Ultra low-voltage ultra low-power CMOS threshold voltage reference
30,558
We combine ALOS-PALSAR coherence images with airborne LiDAR data, both acquired over the Piton de la Fournaise volcano (Reunion Island, France), to study the main errors affecting repeat-pass InSAR measurements and understand their causes. The high resolution DTM generated using LiDAR data is used to subtract out the topographic contribution from the interferogram and to improve the radar coherence maps. The relationship between LiDAR intensity and radar coherence is then analyzed over several typical volcanic surfaces: it helps to evaluate the coherence loss terms. Additionally, the geometric and physical properties of these surfaces have been measured in situ. Coherence deteriorates over pyroclastic deposits and rough lava flows due to volume and surface scattering. In the presence of vegetation, it is directly related to plant density: the higher the Leaf Area Index (LAI), the lower the coherence. The accuracy of InSAR measurements strongly decreases for LAI higher than 7.
['Melanie Sedze', 'Essam Heggy', 'Frédéric Bretar', 'Daniel Berveiller', 'S. Jacquemoud']
L-band InSAR decorrelation analysis in volcanic terrains using airborne LiDAR data and in situ measurements: The case of the Piton de la Fournaise volcano, France
358,432
Service-Oriented Computing (SOC) has been marked as the technology trend which caters for the interoperability among the components of a distributed system. However, the emergence of various incompatible instantiations of the SOC paradigm e.g. web, grid and p2p services, as well as the interoperability problems encountered within each of these instantiations (e.g. web service interoperability problems addressed by the WS-I Basic Profile) state clearly that interoperability is still elusive. In order to address this problem we first need to identify all problem dimensions and consequently to provide appropriate solutions. Within this paper we describe a set of interoperability dimensions that need to be considered and we present a generic service model which we view as a first step in addressing some of the identified problem dimensions.
['George Athanasopoulos', 'Aphrodite Tsalgatidou', 'Michael Pantazoglou']
Interoperability among Heterogeneous Services
480,466
This paper proposes a new algorithm for retrieving sound based on successive relative search. In retrieving musical sound focusing on its sound features, emotional representations are more appropriate than conventional keywords of the genre or the composer. A vector-based sound retrieval system "Sound Advisor" was built up using emotional parameters. The problem is how to find a better candidate if the first candidate is not satisfactory. The reason is that it is difficult to quantitatively imagine the emotional vector space of sounds. This paper proposes a method for successively retrieving sounds using relative search regarding the found candidate as the reference base. This method can contribute to sound-effect retrieval for producing movies, and emotional communication via sound.
['Kiyoaki Aikawa', 'Kanako Yajima']
Vector-based sound retrieval using successive relative search
385,498
We discuss the numerical solution of differential equations of fractional order with discontinuous right-hand side. Problems of this kind arise, for instance, in sliding mode control. After applying a set-valued regularization, the behavior of some generalizations of the implicit Euler method is investigated. We show that the scheme in the family of fractional Adams methods possesses the same chattering-free property of the implicit Euler method in the integer case. A test problem is considered to discuss in details some implementation issues and numerical experiments are presented.
['Roberto Garrappa']
On some generalizations of the implicit Euler method for discontinuous fractional differential equations
656,863
Three Dimensional Coastline Deformation from Insar Envisat Satellite Data
['Maged Marghany']
Three Dimensional Coastline Deformation from Insar Envisat Satellite Data
684,815
Complex systems have received growing interest recently, due to their universal presence in all areas of science and engineering. Complex networks represent a simplified description of the interactions present in such systems. Boolean networks were introduced as models of gene regulatory networks. Simple enough to be computationally tractable, they capture the rich dynamical behaviour of complex networks. Structure-dynamics relationships in Boolean networks have been investigated by inferring a particular structure of a network from the time sequence of its dynamical states. However, general properties of network structures, which can be obtained from their dynamics, are lacking. We create a mapping of dynamical states to structural classes, using time-delayed normalized mutual information, in an ensemble approach. The high accuracy of our classification algorithm proves that structural information is embedded in network dynamics and that we can extract it with information-theoretic methods.
['Septimia Sarbu', 'Ilya Shmulevich', 'Olli Yli-Harja', 'Matti Nykter', 'Juha Kesseli']
Mapping dynamical states to structural classes for Boolean networks using a classification algorithm
582,767
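The classification in the record above rests on time-delayed normalized mutual information between dynamical trajectories. A minimal sketch for binary time series follows; the sqrt-entropy normalization and the plug-in frequency estimator are common conventions assumed here, not necessarily the paper's exact choices.

```python
import numpy as np

def delayed_nmi(x, y, delay=1):
    """Normalized time-delayed mutual information I(x_t; y_{t+delay}),
    normalized by sqrt(H(x) * H(y)), for binary time series, using
    plug-in estimates of the joint and marginal frequencies."""
    a, b = np.asarray(x)[:-delay], np.asarray(y)[delay:]
    n = len(a)
    hx = hy = mi = 0.0
    for u in (0, 1):
        pu, pv = np.mean(a == u), np.mean(b == u)
        if pu > 0:
            hx -= pu * np.log2(pu)   # marginal entropy of x
        if pv > 0:
            hy -= pv * np.log2(pv)   # marginal entropy of delayed y
    for u in (0, 1):
        for v in (0, 1):
            puv = np.sum((a == u) & (b == v)) / n
            if puv > 0:
                mi += puv * np.log2(puv / (np.mean(a == u) * np.mean(b == v)))
    return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0

# y reproduces x one step later, so the delay-1 NMI is maximal (close to 1).
rng = np.random.default_rng(2)
x = rng.integers(0, 2, 1000)
y = np.roll(x, 1)
print(delayed_nmi(x, y, delay=1))
```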
Consumers’ health information needs are relatively stable and the 100 most common unique queries are about 77% the same from month to month. Website sponsors should provide a broad range of information about a relatively stable number of topics. Analyses of log similarity may identify media-induced, cyclical, or seasonal changes in areas of consumer interest.
['Scott-Wright Ao', 'Jonathan Crowell', 'Qing Zeng', 'David W. Bates', 'Robert A. Greenes']
Analysis of information needs of users of MEDLINEplus, 2002 - 2003.
76,457
Mixed-signal speech processing system has been used in microphone-embedded portable electronics. Speech codec is the core of such a system. In speech codecs, the sigma-delta modulator is one key block which converts the analog voice signal into a pulse density modulation (PDM) output for further digital signal processing. This paper shows the design of a triple-mode switched-capacitor sigma-delta modulator for a speech codec. High mode (20 kHz signal bandwidth), low mode (4 kHz signal bandwidth), and sleep mode are implemented by addressing a flexible operation current. In order to realize a fully integrated solution, power management circuits dedicated to the modulator are also implemented, which include a frequency detector, LDO, bandgap and reference voltage buffers. The ASIC is fabricated in a 0.18-µm CMOS process. In high mode measurement, an A-weighted signal-to-noise ratio (SNR) of 78 dB with −2.5 dBFS input is achieved under 138 µA. In low mode measurement, an SNR of 78 dB with −2 dBFS is achieved under 100 µA. In the standby condition, a sleep mode with current less than 10 µA is realized, so that the battery life is extended.
['Lei Zou', 'Marco De Blasi', 'Gino Rocca', 'M. Grassi', 'P. Malcovati', 'A. Baschirotto']
Fully integrated triple-mode sigma-delta modulator for speech codec
967,847
Operating systems form a foundation for robust application software, making it important to understand how effective they are at handling exceptional conditions. The Ballista testing system was used to characterize the handling of exceptional input parameter values for up to 233 POSIX functions and system calls on each of 15 widely used operating system (OS) implementations. This identified ways to crash systems with a single call, ways to cause task hangs within OS code, ways to cause abnormal task termination within OS and library code, failures to implement defined POSIX functionality, and failures to report unsuccessful operations. Overall, only 55 percent to 76 percent of the exceptional tests performed generated error codes, depending on the operating system being tested. Approximately 6 percent to 19 percent of tests failed to generate any indication of error despite exceptional inputs. Approximately 1 percent to 3 percent of tests revealed failures to implement defined POSIX functionality for unusual, but specified, situations. Between 18 percent and 33 percent of exceptional tests caused the abnormal termination of an OS system call or library function, and five systems were completely crashed by individual system calls with exceptional parameter values. The most prevalent sources of these robustness failures were illegal pointer values, numeric overflows, and end-of-file overruns.
['Philip Koopman', 'John DeVale']
The exception handling effectiveness of POSIX operating systems
211,699
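A toy illustration of the exceptional-parameter testing style described in the record above: call a C library function with deliberately invalid arguments inside a child process, and record whether the call returns, terminates abnormally, or hangs. This is a stand-in sketch, not the Ballista system; the function under test and the timeout are arbitrary choices.

```python
import ctypes
import ctypes.util
import multiprocessing as mp

def call_strlen(kind):
    """Child-process body: invoke libc strlen with the chosen argument."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.strlen.restype = ctypes.c_size_t
    arg = {"valid": b"hello",
           "null": None,                          # NULL pointer
           "illegal": ctypes.c_void_p(1)}[kind]   # bogus non-NULL pointer
    libc.strlen(arg)

def robustness_test(kind):
    """Run one exceptional-value test in a child process, in the spirit
    of Ballista: a crash or hang is observed and recorded, not fatal."""
    p = mp.Process(target=call_strlen, args=(kind,))
    p.start()
    p.join(timeout=5)
    if p.is_alive():                  # task hang
        p.terminate()
        p.join()
        verdict = "hang (killed after timeout)"
    elif p.exitcode == 0:
        verdict = "returned normally"
    else:                             # e.g., -11 means SIGSEGV on Linux
        verdict = f"abnormal termination (exit code {p.exitcode})"
    print(f"strlen[{kind}]: {verdict}")

if __name__ == "__main__":
    for kind in ("valid", "null", "illegal"):
        robustness_test(kind)
```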
The main purpose of this study is to find possible relationships between the smoothness of hand function during laparoscopic ventral hernia (LVH) repair and psychomotor skills in a defined virtual reality (VR) environment. Thirty-four surgical residents (N = 34) performed two scenarios. First, participants were asked to perform a simulated LVH repair during which their hand movement was tracked using electromagnetic sensors. Subsequently, the smoothness of hand function was calculated for each participant's dominant and non-dominant hand. Then participants performed two modules in a defined VR environment, which assessed their force matching and target tracking capabilities. Smoother hand function during the LVH repair correlated positively with higher performance in the VR modules. Also, translational smoothness of the dominant hand was found to be the most informative smoothness metric in the LVH repair scenario. Therefore, defined force matching and target tracking assessments in VR can potentially be used as an indirect assessment of fine motor skills in LVH repair.
['Hossein Mohamadipanah', 'Chembian Parthiban', 'Jay N. Nathwani', 'Lakita Maulson', 'Shannon M. DiMarco', 'Carla M. Pugh']
Hand smoothness in laparoscopic surgery correlates to psychomotor skills in virtual reality
857,134
Interdependencies among infrastructure systems are now becoming commonplace, and present both opportunities and vulnerabilities. Initial attention was paid to functional interdependencies among infrastructure systems regardless of locational characteristics. Using electric power as a focal point, geographic interdependencies are evaluated, that is, outages that spread across several states rather than being confined to single states. The analysis evaluates the extent to which the two different groups have distinct characteristics. The characteristics examined include incident counts, number of customers lost, duration and energy unserved. Data are drawn from the Disturbance Analysis Working Group (DAWG) database, which is maintained by the North American Electric Reliability Council (NERC), and from the U.S. Energy Information Administration (EIA).
['Carlos E. Restrepo', 'Jeffrey S. Simonoff', 'Rae Zimmerman']
Unraveling Geographic Interdependencies in Electric Power Infrastructure
258,596
Modellbildung in der Informatik [Model building in computer science]
['Manfred Broy', 'Ralf Steinbrüggen']
Modellbildung in der Informatik [Model building in computer science]
768,969
This review article provides a critical discussion of empirical studies that deal with the use of online news sources in journalism. We evaluate how online sources have changed the journalist–source relationship regarding selection of sources as well as verification strategies. We also discuss how the use of online sources changes audience perceptions of news. The available research indicates that journalists have accepted online news sourcing techniques into their daily news production process, but that they hesitate to use information retrieved from social media as direct and quoted sources in news reporting. Studies show that there are differences in the use of online sources between media sectors, type of reporting, and country context. The literature also suggests that verification of online sources requires a new set of skills that journalists still struggle with. We propose a research agenda for future studies.
['Sophie Lecheler', 'Sanne Kruikemeier']
Re-evaluating journalistic routines in a digital age: A review of research on the use of online sources
627,488
One-Way Functions and (Im)perfect Obfuscation.
['Ilan Komargodski', 'Tal Moran', 'Moni Naor', 'Rafael Pass', 'Alon Rosen', 'Eylon Yogev']
One-Way Functions and (Im)perfect Obfuscation.
747,756
Cognitive radio is always based on spectrum sensing to cognize the physical characteristics of the wireless channel and carry out wireless communication resource scheduling. This kind of method makes it hard for cognitive radio to respond to a change in the radio environment in advance. Therefore, both the predictive ability and cognitive contents are limited. This paper proposes a new system called visual cognitive radio, which uses visual information to cognize the radio environment. Visual observation has a good predictive ability and abundant cognitive information, which enables the visual cognitive radio to react to the frequent change in the radio environment and make optimal configurations to the process of wireless communication. This paper presents a typical communication scene as an example to explain the advantages of visual cognitive radio, and also makes a preliminary analysis of its applications and challenges. Copyright © 2011 John Wiley & Sons, Ltd.
['Tian Liu', 'Shihai Shao', 'Daixiong Ye', 'Youxi Tang', 'Juan Zhou']
Visual Cognitive Radio
404,279
The BLEND-LINC Project on ‘Electronic Journals’ After Two Years
['B Shackel', 'D. J. Pullinger', 'Tim Maude', 'W. P. Dodd']
The BLEND-LINC Project on ‘Electronic Journals’ After Two Years
96,749
Deep learning has gained a lot of attention in recent years and is increasingly important for mining value in big data. However, to make deep learning practical for a wide range of applications in Tencent Inc., three requirements must be considered: 1) A large amount of computational power is required to train a practical model with tens of millions of parameters and billions of samples for products such as automatic speech recognition (ASR), and the number of parameters and amount of training data are still growing. 2) The capability of training larger models is necessary for better model quality. 3) Easy-to-use frameworks are valuable for running many experiments to perform model selection, such as finding an appropriate optimization algorithm and tuning optimal hyper-parameters. To accelerate training, support large models, and make experiments easier, we built Mariana, the Tencent deep learning platform, which utilizes GPU and CPU clusters to train models in parallel with three frameworks: 1) a multi-GPU data parallelism framework for deep neural networks (DNNs); 2) a multi-GPU model parallelism and data parallelism framework for deep convolutional neural networks (CNNs); 3) a CPU cluster framework for large scale DNNs. Mariana also provides built-in algorithms and features to facilitate experiments. Mariana has been in production use for more than one year, achieves state-of-the-art acceleration performance, and plays a key role in training models and improving quality for automatic speech recognition and image recognition in Tencent WeChat, a mobile social platform, and for Ad click-through rate prediction (pCTR) in Tencent QQ, an instant messaging platform, and Tencent Qzone, a social networking service.
['Yongqiang Zou', 'Xing Jin', 'Yi Li', 'Zhimao Guo', 'Eryu Wang', 'Bin Xiao']
Mariana: tencent deep learning platform and its applications
627,607
This paper proposes a neural network that stores and retrieves sparse patterns categorically, the patterns being random realizations of a sequence of biased (0,1) Bernoulli trials. The neural network, denoted as categorizing associative memory, consists of two modules: 1) an adaptive classifier (AC) module that categorizes input data; and 2) an associative memory (AM) module that stores input patterns in each category according to a Hebbian learning rule, after the AC module has stabilized its learning of that category. We show that during training of the AC module, the weights in the AC module belonging to a category converge to the probability of a "1" occurring in a pattern from that category. This fact is used to set the thresholds of the AM module optimally without requiring any a priori knowledge about the stored patterns.
['Ferdinand Peper', 'Mehdi N. Shirazi']
A categorizing associative memory using an adaptive classifier and sparse coding
461,259
Estimating the cross-channel gain from a small cell base station to an active macro user is critical in two-tier heterogeneous networks (HetNets). Conventional methods require the backhaul link, which is too complicated to be used in HetNets. In this paper, we propose a new method to simplify the system. We find that by measuring the signal-to-noise ratio (SNR) and recognizing the modulation level of macro cell's signal, the small cell base station can autonomously obtain the cross-channel gain without using the backhaul link. Simulation results indicate that the proposed method has about 2% estimation error, where the cross-channel gain is usually in the range of −70 dB to −110 dB.
['Xiaoning Huang', 'Liying Li', 'Guodong Zhao', 'Zhi Chen']
Estimating cross-channel gain without using backhaul link in two-tier heterogeneous networks
693,382
We propose a fault simulation technique for multiple faults based on a deductive fault simulation method. The main difficulty in the development of a technique for multiple fault simulation is handling a very large number of multiple faults. Conventional deductive simulation using linear lists to store fault sets is not appropriate for such large sets of multiple faults. Our approach to this problem is to represent sets of multiple faults by Boolean functions. We assign a distinct code word to each multiple fault and represent the fault by a minterm corresponding to its code word. Then, set operations in deductive simulation are replaced by logic operations. As an internal representation of fault sets, we use shared binary decision diagrams. The method of coding multiple faults is a key to efficient simulation. We propose a coding method called FNT coding. We also propose a fault dropping method, called prime fault dropping, which is used efficiently with our multiple fault simulation technique. Experimental results show that our technique effectively handles multiple faults at significantly lower computation cost.
['Noriyuki Takahashi', 'Nagisa Ishiura', 'Shuzo Yajima']
Fault simulation for multiple faults by Boolean function manipulation
153,823
POLLI: a handheld-based aid for non-native student presentations.
['Elizabeth M. Davis', 'Oscar Saz', 'Maxine Eskenazi']
POLLI: a handheld-based aid for non-native student presentations.
805,083
In this paper, image recognition is used to link a picture to relevant information. As mobile camera phones become more and more popular, applications for these phones will bring value to the phone and convenience to the users. We develop a system to perform audio introduction for our technology showroom using images taken by the built-in camera of a mobile phone. Visitors to the showroom can take a picture of a poster, a physical prototype or any materials using their mobile phone. An audio introduction in their favorite language will be delivered to the visitor. A few schemes are used to improve the accuracy and system reliability, which include discriminative feature selection based on feature distance; repeating queries if the result is not correct; user verification of delivered content; and employment of multiple classifiers. Experimental results show that the system performance is promising.
['Yiqun Li', 'Joo-Hwee Lim', 'Kart Leong Lim', 'Yilun You']
Showroom introduction using mobile phone based on scene image recognition
154,233
Distributed interactive applications such as multiplayer games will become increasingly popular in wide area distributed systems. To provide the response time desired by users despite high and unpredictable communication latency in such systems, shared objects will be replicated or cached by clients that participate in the applications. Any updates to the shared objects will have to be disseminated to clients that actually use the objects to maintain consistency. We address the problem of efficient and scalable update dissemination in an environment where client interests can change dynamically and the number of multicast channels available for update dissemination is limited. We present a heuristic based algorithm that can group objects and clients in a way that it handles limited bandwidth resources. We show that our algorithm can produce better results than several algorithms that have been developed in the past for update dissemination.
['Tianying Chang', 'George V. Popescu', 'Christopher F. Codella']
Scalable and efficient update dissemination for distributed interactive applications
97,510
We propose a maximum-likelihood (ML) approach for separating and estimating multiple synchronous digital signals arriving at an antenna array at a cell site. The spatial response of the array is assumed to be known imprecisely or unknown. We exploit the finite alphabet property of digital signals to simultaneously estimate the array response and the symbol sequence for each signal. Uniqueness of the estimates is established for BPSK signals. We introduce a signal detection technique based on the finite alphabet property that is different from a standard linear combiner. Computationally efficient algorithms for both block and recursive estimation of the signals are presented. This new approach is applicable to an unknown array geometry and propagation environment, which is particularly useful in wireless communication systems. Simulation results demonstrate its promising performance.
['S. Talwar', 'Mats Viberg', 'Arogyaswami Paulraj']
Blind separation of synchronous co-channel digital signals using an antenna array. I. Algorithms
351,842
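A sketch of the alternating estimation idea described in the record above, exploiting the finite alphabet of BPSK: alternately project a least-squares symbol estimate onto {-1, +1} and re-estimate the array response by least squares, following the iterative least-squares-with-projection pattern in spirit. Initialization, iteration count, and the demo dimensions are assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np

def ilsp_bpsk(X, d, iters=20, rng=None):
    """Separate d synchronous BPSK signals from array data X (m x N) by
    alternating: (1) least-squares symbol estimate projected onto the
    finite alphabet {-1, +1}; (2) least-squares update of the unknown
    array response."""
    rng = rng or np.random.default_rng()
    m, N = X.shape
    A = rng.normal(size=(m, d))               # initial array-response guess
    S = None
    for _ in range(iters):
        S = np.sign(np.linalg.pinv(A) @ X)    # projection onto the alphabet
        S[S == 0] = 1.0
        A = X @ np.linalg.pinv(S)             # least-squares update of A
    return A, S

# Toy demo: 4 antennas, 2 synchronous BPSK users, 200 symbols, mild noise.
rng = np.random.default_rng(3)
A_true = rng.normal(size=(4, 2))
S_true = rng.choice([-1.0, 1.0], size=(2, 200))
X = A_true @ S_true + 0.05 * rng.normal(size=(4, 200))
A_hat, S_hat = ilsp_bpsk(X, d=2, rng=rng)
# S_hat matches S_true up to per-user sign flips and permutation.
```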
Estimating Party-user Similarity in Voting Advice Applications using Hidden Markov Models.
['Marilena Agathokleous', 'Nicolas Tsapatsoulis', 'Constantinos Djouvas']
Estimating Party-user Similarity in Voting Advice Applications using Hidden Markov Models.
990,562
The personal stories that people write in their Internet weblogs include a substantial amount of information about the causal relationships between everyday events. In this paper we describe our efforts to use millions of these stories for automated commonsense causal reasoning. Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, we describe four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora. The top performing system in these experiments uses a simple co-occurrence statistic between words in the causal antecedent and consequent, calculated as the Pointwise Mutual Information between words in a corpus of millions of personal stories.
['Andrew S. Gordon', 'Cosmin Adrian Bejan', 'Kenji Sagae']
Commonsense causal reasoning using millions of personal stories
777,265
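The top-performing statistic described in the record above is straightforward to sketch: average the pointwise mutual information between antecedent and consequent words, using co-occurrence counts gathered from a story corpus. The count tables below are tiny hypothetical stand-ins for statistics over millions of weblog stories, not data from the paper.

```python
import math
from collections import Counter
from itertools import product

def pmi_causal_score(antecedent, consequent, unigrams, pairs, total):
    """Average pointwise mutual information between words in a candidate
    causal antecedent and words in its consequent. Count tables are
    assumed to be precomputed from a corpus of personal stories."""
    scores = []
    for a, c in product(antecedent.split(), consequent.split()):
        joint = pairs.get((a, c), 0)
        if joint and unigrams[a] and unigrams[c]:
            p_joint = joint / total
            p_a, p_c = unigrams[a] / total, unigrams[c] / total
            scores.append(math.log2(p_joint / (p_a * p_c)))
    return sum(scores) / len(scores) if scores else float("-inf")

# Tiny hypothetical counts for illustration.
unigrams = Counter({"rain": 50, "wet": 40, "sun": 60, "street": 30})
pairs = {("rain", "wet"): 25, ("sun", "wet"): 2}
print(pmi_causal_score("rain", "wet street", unigrams, pairs, total=10_000))
print(pmi_causal_score("sun", "wet street", unigrams, pairs, total=10_000))
```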
Open Digital Repositories - The Movement of Open Access in Opposition to the Oligopoly of Scientific Publishers
['Ligia Eliana Setenareski', 'Walter Tadahiro Shima', 'Marcos Sfair Sunye', 'Leticia Mara Peres']
Open Digital Repositories - The Movement of Open Access in Opposition to the Oligopoly of Scientific Publishers
729,076
The correctness of a routing protocol can be divided into two parts, a liveness property proof and a safety property proof. The former requires that route(s) should be discovered and data be transmitted successfully, while the latter requires that the discovered routes have some desired characters such as containing only benign nodes. While safety properties are relatively easier to prove, the proof of liveness properties is usually harder. This paper presents a liveness proof of a secure routing protocol, SRP (P. Papadimitratos and Z. J. Haas, 2002), in Isabelle/HOL (T. Nipkow et al., 2002). The liveness property proved says that if a data package needs to be sent, then it will be sent and then received, and finally, the sender will receive an acknowledgement sent back by the receiver. There are three main contributions in this paper. Firstly, a liveness property is proved for a secure routing protocol, and this has never been done before. Secondly, our validation model can deal with arbitrarily many nodes including malicious ones, and nodes are allowed to move randomly. Thirdly, a fail set is defined to restrict the attackers' actions, so that the safety properties used to prove the liveness property can be established. The paper explains why it is reasonable to prevent malicious nodes from performing the events in the fail set.
['Huabing Yang', 'Xingyuan Zhang', 'Yuanyuan Wang']
A correctness proof of the SRP protocol
176,169
Advances in social networks have led research to concentrate on analyzing people's behaviors in these networks. Accordingly, detecting communities and the interactions between their members is one of the most important issues addressed by these studies. After the proposition of new community detection methods in recent years, due to the extensive volume of the information generated in social networks and the increasing growth in the size of these networks, researchers became more interested in local, rather than global, detection methods. This paper proposes a heuristic approach to detecting communities by investigating local information. Comparing this method with state-of-the-art approaches, it is observed that the proposed approach outperforms the compared methods in detecting communities and their members and provides more accurate results.
['Mohammad Ali Tabarzad', 'Ali Hamzeh']
A heuristic local community detection method (HLCD)
863,397
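For illustration of local community detection as discussed in the record above, here is a generic greedy expansion from a seed node using a simple internal-versus-boundary edge ratio as the local quality measure. This is not the paper's HLCD heuristic, only a minimal example of the local (rather than global) detection style it improves upon.

```python
def local_community(adj, seed, max_size=20):
    """Grow a community from a seed node, adding at each step the boundary
    node that most improves the ratio of internal edges to internal plus
    boundary edges; stop when no addition improves the ratio."""
    community = {seed}

    def quality(c):
        internal = sum(1 for u in c for v in adj[u] if v in c) / 2
        boundary = sum(1 for u in c for v in adj[u] if v not in c)
        return internal / (internal + boundary) if internal + boundary else 0.0

    while len(community) < max_size:
        candidates = {v for u in community for v in adj[u]} - community
        if not candidates:
            break
        best = max(candidates, key=lambda v: quality(community | {v}))
        if quality(community | {best}) <= quality(community):
            break
        community.add(best)
    return community

# Two 4-cliques joined by one bridge edge (0-4); seeding at node 1
# recovers the left clique {0, 1, 2, 3}.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2},
       4: {0, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6}}
print(local_community(adj, seed=1))
```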
This monograph presents a research agenda for the application of dual-process theories to Information Systems (IS) research. It begins by clarifying exactly what a dual-process approach to attitude formation is, explaining how and why it can provide insights that other theoretical approaches cannot. Similarities and differences between the two dominant dual-process theories, the Elaboration Likelihood Model (ELM) and the Heuristic Systematic Model (HSM), are discussed. These concepts are illustrated in a review of 26 published dual-process-based IS studies. This body of research is categorized according to a logical schema based on the locus of attitude formation. We then distill from these studies those heuristic cues and moderating factors that are most relevant to understanding IS phenomena. Finally, we identify the following three IS phenomena as offering great potential for further applications of the dual-process approach: first, information filtering under information surplus; second, how credibility assessment interacts with system design features; and third, mediated knowledge work in situ. We hope that by so doing, we can prevent future fragmentation of this widely varied body of research, and avoid premature closure around only a subset of potential areas for dual-process-based IS research.
['Stephanie Watts']
Application of Dual-process Theory to Information Systems: Current and Future Research Directions
621,934
Scientific data offers some of the most interesting challenges in data integration today. Scientific fields evolve rapidly and accumulate masses of observational and experimental data that needs to be annotated, revised, interlinked, and made available to other scientists. From the perspective of the user, this can be a major headache as the data they seek may initially be spread across many databases in need of integration. Worse, even if users are given a solution that integrates the current state of the source databases, new data sources appear with new data items of interest to the user. Here we build upon recent ideas for creating integrated views over data sources using keyword search techniques, ranked answers, and user feedback [32] to investigate how to automatically discover when a new data source has content relevant to a user's view - in essence, performing automatic data integration for incoming data sets. The new architecture accommodates a variety of methods to discover related attributes, including label propagation algorithms from the machine learning community [2] and existing schema matchers [11]. The user may provide feedback on the suggested new results, helping the system repair any bad alignments or increase the cost of including a new source that is not useful. We evaluate our approach on actual bioinformatics schemas and data, using state-of-the-art schema matchers as components. We also discuss how our architecture can be adapted to more traditional settings with a mediated schema.
['Partha Pratim Talukdar', 'Zachary G. Ives', 'Fernando Pereira']
Automatically incorporating new sources in keyword search-based data integration
168,598
Traditionally, optimization for large-scale multi-level lot sizing (MLLS) problems has encountered a heavy computational burden. Scholars have also indicated that "whatever the optimal method chosen to solve the MLLS problem, standard optimization packages were still faced with computer memory constraints and computational limits that prevented them from solving realistic size cases". Therefore, the main purpose of this paper is to propose an optimal method to reduce the computer memory required while solving large-scale MLLS problems. The optimal method is designed to be implemented entirely on a database, because the demand for computer memory can be reduced significantly by means of database storage. An example is given to illustrate the proposed method, and computation capability is tested for MLLS problems with up to 1000 levels and 12 periods.
['Dong-Shang Chang', 'Fu-Chiao Chyr', 'Fu-Chiang Yang']
Incorporating a database approach into the large-scale multi-level lot sizing problem
234,605
Quick Convergence Algorithm of ACO Based on Convergence Grads Expectation
['Zhongming Yang', 'Yong Qin', 'Huang Han', 'Yunfu Jia']
Quick Convergence Algorithm of ACO Based on Convergence Grads Expectation
815,734
This work presents a method for modelling and analysing production systems in order to provide superior and efficient management of inventory by optimizing lead-times for excellent customer service and profit maximization. The proposed inventory control is based on manufacturing lead-time optimization, which is predominant relative to other aspects of the production cycle. The main goal of this work is to introduce a timed Petri net based approach to evaluate inventory in production systems by analyzing performance metrics such as throughput and cycle time. This work also presents PNTSys, the modelling tool which implements the proposed modelling methodology.
['Mauro Jose Carlos e Silva', 'Wellington João Silva', 'Paulo Romero Martins Maciel']
Modelling and analysis in production system: an approach based on Petri net
381,449
This paper proposes a second order accurate, Adams-Bashforth type, asynchronous integration scheme for numerically solving systems of ordinary differential equations. The method has three aspects: a local integration rule with third order truncation error, a third order accurate model of local influencers, and local time advance limits. The role of these elements in the scheme's operation is discussed and demonstrated. The time advance limit, which distinguishes this method from other discrete event methods for ODEs, is argued to be essential for constructing high order accuracy schemes.
['James J. Nutaro']
A Second Order Accurate Adams-Bashforth Type Discrete Event Integration Scheme
199,842
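For reference alongside the record above, the classical two-step Adams-Bashforth (AB2) rule that the proposed asynchronous, discrete-event scheme generalizes can be sketched in a few lines; the paper's actual contribution (local integration rules, third-order models of local influencers, and time advance limits) is not reproduced here.

```python
import math

def ab2(f, t0, y0, h, steps):
    """Classical two-step Adams-Bashforth (AB2):
    y_{n+1} = y_n + h * (3/2 * f(t_n, y_n) - 1/2 * f(t_{n-1}, y_{n-1})),
    bootstrapped with a single Euler step. This is the synchronous
    second-order baseline, not the paper's asynchronous variant."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev                  # Euler bootstrap for the first step
    t += h
    for _ in range(steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return t, y

# dy/dt = -y with y(0) = 1; at t = 1 the exact value is exp(-1).
t, y = ab2(lambda t, y: -y, 0.0, 1.0, h=0.01, steps=100)
print(t, y, math.exp(-1))  # y is approximately 0.3679
```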
Understanding the prognosis of older adults is a big challenge in healthcare research, especially since very little is known about how different comorbidities interact and influence the prognosis. Recently, an electronic healthcare records dataset of 24 patient attributes from Northwestern Memorial Hospital was used to develop predictive models for five year survival outcome. In this study we analyze the same data for discovering hotspots with respect to five year survival using association rule mining techniques. The goal here is to identify characteristics of patient segments where the five year survival fraction is significantly lower/higher than the survival fraction across the entire dataset. A two-stage post-processing procedure was used to identify non-redundant rules. The resulting rules conform with existing biomedical knowledge and provide interesting insights into the prognosis of older adults. Incorporating such information into clinical decision making could advance person-centered healthcare by encouraging optimal use of healthcare services for those patients most likely to benefit.
['Ankit Agrawal', 'Jason S. Mathias', 'David W. Baker', 'Alok N. Choudhary']
Identifying hotspots in five year survival electronic health records of older adults
967,242
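A minimal stand-in for the hotspot idea described in the record above: scan small attribute-value segments and keep those whose five-year survival fraction deviates markedly from the overall fraction. Real association rule mining with support/confidence thresholds and the paper's two-stage redundancy filtering are more involved; column names, thresholds, and the synthetic data here are illustrative assumptions.

```python
import pandas as pd
from itertools import combinations

def find_hotspots(df, outcome="survived_5yr", min_support=30, min_gap=0.15):
    """Screen 1- and 2-attribute patient segments whose outcome fraction
    deviates from the overall fraction by at least min_gap, keeping only
    segments with at least min_support patients."""
    base = df[outcome].mean()
    attrs = [c for c in df.columns if c != outcome]
    hotspots = []
    for r in (1, 2):
        for cols in combinations(attrs, r):
            for vals, seg in df.groupby(list(cols)):
                if not isinstance(vals, tuple):
                    vals = (vals,)          # normalize single-key groups
                frac = seg[outcome].mean()
                if len(seg) >= min_support and abs(frac - base) >= min_gap:
                    hotspots.append((dict(zip(cols, vals)), len(seg), frac))
    return sorted(hotspots, key=lambda h: abs(h[2] - base), reverse=True)

# Synthetic demo data with hypothetical attributes.
df = pd.DataFrame({
    "diabetes": [0, 1] * 200,
    "age_over_80": [0, 0, 1, 1] * 100,
    "survived_5yr": [1, 0, 1, 0] * 100,
})
for segment, n, frac in find_hotspots(df)[:3]:
    print(segment, n, round(frac, 2))
```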
We describe the design and implementation of high performance numerical software in Java. Our primary goals are to characterize the performance of object-oriented numerical software written in Java and to investigate whether Java is a suitable language for such endeavors. We have implemented JLAPACK, a subset of the LAPACK library in Java. LAPACK is a high-performance Fortran 77 library used to solve common linear algebra problems. JLAPACK is an object-oriented library using encapsulation, inheritance, and exception handling. It performs within a factor of four of the optimized Fortran version for certain platforms and test cases. When used with the native BLAS library, JLAPACK performs comparably with the Fortran version using the native BLAS library. We conclude that high-performance numerical software could be written in Java if a few concerns about language features and compilation strategies are addressed.
['Brian Blount', 'Siddhartha Chatterjee']
An Evaluation of Java for Numerical Computing
253,019
Rapid advances in computation, combined with the latest advances in computer graphics simulations, have facilitated the development of vision systems and training them in virtual environments. One major stumbling block is in certification of the designs and tuned parameters of these systems to work in the real world. In this paper, we begin to explore the fundamental question: which type of information transfer is more analogous to the real world? Inspired by the performance characterization methodology outlined in the 90's, we note that insights derived from simulations can be qualitative or quantitative depending on the degree of the fidelity of models used in simulations and the nature of the questions posed by the experimenter. We adapt the methodology in the context of current graphics simulation tools for modeling data generation processes and for systematic performance characterization and trade-off analysis for vision system design, leading to qualitative and quantitative insights. Concretely, we examine invariance assumptions used in vision algorithms for video surveillance settings as a case study and assess the degree to which those invariance assumptions deviate as a function of contextual variables on both graphics simulations and real data. As computer graphics rendering quality improves, we believe teasing apart the degree to which model assumptions are valid via systematic graphics simulation can be a significant aid to more principled ways of approaching vision system design and performance modeling.
['V S R Veeravasarapu', 'Rudra Narayan Hota', 'Constantin A. Rothkopf', 'Ramesh Visvanathan']
Model Validation for Vision Systems via Graphics Simulation
626,410
We consider the uplink of a direct-sequence code-division multiple-access system and we assume that the base station is endowed with a linear antenna array. Transmission takes place over a multipath channel and the goal is to estimate the channel parameters (path gains and delays) and the directions of arrival of the signal echoes from a user entering the network. We propose an estimator that operates in an iterative fashion and exploits knowledge of the transmitted symbols (training sequence). Compared to other existing schemes, it is simpler to implement as it reduces a complicated multidimensional optimization problem to a sequence of one-dimensional searches. Computer simulations indicate that the proposed scheme is useful even in applications over rapidly varying channels where the training sequence must be short compared with the channel decorrelation time.
["Antonio A. D'Amico", 'Umberto Mengali', 'Michele Morelli']
DOA and channel parameter estimation for wideband CDMA systems
526,115
Potential of Heterogeneity in Collective Behaviors: A Case Study on Heterogeneous Swarms
['Daniela Kengyel', 'Heiko Hamann', 'Payam Zahadat', 'Gerald Radspieler', 'Franz Wotawa', 'Thomas Schmickl']
Potential of Heterogeneity in Collective Behaviors: A Case Study on Heterogeneous Swarms
637,528
The goal of semantic measures is to compare pairs of concepts, words, sentences or named entities. Their categorization depends on what they measure. If a measure considers only taxonomy relationships, it is a similarity measure; if it considers all types of relationships, it is a relatedness measure. The evaluation process of these measures usually relies on semantic gold standards. These datasets, with several pairs of words and a rating assigned by persons, are used to assess how well a semantic measure performs. There are a few frameworks that provide tools to compute and analyze several well-known measures. This paper presents a novel tool - SMComp - a testbed designed for path-based semantic measures. In its current state, it is a domain-specific tool using three different versions of WordNet. SMComp has two views: one to compute semantic measures of a pair of words and another to assess a semantic measure using a dataset. On the first view, it offers several measures described in the literature as well as the possibility of creating a new measure by introducing Java code snippets in the GUI. The other view offers a large set of semantic benchmarks to use in the assessment process. It also offers the possibility of uploading a custom dataset to be used in the assessment.
['Teresa Costa', 'José Paulo Leal']
Comparing and Benchmarking Semantic Measures Using SMComp
966,572
A 16 nm all-digital auto-calibrating adaptive clock distribution (ACD) enhances processor core performance and energy efficiency by mitigating the adverse effects of high-frequency supply voltage ($V_{DD}$) droops. The ACD integrates a tunable-length delay (TLD) prior to the global clock distribution to prolong the clock-data delay compensation in core paths for multiple cycles after a droop occurs, providing sufficient response time for clock frequency ($F_{CLK}$) adaptation. A dynamic variation monitor (DVM) detects the onset of the droop and interfaces with an adaptive control unit and clock divider to reduce $F_{CLK}$ by half at the TLD output to avoid path timing-margin failures. An auto-calibration circuit enables in-field, low-latency tuning of the DVM to accurately detect $V_{DD}$ droops across a wide range of operating conditions. The auto-calibration circuit maximizes the $V_{DD}$-droop tolerance of the ACD while eliminating the overhead of tester calibration. From 109 die measurements across a wafer, the auto-calibrating ACD recovers a minimum of 90% of the throughput loss due to a 10% $V_{DD}$ droop in a conventional design for 100% of the dies. ACD measurements demonstrate simultaneous throughput gains and energy reductions ranging from 13% and 5% at 0.9 V to 30% and 13% at 0.6 V, respectively.
['Keith Alan Bowman', 'Sarthak Raina', 'J. Todd Bridges', 'Daniel Yingling', 'Hoan H. Nguyen', 'Brad Appel', 'Yesh Kolla', 'Jihoon Jeong', 'Francois Ibrahim Atallah', 'David Joseph Winston Hansquine']
A 16 nm All-Digital Auto-Calibrating Adaptive Clock Distribution for Supply Voltage Droop Tolerance Across a Wide Operating Range
589,954
This year we took part in the genomic information retrieval and information extraction tasks, as well as the named page and topic distillation searches. In carrying out the last two tasks, we made use of link anchor information and document content in order to construct Web page representatives. This type of document representation uses multi-vectors in order to highlight the importance of both link anchor information and document content.
['Jacques Savoy', 'Yves Rasolofo', 'Laura Perret']
Report on the TREC-2003 Experiment: Genomic and Web Searches
83,266
Rough set theory provides a theoretical framework for classification learning in data mining and knowledge discovery. As an important application of rough sets, attribute reduction, also called feature selection, aims to reduce the redundant attributes in a given decision system while preserving a particular classification property, e.g., information entropy or knowledge granularity. In view of the dynamic changes of the object set in a decision system, in this paper we focus on a knowledge granularity-based attribute reduction approach for the case where some objects vary dynamically. We first introduce incremental mechanisms to compute the new knowledge granularity. Then, the corresponding incremental algorithms for attribute reduction are developed for objects being added to and deleted from the decision system. Experiments conducted on different data sets from UCI show that the proposed incremental algorithm achieves better performance than the non-incremental counterpart and an entropy-based incremental algorithm.
['Yunge Jing', 'Tianrui Li', 'Chuan Luo', 'Shi-Jinn Horng', 'Guoyin Wang', 'Zeng Yu']
An incremental approach for attribute reduction based on knowledge granularity
720,799
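To make the preserved quantity concrete, here is a minimal Python sketch of knowledge granularity for the partition induced by an attribute subset, under the common definition $GK_B(U) = \sum_i |X_i|^2 / |U|^2$ over the equivalence classes $X_i$ of $IND(B)$. This is only the base (non-incremental) computation, not the paper's incremental mechanisms, and the toy decision table is hypothetical.

```python
from collections import defaultdict

def knowledge_granularity(table, attrs):
    """table: list of dict rows; attrs: attribute names defining IND(B)."""
    classes = defaultdict(int)
    for row in table:
        classes[tuple(row[a] for a in attrs)] += 1  # equivalence class sizes
    n = len(table)
    return sum(c * c for c in classes.values()) / (n * n)

# Hypothetical decision table with four objects.
U = [{'a': 0, 'b': 1}, {'a': 0, 'b': 1}, {'a': 1, 'b': 0}, {'a': 1, 'b': 1}]
print(knowledge_granularity(U, ['a']))       # 0.5   (coarser partition)
print(knowledge_granularity(U, ['a', 'b']))  # 0.375 (finer partition)
```

A finer partition yields a smaller granularity, which is why the quantity can serve as the classification property preserved by a reduct.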
In this paper we present MUST, a multiple-stem analysis algorithm for identifying untestable faults in sequential circuits. In general, processing untestable faults is the most time-consuming part of a sequential ATPG. MUST extends the scope of the single-stem analysis done in the FIRES algorithm by identifying additional untestable faults that cannot be found by single-stem analysis. While its computational requirements are greater than those of FIRES, the run-time of MUST remains significantly lower than that of sequential ATPG. We show that the faults identified by MUST are difficult targets for conventional ATPG programs, which can benefit from using MUST as a preprocessor and excluding the untestable faults identified by multiple-stem analysis from the target faults processed by ATPG. We report experimental results obtained with our prototype implementation of MUST on ISCAS benchmarks and other circuits.
['Qiang Peng', 'Miron Abramovici', 'Jacob Savir']
MUST: multiple-stem analysis for identifying sequentially untestable faults
179,542
Most item recommendation systems nowadays are implemented by applying machine learning algorithms with user surveys as ground truth. In order to get satisfactory results from machine learning, massive amounts of user surveys are required. But in reality obtaining a large number of user surveys is not easy. Additionally, in many cases the opinions are subjective and personal. Hence user surveys cannot tell all the aspects of the truth. However, in this paper, we try to generate ground truth automatically instead of doing user surveys. To prove that our approach is useful, we build our experiment using Flickr to recommend tags that can represent the users' interested topics. First, when we build training and testing models by user surveys, we note that the extracted tags are inclined to be too ordinary to be recommended as "Flickr-aware" terms that are more photographic or Flickr-friendly. To capture real representative tags for users, we apply LSA in a novel way to build ground truth for our training model. In order to verify our scheme, we define Flickr-aware terms to measure the extracted representative tags. Our experiments show that our proposed scheme with the automatically generated ground truth and measurements visibly improve the recommendation results.
['Xian Chen', 'Hyoseop Shin', 'Minsoo Lee']
LSA as ground truth for recommending flickr-aware representative tags
585,802
A collocation method is developed for the (truncated) POD of a set of snapshots. In other words, POD computations are performed using only a set of collocation points, whose number is comparable to the number of retained modes, in a similar fashion as in collocation spectral methods. Intending to rely on simple ideas which, moreover, are consistent with the essence of POD, collocation points are computed via the LU decomposition with pivoting of the snapshot matrix. The new method is illustrated in simple applications in which POD is used as a data-processing method. The performance of the method is tested in the computationally efficient construction of reduced order models based on POD plus Galerkin projection for the complex Ginzburg–Landau equation in one and two space dimensions.
['María-Luisa Rapún', 'Filippo Terragni', 'José M. Vega']
LUPOD: Collocation in POD via LU decomposition
983,458
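As a rough illustration of the point-selection idea (not the authors' exact algorithm or weighting), the sketch below uses SciPy's LU factorization with partial pivoting to pick collocation rows of a snapshot matrix and then computes a truncated POD restricted to those rows; the matrix sizes are hypothetical.

```python
import numpy as np
from scipy.linalg import lu

def lu_collocation_points(S, k):
    """First k pivot rows of the factorization S = P @ L @ U."""
    P, _, _ = lu(S)                   # P is an M x M permutation matrix
    perm = P.argmax(axis=0)           # row of S chosen at each pivot step
    return perm[:k]

S = np.random.rand(500, 40)           # hypothetical snapshots: 500 points x 40
pts = lu_collocation_points(S, 10)    # indices of 10 collocation points
Upod, s, Vt = np.linalg.svd(S[pts, :], full_matrices=False)  # POD on those rows
```

Partial pivoting greedily favors rows carrying large entries, which is why the pivot rows are natural candidates for collocation points.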
Approximating a matrix by a small subset of its columns is a known problem in numerical linear algebra. Algorithms that address this problem have been used in areas which include, among others, sparse approximation, unsupervised feature selection, data mining, and knowledge representation. Such algorithms have been investigated since the 1960s, with recent results that use randomization. The problem is believed to be NP-hard, and to the best of our knowledge there are no previously published algorithms aimed at computing optimal solutions. We show how to model the problem as a graph search, and propose a heuristic based on eigenvalues of related matrices. Applying the A* search strategy with this heuristic is guaranteed to find the optimal solution. Experimental results on common datasets show that the proposed algorithm can effectively select columns from moderate-size matrices, typically improving the run time of exhaustive search by orders of magnitude.
['Hiromasa Arai', 'Crystal Maung', 'Haim Schweitzer']
Optimal column subset selection by a-star search
633,882
A key problem facing current computing systems is the inability to autonomously manage security vulnerabilities as well as more mundane errors. Since the design of computer architectures is usually performance-driven, hardware often lacks primitives for tasks in which raw speed is not the primary goal. There is little architectural support for monitoring execution at the instruction level, and no mechanisms for assisting an automated response. This paper advocates modifying general-purpose processors to provide both program supervision and automatic response via a policy-driven monitoring mechanism and instruction stream rewriting, respectively. These capabilities form the basis of speculative virtual verification (SVV). SVV is a model for the speculative execution of code based on high-level security and safety constraints. We introduce architectural enhancements to support this framework, including the ability to supply an automated response by rewriting the instruction stream. Finally, given the novelty of the SVV approach to executing software, we briefly consider some important challenges for SVV-based systems.
['Michael E. Locasto', 'Stelios Sidiroglou', 'Angelos D. Keromytis']
Speculative virtual verification: policy-constrained speculative execution
126,457
Recently in Japan, mental health care has become a very important issue, because many people suffer from mental problems while only a few specialists and researchers are available to deal with them. Because mental health care specialists are so few, it is very important to decrease their travel time. It is also very important to see facial expressions and talk with people for mental health care education, aftercare, and counseling, so both video and voice are needed. However, conventional TV conference systems are not easy to use for general people, mental health care specialists, and their students, because they are not computer specialists. In order to realize remote mental health care education, we have developed a WWW-based conference system. Our system supports communication between mental health care specialists and their students, as well as between specialists, patients, and patients' families. In this paper, we present a performance evaluation of the proposed system. The experimental results show that the load average is affected by the frame rate, frame size, and number of clients, while throughput is affected by the frame rate and number of clients but not by the frame size. The proposed system has sufficient performance in a LAN environment, but the PCs need high CPU power in order to send, receive, and display high-quality live video.
['S. Baba', 'Kaoru Sugita', 'Norihiko Uchida', 'G. De Marco', 'Leonard Barolli', 'A. Durresi']
Performance Evaluation of WWW-Based Conference System
229,996
Background: Native structures of proteins are formed essentially due to the combined effects of local and distant (in the sense of sequence) interactions among residues. This interaction information is, explicitly or implicitly, encoded into the scoring function in protein structure prediction approaches: threading approaches usually measure an alignment in terms of how well a sequence adopts an existing structure, while the energy functions in Ab Initio methods are designed to measure how likely a conformation is to be near-native. Encouraging progress has been observed in structure refinement, where knowledge-based or physics-based potentials are designed to capture distant interactions. Thus, it is interesting to investigate whether the distant interaction information captured by an Ab Initio energy function can be used to improve threading, especially for weakly/distantly homologous templates.
['Mingfu Shao', 'Sheng Wang', 'Chao Wang', 'Xiongying Yuan', 'Shuai Cheng Li', 'Wei-Mou Zheng', 'Dongbo Bu']
Incorporating Ab Initio energy into threading approaches for protein structure prediction
461,583
This paper addresses the problem of fitting mixture-model-based clustering to imprecise data using the CEM algorithm. Imprecise data are modelled by multivariate uncertainty zones, which constitute a generalization of multivariate interval-valued data. To estimate the mixture model parameters and the partition simultaneously from uncertainty zone data, we propose an adapted version of the CEM algorithm. The paper concludes with a brief description of an application of this approach to flaw diagnosis on pressure equipment using acoustic emission, in the context of imprecise bivariate measurements of the localization of acoustic emission signals.
['Hani Hamdan', 'Gérard Govaert']
CEM algorithm for imprecise data. Application to flaw diagnosis using acoustic emission
19,556
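For orientation, here is a minimal CEM (classification EM) loop for crisp data with equal-weight spherical Gaussian components, in which the C step reduces to a nearest-mean assignment; the paper's contribution, the adaptation to uncertainty-zone data, is not reproduced here, and the demo data are hypothetical.

```python
import numpy as np

def cem(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), k, replace=False)]   # initial component means
    for _ in range(n_iter):
        # E + C steps: hard MAP assignment; with equal spherical components
        # and equal priors this is a nearest-mean rule.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        z = d2.argmin(1)
        # M step: re-estimate means from the hard partition (keep the old
        # mean if a component loses all of its points).
        mu = np.array([X[z == j].mean(0) if np.any(z == j) else mu[j]
                       for j in range(k)])
    return z, mu

# Toy demo on hypothetical 2-D data with two well-separated clusters.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = cem(X, 2)
```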
Applications composed of multiple parallel libraries perform poorly when those libraries interfere with one another by obliviously using the same physical cores, leading to destructive resource oversubscription. This paper presents the design and implementation of Lithe, a low-level substrate that provides the basic primitives and a standard interface for composing parallel codes efficiently. Lithe can be inserted underneath the runtimes of legacy parallel libraries to provide bolt-on composability without needing to change existing application code. Lithe can also serve as the foundation for building new parallel abstractions and libraries that automatically interoperate with one another. In this paper, we show that versions of Threading Building Blocks (TBB) and OpenMP perform competitively with their original implementations when ported to Lithe. Furthermore, for two applications composed of multiple parallel libraries, we show that leveraging our substrate outperforms their original, even expertly tuned, implementations.
['Heidi Pan', 'Benjamin Hindman', 'Krste Asanovic']
Composing parallel software efficiently with lithe
384,097
This paper reports on the fourth Information Interaction in Context (IIiX) Symposium held in Nijmegen, the Netherlands in August 2012. It featured a lively program with 3 keynotes, 25 long papers with oral presentation, 20 short papers with poster presentation, a doctoral consortium, a workshop on human-computer information retrieval, and was followed by a summer school on information foraging. IIiX'12 is an ACM and ACM SIGIR in cooperation conference with its proceedings published by the ACM.
['Norbert Fuhr', 'Jaap Kamps', 'Wessel Kraaij', 'Suzan Verberne']
Report on IIiX'12: the fourth information interaction in context symposium
487,078
This paper compares the areas of 6T and 8T SRAM cells in a dual-Vdd scheme and a dynamic voltage scaling (DVS) scheme. In the dual-Vdd scheme, we predict that the area of the 6T cell will remain smaller than that of the 8T cell over future technology nodes all the way down to 32 nm. In contrast, in the DVS scheme, the 8T cell will become superior to the 6T cell after the 32-nm node in terms of area.
['Yasuhiro Morita', 'Hidehiro Fujiwara', 'Hiroki Noguchi', 'Yusuke Iguchi', 'Koji Nii', 'Hiroshi Kawaguchi', 'Masahiko Yoshimoto']
Area Comparison between 6T and 8T SRAM Cells in Dual-Vdd Scheme and DVS Scheme
2,539
Traffic engineering, particularly routing optimization, is one of the most important aspects to take into account when providing QoS in next generation networks (NGN). The problem of weight setting with conventional link-state routing protocols for routing optimization has been studied by a few authors. To solve this problem for big networks, artificial intelligence heuristics have been used, specifically genetic algorithms (GAs). Some of the proposals incorporate local search procedures in order to improve the GA results, in the so-called hybrid genetic algorithm (HGA), or memetic algorithm. This paper presents a novel comparative analysis of the main hybrid genetic algorithm (HGA) proposals, as well as a simulation-based comparison with other algorithms for the same problem. From the analysis of the results, one of the HGA algorithms was chosen and implemented over a real testbed with commercial routers, achieving successful OSPFv3 routing optimization.
['Alex Vallejo', 'Agustin Zaballos', 'David Vernet', 'David Cutiller', 'Jordi Dalmau']
Implementation of Traffic Engineering in NGNs Using Hybrid Genetic Algorithms
110,295
An optical orthogonal signature pattern code (OOSPC) is a collection of (0,1) two-dimensional (2-D) patterns with good correlation properties (i.e., high autocorrelation peaks with low sidelobes, and low cross-correlation functions). Such codes find applications, for example, in transmitting and accessing images in parallel in "multicore-fiber" code-division multiple-access (CDMA) networks. Up to now, all work on OOSPCs has been based on the assumption that at most one pulse per column, or one pulse per row and column, is allowed in each two-dimensional pattern. However, this restriction may not be required in such multiple-access networks if timing information can be extracted by other means, rather than from the autocorrelation function. A new class of OOSPCs is constructed without this restriction. The relationships between two-dimensional binary discrete auto- and cross-correlation arrays and their corresponding "sets" for OOSPCs are first developed. In addition, new bounds on the size of this special class of OOSPCs are derived. Afterwards, four algebraic techniques for constructing these new codes are investigated. Among these constructions, some achieve the upper bounds with equality and are thus optimal. Finally, the codes generated by some of the constructions satisfy the restriction of at most one pulse per row or column and hence can be used in applications requiring, for example, frequency-hopping patterns.
['Guu-Chang Yang', 'Wing C. Kwong']
Two-dimensional spatial signature patterns
230,249
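The correlation properties in question can be checked numerically. Below is a small sketch, with a hypothetical toy pattern, that computes all periodic 2-D correlations of binary patterns via the FFT; the autocorrelation peak equals the pattern weight, and the remaining entries are the sidelobes an OOSPC constrains.

```python
import numpy as np

def corr2d(A, B):
    """All cyclic-shift correlations: result[dx, dy] = sum A[x,y]*B[x+dx, y+dy]."""
    FA, FB = np.fft.fft2(A), np.fft.fft2(B)
    return np.real(np.fft.ifft2(np.conj(FA) * FB)).round().astype(int)

A = np.zeros((7, 7), dtype=int)
A[[0, 2, 3], [1, 5, 2]] = 1          # toy (0,1) pattern of weight 3
ac = corr2d(A, A)
print(ac[0, 0])                      # autocorrelation peak = pattern weight (3)
print(ac.flatten()[1:].max())        # maximum off-peak sidelobe
```

Cross-correlations between two distinct patterns are obtained the same way with `corr2d(A, B)`.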
To improve software productivity, my colleagues and I pointed out in a previous paper that the state transition method is suitable as a functional description and implementation method. In that paper we proposed the distributed state transition method, which was derived from the model of distributed sequential machines, and showed results obtained by applying it to an electronic switching system. In this paper, to enlarge the application range of the state transition method, I consider applying it to a PC management database system and propose a new integrated design method, derived from the single sequential machine model, as being suitable. First, I propose the "group-noted state transition method" by introducing the sequential machine model. The design method proceeds as follows. Initially, the functions of the outer world of the system are classified into several groups, and management jobs are then divided into the same number of groups corresponding to them. The database information can be classified in the same way and can support the design of the database system. After that, the state transition description is defined from the input/output information to/from the groups of the outer world. Furthermore, a notation in which the status group is added to the status name is proposed, classifying statuses using the concept of the management cycle. I also propose a new method for designing the software structure in which state transition diagram information is stored in the database system by converting it to table format. The results clarify that it is possible to build an integrated management database system using this state transition method, and how to build a more advanced system through cooperation with a workflow system.
['Kunio Hiyama']
The group‐noted state transition method applied to a PC management system
229,834
Weather radar signals at high frequencies such as Ku-band are attenuated along the propagation path through rainfall. Hence, reflectivity and differential reflectivity measurements at such frequencies should be corrected for attenuation before any quantitative applications such as the retrieval of raindrop size distribution (DSD), which is a fundamental descriptor of rainfall microphysics. This paper presents the attenuation correction algorithm implemented for NASA Dual-frequency Dual-polarized Doppler Radar (D3R) Ku-band observations. The dual-polarization based correction performance is evaluated with the self-consistency criterion. In addition, the DSD parameters are estimated with the attenuation corrected observations, and the preliminary results are shown.
['Haonan Chen', 'V. Chandrasekar', 'Sanghun Lim', 'Robert M. Beauchamp']
Attenuation correction and raindrop size distribution with Dual-polarization Radar measurements at Ku-band
934,410
In this paper, we propose and study a randomized Boolean gossiping process, where nodes taking values in $\{0,1\}$ pairwise meet over an underlying graph in a random manner at each time step, and the two interacting nodes update their states by random logic rules drawn from the set $\{\mathrm{AND}, \mathrm{OR}\}$. This model is a generalization of the classical gossiping process and serves as a simplified probabilistic Boolean network. First of all, using standard Markov chain theory, we show that the network state almost surely converges asymptotically to a consensus. We also establish a characterization of the distribution of this limit for large-scale networks with all-to-all communication, in light of mean-field approximation methods. Next, we study how the number of communication classes in the network state space relates to the topology of the underlying interaction graph and obtain a full characterization: a line interaction graph with $n$ nodes generates $2n$ communication classes; a cycle graph with $2n$ nodes generates $n+3$ communication classes, and a cycle graph with $2n+1$ nodes generates $n+2$ communication classes; for any connected graph which is not a line or a cycle, the number of communication classes is either five or three, where three is achieved if and only if the graph contains an odd cycle.
['Bo Li', 'Hongsheng Qi', 'Guodong Shi']
Randomized Boolean Gossiping
609,658
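A minimal simulation sketch of the process described above (graph, initial values, and step budget are hypothetical): at each step a uniformly random edge of the interaction graph is activated, and both endpoints adopt the AND or OR of their values, chosen at random.

```python
import random

def gossip(values, edges, steps=10_000, p_and=0.5, seed=1):
    rng = random.Random(seed)
    v = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)                 # a random pairwise meeting
        out = (v[i] and v[j]) if rng.random() < p_and else (v[i] or v[j])
        v[i] = v[j] = out                        # both nodes adopt the result
        if all(x == v[0] for x in v):            # consensus reached
            break
    return v

# Cycle graph with 6 nodes (toy example).
edges = [(k, (k + 1) % 6) for k in range(6)]
print(gossip([0, 1, 0, 1, 1, 0], edges))
```

Running this repeatedly with different seeds gives an empirical view of the consensus distribution the paper characterizes analytically.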
People often have insufficient knowledge about database structure and contents, and thus frequently obtain empty answers or have to reformulate their queries several times. This paper describes an approach that extends the ranges of the search criteria in order to provide approximate answers to the user. This approach takes user preferences into account. It is based on a knowledge base (KB) related to the application domain. The initial SQL query is transformed into a flexible query based on the KB and fuzzy set theory. This flexible query is then rewritten into a Boolean query called the envelope. The envelope is evaluated with a traditional DBMS to take advantage of its full optimization capabilities. The tuples satisfying the envelope are finally ranked according to their satisfaction degree. Unlike other approaches, this one requires modifying neither the SQL language nor the database engine.
['Narjes Hachani', 'Habib Ounelli']
A Knowledge-Based Approach For Database Flexible Querying
192,840
Modeling and Analysis of Collaborative Consumption in Peer-to-Peer Car Sharing
['Saif Benjaafar', 'Guangwen Kong', 'Xiang Li']
Modeling and Analysis of Collaborative Consumption in Peer-to-Peer Car Sharing
603,605
This paper proposes a framework for acquiring a low-level behavior of a soccer agent. The task of a learning agent is to mimic the behavior of a target agent with a well-trained behavior. Neural networks are used to represent the behavior of the target agent. In order to obtain a set of training data, we convert game logs of the target agent into a set of input-output pairs for the learning of the neural networks. We consider two implementations of neural networks. The first implementation maps the situation of a dribbling agent at a certain time step to an action to be conducted at the next time step. In the second implementation, three neural networks are used for the three possible actions: turn, dash, and kick. Each neural network outputs the activation degree of the corresponding action at the next time step. We show the effectiveness of the proposed framework through computational experiments.
['Tomoharu Nakashima', 'Hisao Ishibuchi']
Mimicking Dribble Trajectories by Neural Networks for RoboCup Soccer Simulation
134,510
In this paper, a backward compatible header error protection mechanism is described. It consists of adding a dedicated marker segment to a JPEG 2000 codestream that contains the error correction data generated by a block error correction code (e.g., a Reed-Solomon code). This mechanism leaves the original data intact, hence providing backward compatibility with the already standardised JPEG 2000. Neither side information from a higher level nor extra signalling encapsulation is needed, as the required information is directly embedded in the codestream and also protected. Finally, it is shown how this mechanism can be used to perform unequal error protection of the whole JPEG 2000 stream.
['Didier Nicholson', 'Catherine Lamy-Bergot', 'Xavier Naturel', 'Charly Poulliat']
JPEG 2000 backward compatible error protection with Reed-Solomon codes
470,887
Nonlinear embedding algorithms such as stochastic neighbor embedding do dimensionality reduction by optimizing an objective function involving similarities between pairs of input patterns. The result is a low-dimensional projection of each input pattern. A common way to define an out-of-sample mapping is to optimize the objective directly over a parametric mapping of the inputs, such as a neural net. This can be done using the chain rule and a nonlinear optimizer, but is very slow, because the objective involves a quadratic number of terms each dependent on the entire mapping's parameters. Using the method of auxiliary coordinates, we derive a training algorithm that works by alternating steps that train an auxiliary embedding with steps that train the mapping. This has two advantages: 1) The algorithm is universal in that a specific learning algorithm for any choice of embedding and mapping can be constructed by simply reusing existing algorithms for the embedding and for the mapping. A user can then try possible mappings and embeddings with less effort. 2) The algorithm is fast, and it can reuse N-body methods developed for nonlinear embeddings, yielding linear-time iterations.
['Miguel Á. Carreira-Perpiñán', 'Max Vladymyrov']
A fast, universal algorithm to learn parametric nonlinear embeddings
565,391
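A schematic sketch of the alternation described above; the function names are placeholders, not the authors' API. Introducing auxiliary coordinates Z turns the nested objective into two familiar subproblems solved in turn under a continuation schedule on the penalty parameter mu.

```python
# Method of auxiliary coordinates (MAC), schematically:
#   Z-step: min_Z  E(Z) + (mu/2) * ||Z - F(X)||^2   (an embedding problem)
#   F-step: min_F  ||Z - F(X)||^2                   (a regression problem)
# `embed_step` and `fit_mapping` are placeholders for any existing
# embedding optimizer and regressor, which is what makes the scheme
# universal: each subproblem reuses its own standard algorithm.
def mac(X, Z0, embed_step, fit_mapping, mus=(1.0, 10.0, 100.0)):
    Z = Z0
    F = fit_mapping(X, Z)              # initial mapping fit to the embedding
    for mu in mus:                     # continuation schedule on mu
        Z = embed_step(X, Z, F, mu)    # reuses the embedding's own solver
        F = fit_mapping(X, Z)          # any regressor (e.g., a neural net)
    return F
```

As mu grows, Z is pulled toward F(X), so the final mapping approximately optimizes the original embedding objective while remaining a parametric out-of-sample function.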
This research modularizes the structure of utilities and develops a system for electronically following up utility activities at the city scale. The GIS operational platform is the base for managing the infrastructure development components, with systems interoperability for the available city infrastructure-related systems. The research develops a Service Oriented Architecture (SOA) in order to geospatially manage the available city infrastructure networks. The focus is on the available utility networks, in order to develop comprehensive, common, standardized geospatial data models. The construction operations for utility networks such as electricity, water, gas, district cooling, irrigation, sewerage, and communication networks need to be fully monitored on a daily basis in order to utilize the huge resources and manpower involved, where the SOA adds significant value. These resources are allocated only to convey the operational status to the construction and execution sections that perform the required maintenance. A system serving decision makers in following up these activities, with a proper geographical representation, will definitely reduce the long-term operational cost.
['Mahmoud Al-Hader', 'Ahmad Rodzi', 'Abdul Rashid B. Mohamed Sharif', 'Noordin Ahmad']
SOA of Smart City Geospatial Management
407,555
Transcriptator: Computational Pipeline to Annotate Transcripts and Assembled Reads from RNA-Seq Data
['Kumar Parijat Tripathi', 'Daniela Evangelista', 'Raffaele Cassandra', 'Mario Rosario Guarracino']
Transcriptator: Computational Pipeline to Annotate Transcripts and Assembled Reads from RNA-Seq Data
598,047
Human-Oriented Challenges of Social BPM: An Overview
['Nicolas Pflanzl', 'Gottfried Vossen']
Human-Oriented Challenges of Social BPM: An Overview
734,861
We present a novel relevance feedback (RF) method that uses not only the surface information in texts, but also the latent information contained therein. In the proposed method, we infer the latent topic distribution in user feedback and in each document in the search results using latent Dirichlet allocation, and then we modify the search results so that documents with a similar topic distribution to that of the feedback are re-ranked higher. Evaluation results show that our method is effective for both explicit and pseudo RF, and that it has the advantage of performing well even when only a small amount of user feedback is available.
['Jun Harashima', 'Sadao Kurohashi']
Relevance Feedback using Latent Information
613,847
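An illustrative sketch of the re-ranking idea (not the authors' system): infer topic mixtures with LDA, then re-rank retrieved documents by the similarity of their topic distribution to that of the user feedback. It uses scikit-learn, and the documents and feedback text are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["apple banana fruit", "cpu gpu chip", "fruit smoothie banana",
        "chip fabrication yield"]             # hypothetical search results
feedback = "banana fruit salad"               # relevant text from the user

vec = CountVectorizer()
X = vec.fit_transform(docs + [feedback])
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)                      # per-document topic mixtures

fb = theta[-1]                                # topic mixture of the feedback
sims = theta[:-1] @ fb / (np.linalg.norm(theta[:-1], axis=1)
                          * np.linalg.norm(fb))
print(np.argsort(-sims))                      # re-ranked result order
```

In a pseudo-relevance-feedback setting, `feedback` would simply be the concatenation of the top-ranked documents from the initial retrieval.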
Effectively evaluating the capability of a software development methodology has always been very difficult, owing to the number and variability of the factors to control. Evaluating XP is in no way different in this respect. In this paper we present a simulation approach to evaluate the applicability and effectiveness of the XP process and the effects of some of its individual practices. Such simulation approaches are increasingly popular because they are inexpensive and flexible; of course, they need to be calibrated with real data and complemented with empirical research. The XP process has been modelled and a simulation executive has been written, enabling the simulation of XP software development activities. The model follows an object-oriented approach and has been implemented in the Smalltalk language, following the XP process itself. It is able to vary the usage level of some XP practices and to simulate how all the project entities evolve as a consequence.
['Alessandra Cau', 'Giulio Concas', 'M. Melis', 'Ivana Turnu']
Evaluate XP effectiveness using simulation modeling
183,804
The control of a swarm of underwater robots requires more than just a control algorithm; it requires a communications system. Underwater communication is difficult at the best of times, so large time delays and minimal information are a concern. The control system must be able to work with minimal and out-of-date information. It must also be able to control a large number of robots without a master controller, i.e., a decentralized control approach. This paper describes one such control method.
['Matthew Joordens', 'Mo Jamshidi']
Underwater swarm robotics consensus control
445,768
A regional decomposition method is proposed to facilitate pattern analysis and recognition. It splits a complicated pattern into several simple parts or sub-patterns, so that the pattern can be identified by examining the distinct parts. A complexity analysis is derived in this paper to prove the effectiveness of the regional decomposition method; mathematical and statistical formulas are also provided to evaluate the recognition rates of the different parts. For a sample of 36 alphanumeric characters handprinted in the 89 most common styles, the total mean recognition rates of parts were found to be 30% higher than those obtained from subjective experiments.
['Zi-Cai Li', 'Ching Y. Suen', 'J. Guo']
A regional decomposition method for recognizing handprinted characters
319,653
In this research, wireless EEG equipment using Bluetooth technology, named g-Mobilab, was used to measure the brainwave signals in the right and left frontal areas of the brain. The recorded EEG signals were passed through an automatic artifact removal analysis whereby signals above 100 microvolts were removed by a Matlab program. Subsequently, power spectral density techniques and a specific algorithm were employed to further enhance the EEG signals. The correlation between the left and right brainwaves was assessed using a paired t-test in SPSS. The results, namely the brainwave balancing index (BBI) and brainwave dominance, were presented via a graphical user interface (GUI). The outcome shows that a BBI system can be established using EEG signals. These findings (brainwave dominance and BBI) could be used as a straightforward indicator of one's ability to think and work, leading to vast opportunities for constructive human potential advancement.
['Zunairah Hj Murat', 'Mohd Nasir Taib', 'Sahrim Lias', 'Ros Shilawani S. Abdul Kadir', 'Norizam Sulaiman', 'Zodie Mohd Hanafiah']
Development of Brainwave Balancing Index Using EEG
404,912
A silicon neural probe fabricated using a deep reactive ion etching (DRIE) based process on 250 μm thin silicon wafers was developed. The fabricated probes replicate the design of soft parylene-C based probes embedded in dissolvable needles and can therefore also be used to test the encapsulation properties of parylene-C in vivo without the additional effects introduced by the dissolvable gel. The process also demonstrates the possibility of performing conventional photolithography on substrates bonded to a handle wafer using a backgrinding liquid wax (BGL7080) as an adhesive. This technique would allow the integration of Si wafer thinning into the fabrication of neural probes, potentially allowing a range of neural probes of different thicknesses to be fabricated. Fabricated probes were characterized using electrochemical impedance spectroscopy (EIS), yielding a measured impedance of ∼80 kΩ at 1 kHz for 15 μm by 115 μm platinum electrodes, indicating that extracellular neural recordings are possible. A probe was inserted into the substantia nigra of a mouse, and neural activity was successfully recorded. Probes fabricated using this technique can thus potentially be used in the study of Parkinson's disease.
['Xiao Chuan Ong', 'Amanda M Willard', 'Mats Forssell', 'Aryn H. Gittis', 'Gary K. Fedder']
A silicon neural probe fabricated using DRIE on bonded thin silicon
907,876
This paper focuses on providing high-order algorithms for the space-time tempered fractional diffusion-wave equation. The designed schemes are unconditionally stable and have global truncation error $O(\tau^2 + h^2)$, which is theoretically proved and numerically verified.
['Minghua Chen', 'Weihua Deng']
A second-order accurate numerical method for the space-time tempered fractional diffusion-wave equation
906,560
In this paper we introduce an alternative localization approach for binary classification that leads to a novel complexity measure: fixed points of the local empirical entropy. We show that this complexity measure gives a tight control over complexity in the upper bounds. Our results are accompanied by a novel minimax lower bound that involves the same quantity. In particular, we practically answer the question of optimality of ERM under bounded noise for general VC classes.
['Nikita Zhivotovskiy', 'Steve Hanneke']
Localization of VC Classes: Beyond Local Rademacher Complexities
810,187