Columns: abstract (string, 8 to 10.1k chars), authors (string, 9 to 1.96k chars), title (string, 6 to 367 chars), __index_level_0__ (int64, 13 to 1,000k)
A direct method is described for computing a hysteresis point (double turning point) corresponding to a cusp point of a system of n nonlinear equations in n variables depending on two parameters. By addition of two equations a minimally extended system of n+2 nonlinear equations is constructed for which the hysteresis point is an isolated solution. An efficient implementation of Newton's method is presented not requiring evaluations of second derivatives of the original problem. Two numerical examples show the efficiency of the Q-quadratically convergent method.
['Gerd Pönisch']
Computing hysteresis points of nonlinear equations depending on two parameters
642,275
Most existing methods for DNA motif discovery consider only a single set of sequences to find an over-represented motif. In contrast, we consider multiple sets of sequences where we group sets associated with the same motif into a cluster, assuming that each set involves a single motif. Clustering sets of sequences yields clusters of coherent motifs, improving the signal-to-noise ratio and enabling us to identify multiple motifs. We present a probabilistic model for DNA motif discovery where we identify multiple motifs by searching for patterns shared across multiple sets of sequences. Our model infers cluster-indicating latent variables and learns motifs simultaneously, and these two tasks interact with each other. We show that our model can handle various motif discovery problems, depending on how the multiple sets of sequences are constructed. Experiments on three different DNA motif discovery problems demonstrate the useful behavior of the model and confirm substantial gains over existing methods that consider only a single set of sequences.
['Jong Kyoung Kim', 'Seungjin Choi']
Clustering sequence sets for motif discovery
296,399
Some fundamental properties of an impulse response Gramian for linear, time-invariant, asymptotically stable, discrete single-input-single-output (SISO) systems are derived. This Gramian is system invariant and can be found by solving a Lyapunov equation. The connection with standard controllability, observability, and cross Gramians is proven. The significance of these results in model-order reduction is highlighted with an efficient procedure.
['Stéphane Azou', 'Pascale Bréhonnet', 'Pierre Vilbé', 'Léon-Claude Calvez']
A new discrete impulse response Gramian and its application to model reduction
152,379
Sketch-Based Image Editing Using Clothoid Curves.
['Gunay Orbay', 'Mehmet Ersin Yumer', 'Levent Burak Kara']
Sketch-Based Image Editing Using Clothoid Curves.
768,648
This paper describes the security threats associated with 802.11 based wireless local area networks (WLANs) and outlines a comprehensive architecture for a wireless intrusion protection system (WIPS). The AirDefense WIPS based on this architecture is currently used to monitor and protect tens of thousands of networks and over one million devices around the globe in almost 500 enterprises, healthcare organizations and government agencies. The paper also describes the essential attributes that define the figure of merit (FOM) of a WIPS. The FOM can be used to quantify and compare the performance and functionality of a WIPS and distinguish enterprise class solutions from simple checkbox systems.
['Amit Sinha', 'Issam Haddad', 'Todd Nightingale', 'Richard Rushing', 'David Thomas']
Wireless intrusion protection system using distributed collaborative intelligence
234,869
Experiments were conducted on Mars rover detection and localization using residue image processing and stereo vision. In NASA's Pathfinder mission, an unmanned lander landed on Mars and a microrover was released from the lander to perform scientific experiments. Rover localization is an important issue because, for navigation purposes, the rover's position needs to be continuously updated. Three aspects of the problem have been studied: motion detection, residue image processing, and range estimation. The algorithms are described. Stereo pairs of images taken at the Jet Propulsion Laboratory's (JPL's) Mars Test Arena were used to test the algorithms. The results are presented along with an analysis of range estimation accuracy.
['Larry H. Matthies', 'Byron H. Chen', 'Jon Petrescu']
Stereo vision, residual image processing and Mars rover localization
183,924
Predictive models for the aquatic toxicity of aldehydes were designed for a set of 50 aromatic or aliphatic compounds containing at least one aldehyde group, for which acute toxicity data for the fathead minnow (Pimephales promelas) are available (96 h test assessing the 50% lethal waterborne concentration). The molecular descriptors were based on calculations with various semiempirical or ab initio model chemistries. The resulting four-parameter models were evaluated according to the correlation coefficient R². The best predictive model was obtained with the HF/STO-3G model chemistry (R² = 0.868), while the models designed for descriptors based on ab initio calculations of higher level showed slightly worse predictivity (the HF/3-21G(d) based model R² = 0.800, the HF/6-31G(d) based model R² = 0.808, the B3LYP/6-31G(d,p) based model R² = 0.812). Among the semiempirical methods, good predictivity was observed with the PM3 based model (R² = 0.811) and the AM1 based model (R² = 0.791), but the MNDO based model showed the worst predictivity (R² = 0.760). In all ab initio models and the PM3 model very similar descriptors were involved. The importance of the descriptor logP, the logarithm of the partition coefficient, for toxicity prediction was confirmed. Additionally, descriptors encoding the negatively charged molecular surface area, the hydrogen bonding molecular surface area, and the reactivity of the aldehyde group were identified as essential for the toxicity prediction of aldehydes.
['Martin Smiesko', 'Emilio Benfenati']
Predictive models for aquatic toxicity of aldehydes designed for various model chemistries.
571,586
Despite the plethora of vibrotactile applications that have impacted our everyday life, how to design vibrotactile patterns efficiently continues to be a challenge. Previously, we proposed a vibrotactile score as an intuitive and effective approach for vibrotactile pattern design. The vibrotactile score is adapted from common musical scores and preserves the metaphor of musical scores for easy learning. In this paper, we investigate the usability of the vibrotactile score, focusing on its learnability, efficiency, and user preference. Experiment I was to compare the vibrotactile score and the current dominant practice of vibrotactile pattern design using programming or scripting. The results gained from programming experts validated the substantially superior performance of the vibrotactile score. Experiment II compared the vibrotactile score with waveform-based design implemented in a few recent graphical authoring tools. Regular users without programming backgrounds participated in this experiment, and the results substantiated the improved performance of the vibrotactile score.
['Jaebong Lee', 'Seungmoon Choi']
Evaluation of vibrotactile pattern design using vibrotactile score
157,257
Travel in Virtual Environments is the simple action where a user moves from a starting point A to a target point B. Choosing an incorrect type of technique can compromise the Virtual Reality experience and cause side effects such as spatial disorientation, fatigue and cybersickness. Effective travel techniques should be as natural as possible; thus real walking techniques present better results, despite their physical limitations. Approaches to surpass these limitations employ techniques that provide an indirect travel metaphor, such as point-steering and target-based techniques. In fact, target-based techniques show a reduction in fatigue and cybersickness compared to point-steering techniques, even though they provide less control. In this paper we investigate further effects of speed and transition in target-based techniques on factors such as comfort and cybersickness using a Head-Mounted Display setup.
['Daniel Medeiros', 'Eduardo Cordeiro', 'Daniel Mendes', 'Maurício Sousa', 'Alberto Barbosa Raposo', 'Alfredo Ferreira', 'Joaquim A. Jorge']
Effects of speed and transitions on target-based travel techniques
928,179
A strength of agent architectures such as PRS and dMARS, which are based on stored plan execution, is that their plan languages offer an easily understood, visual representation of behaviour that permits their underlying architectural complexity to be partially abstracted and effectively exploited. Unlike visual representations of behaviour used in methodologies such as UML for programming in Object Oriented languages such as Java, plan graphs constitute a direct, executable specification of agent behaviour rather than a model which guides implementation refinement. Previously, such languages have lacked a formal semantic basis. This paper presents key elements of a new visual programming language ViP which has a complete and exact semantics based upon a recently described agent process algebra, the P calculus.
['David Kinny']
ViP: a visual programming language for plan execution systems
195,520
The Gompertz distribution has been used to describe human mortality and establish actuarial tables. Recently, this distribution has again been studied by some authors. The maximum likelihood estimates for the parameters of the Gompertz distribution have been discussed by Garg et al. [J. R. Statist. Soc. C 19 (1970) 152]. The purpose of this paper is to propose unweighted and weighted least squares estimates for the parameters of the Gompertz distribution under complete data and first failure-censored data (series systems; see [J. Statist. Comput. Simulat. 52 (1995) 337]). A simulation study is carried out to compare the proposed estimators with the maximum likelihood estimators. Results of the simulation studies show that the performance of the weighted least squares estimators is acceptable.
['Jong-Wuu Wu', 'Wen-Liang Hung', 'Chih-Hui Tsai']
Estimation of parameters of the Gompertz distribution using the least squares method
461,009
Supervised text classification algorithms require a large number of human-labeled documents, which involves a labor-intensive and time-consuming process. In this paper, we propose a weakly supervised algorithm in which supervision comes in the form of labeling of Latent Dirichlet Allocation (LDA) topics. We then use this weak supervision to “sprinkle” artificial words into the training documents to identify topics in accordance with the underlying class structure of the corpus based on higher order word associations. We evaluate this approach for improving the performance of text classification on three real world datasets.
['Swapnil Hingmire', 'Sutanu Chakraborti']
Sprinkling Topics for Weakly Supervised Text Classification
613,197
In this paper, we present a survey of emerging technologies for non-invasive human activity, behavior, and physiological sensing. The survey focuses on technologies that are close to entering the commercial market, or have only recently become available. We intend for this survey to give researchers in any field relevant to human data collection an overview of currently accessible devices and sensing modalities, their capabilities, and how the technologies will mature with time.
['Alexandros Lioulemes', 'Michalis Papakostas', 'Shawn N. Gieser', 'Theodora Toutountzi', 'Maher Abujelala', 'Sanika Gupta', 'Christopher Collander', 'Christopher McMurrough', 'Fillia Makedon']
A Survey of Sensing Modalities for Human Activity, Behavior, and Physiological Monitoring
941,874
Technological advances provide designers with tools to develop interfaces with anthropomorphic qualities. However, it is not known how human participants accommodate such design features in their interactions with computers, nor do we know if these features facilitate or hinder information exchange and task performance. Study 1 examined the properties of mediation, contingency, and modality richness, whereas Study 2 examined the property of mediation. Results show that some design features are better than others given the goal of the encounter (e.g., passive involvement vs. relation building). Discussion focuses on the relation between user perceptions, design features, and task outcomes.
['Judee K. Burgoon', 'Bjorn Bengtsson', 'Joseph A. Bonito', 'Artemio Ramirez', 'Norah E. Dunbar']
Designing interfaces to maximize the quality of collaborative work
541,810
This paper presents an explorative navigation method using sparse Gaussian processes for mobile sensor networks. We first show that a near-optimal approximation is possible with a subset of measurements if we select the subset carefully, i.e., if the correlation between the selected measurements and the remaining measurements is small and the correlation between the prediction locations and the remaining measurements is small. An estimation method based on a subset of measurements is desirable for mobile sensor networks since we can always bound computational and memory requirements and unprocessed raw measurements can be easily shared with other agents for further processing (e.g., consensus-based distributed algorithms or distributed learning). We then present an explorative navigation method using sparse Gaussian processes with a subset of measurements. Using the explorative navigation method, mobile sensor networks can actively seek for new measurements to reduce the prediction error and maintain high-quality estimation about the field of interest indefinitely with limited memory.
['Songhwai Oh', 'Yunfei Xu', 'Jongeun Choi']
Explorative navigation of mobile sensor networks using sparse Gaussian processes
294,639
In this paper, we suggest a scheme for identifying errors in human skills transfer when using Programming by Demonstration (PbD) to add a set of skills from a human operator to force-controlled robotic tasks. Such errors in human skills transfer are mainly caused by the difficulty of properly synchronizing the human and machine responses. Based on the captured Cartesian force and torque signals of the manipulated object, we present an approach for identifying errors stemming from wrong human skills transfer in a PbD process. The scheme uses the Gravitational Search-Fuzzy Clustering Algorithm (GS-FSA) to find the centroid of the captured force and torque signals for each Contact Formation (CF). Then, using a distance-based outlier identification approach along with the centroid of each signal, the human errors can be identified in the framework of data outlier identification. In order to validate the approach, a test stand, composed of a KUKA Light Weight Robot manipulating a rigid cube object, was built. The manipulated object is assumed to interact with an environment composed of three orthogonal planes. Error identification for two case studies is considered, and other cases can be dealt with in a similar manner. The experimental results show excellent human error identification when using the suggested approach.
['Ibrahim F. Jasim', 'Peter Plapper']
Human error identification in programming by demonstration of compliant motion robotic tasks
21,905
In this work, a revised form of Implicit Context Representation Cartesian Genetic Programming is used in the development of a diagnostic tool for the assessment of patients with neurological dysfunction such as Alzheimer's disease. Specifically, visuo-spatial ability is assessed by analysing subjects' digitised responses to a simple figure copying task using a conventional test environment. The algorithm was trained to distinguish between classes of visuo-spatial ability based on responses to the figure copying test by 7-11 year old children in which visuo-spatial ability is at varying stages of maturity. Results from receiver operating characteristic (ROC) analysis are presented for the training and subsequent testing of the algorithm and demonstrate this technique has the potential to form the basis of an objective assessment of visuo-spatial ability.
['David M. Howard', 'Andy M. Tyrrell', 'Crispin H. V. Cooper']
Towards an Alternative to Magnetic Resonance Imaging for Vocal Tract Shape Measurement Using the Principles of Evolution
9,547
Conventional model-based or statistical analysis methods for functional MRI (fMRI) suffer from the limitation of the assumed paradigm and biased results. Temporal clustering methods, such as fuzzy clustering, can eliminate these problems but have difficulty finding activation occupying a small area, are sensitive to noise and initial values, and are computationally demanding. To overcome these adversities, a cascade clustering method combining a Kohonen clustering network and fuzzy c-means is developed. Receiver operating characteristic (ROC) analysis is used to compare this method with correlation coefficient analysis and the t test on a series of testing phantoms. Results show that this method can efficiently and stably identify the actual functional response with a typical signal-change-to-noise ratio, from a small activation area occupying only 0.2% of head size, with phase delay, and in the presence of other noise sources such as head motion. With the ability to find activities of small sizes stably, this method can not only identify the functional responses and the active regions more precisely, but also discriminate responses from different signal sources, such as large venous vessels, or different types of activation patterns in human studies involving motor cortex activation. Even when the experimental paradigm is unknown, as in a blind test where model-based methods are inapplicable, this method can identify the activation patterns and regions correctly.
['Kai-Hsiang Chuang', 'Ming-Jang Chiu', 'Chung-Chih Lin', 'Jyh-Horng Chen']
Model-free functional MRI analysis using Kohonen clustering neural network and fuzzy C-means
490,270
Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV’s navigation system. First, however, the pose between the two sensors is obtained using an improved calibration method proposed here. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
['Jamal Atman', 'Manuel Popp', 'Jan Ruppelt', 'Gert F. Trommer']
Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.
892,857
Highlights: low probability of missing true corners due to user-defined parameters of feature extraction; discarding outliers using characteristics of checkerboard corners; index extension using characteristics of neighboring checkerboard corners; performance of the proposed method in terms of success ratio; robustness of the proposed method against partial view, lens distortion and image noise. This paper presents a new algorithm for automated checkerboard detection and indexing. Automated checkerboard detection is essential for reducing user inputs in any camera calibration process. We adopt an iterative refinement algorithm to extract corner candidates. In order to utilize the characteristics of checkerboard corners, we extract a circular boundary from each candidate and find its sign-changing indices. We initialize an arbitrary point and its neighboring two points as seeds and assign world coordinates to the other points. The largest set of world-coordinate-assigned points is selected as the detected checkerboard. The performance of the proposed algorithm is evaluated using images with various sizes and particular conditions.
['Yunsu Bok', 'Hyowon Ha', 'In So Kweon']
Automated checkerboard detection and indexing using circular boundaries
575,993
In this paper, we describe a statistical model for the gradient vector field of the gray level in images validated by different experiments. Moreover, we present a global constrained Markov model for contours in images that uses this statistical model for the likelihood. Our model is amenable to an iterative conditional estimation (ICE) procedure for the estimation of the parameters; our model also allows segmentation by means of the simulated annealing (SA) algorithm, the iterated conditional modes (ICM) algorithm, or the modes of posterior marginals (MPM) Monte Carlo (MC) algorithm. This yields an original unsupervised statistical method for edge-detection, with three variants. The estimation and the segmentation procedures have been tested on a total of 160 images. Those tests indicate that the model and its estimation are valid for applications that require an energy term based on the log-likelihood ratio. Besides edge-detection, our model can be used for semiautomatic extraction of contours, localization of shapes, non-photo-realistic rendering; more generally, it might be useful in various problems that require a statistical likelihood for contours.
['François Destrempes', 'Max Mignotte']
A statistical model for contours in images
290,572
We envision programmable matter consisting of systems of computationally limited devices (which we call particles) that are able to self-organize in order to achieve a desired collective goal without the need for central control or external intervention. Central problems for these particle systems are shape formation and coating problems. In this paper, we present a universal shape formation algorithm which takes an arbitrary shape composed of a constant number of equilateral triangles of unit size and lets the particles build that shape at a scale depending on the number of particles in the system. Our algorithm runs in O(√n) asynchronous execution rounds, where n is the number of particles in the system, provided we start from a well-initialized configuration of the particles. This is optimal in the sense that for any shape deviating from the initial configuration, any movement strategy would require Ω(√n) rounds in the worst case (over all asynchronous activations of the particles). Our algorithm relies only on local information (e.g., particles do not have ids, nor do they know n, or have any sort of global coordinate system), and requires only a constant-size memory per particle.
['Zahra Derakhshandeh', 'Robert Gmyr', 'Andréa W. Richa', 'Christian Scheideler', 'Thim Strothmann']
Universal Shape Formation for Programmable Matter
837,599
Personal photos taken on tour are easily affected by distractive objects, which calls for effective post-processing for subject enhancement. In this paper, we propose a novel personal photo enhancement method using saliency-driven color transfer, which can effectively reduce the attraction of distractive objects with simple user interaction. For each given image, distractive objects are first detected by combining a saliency map and user interaction, and their attraction is then reduced by color transfer. The experimental results show that our method achieves effectiveness similar to manual editing with higher efficiency, and outperforms other existing techniques.
['Yuqi Gao', 'Jingfan Guo', 'Tongwei Ren', 'Jia Bei']
Personal Photo Enhancement via Saliency Driven Color Transfer
966,698
The HGTree database provides putative genome-wide horizontal gene transfer (HGT) information for 2472 completely sequenced prokaryotic genomes. This task is accomplished by reconstructing approximate maximum likelihood phylogenetic trees for each orthologous gene and corresponding 16S rRNA reference species sets and then reconciling the two trees under parsimony framework. The tree reconciliation method is generally considered to be a reliable way to detect HGT events but its practical use has remained limited because the method is computationally intensive and conceptually challenging. In this regard, HGTree (http://hgtree.snu.ac.kr) represents a useful addition to the biological community and enables quick and easy retrieval of information for HGT-acquired genes to better understand microbial taxonomy and evolution. The database is freely available and can be easily scaled and updated to keep pace with the rapid rise in genomic information.
['Hyeonsoo Jeong', 'Samsun Sung', 'Taehyung Kwon', 'Minseok Seo', 'Kelsey Caetano-Anolles', 'Sang Ho Choi', 'Seoae Cho', 'Arshan Nasir', 'Heebal Kim']
HGTree: database of horizontally transferred genes determined by tree reconciliation
548,418
Small screen devices like cellular phones or Personal Digital Assistants (PDAs) are the artifacts of information technology almost everybody carries around today. Together with wireless networks they are the ubiquitous gateway to information and services. In particular, the increasing bandwidth together with the improving processing power of these devices supplies us with capabilities known only from desktop computers a couple of years ago. However, access to rich multimedia content using these devices is almost impossible due to their physical limitations without performing lossy content reduction. An idea to overcome these limitations results from the observation that our daily life is surrounded by information technology almost everywhere, e.g. PCs, laptops, TV sets, public terminals, etc. These “larger screen devices” can be used in conjunction with the small screen device to extend its capabilities and to provide access to rich multimedia content and services without any sacrifice. In order to do so, the small screen device must be able to access these devices and to remotely control them. UPnP (Universal Plug and Play) [1] is a technology which connects appliances on an ad-hoc basis. In this paper we present our concept of ad-hoc personal ubiquitous multimedia services and illustrate a UPnP-based implementation which allows users a high degree of mobility and in parallel facilitates access to rich multimedia content using a small screen device.
['Georg Schneider', 'Christian Hoymann', 'Stuart Goose']
Adhoc personal ubiquitous multimedia services via UPNP
383,567
Isosurface extraction is a standard visualization method for scalar volume data and has been subject to research for decades. Nevertheless, to our knowledge, no isosurface extraction method exists that directly extracts surfaces from scattered volume data without 3D mesh generation or reconstruction over a structured grid. We propose a method based on spatial domain partitioning using a kd-tree and an indexing scheme for efficient neighbor search. Our approach consists of a geometry extraction and a rendering step. The geometry extraction step computes points on the isosurface by linearly interpolating between neighboring pairs of samples. The neighbor information is retrieved by partitioning the 3D domain into cells using a kd-tree. The cells are merely described by their index, and bitwise index operations allow for a fast determination of potential neighbors. We use an angle criterion to select appropriate neighbors from the small set of candidates. The output of the geometry step is a point cloud representation of the isosurface. The final rendering step uses point-based rendering techniques to visualize the point cloud. Our direct isosurface extraction algorithm for scattered volume data produces results of quality close to the results from standard isosurface extraction algorithms for gridded volume data (like marching cubes). In comparison to 3D mesh generation algorithms (like Delaunay tetrahedrization), our algorithm is about one order of magnitude faster for the examples used in this paper.
['Paul Rosenthal', 'Lars Linsen']
Direct isosurface extraction from scattered volume data
495,272
Nowadays, most statistical translation systems are based on phrases (i.e. groups of words). We describe a phrase-based system using a modified method for phrase extraction which deals with larger phrases while keeping a reasonable number of phrases. Also, different alignments for extracting phrases are allowed, and additional features are used which lead to a clear improvement in translation performance. Finally, the system performs reordering. We report results in terms of translation accuracy using the BTEC corpus on the Chinese to English and Arabic to English tasks, in the framework of the IWSLT’05 evaluation.
['Marta Ruiz Costa-Jussà', 'José A. R. Fonollosa']
Tuning a phrase-based statistical translation system for the IWSLT 2005 Chinese to English and Arabic to English tasks
663,480
Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions.
['Adrienne Porter Felt', 'Robert W. Reeder', 'Hazim Almuhimedi', 'Sunny Consolvo']
Experimenting at scale with google chrome's SSL warning
36,970
Advances in GPS tracking technology have enabled us to install GPS tracking devices in city taxis to collect a large amount of GPS traces under operational time constraints. These GPS traces provide unparalleled opportunities for us to uncover taxi driving fraud activities. In this paper, we develop a taxi driving fraud detection system, which is able to systematically investigate taxi driving fraud. In this system, we first provide functions to find two aspects of evidence: travel route evidence and driving distance evidence. Furthermore, a third function is designed to combine the two aspects of evidence based on Dempster-Shafer theory. To implement the system, we first identify interesting sites from a large amount of taxi GPS logs. Then, we propose a parameter-free method to mine the travel route evidence. Also, we introduce the route mark to represent a typical driving path from one interesting site to another. Based on route marks, we exploit a generative statistical model to characterize the distribution of driving distance and identify the driving distance evidence. Finally, we evaluate the taxi driving fraud detection system with large scale real-world taxi GPS logs. In the experiments, we uncover some regularity of driving fraud activities and investigate the motivation of drivers to commit a driving fraud by analyzing the produced taxi fraud data.
['Yong Ge', 'Hui Xiong', 'Chuanren Liu', 'Zhi-Hua Zhou']
A Taxi Driving Fraud Detection System
511,005
This paper proposes using modulation cepstrum coefficients instead of cepstral coefficients for extracting metadata information such as age and gender. These coefficients are extracted by applying the discrete cosine transform to a time sequence of cepstral coefficients. Lower order coefficients of this transformation represent smooth cepstral trajectories over time. Results presented in this paper show that cepstral trajectories corresponding to lower (3-14 Hz) modulation frequencies provide the best discrimination. The proposed system achieves 50.2% overall accuracy on this 7-class task, while the accuracy of human labelers on a subset of the evaluation material used in this work is 54.7%.
['Jitendra Ajmera', 'Felix Burkhardt']
Age and Gender Classification using Modulation Cepstrum
738,154
Precision control of piezoelectric actuator using fuzzy feedback control with inverse hysteresis compensation
['Ziqiang Chi', 'Qingsong Xu']
Precision control of piezoelectric actuator using fuzzy feedback control with inverse hysteresis compensation
639,865
Multi-Channel Quantum Image Scrambling
['Fei Yan', 'Yiming Guo', 'Abdullah M. Iliyasu', 'Zhengang Jiang', 'Huamin Yang']
Multi-Channel Quantum Image Scrambling
714,399
We present the growing hierarchical self-organizing map. This dynamically growing neural network model evolves into a hierarchical structure according to the requirements of the input data during an unsupervised training process. We demonstrate the benefits of this novel neural network model by organizing a real-world document collection according to their similarities.
['Michael Dittenbach', 'Dieter Merkl', 'Andreas Rauber']
The growing hierarchical self-organizing map
2,378
The carbon dioxide (CO2) emissions released from biomass burning significantly affect the temporal variations of atmospheric CO2 concentrations. Based on long-term (July 2009–June 2015) retrieved data sets from the Greenhouse Gases Observing Satellite (GOSAT), the seasonal cycle and interannual variations of column-averaged volume mixing ratios of atmospheric carbon dioxide (XCO2) in four fire-affected continental regions were analyzed. The results showed that Northern Africa (NA) had the largest seasonal variations after removing its regional trend of XCO2, with a peak-to-peak amplitude of 6.2 ppm within the year, higher than central South America (CSA) (2.4 ppm), Southern Africa (SA) (3.8 ppm), and Australia (1.7 ppm). The detrended regional XCO2 (ΔXCO2) was found to be positively correlated with the fire CO2 emissions during the fire activity period but with different seasonal variabilities. NA recorded the largest change of seasonal variations of ΔXCO2, with a total of 12.8 ppm during fire seasons, higher than CSA, SA, and Australia with 5.4, 6.7, and 2.2 ppm, respectively. During the fire episode, positive ΔXCO2 was noticed during June–November in CSA, December to the next June in NA, and May–November in SA. The Pearson correlation coefficients between the variations of ΔXCO2 and fire CO2 emissions achieved the best correlations in SA (R = 0.77 and p ). This letter revealed that fire CO2 emissions and GOSAT XCO2 presented consistent seasonal variations.
['Yusheng Shi', 'Tsuneo Matsunaga', 'Hibiki Noda']
Interpreting Temporal Changes of Atmospheric CO 2 Over Fire Affected Regions Based on GOSAT Observations
948,397
The problem of bidirectionally transmitting two analog sources over a fading channel with the help of a relay is studied. In the regime of high signal-to-noise ratios, upper and lower bounds on the exponent of the end-to-end expected distortion are derived. It is shown that simple two-way relaying protocols based on the separation of source and channel coding can provide significant improvements in the distortion exponent over their one-way counterparts. In particular, in the regime of low channel bandwidth to source bandwidth ratios, an amplify-and-forward scheme is shown to achieve good distortion performance, albeit at the cost of large feedback overhead. In the regime of high bandwidth ratio, even better performance is achieved by a novel adaptive decode-and-forward scheme that successfully exploits low-rate feedback from the relay to the source nodes.
['Thanh Tung Kim', 'H. Vincent Poor']
Analog Source Exchange with the Help of a Relay
230,139
Applications, such as streaming applications, modeled by task graphs can be efficiently executed in a pipelined fashion. In synthesizing application-specific heterogeneous pipelined systems, where to place buffers (called buffer placement) and which type of functional unit should execute each task (called functional assignment) are two critical problems. In reality, the execution time of each task may not be fixed, which makes the above two problems much more challenging. In this paper, we model the execution time of each task on different types of functional units as a random variable. Our objective is to obtain the optimal functional assignment and buffer placement, such that the resultant pipeline can satisfy the timing requirement with the minimum cost under a guaranteed confidence probability. This paper presents efficient algorithms to achieve this objective. Experiments show that other techniques cannot find any feasible solutions in many cases while ours can. Even for the cases where they can find feasible solutions, our algorithms achieve the minimum cost, giving a significant reduction in total cost compared with existing techniques.
['Weiwen Jiang', 'Edwin Hsing-Mean Sha', 'Qingfeng Zhuge', 'Xianzhang Chen']
Optimal functional-unit assignment and buffer placement for probabilistic pipelines
904,667
Crowd-sensing is a popular way to sense and collect data using smartphones that reveals user behaviors and their correlations with device performance. PhoneLab is one of the largest crowd-sensing platforms based on the Android system. Through experimental instrumentation and system modifications, researchers can tap into a sea of insightful information that can be further processed to reveal valuable context information about the device, user and the environment. However, the PhoneLab data is in JSON format, and the process of inferring reasons from data in this format is not straightforward. In this paper, we introduce PLOMaR, an ontology framework that uses SPARQL rules to help researchers access information and derive new information without complex data processing. The goals are to (i) make the measurement data more accessible, (ii) increase interoperability and reusability of data gathered from different sources, and (iii) develop an extensible data representation to support future development of the PhoneLab platform. We describe the models, the JSON to RDF mapping processes, and the SPARQL rules used for deriving new information. We evaluate our framework with three application examples based on the sample dataset provided.
['Yogesh Jagadeesan', 'Peizhao Hu', 'Carlos R. Rivero']
PLOMaR: An ontology framework for context modeling and reasoning on crowd-sensing platform
711,573
This article considers extending the scope of the empirical mode decomposition (EMD) method. The extension is aimed at noisy data and irregularly spaced data, which is necessary for widespread applicability of EMD. The proposed algorithm, called statistical EMD (SEMD), uses a smoothing technique instead of an interpolation when constructing upper and lower envelopes. Using SEMD, we discuss how to identify non-informative fluctuations such as noise, outliers, and ultra-high frequency components from the signal, and to decompose irregularly spaced data into several components without distortions.
['Donghoh Kim', 'Kyungmee O. Kim', 'Hee-Seok Oh']
Extending the scope of empirical mode decomposition by smoothing
472,036
Internet of Things as Advanced Technology to Support Mobility and Intelligent Transport
['Milan Dado', 'Aleš Janota', 'Juraj Spalek', 'Peter Holecko', 'Rastislav Pirník', 'Karl Ernst Ambrosch']
Internet of Things as Advanced Technology to Support Mobility and Intelligent Transport
937,017
In this paper, the blocklength-limited performance of a relaying system is studied, where channels are assumed to experience quasi-static Rayleigh fading while at the same time only the average channel state information (CSI) is available at the source. Both the physical-layer performance (blocklength-limited throughput) and the link-layer performance (effective capacity) of the relaying system are investigated. We propose a simple system operation by introducing a factor based on which we weight the average CSI and let the source determine the coding rate accordingly. We show that both the blocklength-limited throughput and the effective capacity are quasi-concave in the weight factor. Through numerical analysis, we investigate the relaying performance with average CSI while considering perfect CSI scenario and direct transmission as comparison schemes. We observe that relaying is more efficient than direct transmission in the finite blocklength regime. Moreover, this performance advantage of relaying under the average CSI scenario is more significant than under the perfect CSI scenario. Finally, the speed of convergence (between the blocklength-limited performance and the performance with infinite blocklengths) in relaying system is faster in comparison to the direct transmission under both the average CSI scenario and the perfect CSI scenario.
['Yulin Hu', 'Anke Schmeink', 'James Gross']
Blocklength-Limited Performance of Relaying Under Quasi-Static Rayleigh Channels
699,093
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
['Xiaomei Liu', 'Shengtao Li', 'Kanjian Zhang']
Optimal control of switching time in switched stochastic systems with multi-switching times and different costs
840,141
While the control structures in recent programming languages are structured, the data structures are still primitive. This paper examines data structures and operations on them, and proposes some new features in programming languages. These new features are principally in the areas of data description and data usage. In data description, the emphasis is on a global view of dynamic data structures; in data usage, semantic relationships between data items are innate in the operations on these data structures. Finally, example data descriptions and algorithms using some of the new features are contrasted with those using conventional features.
['N. H. Madhavji', 'I. R. Wilson']
Dynamically structured data
194,391
Key agreement is one of the fundamental cryptographic primitives in public key cryptography. So far, several certificateless two-party authenticated key agreement (CL-T-AKA) protocols have been proposed. However, all these protocols are based on bilinear maps, and most of them come without a formal security proof. In this paper, we present a new formal security model for CL-T-AKA protocols and bring forward the first two-party key agreement protocol that avoids the computation of expensive bilinear maps. Our protocol is secure under this security model assuming the Gap-DH problem is intractable. With respect to efficiency, our protocol requires a single round of communication in which each party sends only one group element, and needs only five modular exponentiation computations. In addition, we point out that an existing certificateless two-party key agreement protocol cannot resist a man-in-the-middle attack.
['Manman Geng', 'Futai Zhang']
Provably Secure Certificateless Two-Party Authenticated Key Agreement Protocol without Pairing
509,075
A semi-dynamic system is presented that is capable of predicting the performance of parallel programs at runtime. The functionality given by the system allows for efficient handling of portability and irregularity of parallel programs. Two forms of parallelism are addressed: loop level parallelism and task level parallelism.
['David Wangerin', 'Isaac D. Scherson']
Using predictive adaptive parallelism to address portability and irregularity
969,711
This paper proposes a novel feedback protocol for relay-based two-hop networks, in which the channel state information matrix of the second hop is compressible due to the presence of spatial correlation and distance dependent path loss among the communication channels from the relay nodes to the users. The proposed protocol makes use of recent developments in the fields of low rank matrix recovery and compressed sensing to approximate the channel matrix by a low rank and sparse matrix. As a result, accurate channel state information can be provided to the base station for optimal relay selection, while significantly reducing pilot contamination and feedback overhead. Simulations demonstrate that approximately 50% of the training and feedback overhead can be saved if the compressibility of the channel matrix is taken into account.
['Jan Schreck', 'Peter Jung', 'Slawomir Stanczak']
On channel state feedback for two-hop networks based on low rank matrix recovery
513,063
The Advanced Metering Infrastructure (AMI) plays a critical role in the Smart Grid. Regarding the usage of smart meters in AMI, there is a primary concern about how utility companies manage energy consumption data, particularly with respect to consumer privacy. This research presents a novel protocol for secure and efficient communication of energy consumption data, protecting its confidentiality, integrity, and privacy while utilizing the existing Grid infrastructure. The protocol supports time-of-use billing and data mining for advanced fine-grained data analysis. We report on the empirical results of the theoretical, experimental, and comparative analyses of the proposed protocol.
['Vitaly Ford', 'Ambareen Siraj', 'Mohammad Ashiqur Rahman']
Secure and efficient protection of consumer privacy in Advanced Metering Infrastructure supporting fine-grained data analysis
831,532
Automatically detecting component failures and unexpected behaviors is an essential service for achieving fault-tolerant robust manufacturing systems. The application of a multi-agent system is regarded as a promising approach for designing complex systems such as manufacturing systems due to its distributed nature. In this paper we present an agent-based control system with diagnostic capabilities on several layers for a pallet transport system. Local diagnostic tasks for detecting failures are performed by the automation agents each controlling one physical component such as a diverter. In order to observe the correct behavior of the components on a system-wide scale, an approach based on Hidden Markov Models linked to the agents is employed.
['Wilfried Lepuschitz', 'Vaclav Jirkovsky', 'Petr Kadera', 'Pavel Vrba']
A Multi-Layer Approach for Failure Detection in a Manufacturing System Based on Automation Agents
446,127
Happy Accident: A Sentiment Composition Lexicon for Opposing Polarity Phrases.
['Svetlana Kiritchenko', 'Saif M. Mohammad']
Happy Accident: A Sentiment Composition Lexicon for Opposing Polarity Phrases.
994,272
The problem of scalable and robust distributed data storage has recently attracted a lot of attention. A common approach in the area of peer-to-peer systems has been to use a distributed hash table (or DHT). DHTs are based on the concept of virtual space. Peers and data items are mapped to points in that space, and local-control rules are used to decide, based on these virtual locations, how to interconnect the peers and how to map the data to the peers. DHTs are known to be highly scalable and easy to update as peers enter and leave the system. It is relatively easy to extend the DHT concept so that a constant fraction of faulty peers can be handled without any problems, but handling adversarial peers is very challenging. The biggest threats appear to be join-leave attacks (i.e., adaptive join-leave behavior by the adversarial peers) and attacks on the data management level (i.e., adaptive insert and lookup attacks by the adversarial peers) against which no provably robust mechanisms are known so far. Join-leave attacks, for example, may be used to isolate honest peers in the system, and attacks on the data management level may be used to create a high load-imbalance, seriously degrading the correctness and scalability of the system. We show, on a high level, that both of these threats can be handled in a scalable manner, even if a constant fraction of the peers in the system is adversarial, demonstrating that open systems for scalable distributed data storage that are robust against even massive adversarial behavior are feasible.
['Baruch Awerbuch', 'Christian Scheideler']
Towards a scalable and robust DHT
444,732
Designing efficient and fair solutions for dividing the network resources in a distributed manner among self-interested multimedia users is recently becoming an important research topic because heterogeneous and high bandwidth multimedia applications (users), having different quality-of-service requirements, are sharing the same network. Suitable resource negotiation solutions need to explicitly consider the amount of information exchanged among the users and the computational complexity incurred by the users. In this paper, we propose decentralized solutions for resource negotiation, where multiple autonomous users self-organize into a coalition which shares the same network resources and negotiate the division of these resources by exchanging information about their requirements. We then discuss various resource sharing strategies that the users can deploy based on their exchanged information. Several of these strategies are designed to explicitly consider the utility (i.e., video quality) impact of multimedia applications. In order to quantify the utility benefit derived by exchanging different information, we define a new metric, which we refer to as the value of information. We quantify through simulations the improvements that can be achieved when various information is exchanged between users, and discuss the required complexity at the user side involved in implementing the various resource negotiation strategies.
['Hyunggon Park', 'M. van der Schaar']
Coalition-Based Resource Negotiation for Multimedia Applications in Informationally Decentralized Networks
69,255
Pacific Biosciences (PacBio), the main third generation sequencing technology can produce scalable, high-throughput, unprecedented sequencing results through long reads with uniform coverage. Although these long reads have been shown to increase the quality of draft genomes in repetitive regions, fundamental computational challenges remain in overcoming their high error rate and assembling them efficiently. In this paper we show that the de Bruijn graph built on the long reads can be efficiently and substantially disentangled using optical mapping data as auxiliary information. Fundamental to our approach is the use of the positional de Bruijn graph and a succinct data structure for constructing and traversing this graph. Our experimental results show that over 97.7% of directed cycles have been removed from the resulting positional de Bruijn graph as compared to its non-positional counterpart. Our results thus indicate that disentangling the de Bruijn graph using positional information is a promising direction for developing a simple and efficient assembly algorithm for long reads.
['Bahar Alipanahi', 'Leena Salmela', 'Simon J. Puglisi', 'Martin D. Muggli', 'Christina Boucher']
Disentangled Long-Read De Bruijn Graphs via Optical Maps
997,550
We propose jammer excision techniques for direct sequence spread spectrum communications when the jammers cannot be parametrically characterized. The representation of the non-stationary signals is done using the time-frequency and the frequency-frequency evolutionary transformations. One of the methods, based on the frequency-frequency representation of the received signal, uses a deterministic masking approach while the other, based on nonstationary Wiener filtering, reduces interference in a mean-square fashion. Both of these approaches use the fact that the spreading sequence is known at the transmitter and the receiver, and that, as such, its evolutionary representation can be used to estimate the sent bit. The difference in performance between these two approaches depends on the support rather than on the type of jammer being excised. The frequency-frequency masking approach works well when the jammer is narrowly concentrated in parts of the frequency-frequency plane, while the Wiener masking approach works well in situations when the jammer is spread over all frequencies. Simulations illustrating the performance of the two methods, in different situations, are shown.
['Luis F. Chaparro', 'Abdullah Ali Alshehri']
Jammer excision in spread spectrum communications via Wiener masking and frequency-frequency evolutionary transform
114,236
Conventional edge-detection methods suffer from the dislocation of curved surfaces due to the PSF. We propose a new method that uses the isophote curvature to circumvent this. It is accurate for objects with locally constant curvature, even for small objects (like blood vessels) and in the presence of noise.
['Henri Bouma', 'Anna Vilanova', 'L.J. Van Vliet', 'Frans A. Gerritsen']
Correction for the dislocation of curved surfaces caused by the PSF in 2D and 3D CT images
276,954
Fully Symbolic Model Checking for Timed Automata.
['Georges Morbé', 'Christoph Scholl']
Fully Symbolic Model Checking for Timed Automata.
798,857
In recent years, analyzing data streams has attracted considerable attention in different fields of computer science. In this paper, two different frameworks, namely MOA and Spark MLlib, are examined for linear regression on streaming data. The focus is placed on determining how well the linear regression techniques implemented in the frameworks can model data streams. We also examine the challenges of massive data streams and how MOA and Spark Streaming solve these kinds of challenges. The experiments show that although MOA is easier to use than Spark MLlib, the linear regression performance of Spark MLlib on streaming data is better.
['Barış Akgün', 'Sule Gunduz Oguducu']
Streaming Linear Regression on Spark MLlib and MOA
654,078
Online financial news is an important part of financial big data. In this paper, we propose a model to promptly recognize valuable news about senior executives' behavior, together with an online automatic trading strategy based on the model. Our model consists of three phases. First, word segmentation and keyword extraction are employed to quantify the financial text. For better efficiency and promptness, manifold learning is utilized to reduce the dimension of the keyword vector. Second, the idea of financial event study is utilized to judge whether a specific type of news produces a significantly positive or negative return. Third, a support vector machine is employed to recognize the specific financial news and associate the quantified text with the stock return. Experiments show that the recognition performed excellently and that the behavior of increasing shareholdings produces a significant positive return. Our online automatic trading strategy based on the model obtained a return of 55.62%, outperforming the three main benchmarks in the same period, 4.52%, 12.47% and −6.89% respectively.
['Chao Ma', 'Xun Liang']
Mining gold in senior executives' pockets: An online automatically trading strategy
565,059
The purpose of this paper is to demonstrate a methodology to evaluate the performance of a third party logistics (3PL) company using a team approach. The criteria to evaluate the performance of a 3PL have been identified through a literature review and in consultation with practitioners. The weighting of criteria and the performance evaluation of the 3PL are based on information gathered by the performance measurement team (PMT). Fuzzy logic is used to incorporate the perceptions of PMT members. The uniqueness of the method is that it incorporates the views of multiple decision makers so that bias in the decision can be minimised. This process can also be used by a company to evaluate its own performance with more accuracy. The managements of 3PL companies can use this methodology to evaluate their performance with high accuracy. The application of the proposed methodology incorporating the team approach to evaluate the performance of a 3PL is the originality of this paper.
['Ankit Bansal', 'Pravin Kumar', 'Siddhant Issar']
Evaluation of a 3PL company: an approach of fuzzy modelling
521,288
Maintaining comfortable thermal conditions in an office environment is very important, as it can affect the quality of life of the occupants, their work productivity, and improve energy efficiency. One significant aspect of this task is how to balance the preferences of a number of occupants sharing the same space. We suggest three families of approaches to this problem, both for the case of optimising for a single time period, and for the problem of optimising over multiple different time periods. We analyse in detail the different approaches based on a number of natural properties, proving which of the properties the different families satisfy.
['Nic Wilson']
Approaches and Properties for Aggregating Occupant Preferences
604,395
Rocchio relevance feedback and latent semantic indexing (LSI) are well-known extensions of the vector space model for information retrieval (IR). This paper analyzes the statistical relationship between these extensions. The analysis focuses on each method's basis in least-squares optimization. Noting that LSI and Rocchio relevance feedback both alter the vector space model in a way that is in some sense least-squares optimal, we ask: what is the relationship between LSI's and Rocchio's notions of optimality? What does this relationship imply for IR? Using an analytical approach, we argue that Rocchio relevance feedback is optimal if we understand retrieval as a simplified classification problem. On the other hand, LSI's motivation comes to the fore if we understand it as a biased regression technique, where projection onto a low-dimensional orthogonal subspace of the documents reduces model variance.
['Miles Efron']
Query expansion and dimensionality reduction: Notions of optimality in Rocchio relevance feedback and latent semantic indexing
148,916
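For reference, the two least-squares mechanisms contrasted above have compact numpy forms: the classic Rocchio update and LSI's rank-k projection of the term-document matrix. The alpha/beta/gamma values below are conventional defaults, not parameters from the paper.

```python
# Hedged sketch: Rocchio relevance feedback and LSI as least-squares operations.
import numpy as np

def rocchio(query, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """query: (d,) vector; rel_docs/nonrel_docs: (n, d) term-weight matrices."""
    q_new = alpha * query
    if len(rel_docs):
        q_new += beta * rel_docs.mean(axis=0)       # centroid of relevant docs
    if len(nonrel_docs):
        q_new -= gamma * nonrel_docs.mean(axis=0)   # centroid of non-relevant docs
    return np.clip(q_new, 0.0, None)                # common: drop negative weights

def lsi_project(term_doc, k):
    """Rank-k least-squares (SVD) approximation of the term-document matrix."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```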
Twitter, a popular microblogging service, has received much attention recently. An important characteristic of Twitter is its real-time nature. For example, when an earthquake occurs, people make many Twitter posts (tweets) related to the earthquake, which enables detection of earthquake occurrence promptly, simply by observing the tweets. As described in this paper, we investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We consider each Twitter user as a sensor and apply Kalman filtering and particle filtering, which are widely used for location estimation in ubiquitous/pervasive computing. The particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (96% of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and sends e-mails to registered users. Notification is delivered much faster than the announcements that are broadcast by the JMA.
['Takeshi Sakaki', 'M. Okazaki', 'Yutaka Matsuo']
Earthquake shakes Twitter users: real-time event detection by social sensors
65,743
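As a rough illustration of the social-sensor idea above, the sketch below runs a bootstrap particle filter over geotagged tweet coordinates to track an event centre. The noise levels, initialisation, and synthetic track are assumptions, not the paper's tuned configuration.

```python
# Hedged sketch: bootstrap particle filter treating each tweet as a noisy
# observation of the event location (lat, lon).
import numpy as np

rng = np.random.default_rng(0)

def track_event(observations, n_particles=1000, motion_std=0.05, obs_std=0.5):
    obs = np.asarray(observations, float)
    # Initialise particles around the first report to avoid weight degeneracy.
    particles = obs[0] + rng.normal(0.0, 1.0, (n_particles, 2))
    estimates = []
    for z in obs:
        particles += rng.normal(0.0, motion_std, particles.shape)  # diffusion step
        d2 = ((particles - z) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / obs_std**2) + 1e-300                # Gaussian likelihood
        w /= w.sum()
        estimates.append(w @ particles)                            # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)            # resample
        particles = particles[idx]
    return np.array(estimates)

# Usage: noisy tweet coordinates drifting like a typhoon track near Tokyo.
true_track = np.cumsum(np.full((50, 2), 0.1), axis=0) + [35.0, 139.7]
tweets = true_track + rng.normal(0.0, 0.5, true_track.shape)
print(track_event(tweets)[-1], "vs true", true_track[-1])
```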
A new digital current sensing technique suitable for low power energy harvesting systems
['M. A. Ibrahim', 'Ayman Eltaliawy', 'Hassan Mostafa', 'Yehea I. Ismail']
A new digital current sensing technique suitable for low power energy harvesting systems
625,390
An accurate approximate formula of the die-out probability in a SIS epidemic process on a network is proposed. The formula contains only three essential parameters: the largest eigenvalue of the adjacency matrix of the network, the effective infection rate of the virus, and the initial number of infected nodes in the network. The die-out probability formula is compared with the exact die-out probability in complete and Erdős–Rényi graphs, which demonstrates the accuracy. Furthermore, as an example, the formula is applied to the N-Intertwined Mean-Field Approximation, to explicitly incorporate the die-out.
['Qiang Liu', 'Piet Van Mieghem']
Die-out Probability in SIS Epidemic Processes on Networks
892,671
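The abstract names the three ingredients of the formula. The sketch below combines them in the standard branching-process form (1/x)^i0 with x = tau * lambda_1; this particular closed form is an assumption consistent with those ingredients, not a quotation of the paper's exact expression.

```python
# Hedged sketch: a branching-process-style die-out estimate from the three
# parameters the abstract names (lambda_1, tau, i0). Formula shape assumed.
import numpy as np

def die_out_probability(adjacency, tau, i0):
    lam1 = np.max(np.linalg.eigvalsh(adjacency))  # largest adjacency eigenvalue
    x = tau * lam1                                # normalised effective infection rate
    if x <= 1.0:
        return 1.0                                # below threshold: epidemic dies out
    return (1.0 / x) ** i0                        # each seed dies out independently

# Usage: complete graph K10 (lambda_1 = 9), tau = 0.3, one initially infected node.
A = np.ones((10, 10)) - np.eye(10)
print(die_out_probability(A, tau=0.3, i0=1))      # ~0.37 under this heuristic
```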
The perception of lack of control over resources deployed in the cloud may represent one of the critical factors for an organization to decide to cloudify or not their own services. Furthermore, in spite of the idea of offering security-as-a-service, the development of secure cloud applications requires security skills that can slow down the adoption of the cloud for nonexpert users. In the recent years, the concept of Security Service Level Agreements (Security SLA) is assuming a key role in the provisioning of cloud resources. This paper presents the SPECS framework, which enables the development of secure cloud applications covered by a Security SLA. The SPECS framework offers APIs to manage the whole Security SLA life cycle and provides all the functionalities needed to automatize the enforcement of proper security mechanisms and to monitor userdefined security features. The development process of SPECS applications offering security-enhanced services is illustrated, presenting as a real-world case study the provisioning of a secure web server.
['Valentina Casola', 'Alessandra De Benedictis', 'Massimiliano Rak', 'Umberto Villano']
SLA-Based Secure Cloud Application Development: The SPECS Framework
688,123
With the huge amount of observed air quality and component data, analyzing and tracing pollutant diffusion paths is a great challenge. Partitioning the air pollution sources (air quality observation stations) into subnetworks helps greatly in tracing air pollution diffusion paths. Conventional air pollution source clustering methods, which are based on geography or pollutant levels, show weak correlation with pollution transmission links. To overcome this problem, a method for clustering air pollution sources via an activation force (AF) model is introduced in this paper. We model the connections of the pollution sources by AF so that the relationships among the observation stations and the coincidence of the transmission links can be modeled effectively. With the affinity matrix obtained via AF modeling, we cluster the air pollution sources via modularity measurement. Compared to the K-means clustering method, which is based purely on the air quality index of pollutants, the proposed approach shows several advantages in air pollution network clustering.
['Di Huang', 'Ni Zhang', 'Hong Yu', 'Huanyu Zhou', 'Zhanyu Ma', 'Weisong Hu', 'Jun Guo']
Activation force-based air pollution observation station clustering
275,051
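The second stage described above, modularity-based clustering of an affinity matrix, can be sketched with networkx as follows; the random affinity values here merely stand in for the activation-force scores the paper computes.

```python
# Hedged sketch: modularity clustering of stations from a weighted affinity matrix.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
n_stations = 12
affinity = rng.random((n_stations, n_stations))   # placeholder for AF scores
affinity = (affinity + affinity.T) / 2.0          # symmetrise
np.fill_diagonal(affinity, 0.0)

G = nx.from_numpy_array(affinity)                 # weighted graph (attr 'weight')
communities = greedy_modularity_communities(G, weight="weight")
for k, c in enumerate(communities):
    print(f"cluster {k}: stations {sorted(c)}")
```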
A new concept for constructing an intelligent mobile system is proposed. We describe the reasons why a new architecture is necessary for mobile systems in intelligent spaces. Intelligent spaces are rooms or areas equipped with sensors, networks and computers, and they are expected to be an authentic future environment. Once the environment acquires intelligence, it is no longer a distinct part of an intelligent mobile system; rather, the mobile system becomes a physical agent of the intelligent space. In this paper, we present experiments that show what is possible for a mobile robot in an intelligent space.
['Joo-Ho Lee', 'Guido Appenzeller', 'Hideki Hashimoto']
Physical agent for sensored, networked and thinking space
455,384
Recent years have seen rapid cellular expansion in urban and rural India [1], providing an avenue to bridge the digital divide. However there is little understanding of the performance of cellular data connectivity in different geographies. We take a first step towards this. We are planning to characterize the performance of cellular data networks available across different locations, rural and urban, in India through a large scale experimental setup consisting of more than 50 measurement points across the country. We hope our findings will reveal capacity provisioning and network design characteristics that telecom operators follow in deploying 2G/3G connectivity in different areas.
['Amitsingh Chandele', 'Zahir Koradia', 'Vinay J. Ribeiro', 'Aaditeshwar Seth', 'Sipat Triukose', 'Sebastien Ardon', 'Anirban Mahanti']
2G/3G network measurements in rural areas of India
21,723
In order to reduce the total cost of a dual source drinking water treatment plant operation, a comprehensive hybrid prediction model was built to estimate the necessary chemicals dosage and pumping energy costs for alternative source selection scenarios. Correlations between the water quality parameters and the required treatment chemicals were estimated using historical data and the expected pH variations associated with each chemical addition, which was based on the Caldwell-Lawrence diagram. The pumping energy costs were also estimated using a data-driven approach that was based on historical plant data. The research has practical implications for water treatment operators seeking to minimize plant operational costs through selecting raw water intake volumes for their treatment plant based on multiple source options and offtake tower gate levels. Future research seeks to better link current and future water treatment dosage cost predictions directly to water quality measurements taken from vertical profiling systems. Prediction model built to estimate treatment costs for a dual source drinking WTP. Model optimises source selection proportions based on treatment and pumping costs. Model aids WTP plant operators to alter the source selection strategy in extreme events. Model utilisation for source selection optimization delivers life cycle monetary savings.
['Edoardo Bertone', 'Rodney Anthony Stewart', 'Hong Zhang', "Kelvin O'Halloran"]
Hybrid water treatment cost prediction model for raw water intake optimization
568,004
Resolution complexity of perfect matching principles for sparse graphs.
['Dmitry Itsykson', 'Mikhail Slabodkin', 'Dmitry Sokolov']
Resolution complexity of perfect matching principles for sparse graphs.
753,900
A Method of Analysis and Visualization of Structured Datasets Based on Centrality Information
['Wojciech Czech', 'Radosław Łazarz']
A Method of Analysis and Visualization of Structured Datasets Based on Centrality Information
848,262
We introduce a new nonassociative class of Abel-Grassmann's groupoids, namely intraregular, and characterize it in terms of its (∈, ∈∨q)-fuzzy quasi-ideals.
['Bijan Davvaz', 'Madad Khan', 'Saima Anis', 'Shamsul Haq']
Generalized Fuzzy Quasi-Ideals of an Intraregular Abel-Grassmann's Groupoid
89,041
Enforcing Privacy in Decentralized Mobile Social Networks.
['Hiep H. Nguyen', 'Abdessamad Imine', 'Michaël Rusinowitch']
Enforcing Privacy in Decentralized Mobile Social Networks.
783,889
In a previous paper [Weiss, H. J., R. A. Rasmussen. 2007. Lessons from modeling Sudoku in Excel. INFORMS Trans. Ed. 7(2), http://ite.pubs.informs.org/Vol7No2/Weiss/] we demonstrated lessons that can be learned by formulating Sudoku in Excel using Solver's standard tools. In this paper we use the advanced tools and solver engines available with the Premium Solver Platform in order to demonstrate more sophisticated lessons regarding optimization modeling. Optimization modeling is a skill developed by building and testing alternative formulations for new problems. This paper gives advanced undergraduate and graduate students an opportunity to develop the craft of optimization modeling by presenting the construction of two new alternatives for modeling Sudoku in Excel. We do not present these models because they are more efficient than the previous models; in fact, one of them does not work well at all. The main reason is to highlight strengths and weaknesses of different modeling approaches, and to display some of the additional modeling capabilities available in the Premium Solver Platform beyond the standard Solver.
['Rasmus Rasmussen', 'Howard J. Weiss']
Advanced Lessons on the Craft of Optimization Modeling Based on Modeling Sudoku in Excel
521,093
This paper deals with the existence of symmetric positive solutions for a class of singular Sturm-Liouville-like boundary value problems with a one-dimensional p-Laplacian operator. By using the fixed point theorem of cone expansion and compression of norm type, the existence of positive solutions is established even though the nonlinear term contains the first derivative of the unknown function.
['Jingbao Yang', 'Zhongli Wei', 'Ke Liu']
Existence of symmetric positive solutions for a class of Sturm-Liouville-like boundary value problems
336,215
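For readers unfamiliar with the operator class named above, a standard way to write a one-dimensional p-Laplacian boundary value problem is shown below; the particular boundary conditions are one common Sturm-Liouville-like choice, not necessarily those of the paper.

```latex
% Hedged sketch: the one-dimensional p-Laplacian BVP in its standard form,
% with the nonlinearity f depending on the first derivative as the abstract states.
\[
  \bigl(\varphi_p(u'(t))\bigr)' + f\bigl(t, u(t), u'(t)\bigr) = 0, \quad t \in (0,1),
  \qquad \varphi_p(s) = |s|^{p-2} s, \; p > 1,
\]
\[
  \alpha\, u(0) - \beta\, u'(0) = 0, \qquad \gamma\, u(1) + \delta\, u'(1) = 0.
\]
```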
This paper evaluates the effect of energy trading networks on the volatility of coal, oil, natural gas, and electricity. This research conducts a longitudinal analysis using a time series of static coal trading networks to generate a dynamic trading network, and uses the component causality index as a leading indicator of systemic risk. This research finds that the component causality index, based on degree centrality, anticipates or moves together with coal volatility and, to a lesser degree, with gas and electricity volatility during the period 2007–14. The broad impact of this research lies in the understanding of the mechanisms of instability and risk in the energy sector as a result of a complex interaction of the network of producers and traders.
['Germán G. Creamer']
Trading network and systemic risk in the energy market
970,297
Designing a Decision Support System to Prevent Products Missing from the Shelf.
['Dimitris A. Papakiriakopoulos', 'Angeliki Karagiannaki', 'Apostolos N. Giovanis', 'Spiridon Binioris']
Designing a Decision Support System to Prevent Products Missing from the Shelf.
744,007
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case there exist several ones. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose a MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
['Annika Röhl', 'Alexander Bockmayr']
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks
956,916
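The core MILP described above, minimising the number of active reactions subject to steady state and a required functionality, can be sketched with PuLP as follows; the toy stoichiometric matrix, big-M bound, and growth requirement are illustrative assumptions, not the paper's formulation in full detail.

```python
# Hedged sketch: minimum active-reaction subnetwork as a MILP.
import numpy as np
import pulp

S = np.array([[ 1, -1,  0,  0],   # toy stoichiometry (metabolites x reactions)
              [ 0,  1, -1, -1]])
n_mets, n_rxns = S.shape
biomass, v_min, M = 2, 1.0, 1000.0  # reaction 2 must carry at least v_min flux

prob = pulp.LpProblem("min_subnetwork", pulp.LpMinimize)
v = [pulp.LpVariable(f"v{j}", -M, M) for j in range(n_rxns)]           # fluxes
y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n_rxns)]    # kept?

prob += pulp.lpSum(y)                                # objective: active reactions
for i in range(n_mets):                              # steady state: S v = 0
    prob += pulp.lpSum(S[i, j] * v[j] for j in range(n_rxns)) == 0
for j in range(n_rxns):                              # flux only if reaction kept
    prob += v[j] <= M * y[j]
    prob += v[j] >= -M * y[j]
prob += v[biomass] >= v_min                          # required functionality

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(pulp.value(yj)) for yj in y])             # e.g. [1, 1, 1, 0]
```

Enumerating all minimum subnetworks, as the abstract describes, would then proceed by re-solving with an added cut that excludes each previously found support.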
In the context of Maximum Likelihood (ML) source separation in a semi-blind scenario, where the spectra of the sources are known and distinct, the likelihood equations amount to a set of matrix decompositions (known as the "Sequentially Drilled" Joint Congruence Transformation (SeDJoCo)). However, quite often multiple solutions of SeDJoCo exist, only one of which is the optimal solution, corresponding to the global maximum. In this paper we characterize the different solutions and propose a procedure for detecting whether a given solution is sub-optimal. Moreover, for such sub-optimal solutions we propose a procedure for re-initializing an iterative solver so as to converge to the optimal solution. Using simulation, we present the empirical probability to encounter a sub-optimal solution (by a given iterative algorithm), as well as the resulting separation improvement when applying our proposed re-initialization approach in such cases.
['Arie Yeredor', 'Yao Cheng', 'Martin Haardt']
On multiple solutions of the "sequentially drilled" joint congruence transformation (SeDJoCo) problem for semi-blind source separation
804,016
Vision is one of the most important senses of a mobile robot. Conventional robot vision relies on so-called simultaneous localization and mapping (SLAM) technology, but this technology is usually used to recognize the macrostructure of the environment. The objects in the scene are commonly treated as mere landmarks, and their detailed 3D information is ignored. However, our cognitive robot must find objects in a large-scale environment and explore them, so the 3D data of objects can no longer be neglected. In this paper, we develop a vision system for our cognitive robot based on classic batch monocular SLAM technology. The two main contributions of our work are the introduction of a loop closure method with a corresponding map storage approach, and a new 3D spatial cluster denoising technique for data refinement.
['Kai Zhou', 'Michael Zillich', 'Markus Vincze']
Reconstruction of three dimensional spatial clusters using monocular camera
187,756
This paper presents a simple and effective method to compute the pixel saliency with full resolution in an image. First, the proposed method creates an image representation of four color channels through the modified computation on the basis of Itti et al. [5]. Then the most informative channel is automatically identified from the derived four color channels. Finally, the pixel saliency is computed through the simple combination of contrast feature and spatial attention function on the individual channel. The proposed method is computationally very simple, but it achieves a very good performance in the comparison experiments with six other saliency detection methods. On the challenging database with 1,000 images, it outperforms six other methods in both identifying salient pixels and segmenting salient regions.
['Hui Zhang', 'Weiqiang Wang', 'Guiping Su', 'Lijuan Duan']
A simple and effective saliency detection approach
481,415
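The final combination step described above, a contrast feature modulated by a spatial attention function on a single channel, might be sketched as below; the Gaussian centre-bias is an illustrative choice, not the paper's exact attention function.

```python
# Hedged sketch: full-resolution saliency = global contrast x centre-weighted attention.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(channel, sigma_blur=3.0, attention_scale=0.35):
    h, w = channel.shape
    smooth = gaussian_filter(channel.astype(float), sigma_blur)
    contrast = np.abs(smooth - smooth.mean())            # global-contrast feature
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2
    attention = np.exp(-d2 / (2 * attention_scale**2))   # centre-biased prior
    s = contrast * attention
    return s / (s.max() + 1e-8)                          # saliency map in [0, 1]
```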
Search satisfaction is defined as the fulfillment of a user’s information need. Characterizing and predicting the satisfaction of search engine users is vital for improving ranking models, increasing user retention rates, and growing market share. This article provides an overview of the research areas related to user satisfaction. First, we show that whenever users choose to defect from one search engine to another they do so mostly due to dissatisfaction with the search results. We also describe several search engine switching prediction methods, which could help search engines retain more users. Second, we discuss research on the difference between good and bad abandonment, which shows that in approximately 30% of all abandoned searches the users are in fact satisfied with the results. Third, we catalog techniques to determine queries and groups of queries that are underperforming in terms of user satisfaction. This can help improve search engines by developing specialized rankers for these query patterns. Fourth, we detail how task difficulty affects user behavior and how task difficulty can be predicted. Fifth, we characterize satisfaction and we compare major satisfaction prediction algorithms.
['Ovidiu Dan', 'Brian D. Davison']
Measuring and Predicting Search Engine Users’ Satisfaction
857,186
Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on-the-fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable, and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data, but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering. Rendering is from uncompressed data, avoiding the need for random access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU-friendly formulation of range decoding, and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures with a quality level similar to traditional DXT compression. The total game texture set is stored on the GPU in compressed form, and decompressed for use in a fraction of a second per scene.
['Marc Olano', 'Dan Baker', 'Wesley Griffin', 'Joshua Barczak']
Variable bit rate GPU texture decompression
292,573
Relational fuzzy c-means and kernel fuzzy c-means using an object-wise β-spread transformation
['Yuchi Kanzawa']
Relational fuzzy c-means and kernel fuzzy c-means using an object-wise β-spread transformation
776,535
Word sense disambiguation for Arabic text categorization.
['Meryeme Hadni', 'Said El Alaoui Ouatik', 'Abdelmonaime Lachkar']
Word sense disambiguation for Arabic text categorization.
995,407
Version control branching allows an organization to parallelize its development efforts. Releasing a software system developed in this manner requires release managers, and other project stakeholders, to make decisions about how to integrate the branched work. This group decision-making process becomes very complex in the case of large-scale parallel development. To better understand the information needs of release managers in this context, we conducted an interview study at a large software company. Our analysis of the interviews provides a view into how release managers make integration decisions, organized around ten key factors. Based on these factors, we discuss specific information needs for release managers and how the needs can be met in future work.
['Shaun Phillips', 'Guenther Ruhe', 'Jonathan Sillito']
Information needs for integration decisions in the release process of large-scale parallel development
482,861
Addressing the problem of joint segmentation, reconstruction and tracking of multiple targets from multi-view videos. Casting the problem as data association among superpixels extracted from images. Optimizing a flow graph to solve the global data association in order to segment and reconstruct targets. Obtaining the graph solution quickly by performing two stages of optimization. Conducting experiments on known public datasets and analyzing the proposed algorithm. Tracking of multiple targets in a crowded environment using tracking-by-detection algorithms has been investigated thoroughly. Although these techniques are quite successful, they suffer from the loss of much detailed information about targets in detection boxes, which is highly desirable in many applications like activity recognition. To address this problem, we propose an approach that tracks superpixels instead of detection boxes in multi-view video sequences. Specifically, we first extract superpixels from detection boxes and then associate them within each detection box, over several views and time steps, which leads to a combined segmentation, reconstruction, and tracking of superpixels. We construct a flow graph and incorporate both visual and geometric cues in a global optimization framework to minimize its cost. Hence, we simultaneously achieve segmentation, reconstruction and tracking of targets in video. Experimental results confirm that the proposed approach outperforms state-of-the-art techniques for tracking while achieving comparable results in segmentation.
['Mohammadreza Babaee', 'Yue You', 'Gerhard Rigoll']
Combined segmentation, reconstruction, and tracking of multiple targets in multi-view video sequences
867,532
Counting extensions
['Joel Spencer']
Counting extensions
715,393
In this article, we build dynamic models of 2D compliant links to evaluate injury level in a human-robot interaction. Safety is a premium concern for co-robotic systems. It has been studied that using compliant links in a robot can greatly reduce the injury level. Since most safety criteria are based on tolerance of acceleration of the operator's head during the impact, an efficient and yet accurate dynamic model of compliant links is needed. In this paper, we compare three dynamic models for calculating acceleration and head injury criterion: the compliant Beam-Spring-Mass (BSM) model, the Mass-Spring-Mass (MSM) model and the Link-Spring-Mass (LSM) model. For the MSM model and LSM model, we obtain analytic expressions of acceleration. While numerical results are achieved for the compliant BSM model. To develop the compliant BSM model, we compared three different methods: the Pseudo-Rigid-Body (PRB) model, the Finite-Segment-Model (FSM), and the Assumed-Mode-Method (AMM). Finally, all these models are validated by human-robot impact simulation programs built in Matlab. The acceleration from these simulations can be used to quantitatively measure the injury level during an impact.
['Yu She', 'Deshan Meng', 'Hongliang Shi', 'Hai-Jun Su']
Dynamic modeling of a 2D compliant link for safety evaluation in human-robot interactions
575,420
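Since the models above are evaluated through acceleration-based safety criteria, a direct sketch of computing the head injury criterion (HIC) from a sampled acceleration trace is given below; the 36 ms window cap and the brute-force search are conventional choices, not taken from the paper.

```python
# Hedged sketch: HIC = max over windows [t1, t2] of (t2 - t1) * (mean accel)^2.5,
# with acceleration in g and time in seconds.
import numpy as np

def hic(accel_g, dt, max_window=0.036):
    """accel_g: head acceleration samples in g; dt: sample spacing in seconds."""
    cum = np.concatenate([[0.0], np.cumsum(accel_g) * dt])  # running integral of a(t)
    n = len(accel_g)
    best = 0.0
    for i in range(n):
        j_max = min(n, i + int(max_window / dt))            # cap the window length
        for j in range(i + 1, j_max + 1):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T                     # mean acceleration on window
            best = max(best, T * avg ** 2.5)
    return best

# Usage: a 10 ms half-sine impact pulse peaking at 80 g, sampled at 10 kHz.
dt = 1e-4
t = np.arange(0.0, 0.01, dt)
pulse = 80.0 * np.sin(np.pi * t / 0.01)
print(hic(pulse, dt))
```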
Evapotranspiration (ET) is one of the important components of surface energy and water cycles. Accurate information on ET is valuable for water management. This study was conducted to investigate the dependence of ET on land cover types, forest species and forest age in Xiamen City, China using remote sensing data. The information on forest species and age was retrieved from the forest inventory database produced in 2003. Remote sensing data from Landsat-5 TM acquired on November 5, 2006 were used to produce a land cover map and to retrieve ground surface albedo, normalized difference vegetation index (NDVI), and land surface temperature, which were employed in conjunction with meteorological data (air temperature, relative humidity, and sunshine hours) to estimate daily ET at a 30 m resolution using an empirical model based on the energy balance principle for the study area. The derived ET shows distinct spatial variations, mainly caused by land cover types and the species and development stages of forests. The daily average ET of water, forest, built-up/bare soil, and cropland is 5.59, 3.91, 2.92, and 2.73 mm, respectively. The averages of daily ET are 4.37, 4.36, 4.30, 4.11, 4.00, and 2.85 mm for Chinese Fir, Schimacrenata, Slash pine, Tea trees, Masson pine, and Longan, respectively. The 5-year binned averages of daily ET increase with forest age at the rate of 0.20 mm d⁻¹ (10 a)⁻¹ for all forests with ages in the range from 1 to 60 in this study area (R² = 0.73). However, the changes of ET with forest age differ among species. The changes in daily average ET of Chinese Fir and tea trees with age are not detectable. The averages of daily ET of Slash pine, Schimacrenata, and Masson pine increase with age significantly. Daily ET of Longan increases quickly during the early development stage and then decreases gradually with tree age above 6. The daily ET of Longan aged above 15 does not show an obvious trend.
['Jingfang Zhu', 'Weimin Ju', 'Ying Ren']
Effects of land cover types and forest age on evapotranspiration detected by remote sensing in Xiamen City, China
489,176
This study investigated whether communication modality affects talkers’ speech adaptation to an interlocutor exposed to background noise. It was predicted that adaptations to lip gestures would be greater and acoustic ones reduced when communicating face-to-face. We video recorded 14 Australian-English talkers (Talker A) speaking in a face-to-face or auditory only setting with their interlocutors who were either in quiet or noise. Focusing on keyword productions, acoustic-phonetic adaptations were examined via measures of vowel intensity, pitch, keyword duration, vowel F1/F2 space and VOT, and visual adaptations via measures of vowel interlip area. The interlocutor adverse listening conditions lead Talker A to reduce speech rate, increase pitch and expand vowel space. These adaptations were not significantly reduced in the face-to-face setting although there was a trend for a smaller degree of vowel space expansion than in the auditory only setting. Visible lip gestures were more enhanced overall in the face-to-face setting, but also increased in the auditory only setting when countering the effects of noise. This study therefore showed only small effects of communication modality on speech adaptations.
['Valerie Hazan', 'Jeesun Kim']
Acoustic and visual adaptations in speech produced to counter adverse listening conditions
602,889
In this paper, we propose a new fuzzy forecasting method based on two-factors second-order fuzzy-trend logical relationship groups (TSFTLRGs), particle swarm optimization (PSO) techniques and similarity measures between the subscripts of fuzzy sets (FSs) for forecasting the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the New Taiwan Dollar/US Dollar (NTD/USD) exchange rates. First, we propose a PSO-based optimal-intervals partition algorithm to get the optimal partition of the intervals in the universe of discourse (UOD) of the main factor TAIEX and to get the optimal partition of the intervals in the UOD of the secondary factor SF , where SF ∈ {Dow Jones, NASDAQ, M1B}. Based on the proposed PSO-based optimal-intervals partition algorithm, the constructed TSFTLRGs, and similarity measures between the subscripts of FSs, we propose a new method for forecasting the TAIEX and the NTD/USD exchange rates. The main contribution of this paper is that we propose a new fuzzy forecasting method based on TSFTLRGs, PSO techniques and similarity measures between the subscripts of FSs for forecasting the TAIEX and the NTD/USD exchange rates to get higher forecasting accuracy rates than the ones of the existing fuzzy forecasting methods.
['Shyi-Ming Chen', 'Wen-Shan Jian']
Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups, similarity measures and PSO techniques
946,882
We present a new variational approach to the problem of computed tomography reconstruction from sparse data. We use a Tikhonov regularisation (quite different from that of Louis [1985]) which handles discrete or nonuniform grids without approximation. Our algorithm requires calculation of a Green's function on a finite region, and we show how this can be done very efficiently using the numerical/analytic boundary element method (BEM).
['Victor Solo']
Regularisation of the limited data computed tomography problem via the boundary element method
16,790
Nanoscale systems on chip will integrate billion-gate designs. The challenge is to find a scalable HW/SW design style for future CMOS technologies. Tiled architectures suggest a possible path: "small" processing tiles connected by "short wires". A typical SHAPES tile contains a VLIW floating-point DSP, a RISC, a DNP (Distributed Network Processor), distributed on-chip memory, the POT (a set of Peripherals On Tile), plus an interface for DXM (Distributed External Memory). The SHAPES routing fabric connects on-chip and off-chip tiles, weaving a distributed packet switching network. A 3D next-neighbour engineering methodology is adopted for off-chip networking and maximum system density. The SW challenge is to provide a simple and efficient programming environment for tiled architectures. SHAPES will investigate layered system software, which does not destroy the algorithmic and distribution information provided by the programmer and is fully aware of the HW paradigm. For efficiency and QoS, the system SW manages intra-tile and inter-tile latencies, bandwidths, and computing resources, using static and dynamic profiling. The SW accesses the on-chip and off-chip networks through a homogeneous interface.
['P. Paolucci', 'Ahmed Amine Jerraya', 'Rainer Leupers', 'Lothar Thiele', 'Piero Vicini']
SHAPES:: a tiled scalable software hardware architecture platform for embedded systems
410,211
EVALUATION OF CROSS-CULTURAL WEB INFORMATION SYSTEM DESIGN GUIDELINES
['Gatis Vitols', 'Irina Arhipova', 'Yukako Hirata']
EVALUATION OF CROSS-CULTURAL WEB INFORMATION SYSTEM DESIGN GUIDELINES
768,762
Parameter identification problems typically consist of a model equation, e.g., a (system of) ordinary or partial differential equation(s), and the observation equation. In the conventional reduced setting, the model equation is eliminated via the parameter-to-state map. Alternatively, one might consider both sets of equations (model and observations) as one large system, to which some regularization method is applied. The choice of the formulation (reduced or all-at-once) can make a large difference computationally, depending on which regularization method is used: Whereas almost the same optimality system arises for the reduced and the all-at-once Tikhonov method, the situation is different for iterative methods, especially in the context of nonlinear models. In this paper we will exemplarily provide some convergence results for all-at-once versions of variational, Newton type, and gradient based regularization methods. Moreover we will compare the implementation requirements for the respective all-at-once...
['Barbara Kaltenbacher']
Regularization based on all-at-once formulations for inverse problems
689,023
In the decision making process of new product development, companies need to understand consumer preference for newly developed products. A recently developed belief rule based (BRB) inference methodology is used to formulate the relationship between consumer preference and product attributes. However, when the number of product attributes is large, the methodology encounters the challenge of dealing with an oversized rule base. To overcome the challenge, the paper incorporates factor analysis into the BRB methodology and develops a BRB expert system for predicting consumer preference of a new product. Firstly, a small number of factors are extracted from product attributes by conducting both exploratory and confirmatory factor analysis. Secondly, a belief rule base is constructed to model the causal relationships between the characteristic factors and consumer preference for products using experts' knowledge. Furthermore, a BRB expert system is developed for predicting consumer preference in new product development, where the factor values transformed from product attributes are taken as inputs. Relevant rules in the system are activated by the input data, and then the activated rules are aggregated using the evidential reasoning (ER) approach to generate the predicted consumer preference for each product. Finally, the BRB expert system is illustrated using the data collected from 100 consumers of several tea stores through a market survey. The results show that the prototype of the BRB expert system has superior fitting capability on training data and high prediction accuracy on testing data, and it has great potential to be applied to consumer preference prediction in new product development.
['Ying Yang', 'Chao Fu', 'Yu-Wang Chen', 'Dong-Ling Xu', 'Shanlin Yang']
A belief rule based expert system for predicting consumer preference in new product development
561,922
Excessively choosing immediate over larger future rewards, or delay discounting (DD), associates with multiple clinical conditions. Individual differences in DD likely depend on variations in the activation of and functional interactions between networks, representing possible endophenotypes for associated disorders, including alcohol use disorders (AUDs). Numerous fMRI studies have probed the neural bases of DD, but investigations of large-scale networks remain scant. We addressed this gap by testing whether activation within large-scale networks during Now/Later decision-making predicts individual differences in DD. To do so, we scanned 95 social drinkers (18–40 years old; 50 women) using fMRI during hypothetical choices between small monetary amounts available “today” or larger amounts available later. We identified neural networks engaged during Now/Later choice using independent component analysis and tested the relationship between component activation and degree of DD. The activity of two component...
['Amanda Elton', 'Christopher T. Smith', 'Michael H. Parrish', 'Charlotte A. Boettiger']
Neural Systems Underlying Individual Differences in Intertemporal Decision-making
907,552
Opportunistic scheduling algorithms are effective in exploiting channel variations and maximizing system throughput in multi-rate wireless networks. However, most scheduling algorithms ignore the per-user quality of service (QoS) requirements and try to allocate resources (i.e., the time slots) among multiple users. This leads to a phenomenon commonly referred to as the exposure problem, wherein the algorithms fail to satisfy the minimum slot requirements of the users due to the substitutability and complementarity of user slot requirements. To eliminate this exposure problem, we propose a novel scheduling algorithm based on a two-phase combinatorial reverse auction, with the primary objective of maximizing the number of satisfied users in the system. We also consider maximizing the system throughput as a secondary objective. In the proposed scheme, multiple users bid to acquire the required number of time slots, and the allocations are made to satisfy the two objectives in a sequential manner. We provide an approximate solution to the proposed scheduling problem, which is NP-complete. We prove that our proposed algorithm is within (1 + log m) times the optimal solution, where m is the number of slots in a schedule cycle. We also present an extension to this algorithm which can support more satisfied users at the cost of additional complexity. Numerical results are provided to compare the proposed scheduling algorithms with other competitive schemes.
['Sourav Pal', 'Preetam Ghosh', 'Amin R. Mazloom', 'Sumantra R. Kundu', 'Sajal K. Das']
Two Phase Scheduling Algorithm for Maximizing the Number of Satisfied Users in Multi-Rate Wireless Systems
195,367
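The (1 + log m) guarantee quoted above is characteristic of greedy, set-cover-style allocation. The sketch below is one plausible greedy realisation of the satisfy-users-first objective, not the paper's exact two-phase auction; demands and per-slot rates are arbitrary example numbers.

```python
# Hedged sketch: greedy slot allocation maximising satisfied users first,
# then throughput, by giving each satisfiable user its highest-rate free slots.
def greedy_schedule(demands, rates, m):
    """demands[u]: slots user u needs; rates[u][s]: rate of user u in slot s."""
    free = set(range(m))
    allocation = {}
    for u in sorted(range(len(demands)), key=lambda u: demands[u]):
        if demands[u] > len(free):
            continue                                   # user u cannot be satisfied
        best = sorted(free, key=lambda s: -rates[u][s])[:demands[u]]
        allocation[u] = best                           # highest-rate free slots for u
        free -= set(best)
    return allocation

# Usage: 3 users, 4 slots.
demands = [2, 1, 3]
rates = [[5, 1, 3, 2],
         [4, 4, 4, 4],
         [2, 2, 2, 2]]
print(greedy_schedule(demands, rates, m=4))   # two of three users satisfied
```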
Rural public libraries have tremendous potential to play an active role in community economic development. Libraries are a natural choice for facilitating economic development activities in rural communities short of resources commonly available in more heavily populated areas. This article is based on semistructured interviews with librarians and community development officials in five rural communities. The results of the investigation indicate that those libraries had success in community economic development. Additionally, the study suggests that rural public libraries may become critical community economic development resources if rural librarians cultivate stronger, more integral relationships with community developers and other community agencies.
['Jeffrey Hancks']
Rural Public Libraries' Role in Community Economic Development
257,475
Sensor nodes are low-cost, low-power devices that are used to collect physical data and monitor environmental conditions from remote locations. Wireless Sensor Networks (WSN) are collections of sensor nodes coordinating among themselves to perform a particular task. Localization is defined as the deployment of the sensor nodes at known locations in the network. Localization techniques are classified as centralized and distributed. MDS-Map and SDP are examples of centralized algorithms, while Diffusion, Gradient, APIT, Bounding Box, Relaxation-Based and Coordinate System Stitching come under distributed algorithms. In this paper, we propose a new hybrid localization technique, which combines the advantages of the centralized and distributed algorithms and overcomes some of the drawbacks of the existing techniques. Simulations done with J-Sim prove the advantage of the proposed scheme in terms of localization error, calculated by varying the sink nodes, increasing node density and increasing communication range.
['Deepali Virmani', 'Satbir Jain']
An Efficient Hybrid Localization Technique in Wireless Sensor Networks
738,272
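Among the distributed techniques listed above, Bounding Box is simple enough to sketch in full: each anchor constrains the unknown node to an axis-aligned square of side twice the measured range, and the node is placed at the centre of the intersection of those squares. The example anchors and ranges below are arbitrary.

```python
# Hedged sketch: Bounding Box localization from anchor positions and range estimates.
import numpy as np

def bounding_box_localize(anchors, ranges):
    """anchors: (n, 2) known positions; ranges: (n,) measured distances."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    lo = (anchors - ranges[:, None]).max(axis=0)   # tightest lower corner
    hi = (anchors + ranges[:, None]).min(axis=0)   # tightest upper corner
    if np.any(lo > hi):
        raise ValueError("inconsistent ranges: empty box intersection")
    return (lo + hi) / 2.0                         # centre of the intersection box

# Usage: three anchors, each ~7 units from the node; estimate is near (5, 5).
print(bounding_box_localize([[0, 0], [10, 0], [0, 10]], [7, 7, 7]))
```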
Objective: Significant limitations exist in the timely and complete identification of primary and recurrent cancers for clinical and epidemiologic research. A SAS-based coding, extraction, and nomenclature tool (SCENT) was developed to address this problem. Materials and methods: SCENT employs hierarchical classification rules to identify and extract information from electronic pathology reports. Reports are analyzed and coded using a dictionary of clinical concepts and associated SNOMED codes. To assess the accuracy of SCENT, validation was conducted using manual review of pathology reports from a random sample of 400 breast and 400 prostate cancer patients diagnosed at Kaiser Permanente Southern California. Trained abstractors classified the malignancy status of each report. Results: Classifications of SCENT were highly concordant with those of abstractors, achieving κ of 0.96 and 0.95 in the breast and prostate cancer groups, respectively. SCENT identified 51 of 54 new primary and 60 of 61 recurrent cancer cases across both groups, with only three false positives in 792 true benign cases. Measures of sensitivity, specificity, positive predictive value, and negative predictive value exceeded 94% in both cancer groups. Discussion: Favorable validation results suggest that SCENT can be used to identify, extract, and code information from pathology report text. Consequently, SCENT has wide applicability in research and clinical care. Further assessment will be needed to validate performance with other clinical text sources, particularly those with greater linguistic variability. Conclusion: SCENT is proof of concept for SAS-based natural language processing applications that can be easily shared between institutions and used to support clinical and epidemiologic research.
['Justin Strauss', 'Chun Chao', 'Marilyn L. Kwan', 'Syed A. Ahmed', 'Joanne E. Schottinger', 'Virginia P. Quinn']
Identifying primary and recurrent cancers using a SAS-based natural language processing algorithm
459,797