Columns: abstract (string, 7–10.1k chars) · authors (string, 9–1.96k chars) · title (string, 6–367 chars) · __index_level_0__ (int64, 5–1,000k)
Using a model of substitutable goods, I determine generic conditions on tastes which guarantee that fixed prices are not optimal: the fully optimal tariff includes lotteries. That is, a profit-maximising seller would employ a haggling strategy. I show that in a class of cases the fully optimal selling strategy requires a seller not to focus on one good but to keep haggling over more than one good. This throws new light on the selling strategies used in diverse industries. These insights are used to provide a counter-example to the no-lotteries result of McAfee and McMillan (J. Econ. Theory 46 (1988) 335).
['John Thanassoulis']
Haggling over substitutes
154,843
This paper reports on our approach to the NTCIR-11 QALab task of answering questions from Japanese National Center Examinations for University Admissions. Our approach aims at identifying and comparing periods of world history in both the questions and the answer candidates. We cre
['Yasutomo Kimura', 'Fumitoshi Ashihara', 'Arnaud Jordan', 'Keiichi Takamaru', 'Yuzu Uchida', 'Hokuto Ototake', 'Hideyuki Shibuki', 'Michal Ptaszynski', 'Rafal Rzepka', 'Fumito Masui', 'Kenji Araki']
Using Time Periods Comparison for Eliminating Chronological Discrepancies between Question and Answer Candidates at QALab NTCIR11 Task
687,995
An Exploration of Users’ Needs for Multilingual Information Retrieval and Access
['Evgenia Vassilakaki', 'Emmanouel Garoufallou', 'Frances C. Johnson', 'Richard J. Hartley']
An Exploration of Users’ Needs for Multilingual Information Retrieval and Access
754,269
In this paper we scrutinize the applicability of network-layer mobility schemes, in particular Mobile IP, within a mobile environment. We argue that a transport layer solution is better suited to support handover mechanisms, because it will not only reduce the overall complexity of the handover process but also permit applications to influence handover management. We present a concept of Socketless TCP (SL-TCP) that may be used to enhance packet multiplexing and de-multiplexing at the transport layer to support address migration and handover mechanisms.
['Eric Beda', 'Neco Ventura']
A Transport Layer Approach to Handover in IP Networks
335,562
ABAEnrichment: an R package to test for gene set expression enrichment in the adult and developing human brain.
['Steffi Grote', 'Kay Prüfer', 'Janet Kelso', 'Michael Dannemann']
ABAEnrichment: an R package to test for gene set expression enrichment in the adult and developing human brain.
824,251
This article looks at how the logic of big data analytics, which promotes an aura of unchallenged objectivity to the algorithmic analysis of quantitative data, preempts individuals’ ability to self-define and closes off any opportunity for those inferences to be challenged or resisted. We argue that the predominant privacy protection regimes based on the privacy self-management framework of “notice and choice” not only fail to protect individual privacy, but also underplay privacy as a collective good. To illustrate this claim, we discuss how two possible individual strategies—withdrawal from the market (avoidance) and complete reliance on market-provided privacy protections (assimilation)—may result in less privacy options available to the society at large. We conclude by discussing how acknowledging the collective dimension of privacy could provide more meaningful alternatives for privacy protection.
['Lemi Baruh', 'Mihaela Popescu']
Big data analytics and the limits of privacy self-management
690,100
Disparate areas of machine learning have benefited from models that can take raw data with little preprocessing as input and learn rich representations of that raw data in order to perform well on a given prediction task. We evaluate this approach in healthcare by using longitudinal measurements of lab tests, one of the more raw signals of a patient's health state widely available in clinical data, to predict disease onsets. In particular, we train a Long Short-Term Memory (LSTM) recurrent neural network and two novel convolutional neural networks for multi-task prediction of disease onset for 133 conditions based on 18 common lab tests measured over time in a cohort of 298K patients derived from 8 years of administrative claims data. We compare the neural networks to a logistic regression with several hand-engineered, clinically relevant features. We find that the representation-based learning approaches significantly outperform this baseline. We believe that our work suggests a new avenue for patient risk stratification based solely on lab results.
['Narges Sharif Razavian', 'Jake R. Marcus', 'David Sontag']
Multi-task Prediction of Disease Onsets from Longitudinal Lab Tests
848,867
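As a concrete illustration of the multi-task setup described in the abstract above, here is a minimal sketch (assuming PyTorch; not the authors' implementation) of an LSTM that maps a sequence of 18 lab values to 133 disease-onset logits; the hidden size and sequence length are arbitrary illustrative choices:

```python
# Minimal multi-task LSTM sketch (illustrative, not the paper's code), assuming PyTorch.
# 18 lab tests per time step, 133 binary disease-onset targets, as in the abstract.
import torch
import torch.nn as nn

class MultiTaskLSTM(nn.Module):
    def __init__(self, n_labs=18, hidden=128, n_conditions=133):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_labs, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_conditions)  # one logit per condition

    def forward(self, x):                # x: (batch, time, n_labs)
        _, (h, _) = self.lstm(x)         # final hidden state summarizes the sequence
        return self.head(h[-1])          # (batch, n_conditions) logits

model = MultiTaskLSTM()
x = torch.randn(4, 36, 18)               # e.g. 36 lab snapshots over time
y = torch.randint(0, 2, (4, 133)).float()
loss = nn.BCEWithLogitsLoss()(model(x), y)  # one shared loss over all 133 tasks
```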
In 2007, Levstein and Maldonado computed the Terwilliger algebra of the Johnson graph $J(n,m)$ when $3m\leq n$. It is well known that the halved graphs of the incidence graph $J(n,m,m+1)$ of Johnson geometry are Johnson graphs. In this paper, we determine the Terwilliger algebra of $J(n,m,m+1)$ when $3m\leq n$, give two bases of this algebra, and calculate its dimension.
['Qian Kong', 'Benjian Lv', 'Kaishun Wang']
The Terwilliger Algebra of the Incidence Graphs of Johnson Geometry
970,569
Trauma patients suffer from a wide range of injuries, including vascular injuries. Such injuries can be difficult to immediately identify, only becoming detectable after repeated examinations and procedures. Large data sets of Shock Trauma patient treatment and care exist, spanning thousands to millions of patients, but machine learning techniques are needed to analyze this data and build appropriate models for predicting patient injury and outcome. We developed an initial approach for ensemble prediction of vascular injury in trauma care to aid doctors and medical staff in predicting injury and aiding in patient recovery. Of the classifiers tested, we found that stacked ensemble classifiers provided the best predictions. Prediction accuracy varied among vascular injuries (sensitivity ranging from 1.0 to 0.21), but demonstrated the feasibility of the approach for use on massive clinical datasets.
['Max Metzger', 'Michael Howard', 'Lee Kellogg', 'Rishi Kundi']
Ensemble prediction of vascular injury in Trauma care: Initial efforts towards data-driven, low-cost screening
589,235
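The stacked-ensemble idea above can be sketched as follows, assuming scikit-learn; the base learners, meta-learner, and synthetic data are illustrative stand-ins, not the classifiers or Shock Trauma data used in the paper:

```python
# Hedged sketch of a stacked ensemble for binary injury prediction (scikit-learn).
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data standing in for trauma records (illustrative only).
X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(random_state=0)),
                ('gb', GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))  # meta-learner on base predictions
print(stack.fit(Xtr, ytr).score(Xte, yte))
```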
We give a definition of the Delaunay triangulation of a point set in a closed Euclidean d-manifold, i.e. a compact quotient space of the Euclidean space for a discrete group of isometries (a so-called Bieberbach group or crystallographic group). We describe a geometric criterion to check whether a partition of the manifold actually forms a triangulation (which subsumes that it is a simplicial complex). We provide an incremental algorithm to compute the Delaunay triangulation of the manifold defined by a given set of input points, if it exists. Otherwise, the algorithm returns the Delaunay triangulation of a finite-sheeted covering space of the manifold. The algorithm has optimal randomized worst-case time and space complexity. It extends to closed Euclidean orbifolds. An implementation for the special case of the 3D flat torus has been released in Cgal 3.5. To the best of our knowledge, this is the first general result on this topic.
['Manuel Caroli', 'Monique Teillaud']
Delaunay Triangulations of Closed Euclidean d-Orbifolds
699,477
Runtime monitoring and assessment of software products, features, and requirements allow product managers and requirements engineers to verify the implemented features or requirements and to validate user acceptance. Gaining insight into software quality and the impact of that quality on users facilitates interpreting quality against users' acceptance and vice versa. The insight also expedites root-cause analysis and fast evolution when the health and sustainability of the software are threatened. Several studies have proposed automated monitoring and assessment solutions; however, none of them introduces a solution for a joint assessment of software quality and the quality's impact on users. In this research, we study the relation between software quality and the impact of quality on the Quality of Experience (QoE) of users to support the assessment of software products, features, and requirements. We propose a Quality-Impact assessment method based on a joint analysis of software quality and user feedback. As an application of the proposed method in requirements engineering, the joint analysis guides the verification and validation of functional and quality requirements as well as the capture of new requirements. The study follows a design science approach to design the Quality-Impact method artifact. The method has already been designed and validated in the first design cycle; subsequent design cycles will clarify problems of the initial design and refine and validate the proposed method. This paper presents the results concluded so far and outlines future studies in the follow-up of the Ph.D. research.
['Farnaz Fotrousi']
Quality-Impact Assessment of Software Systems
954,331
The analysis of spectral data constitutes new challenges for machine learning algorithms due to the functional nature of the data. Special attention is paid to the metric used in the analysis. Recently, a prototype based algorithm has been proposed which allows the integration of a full adaptive matrix in the metric. In this contribution we study this approach with respect to band matrices and its use for the analysis of functional spectral data. The method is tested on data taken from food chemistry and satellite image data.
['Petra Schneider', 'Frank-Michael Schleif', 'Thomas Villmann', 'Michael Biehl']
Generalized Matrix Learning Vector Quantizer for the Analysis of Spectral Data
247,396
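The core of the matrix-metric approach above is the adaptive distance d(x, w) = (x − w)ᵀΛ(x − w) with Λ = ΩᵀΩ kept positive semi-definite; a minimal NumPy sketch follows, with a bandwidth-1 band matrix for Ω as a hypothetical example of the band restriction discussed in the abstract:

```python
# Sketch of the adaptive-matrix distance at the heart of matrix LVQ (illustrative).
import numpy as np

def gmlvq_distance(x, w, omega):
    diff = omega @ (x - w)           # project the prototype difference through Omega
    return float(diff @ diff)        # equals (x-w)^T Omega^T Omega (x-w)

rng = np.random.default_rng(0)
dim = 6
full = rng.normal(size=(dim, dim))
omega = np.triu(np.tril(full, 1), -1)   # keep |i-j| <= 1: a bandwidth-1 band matrix
print(gmlvq_distance(rng.normal(size=dim), rng.normal(size=dim), omega))
```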
In this study a device for automatic electrochemical analysis was designed. A three-electrode detection system was attached to a positioning device, which enabled us to move the electrode system from one well to another of a microtitre plate. Disposable carbon tip electrodes were used for Cd(II), Cu(II) and Pb(II) ion quantification, while Zn(II) did not give a signal in this electrode configuration. In order to detect all of the mentioned heavy metals simultaneously, thin-film mercury electrodes (TFMEs) were fabricated by electrodeposition of mercury on the surface of carbon tips. In comparison with bare electrodes, the TFMEs had lower detection limits and better sensitivity. In addition to pure aqueous heavy metal solutions, the assay was also performed on mineralized rock samples, artificial blood plasma samples and samples of chicken embryo organs treated with cadmium. An artificial neural network was created to evaluate the concentrations of the mentioned heavy metals correctly in mixture samples, and an excellent fit was observed (R2 = 0.9933).
['Jiri Kudr', 'Hoai Viet Nguyen', 'Jaromír Gumulec', 'Lukas Nejdl', 'Iva Blazkova', 'Branislav Ruttkay-Nedecky', 'David Hynek', 'Jindrich Kynicky', 'Vojtech Adam', 'Rene Kizek']
Simultaneous Automatic Electrochemical Detection of Zinc, Cadmium, Copper and Lead Ions in Environmental Samples Using a Thin-Film Mercury Electrode and an Artificial Neural Network
43,458
In 1969 H. Emmons provided three theorems (Emmons 1–3) for determining precedence relations between pairs of jobs for the single-machine tardiness problem. We show here a fourth, straightforward theorem that uses the information that the jobs in the pair are both known to precede a third job in an optimum sequence. The new theorem augments the three Emmons theorems and is shown to be a generalization of a theorem by Elmaghraby.
['John J. Kanet']
One-Machine Sequencing to Minimize Total Tardiness: A Fourth Theorem for Emmons
167,487
We are on the verge of a breakthrough into Philips Product Divisions and now the spot-light is on testing. So far, we have fabricated, tested, and characterized about ten asynchronous IC designs. Recently, we developed a test method which is competitive with modern synchronous test methods in quality and test time. Tangram is poised for CAT success.
['Marly Roncken']
Asynchronous design: working the fast lane
210,235
We present a general algorithm, pre-determinization, that makes an arbitrary weighted transducer over the tropical semiring or an arbitrary unambiguous weighted transducer over a cancellative commutative semiring determinizable by inserting in it transitions labeled with special symbols. After determinization, the special symbols can be removed or replaced with ε-transitions. The resulting transducer can be significantly more efficient to use. We report empirical results showing that our algorithm leads to a substantial speed-up in large-vocabulary speech recognition. Our pre-determinization algorithm makes use of an efficient algorithm for testing a general twins property, a sufficient condition for the determinizability of all weighted transducers over the tropical semiring and unambiguous weighted transducers over cancellative commutative semirings. Based on the transitions marked by this test of the twins property, our pre-determinization algorithm inserts new transitions just when needed to guarantee that the resulting transducer has the twins property and thus is determinizable. It also uses a single-source shortest-paths algorithm over the min-max semiring for carefully selecting the positions for insertion of new transitions to benefit from the subsequent application of determinization. These positions are proved to be optimal in a sense that we describe.
['Cyril Allauzen', 'Mehryar Mohri']
An optimal pre-determinization algorithm for weighted transducers
74,831
The relationship between coverage and connectivity in sensor networks has been investigated in recent research treating both network parameters in a unified framework. It is known that networks covering a convex area are connected if the communication range of each node is at least twice the unique sensing range used by each node. Furthermore, geographic greedy routing is a viable and effective approach providing guaranteed delivery for this special network class. In this work we show that the result about network connectivity still holds when the concept of sensing coverage is generalized to arbitrary network deployment regions. However, dropping the assumption that the monitored area is convex requires the application of greedy recovery strategies like traversing a locally extracted planar subgraph. A recently proposed variant performs message forwarding along edges of a virtual overlay graph instead of using wireless links for planar graph construction directly. However, there exist connected network configurations where this routing variant may fail. In this work we prove a theoretical bound which is a sufficient condition for guaranteed delivery of this routing strategy applied in sensing-covered networks. Simulation results show that this bound may also be relaxed from a practical point of view and that geographical cluster based routing achieves performance comparable to other planar graph routing variants based on two-hop neighbor information.
['Hannes Frey', 'Daniel Görgen']
Geographical cluster based routing in sensing-covered networks
72,734
In this paper, a displacement monitoring system is developed for regional high-precision monitoring tasks. The system locates the positions of target points using the hyperbolic positioning principle and monitors displacements by locating them continuously. To form the hyperbolas, the system works as a microwave interferometer that resolves the phase difference of arrival. Built from software-defined radio platforms, the configuration of the system can be easily set and adjusted. Experiments performed on a prototype system show a measuring error within ±0.2 mm over a monitoring coverage of 5 m when the carrier frequency is configured at 433 MHz, corresponding to a relative uncertainty of about 0.03% when normalized to the wavelength. By applying simplified beacons, the average cost for each test point can be minimized. Further work was done to compensate for phase drift in the microwave cables used. The system has broad applications in regional high-precision monitoring and positioning tasks, especially when test points are required at large scale.
['Sichen Sun', 'Zhengbo Wang', 'Bolin Fan', 'Yu Bai', 'Lijun Wang']
A low-cost displacement monitoring system with sub-millimeter resolution based on software-defined radio platform
973,332
A deep learning approach has been proposed recently to derive speaker identities (d-vectors) with a deep neural network (DNN). This approach has been applied to text-dependent speaker recognition tasks and shows reasonable performance gains when combined with the conventional i-vector approach. Although promising, the existing d-vector implementation still cannot compete with the i-vector baseline. This paper presents two improvements to the deep learning approach: a phone-dependent DNN structure to normalize phone variation, and a new scoring approach based on dynamic time warping (DTW). Experiments on a text-dependent speaker recognition task demonstrate that the proposed methods provide considerable performance improvement over the existing d-vector implementation.
['Lantian Li', 'Yiye Lin', 'Zhiyong Zhang', 'Dong Wang']
Improved deep speaker feature learning for text-dependent speaker recognition
618,287
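A compact sketch of DTW-based scoring of frame-level feature sequences follows (assuming NumPy); the Euclidean frame cost, length normalization, and dimensions are illustrative choices, not the paper's exact setup:

```python
# Minimal dynamic time warping (DTW) scorer for two feature sequences (illustrative).
import numpy as np

def dtw_distance(A, B):                     # A: (m, d), B: (n, d) frame features
    m, n = len(A), len(B)
    D = np.full((m + 1, n + 1), np.inf)     # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n] / (m + n)                # length-normalized alignment cost

rng = np.random.default_rng(0)
print(dtw_distance(rng.normal(size=(20, 8)), rng.normal(size=(25, 8))))
```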
We present a novel method for enforcing nonlinear inequality constraints in the estimation of a high degree of freedom robotic system within a Kalman filter. Our constrained Kalman filtering technique is based on a new concept, which we call uncertainty projection, that projects the portion of the uncertainty ellipsoid that does not satisfy the constraint onto the constraint surface. A new PDF is then generated with an efficient update procedure that is guaranteed to reduce the uncertainty of the system. The application we have targeted for this work is the localization and automatic registration of a robotic surgical probe relative to preoperative images during image-guided surgery. We demonstrate the feasibility of our constrained filtering approach with data collected from an experiment involving a surgical robot navigating on the epicardial surface of a porcine heart.
['Stephen Tully', 'George Kantor', 'Howie Choset']
Inequality constrained Kalman filtering for the localization and registration of a surgical robot
343,075
This paper presents a method to mitigate Negative Bias Temperature Instability (NBTI) in digital circuits. Since the effect of NBTI strongly depends on the digital logic values of internal nodes, the method uses internal node control (INC) to reduce the number of NBTI-critical transistors. Some internal nodes in digital circuits are under severe NBTI stress. The method first identifies NBTI-critical internal nodes in critical and non-critical paths by calculating the probability of being under NBTI stress. Second, it eliminates these internal nodes by combining NBTI-sensitive gates with their driver gates, generating new complex gates. These complex gates implement the same logic while removing the NBTI-critical transistors. The proposed method reduces NBTI in combinational and sequential CMOS circuits and increases their lifetime. Experimental results on the ISCAS'89 benchmark circuits show that the number of NBTI-critical transistors, the NBTI-induced delay degradation, and the circuits' transistor count decrease by about 86.1%, 15.12% and 4.3%, respectively. However, the method imposes an area overhead of 0.2% for the investigated circuits.
['Maryam Ghane', 'Hamid R. Zarandi']
Gate Merging: An NBTI Mitigation Method to Eliminate Critical Internal Nodes in Digital Circuits
695,201
Statistical language models provide a powerful tool for modelling natural spoken language. Nevertheless a large set of training sentences is required to estimate reliably the model parameters. The authors present a method for estimating n-gram probabilities from sparse data. The proposed language modeling strategy allows one to adapt a generic language model (LM) to a new semantic domain with just a few hundred sentences. This reduced set of sentences is automatically tagged with eighty different pseudo-morphological labels, and then a word-bigram LM is derived from them. Finally, this target domain word-bigram LM is interpolated with a generic back-off word-bigram LM, which was estimated using a large text database. This strategy reduces by 27% the word error rate of the SPATIS (SPanish ATIS) task.
['Carlos Crespo', 'Daniel Tapias', 'Gregorio Escalada', 'Jorge Alvarez']
Language model adaptation for conversational speech recognition using automatically tagged pseudo-morphological classes
296,287
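The interpolation step described above can be sketched in a few lines; the probability tables and the weight lam below are toy placeholders, and the pseudo-morphological class tagging stage is omitted:

```python
# Sketch of interpolating a target-domain bigram LM with a generic back-off bigram LM:
#   P(w | h) = lam * P_domain(w | h) + (1 - lam) * P_generic(w | h)
def interpolated_bigram(p_domain, p_generic, lam=0.5):
    def prob(word, history):
        return lam * p_domain.get((history, word), 0.0) \
             + (1.0 - lam) * p_generic.get((history, word), 1e-6)  # tiny floor
    return prob

p_dom = {('book', 'flight'): 0.3}                     # toy domain-specific LM
p_gen = {('book', 'flight'): 0.01, ('book', 'store'): 0.2}  # toy generic LM
p = interpolated_bigram(p_dom, p_gen, lam=0.7)
print(p('flight', 'book'), p('store', 'book'))
```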
We present a general preferential semantic framework for plausible subsumption in description logics, analogous to the KLM preferential semantics for propositional entailment. We introduce the notion of ordered interpretations for description logics, and use it to define two mutually dual non-deductive subsumption relations. We outline their properties and explain how they may be used for inductive and abductive reasoning, respectively. We show that the preferential semantics for subsumption can be reduced to the standard semantics of a sufficiently expressive description logic. This has the advantage that standard DL algorithms can be extended to reason about our notions of plausible subsumption.
['Katarina Britz', 'Johannes Heidema', 'Thomas Andreas Meyer']
Semantic preferential subsumption
434,011
In [1] and [2], the authors proposed two efficient crossover predistortion schemes capable of simultaneously compensating for HPA nonlinearity and crosstalk effects in MIMO systems. The crosstalk model considered in those papers was a memoryless one. However, the memory effects of crosstalk can no longer be ignored with broadband transmitted signals. In this paper, we therefore demonstrate the effect of memory crosstalk on the Crossover Neural Network Predistorter (CO-NNPD) proposed in [1]. We also propose a new crossover predistortion structure, based on the conventional CO-NNPD, that maintains good performance in MIMO OFDM systems in the presence of HPA nonlinearities while taking the memory effects of crosstalk into account. The Levenberg-Marquardt (LM) algorithm is used for neural network training; it has been shown [3] to exhibit very good performance with lower computational complexity and faster convergence than other algorithms used in the literature. The paper is supported by simulation results for the Alamouti STBC MIMO OFDM system in terms of Bit Error Rate (BER) over a Rayleigh fading channel.
['Hanen Bouhadda', 'Rafik Zayani', 'Ridha Bouallegue', 'Daniel Roviras']
Memory Crossover Neural Network Predistorter for the compensation of memory crosstalk and HPA nonlinearity
415,384
An energy-detection-based cooperative spectrum sensing scheme for cognitive radio systems is proposed in this paper using fuzzy conditional entropy maximization. Instead of the conventional single threshold on the energy value, this paper utilizes multiple thresholds to improve sensing reliability. The basic objective is to calculate an optimal set of fuzzy parameters that maximizes the fuzzy conditional entropy, and the Differential Evolution algorithm is used for this purpose. Multiple threshold values are then calculated from these optimal parameters. Simulation results highlight the improved performance of the proposed scheme, which provides a high detection probability at low diversity while using fewer samples. Performance results are compared with conventional cooperative energy detector methods to highlight the significance of the proposed scheme.
['Avik Banerjee', 'Santi P. Maity']
Energy detection based cooperative spectrum sensing using fuzzy conditional entropy maximization
926,122
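A schematic of the optimization loop above, assuming SciPy's differential_evolution; the objective here is a simple partition-entropy stand-in, since the paper's fuzzy conditional entropy is not reproduced, and the energy samples and bounds are toy values:

```python
# Differential Evolution over multiple energy thresholds (schematic, not the paper's objective).
import numpy as np
from scipy.optimize import differential_evolution

energies = np.random.default_rng(0).chisquare(df=10, size=5000)  # toy energy samples

def neg_partition_entropy(thresholds):
    t = np.sort(thresholds)
    idx = np.digitize(energies, t)                  # region index for every sample
    p = np.bincount(idx, minlength=len(t) + 1) / len(energies)
    p = p[p > 0]
    return float(np.sum(p * np.log(p)))             # minimizing this maximizes entropy

result = differential_evolution(neg_partition_entropy,
                                bounds=[(0, 30), (0, 30)], seed=0)
print(np.sort(result.x))                            # the two learned thresholds
```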
The problems of detecting the presence of a new user and of estimating the delays of its multipath replicas in a direct-sequence/code-division-multiple-access (DS/CDMA) system are investigated. Unlike previous works, we consider a doubly-dispersive fading channel model and propose a new code-aided detection algorithm which relies on the application of a powerful statistical tool known as the method of sieves. The proposed detector is blind and has a bounded constant false alarm rate. As a by-product of the detection stage, a new blind procedure to estimate the multipath channel delays of the detected user is also derived.
['Stefano Buzzi', 'Luca Venturino', 'Alessio Zappone', 'Antonio De Maio']
Blind User Detection and Delay Acquisition in Doubly-Dispersive DS/CDMA Fading Channels
24,307
The proposed microarray image analysis (MIA) system is designed to analyze microarray slide images in a fully automatic manner. This system identifies and rectifies tilted slides, discovers block boundaries, generates gridlines, recognizes spots, and finally extracts the accurate spot intensity values from the two image channels (red and green) in a microarray slide. The red-to-green intensity ratio of a spot represents the gene expression level in the specimen. Our experimental results demonstrate the effectiveness and robustness of the proposed system. Further, the MIA system is tightly integrated with the component-based Unstructured Information Management Architecture (UIMA) which is an open source platform for the analysis of unstructured data (e.g. images) and is developed by IBM. With UIMA, we can easily apply various analysis algorithms on data by simply plugging analysis components into the system. Further, the analysis results at each analyzing step are attached to the data object as its annotations. The major contribution of this paper is that we design a microarray image analysis system which provides users a convenient manner to automatically analyze slide images and acquire accurate gene expression data from microarray slides. Also, the proposed MIA system, which is based on UIMA, provides a flexible, scalable, and extensible environment for users to perform various analysis tasks on microarray slide images.
['Wei-Bang Chen', 'Chengcui Zhang']
MIA: An Effective and Robust Microarray Image Analysis System with Unstructured Information Management Architecture
16,217
Let T = (V, E) be an undirected tree, in which each edge is associated with a non-negative cost, and let $\{s_1, t_1\}, \ldots, \{s_k, t_k\}$ be a collection of k distinct pairs of vertices. Given a requirement parameter $t \leq k$, the partial multicut problem asks for a minimum-cost set of edges whose removal disconnects at least t of the given pairs. Our approximation result for this problem is achieved by introducing problem-specific insight into the general framework of using the Lagrangian relaxation technique in approximation algorithms. Our algorithm utilizes a heuristic for the closely related prize-collecting variant, in which we are not required to disconnect all pairs, but rather incur penalties for failing to do so. We provide a Lagrangian multiplier preserving algorithm for the latter problem, with an approximation factor of 2. Finally, we present a new 2-approximation algorithm for multicut on a tree, based on LP-rounding.
['Asaf Levin', 'Danny Segev']
Partial multicuts in trees
832,479
An Automatic Library Data Classification System Using Layer Structure and Voting Strategy
['June-Jei Kuo']
An Automatic Library Data Classification System Using Layer Structure and Voting Strategy
617,770
The effects of pitch and dynamics on the emotional characteristics of Piano Sounds
['C. C. Chau', 'Andrew Horner']
The effects of pitch and dynamics on the emotional characteristics of Piano Sounds
803,121
Twitter Sentiment Analysis
['Olga Kolchyna', 'Tharsis T. P. Souza', 'Philip C. Treleaven', 'Tomaso Aste']
Twitter Sentiment Analysis
743,946
Details the implementation of the Pascal programming language for a dataflow architecture. The reasons for choosing this particular architectural model and language were to achieve maximum parallelism with minimal specification by the programmer. Issues that are discussed involve the analysis and transformation of code to maximise parallelism, and the generation of an applicative intermediate code form (IF1) to be later translated into machine code for a wide range of parallel architectures. Specific emphasis is given to those features of Pascal and other conventional languages that are omitted from parallel functional languages for dataflow architectures, such as global variables, function side-effects, variable aliasing, and pointers. Loop and general code optimizations are also introduced in order to maximise parallelism. Some simulation results are presented which highlight the extent of parallelism available in a conventional language.
['Simon F. Wail']
Implementing a conventional language for a dataflow architecture
104,207
Green is the center of the visible spectrum and the hue to which we are most sensitive. In RGB color, green is 60 percent of white. When we look through a prism at a white square, as Goethe did, we see white between yellow and cyan, just where green appears in the spectrum of Newton. Additional arguments were published previously and appear at www.csulb.edu/-percept, along with the Percept color chart of the hue/value relationships. A new argument, derived from the perception of leaves, is presented here. The Percept color chart transformed into a color wheel is also presented.
['Hal Glicksman']
White is green
2,004
HIPS is a project recently funded by the European Commission within the I-Cube initiative whose main aim is to study new technologies and interaction modalities that allow people to navigate both a physical space and a related information space at the same time, with a minimal gap between the two. The project envisages a portable electronic tour guide (to exhibitions, museums, archaeological sites, expositions distributed over a city, and to cities themselves) which empowers visitors to determine for themselves the structure of a tour, according to their own criteria, interests and needs, and which allows different information delivery modalities.
['Giuliano Benelli', 'Alberto Bianchi', 'Patrizia Marti', 'Elena Not', 'David Sennati']
HIPS: hyper-interaction within physical space
323,879
We extend an algorithm by Paige and Tarjan that solves the coarsest stable refinement problem to the domain of trees. The algorithm is used to minimize non-deterministic tree automata (NTA) with respect to bisimulation. We show that our algorithm has an overall complexity of $O(\hat{r}m \log n)$, where $\hat{r}$ is the maximum rank of the input alphabet, m is the total size of the transition table, and n is the number of states.
['Parosh Aziz Abdulla', 'Lisa Kaati', 'Johanna Högberg']
Bisimulation minimization of tree automata
235,713
Ascending price auctions typically involve a single price path with buyers paying their final bid price. Using this traditional definition, no ascending price auction can achieve the Vickrey-Clarke-Groves (VCG) outcome for general private valuations in the combinatorial auction setting. We relax this definition by allowing discounts to buyers from the final price of the auction (or alternatively, calculating the discounts dynamically during the auction) while still maintaining a single price path. Using a notion called universal competitive equilibrium prices, shown to be necessary and sufficient to achieve the VCG outcome using ascending price auctions, we define a broad class of ascending price combinatorial auctions in which truthful bidding by buyers is an ex post Nash equilibrium. Any auction in this class achieves the VCG outcome and ex post efficiency for general valuations. We define two specific auctions in this class by generalizing two known auctions in the literature [11, 24].
['Debasis Mishra', 'David C. Parkes']
Ascending Price Vickrey Auctions for General Valuations
119,651
Designing Better Scaffolding in Simulation-Based Learning Environments Teaching Science Systems: A Pilot Study Report.
['Na Li', 'John B. Black', 'Mengzi Gao']
Designing Better Scaffolding in Simulation-Based Learning Environments Teaching Science Systems: A Pilot Study Report.
789,977
Navigating through a visual maze relies on the strategic use of eye movements to select and identify the route. When navigating the maze, there are trade-offs between exploring the environment and relying on memory. This study examined strategies used to navigate through novel and familiar mazes that were viewed from above and traversed by a mouse cursor. Eye and mouse movements revealed two modes that almost never occurred concurrently: exploration and guidance. Analyses showed that people learned mazes and were able to devise and carry out complex, multi-faceted strategies that traded off visual exploration against active motor performance. These strategies took into account available visual information, memory, confidence, the estimated cost in time for exploration, and idiosyncratic tolerance for error. Understanding the strategies humans use for maze solving is valuable for applications in cognitive neuroscience as well as in AI, robotics and human-robot interaction.
['Min Zhao', 'André Márquez']
Understanding Humans' Strategies in Maze Solving
199,556
This article presents the design and analysis for a flexible antenna prototype. The polyimide-based flexible antenna was designed for the 5 GHz 802.11 standard. Worst-case scenario analysis with technological variables such as substrate thickness, loss tangent, relative dielectric constant, metal layer thickness, and configuration was used to prepare an adaptive process-independent technique for control and correction of the models used for parameter calculations.
['Aleksandr Timoshenko', 'Ksenia Lomovskaya', 'Aleksandr Levanov', 'Egor Borodulin', 'Egor Belousov']
Analysis and design of planar flexible antenna prototype
970,846
The k-medoids methods for modeling clustered data have many desirable properties, such as robustness to noise and the ability to use non-numerical values; however, they are typically not applied to large datasets due to their associated computational complexity. In this paper, we present AGORAS, a novel heuristic algorithm for the k-medoids problem where the algorithmic complexity is driven by k, the number of clusters, rather than n, the number of data points. Our algorithm attempts to isolate a sample from each individual cluster within a sequence of uniformly drawn samples taken from the complete data. As a result, computing the k-medoids solution using our method only involves solving k trivial sub-problems of centrality. This allows our algorithm to run in comparable time for arbitrarily large datasets with the same underlying density distribution. We evaluate AGORAS experimentally against PAM and CLARANS, two of the best-known existing algorithms for the k-medoids problem, across a variety of published and synthetic datasets. We find that AGORAS outperforms PAM by up to four orders of magnitude for data sets with fewer than 10,000 points, and it outperforms CLARANS by two orders of magnitude on a dataset of just 64,000 points. Moreover, we find in some cases that AGORAS also outperforms in terms of cluster quality.
['Esteban Rangel', 'William Hendrix', 'Ankit Agrawal', 'Wei Keng Liao', 'Alok N. Choudhary']
AGORAS: A fast algorithm for estimating medoids in large datasets
842,249
In this note, a hierarchical fusion estimation method is presented for clustered sensor networks with a very general setup where sensors (sensor nodes) and estimators (cluster heads) are allowed to work asynchronously with aperiodic sampling and estimation rates. A sequential measurement fusion (SMF) method is presented to design local estimators, and it is shown that the SMF estimator is equivalent to the measurement augmentation (MA) estimator in precision but with much lower computational complexity. Two types of sequential covariance intersection (SCI) fusion estimators are presented for the fusion estimation. The proposed SCI fusion estimators provide a satisfactory estimation precision that is close to the centralized batch covariance intersection (BCI) estimator while requiring a smaller computational burden than the BCI estimator. Therefore, the proposed hierarchical fusion estimation method is suitable for real-time applications in asynchronous sensor networks with energy constraints. Moreover, the method is applicable to the case with packet delays and losses.
['Wen-An Zhang', 'Bo Chen', 'Michael Z. Q. Chen']
Hierarchical Fusion Estimation for Clustered Asynchronous Sensor Networks
722,317
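A minimal covariance intersection fusion step underlying the SCI/BCI estimators above, assuming NumPy; the fixed weight w is a placeholder, whereas in practice it is often chosen to minimize the trace of the fused covariance:

```python
# Covariance intersection (CI) fuses two estimates without knowing their cross-correlation:
#   P^{-1} = w * P1^{-1} + (1-w) * P2^{-1};  x = P (w P1^{-1} x1 + (1-w) P2^{-1} x2)
# The sequential variant applies this step pairwise over the local estimators.
import numpy as np

def ci_fuse(x1, P1, x2, P2, w=0.5):
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)         # fused covariance
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)        # fused state estimate
    return x, P

x, P = ci_fuse(np.array([1.0, 0.0]), np.eye(2),
               np.array([0.8, 0.2]), 2 * np.eye(2))
print(x, np.trace(P))
```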
In this paper we present a summary of the application of CO2RBFN, an evolutionary cooperative-competitive algorithm for Radial Basis Function Network design, to the medium-term forecasting of the extra-virgin olive oil price, carried out by the SIMIDAT research group. The forecast concerns the price at source of extra-virgin olive oil six months ahead. The influence of feature selection algorithms on the forecasting of the extra-virgin olive oil price is also analysed in this study, and the results obtained with CO2RBFN are compared with those obtained by different soft computing methods.
['Antonio J. Rivera', 'María Dolores Pérez-Godoy', 'María José del Jesús', 'Pedro Pérez-Recuerda', 'M. P. Frías', 'Manuel Parras']
A summary on the study of the medium-term forecasting of the extra-virgen olive oil price
398,759
In this paper we study robust speaker recognition in far-field microphone situations such as meeting scenarios. By applying reverberation compensation and feature warping we achieved significant improvements under mismatched training-testing conditions. To capture useful information from multiple distant microphones, two approaches for multiple channel combination are investigated. This leads to 84.1% and 78.1% relative improvements on the distant microphone database. Furthermore, we tested the resulting system on the ICSI Meeting Corpus. The improvements are also very high on this task, which indicates that our system is robust to changing conditions in a remote microphone setting.
['Qin Jin', 'Yue Pan', 'Tanja Schultz']
Far-Field Speaker Recognition
266,421
DRDC Valcartier has initiated, through a PRECARN partnership project, the development of an advanced simulation test bed called CanCoastWatch. The main focus of this test bed is to study net-enabled concepts such as distributed information fusion algorithms and architectures, dynamic resources and networks configuration management, and self-synchronising units and agents. The test bed allows the evaluation of a range of control strategies from independent platform search, through various levels of platform collaboration, up to a centralized control of search platforms. In this paper, we present the integration of a planning tool based on search theory concept: SARPlan. In particular, we discuss the original idea of combining fusion results to build a containment probability distribution according to the search theory approach. This paper presents the results and discusses future development.
['Adel Guitouni', 'Khaled Jabeur', 'Mohamad Allouche', 'Hans Wehn', 'Jens Happe']
Application of search theory for large volume surveillance planning
460,405
Upper Bounds for the Security of Several Feistel Networks
['Yosuke Todo']
Upper Bounds for the Security of Several Feistel Networks
424,343
In isogeometric shape optimization, the use of the search direction directly predicted from the discrete shape gradient makes the optimization history strongly dependent on the discretization. This discretization-dependency can affect the convergence and may lead the optimization process into a sub-optimal solution. The source of this discretization-dependency is traced to the lack of consistency with the local steepest descent search direction in the continuous formulation. In the present contribution, this inconsistency is analyzed using the shape variation equations and subsequently illustrated with a volume minimization problem. It is found that the inconsistency originates from the NURBS discretization which induces a discrete quadratic norm to represent the continuous Euclidean norm. To fix this inconsistency, three normalization approaches are proposed to obtain a discretization-independent normalized descent search direction. The discretization-independence of the proposed approaches is verified with a benchmark problem. The superiority of the proposed search direction and its suitability for numerical implementation is illustrated with examples of shape optimization for mechanical and thermal problems. Although the present work focuses on a NURBS-based discretization usually used in conjunction with isogeometric analysis, the proposed methodology may also be applied to alleviate the “mesh-dependency” in (traditional) Finite Element-based shape optimization.
['Zhen-Pei Wang', 'Mostafa Abdalla', 'Sr Turteltaub']
Normalization approaches for the descent search direction in isogeometric shape optimization
824,680
Preference analysis is a class of important issues in multi-criteria ordinal decision making, and rough set theory is an effective approach to handling it. In order to solve the multi-criteria preference analysis problem, this work improves the fuzzy preference relation rough set model with an additive consistent fuzzy preference relation and expands it to the multi-granulation case. Cost is also an important issue in decision analysis; taking cost into consideration, we further expand the model to a cost-sensitive multi-granulation fuzzy preference relation rough set. Some theorems are presented, and classification and sample condensation algorithms based on our model are investigated. Experiments were completed, and the results show that our model and algorithms are effective for preference decision making in ordinal decision systems.
['Wei Pan', 'Kun She', 'Pengyuan Wei']
Multi-granulation fuzzy preference relation rough set for ordinal decision system
873,079
Temporal analysis for web spam detection: an overview
['Miklós Erdélyi', 'András A. Benczúr']
Temporal analysis for web spam detection: an overview
801,345
A new scheme, based on the concept of stressed curves, is developed for extracting significant curvature points on a planar curve. The authors show that the problem has an interesting analogy to stable configurations of a mechanical structure. The stressed curve is generated by recursively solving a potential energy minimization problem. The resulting algorithm is rather simple and is local, since the iteration for each point of the curve uses at most four of its nearest neighbors. Examples and some of the applications of this new scheme are provided along with a comparison with the curvature primal sketch scheme. The proposed scheme offers efficient and accurate extraction of significant curvature points with a smaller computational complexity.
['Xiaonong Ran', 'Nariman Farvardin']
On planar curve representation
458,530
Both in industrial and in controlled environments, such as high-voltage laboratories, pulses from multiple sources, including partial discharges (PD) and electrical noise, can be superimposed. These circumstances can modify and alter the results of PD measurements and, what is more, can lead to misinterpretation. The spectral power clustering technique (SPCT) separates PD sources and electrical noise through a two-dimensional representation (power ratio map, or PR map) of the relative spectral power in two intervals, high and low frequency, calculated for each pulse captured with broadband sensors. This method makes it possible to clearly distinguish the effects of noise and PD, making discrimination of all sources easy. In this paper, the separation ability of the SPCT clustering technique when using a Rogowski coil for PD measurements is evaluated. Different parameters were studied in order to establish which of them could improve the manual selection of the separation intervals, thus enabling a better separation of clusters. The signal processing can be performed during the measurements or in a later analysis.
['Jorge Alfredo Ardila-Rey', 'Ricardo Albarracín', 'Fernando Álvarez', 'Aldo Barrueto']
A validation of the spectral power clustering technique (SPCT) by using a Rogowski coil in partial discharge measurements.
195,180
Direct techniques for the optimal resolution estimation and position prediction of subpel motion vectors (MVs) based on integer-pel MVs are investigated in this paper. Although it is common to determine the optimal MV position by fitting a local error surface using integer-pel MVs, the characteristics of the error surface have not been thoroughly studied in the past. Here, we use an approximate condition number of the Hessian matrix of the error surface to characterize its shape in a local region. By exploiting this shape information, we propose a block-based subpel MV resolution estimation method that allows each block to choose its optimal subpel MV resolution for the optimal rate-distortion (R-D) performance adaptively. Furthermore, we propose two MV position prediction schemes for ill and well-conditioned error surfaces, respectively. All proposed techniques are direct methods, where no iteration is required. Experimental results are given to show the R-D performance of the proposed subpel MV resolution estimation and position prediction schemes.
['Qi Zhang', 'Yunyang Dai', 'C.-C. Jay Kuo']
Direct Techniques for Optimal Sub-Pixel Motion Accuracy Estimation and Position Prediction
375,315
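A sketch of the direct (non-iterative) idea above, assuming NumPy: fit a local quadratic to a 3×3 grid of matching costs, use the condition number of its Hessian to characterize the error surface, and predict the sub-pel offset as −H⁻¹g in the well-conditioned case; the threshold 1e3 and the sample costs are arbitrary illustrative choices:

```python
# Direct sub-pel motion-vector prediction from a local quadratic error-surface fit.
import numpy as np

def subpel_offset(E):                        # E: 3x3 costs centered at the best integer MV
    gx = (E[1, 2] - E[1, 0]) / 2.0           # central differences for the gradient
    gy = (E[2, 1] - E[0, 1]) / 2.0
    hxx = E[1, 2] - 2 * E[1, 1] + E[1, 0]    # finite differences for the Hessian
    hyy = E[2, 1] - 2 * E[1, 1] + E[0, 1]
    hxy = (E[2, 2] - E[2, 0] - E[0, 2] + E[0, 0]) / 4.0
    H = np.array([[hxx, hxy], [hxy, hyy]])
    if np.linalg.cond(H) > 1e3:              # ill-conditioned surface: fall back
        return np.zeros(2)
    return -np.linalg.solve(H, np.array([gx, gy]))   # (dx, dy) in pel units

E = np.array([[6.0, 4.0, 5.0], [3.0, 1.0, 2.5], [5.0, 3.5, 4.5]])
print(subpel_offset(E))
```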
The Carry Select Adder (CSLA) is one of the fastest multi-bit adder architectures used in various high-speed processors. The CSLA is fast but compromises on area and power consumption due to its complex architecture when implemented using standard CMOS logic. In this work, an alternative implementation of the CSLA architecture is presented using Gate Diffusion Input (GDI) logic instead of CMOS logic. This approach reduces the overall architectural dimensions by lowering the transistor count as well as the power consumption. Various types of CSLA architectures are implemented using GDI logic and compared with their CMOS counterparts in terms of average power, delay and transistor count in the 45 nm technology node. The comparative analysis clearly shows that GDI-based circuits are better than their CMOS logic implementations.
['Jubal Saji', 'Shoaib Kamal']
GDI logic implementation of uniform sized CSLA architectures in 45nm SOI technology
984,639
Newsvendor’s Response to Demand History
['Wei Geng', 'Xiaodong Ding']
Newsvendor’s Response to Demand History
641,053
Query expansion technology can reduce the word mismatch between a query and related documents and improve retrieval precision by adding similar or related terms to the original query. In the algorithm proposed in this paper, terms or phrases with closely related senses are added to the original query so as to express the user's query intention more precisely. The algorithm costs O(L) time, independent of the size of the SER-Base, which makes it very practical for highly real-time search engines.
['Li Li']
A Query Expansion Method Based on Semantic Element
425,565
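A toy sketch of the expansion step, where the RELATED table is a hypothetical stand-in for the paper's SER-Base; one dictionary lookup per query term keeps the cost O(L) in the query length:

```python
# Query expansion via a precomputed semantic-element table (illustrative only).
RELATED = {                                   # hypothetical stand-in for the SER-Base
    'car': ['automobile', 'vehicle'],
    'cheap': ['inexpensive', 'affordable'],
}

def expand_query(query):
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:                           # O(L) lookups, independent of base size
        expanded.extend(RELATED.get(t, []))
    return ' '.join(expanded)

print(expand_query('cheap car rental'))
```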
We consider the problem of clustering data over time. An evolutionary clustering should simultaneously optimize two potentially conflicting criteria: first, the clustering at any point in time should remain faithful to the current data as much as possible; and second, the clustering should not shift dramatically from one timestep to the next. We present a generic framework for this problem, and discuss evolutionary versions of two widely-used clustering algorithms within this framework: k-means and agglomerative hierarchical clustering. We extensively evaluate these algorithms on real data sets and show that our algorithms can simultaneously attain both high accuracy in capturing today's data, and high fidelity in reflecting yesterday's clustering.
['Deepayan Chakrabarti', 'Ravi Kumar', 'Andrew Tomkins']
Evolutionary clustering
665,379
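A schematic step of the evolutionary k-means variant discussed above, assuming NumPy; blending each centroid with its predecessor is one simple way to trade snapshot quality against temporal smoothness, with the weight cp as an illustrative knob rather than the paper's exact formulation:

```python
# One evolutionary k-means update: fit today's data while staying near yesterday's centroids.
import numpy as np

def evolutionary_kmeans_step(X, prev_centroids, cp=0.3, iters=10):
    C = prev_centroids.copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for k in range(len(C)):
            pts = X[labels == k]
            if len(pts):                      # blend snapshot fit with history
                C[k] = (1 - cp) * pts.mean(0) + cp * prev_centroids[k]
    return C, labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
C, labels = evolutionary_kmeans_step(X, prev_centroids=np.array([[0.5, 0.5], [4.5, 4.5]]))
print(C)
```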
Tracking epidemic disease is a very challenging issue nowadays; success in this process could help medical administrations stop diseases more quickly than usual. In this paper, we suggest a methodology based on wireless sensor networks deployed over volunteers who agree to carry a light wireless sensor network. Sensors on the body monitor some health parameters (temperature, pressure, ...) and run light classification algorithms to diagnose diseases first on the volunteers and later on their neighbors. The classification methodologies used in this study are based on the SVM approach or on Fuzzy C-Means. Finally, the wireless sensor network sends aggregated data about the disease to base stations which collect the results. The main contribution is to execute an online disease tracking program and to extract information about how the disease propagates.
['Stephane Cormier', 'Hacène Fouchal', 'Itheri Yahiaoui']
Disease tracking service in urban areas
527,809
Segmentation, paging and optimal page sizes in virtual memory
['Timo O. Alanko', 'A. Inkeri Verkamo']
Segmentation, paging and optimal page sizes in virtual memory
600,396
The use of web services has been growing significantly, with increasingly large numbers of applications being implemented through the web. A difficulty associated with this development is the quality assurance of these services, specifically the challenges encountered when testing the applications: amongst other things, testers may not have access to the source code, and the correctness of the output may not be easily ascertained, a challenge known as the oracle problem. Metamorphic testing (MT) has been introduced as a technique to alleviate the oracle problem. MT makes use of properties of the software under test, known as metamorphic relations, and checks whether or not these relations are violated. Since MT does not require source code to generate the metamorphic relations, it is suitable for testing web-service-based applications. We have designed an XML-based language representation to facilitate the formalisation of metamorphic relations, the generation of follow-up test cases, and the verification of the test results. Based on this, we have also developed a tool to support the automation of MT for web service applications. This tool has been used in an experiment to test web services, the evaluation of which is reported in this paper.
['Chang-ai Sun', 'Guan Wang', 'Qing Wen', 'Dave Towey', 'Tsong Yueh Chen']
MT4WS: an automated metamorphic testing system for web services
797,277
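The metamorphic-relation idea above can be illustrated in a few lines; the service_sin stand-in and the relation sin(π − x) = sin(x) are hypothetical examples, not the web services or XML tooling of the paper:

```python
# Tiny metamorphic-testing illustration: with no oracle for the exact output,
# check a relation that any correct implementation must satisfy.
import math

def service_sin(x):                 # stand-in for a remote service under test
    return math.sin(x)

src = 1.234                         # source test case
follow_up = math.pi - src           # follow-up test case derived from the relation
assert abs(service_sin(follow_up) - service_sin(src)) < 1e-9  # MR: sin(pi - x) == sin(x)
```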
Large costs arise at a seaport container terminal from the duration of the unloading of vessels and from the time a vessel waits to be unloaded. The optimal allocation of vessels to berth space (the Berth Allocation Problem) becomes more and more important, as its solution is also input to further terminal decision problems. We compare solutions for a realistic-data Berth Allocation Problem found by a composite heuristic, combining a tree search procedure with a pair-wise exchange heuristic, against two metaheuristics. We apply Genetic Algorithms, which are widely used and flexible in adaptation with promising results in logistics applications, and propose a modified Particle Swarm Optimization for combinatorial optimization.
['Ole Björn Brodersen', 'Leif Meier', 'Matthias Schumann']
Optimizing the Berth Allocation Problem using a Genetic Algorithm and Particle Swarm Optimization
392,892
With the cost of healthcare delivery rising all over the world, the way hospitals use their resources stands at the centre of attention in many countries. In order to make the best use of doctors, nurses, costly medical appliances, etc., the use of information systems plays a vital role. Although all physicians are usually obliged to use these systems, anecdotal evidence shows that use patterns are not always as expected. Some physicians do not like a system and find ways to avoid working with it; they establish so-called "workarounds". This research investigates the root causes of workarounds used by hospital physicians. Based on information systems theories, a framework is developed to structure the findings from eight interviews in three hospitals in Germany. The interview partners were assured complete anonymity, and thus the interviews were very open. We identified six distinctive types of workarounds and discuss their causes. The setup of this research is of an exploratory nature, using a grounded theory approach. Our findings underline the existence of workarounds in the medical environment and provide guidance on how to cope with them.
['Arnold Reiz', 'Heiko Gewald']
Physicians' Resistance towards Information Systems in Healthcare: The Case of Workarounds
935,319
Social Network Services (SNSs) have been regarded as an important source for identifying events in our society. Detecting and understanding social events from SNSs has been investigated in many different contexts, with most studies focusing on detecting bursts from textual content. In this paper, we propose a novel framework for collecting and analyzing social media data to i) discover social bursts and ii) rank these social bursts. Firstly, we detect social bursts from photos' textual annotations as well as associated features (e.g., timestamp and location), and then effectively identify social bursts by considering their spreading effect in spatio-temporal contexts. Secondly, we use the relationships among social bursts (e.g., spatial contexts, temporal contexts and content) to enhance the precision of the algorithm. Finally, we rank social bursts by analyzing the relationships between them (e.g., locations, timestamps, tags) at different periods of time. The experiments have been conducted with two different approaches: i) an offline approach with the collected dataset, and ii) an online approach with a streaming dataset in real time.
['Jai E. Jung']
Discovering Social Bursts by Using Link Analytics on Large-Scale Social Networks
966,886
Increased environmental and social responsibility awareness, while producing unique opportunities for sustainability-oriented innovations, has generated important challenges for companies. The path to sustainability requires corporate strategies that guarantee profitability, managing simultaneously environmental and social responsibilities. An attempt is made to provide an understanding of sustainable development thinking in business, discussing how the combination of the transition management, adaptive planning and sociotechnical approaches can contribute towards an effective implementation of sustainability-oriented innovations in a business context. The article proposes a conceptual model, which incorporates this contribution, developed through a four-year action-research project carried out within a large Brazilian energy company – Petrobras. The authors argue that the adoption of the proposed model by other large firms operating in different societal sectors might trigger organisational changes...
['Maria Fatima Ludovico de Almeida', 'Maria Angela Campelo de Melo']
Sociotechnical regimes, technological innovation and corporate sustainability: from principles to action
869,310
Device-to-device (D2D) communication underlaying cellular wireless networks is a promising concept to improve user experience and resource utilization by allowing direct transmission between two cellular devices. In this paper, performance of network-assisted D2D communication is investigated where D2D traffic is carried through relay nodes. Considering a multi-user and multi-relay network, we propose a distributed solution for resource allocation with a view to maximizing network sum-rate. An optimization problem is formulated for radio resource allocation at the relays. The objective is to maximize end-to-end rate as well as satisfy the data rate requirements for cellular and D2D user equipments under total power constraint. Due to intractability of the resource allocation problem, we propose a solution approach using message passing technique where each user equipment sends and receives information messages to/from the relay node in an iterative manner with the goal of achieving an optimal allocation. Therefore, the computational effort is distributed among all the user equipments and the corresponding relay node. The convergence and optimality of the proposed scheme are proved and a possible distributed implementation of the scheme in practical LTE-Advanced networks is outlined. The numerical results show that there is a distance threshold beyond which relay-aided D2D communication significantly improves network performance with a small increase in end-to-end delay when compared to direct communication between D2D peers.
['Monowar Hasan', 'Ekram Hossain']
Distributed Resource Allocation for Relay-Aided Device-to-Device Communication: A Message Passing Approach
274,299
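The paper's max-sum message-passing scheme is considerably more involved; as a loosely related sketch of distributed allocation through iterative message exchange, the following toy auction assigns each user equipment one resource block via price messages (Bertsekas-style auction). All names and utilities are hypothetical, this is not the authors' algorithm, and termination assumes at least as many blocks as users.

```python
import numpy as np

def auction_assignment(utility, eps=0.01):
    """Assign each user equipment one resource block via a simple auction.

    utility[i, j]: rate user i would get on resource block j.
    Each unassigned user 'bids' for its best block; prices rise until the
    assignment stabilizes. Assumes utility.shape[1] >= utility.shape[0].
    """
    n_users, n_blocks = utility.shape
    prices = np.zeros(n_blocks)
    owner = -np.ones(n_blocks, dtype=int)
    assigned = -np.ones(n_users, dtype=int)
    unassigned = list(range(n_users))
    while unassigned:
        i = unassigned.pop()
        values = utility[i] - prices
        j = int(np.argmax(values))
        second = np.partition(values, -2)[-2] if n_blocks > 1 else values[j] - eps
        prices[j] += values[j] - second + eps   # raise the price by the bid increment
        if owner[j] >= 0:                        # evict the previous owner
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return assigned
```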
The relay employs network coding to transmit the packets from the source nodes simultaneously, increasing spectral efficiency in wireless environments. The cooperative transmission based on network coding usually works on decode-and-forward (DF) protocols. However, detection errors at the relay cause error propagation, which degrades the performance of cooperative communications. To overcome this problem, we model the error propagation effect of the DF-based system at the destination as the addition of virtual noise, and then design a low complexity detection method. We derive the achievable diversity gain to evaluate the proposed model and corresponding detection scheme. To extend the proposed model to network-coded systems, we first express the channel conditions between the sources and relay as a single equivalent channel gain. Then, we develop low complexity detection schemes for the network-coded systems. From the error propagation model, we propose a dual mode network coding technique, which exploits different network coding schemes adaptively according to channel qualities. Simulation results show that the proposed model and detection scheme effectively reduce the error propagation effects. Also, the proposed dual mode network coding provides gains under all channel conditions and thus gives better BER performance than conventional methods.
['Dongsik Kim', 'Hyun-Myung Kim', 'Gi-Hong Im']
Improved Network-Coded Cooperative Transmission with Low-Complexity Adaptation to Wireless Channels
462,868
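For readers unfamiliar with the basic operation, a minimal sketch of XOR network coding at a two-way relay: the relay broadcasts the XOR of the two source packets, and each destination recovers the other source's packet using its own. Equal-length, correctly decoded packets are assumed; the paper's virtual-noise modeling of detection errors is not captured here.

```python
def relay_encode(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """XOR-combine the two decoded source packets at the relay."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

def destination_decode(coded: bytes, own_pkt: bytes) -> bytes:
    """A destination that knows its own packet recovers the other one."""
    return bytes(x ^ y for x, y in zip(coded, own_pkt))

# Example: source A recovers B's packet from the relay broadcast.
a, b = b"hello", b"world"
assert destination_decode(relay_encode(a, b), a) == b
```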
In this article we introduce a calculus of variations for sums of elementary tensors and apply it to functionals of practical interest. The survey provides all necessary ingredients for applying minimization methods in a general setting. The important cases of target functionals which are linear and quadratic with respect to the tensor product are discussed, and combinations of these functionals are presented in detail. As an example, we consider the solution of a linear system in structured tensor format. Moreover, we discuss the solution of an eigenvalue problem with sums of elementary tensors. This example can be viewed as a prototype of a constrained minimization problem. For the numerical treatment, we suggest a method which has the same order of complexity as the popular alternating least squares algorithm and demonstrate the rate of convergence in numerical tests.
['Mike Espig', 'Wolfgang Hackbusch', 'Thorsten Rohwedder', 'Reinhold Schneider']
Variational calculus with sums of elementary tensors of fixed rank
241,228
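As a concrete instance of the alternating least squares iteration mentioned above, here is a rank-1 sketch for a 3-way array in NumPy: each step fixes two factors and solves the linear least-squares problem for the third in closed form. This is a simplified illustration, not the authors' general fixed-rank method.

```python
import numpy as np

def rank1_als(T, iters=50):
    """Fit a rank-1 tensor x (x) y (x) z to a 3-way array T by ALS."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    y, z = rng.standard_normal(J), rng.standard_normal(K)
    for _ in range(iters):
        # Closed-form least-squares update of each factor in turn.
        x = np.einsum('ijk,j,k->i', T, y, z) / ((y @ y) * (z @ z))
        y = np.einsum('ijk,i,k->j', T, x, z) / ((x @ x) * (z @ z))
        z = np.einsum('ijk,i,j->k', T, x, y) / ((x @ x) * (y @ y))
    return x, y, z
```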
This paper presents a comprehensive study of the effect of job interleaving by preemption on the throughput of a single server where requests arrive with a given processing time and slack. The problem is to decide which requests to serve so as to maximize the server's utilization. This simple model captures many situations, both at the application (e.g., delivery of video) as well as at the network/transmission levels (e.g., scheduling of packets from input to output interface of a switch). The problem is on-line in nature, and thus we use competitive analysis for measuring the performance of our scheduling algorithms. We consider two modes of operation - with and without commitment - and derive upper and lower bounds for each case. Since competitive analysis is based on the worst-case scenario, the average-case performance of the algorithms is also examined by a simulation study.
['Juan A. Garay', 'Joseph (Seffi) Naor', 'Bülent Yener', 'Peng Zhao']
On-line admission control and packet scheduling with interleaving
228,085
The paper is a manifestation of the fundamental importance of the linear program with linear complementarity constraints (LPCC) in disjunctive and hierarchical programming as well as in some novel paradigms of mathematical programming. In addition to providing a unified framework for bilevel and inverse linear optimization, nonconvex piecewise linear programming, indefinite quadratic programs, quantile minimization, and ℓ0 minimization, the LPCC provides a gateway to a mathematical program with equilibrium constraints, which itself is an important class of constrained optimization problems that has broad applications. We describe several approaches for the global resolution of the LPCC, including a logical Benders approach that can be applied to problems that may be infeasible or unbounded.
['Jing Hu', 'John E. Mitchell', 'Jong-Shi Pang', 'Bin Yu']
On linear programs with linear complementarity constraints
136,129
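For reference, the LPCC is commonly written in the following form (notation assumed here, matching the usual statement in this literature), where ⊥ denotes componentwise complementarity:

```latex
\begin{align*}
\min_{x,\,y}\quad & c^{\top}x + d^{\top}y \\
\text{s.t.}\quad  & Ax + By \ge f, \\
                  & 0 \le y \;\perp\; q + Nx + My \ge 0 ,
\end{align*}
```

i.e., $y \ge 0$, $q + Nx + My \ge 0$, and $y_i\,(q + Nx + My)_i = 0$ for every component $i$.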
We have used semantic technologies to design, implement, and deploy an interdisciplinary virtual observatory. The Virtual Solar-Terrestrial Observatory is a production data framework providing access to observational datasets. It is in use by a community of scientists, students, and data providers interested in the Earth's middle and upper atmosphere and the Sun. The data sets span from upper-atmospheric terrestrial physics to solar physics. The observatory allows virtual access to a highly distributed and heterogeneous set of data that appears as if all resources are organized, stored and accessible from a local machine. The system has been operational since the summer of 2006 and has shown registered data access by over 75% of the active community (at last count, over 600 of the estimated 800-person active research community). This demonstration will highlight how semantic technologies are being used to support data integration and more efficient data access in a multi-disciplinary setting. A full paper on this work is being published in the IAAI 07 'deployed' paper track.
['Deborah L. McGuinness', 'Peter A. Fox', 'L. Cinquini', 'Patrick West', 'Jose Garcia', 'J. L. Benedict', 'Don Middleton']
A deployed semantically-enabled interdisciplinary virtual observatory
544,587
Although the volume settings of smartphones are important to users, they still need to push the hardware volume button manually. The purpose of our study is to improve the usability of volume settings. Our proposed method predicts the user's routine volume settings by learning from actual daily smartphone logs. Related works used suitable volume settings input by experimental participants to learn the volume-setting pattern for each user. In contrast, this study uses actual smartphone logs. This paper describes three results of the analyses of a large number of actual smartphone logs. First, we investigate the rate at which users change the application volume. Second, we examine the accuracy of the results predicted by our method. Third, we classify the test users into those for whom our method works effectively and those for whom it does not. Finally, we discuss the appropriateness of the predicted results for users' routine settings.
['Tatsuhito Hasegawa', 'Makoto Koshino', 'Haruhiko Kimura']
Analysis of Actual Smartphone Logs for Predicting the User's Routine Settings of Application Volume
610,527
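A minimal sketch of the prediction task, assuming each log entry is reduced to a hypothetical feature vector (hour of day, day of week, app id, headphones plugged in) labeled with the volume level the user chose; the paper does not specify this exact model, so a simple decision tree stands in.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoding of log entries: [hour, weekday, app_id, headphones].
X_train = [[8, 0, 3, 1], [22, 5, 1, 0], [13, 2, 3, 1], [23, 6, 2, 0]]
y_train = [2, 0, 2, 1]   # routine volume level the user actually set

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(clf.predict([[9, 1, 3, 1]]))   # predicted volume for a new context
```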
Probabilistic Data Programming with ENFrame.
['Dan Olteanu', 'Sebastiaan J. van Schaik']
Probabilistic Data Programming with ENFrame.
792,026
This paper proposes a scale-space theory based on B-spline kernels. Our aim is twofold: 1) to present a general framework, and 2) to show how B-splines provide a flexible tool to design various scale-space representations. In particular, we focus on the design of continuous scale-space and dyadic scale-space frame representations. A general algorithm is presented for fast implementation of continuous scale-space at rational scales. In the dyadic case, efficient frame algorithms are derived using B-spline techniques to analyze the geometry of an image. The relationship between several scale-space approaches is explored. The behavior of edge models, the properties of completeness, causality, and other properties in such a scale-space representation are examined in the framework of B-splines. It is shown that, besides the good properties inherited from the Gaussian kernel, the B-spline derived scale-space exhibits many advantages for modeling visual mechanisms, including efficiency, compactness, orientation features and parallel structure.
['Yuping Wang', 'Seng Luan Lee']
Scale-space derived from B-splines
284,114
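A minimal sketch of B-spline scale-space smoothing in one dimension, assuming the standard centered cubic B-spline as the kernel and dilation as the scale parameter; the paper's fast rational-scale algorithms and dyadic frame constructions are beyond this illustration.

```python
import numpy as np

def cubic_bspline(x):
    """Centered cubic B-spline B3 evaluated at x (support [-2, 2])."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x < 1
    m2 = (x >= 1) & (x < 2)
    out[m1] = 2/3 - x[m1]**2 + 0.5 * x[m1]**3
    out[m2] = ((2 - x[m2])**3) / 6
    return out

def bspline_smooth(signal, scale):
    """Smooth a 1-D signal with a dilated, normalized cubic B-spline kernel."""
    half = int(np.ceil(2 * scale))
    kernel = cubic_bspline(np.arange(-half, half + 1) / scale)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode='same')
```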
Towards Total Budgeting and the Interactive Budget Warehouse
['Dirk Draheim']
Towards Total Budgeting and the Interactive Budget Warehouse
600,096
Snow cover area is a very important parameter for snowmelt runoff modeling and forecasting. Snow cover information is also useful for managing transportation and avalanche forecasting. Our study area includes the Gangotri glacier region, Siachen glacier region and Beaskund glacier region in the North-West Himalayas of India. Our previous studies discuss the capability of several algorithms for optical sensors as well as SAR to map the snow cover area in Himalayan regions. In recent years, SAR interferometry has provided a number of attractive applications in landuse/landcover mapping. This study discusses the capability of both backscattering ratio techniques and InSAR coherence measurement techniques for snow cover mapping in the Himalayan region with repeat-pass data of ERS-1/2 and ENVISAT-ASAR. By analyzing several pairs of ENVISAT repeat-pass ASAR images for the study area, we find that the coherence measurements from bare soil, bare rock and vegetation are high, while snow-covered areas and glacier areas have very low coherence except in the one-day difference image.
['Gulab Singh', 'G. Venkataraman', 'Y. S. Rao', 'V. Kumar', 'Snehmani']
InSAR Coherence Measurement Techniques for Snow Cover Mapping in Himalayan Region
109,011
We present a feature map selection method for convolutional neural networks (CNNs) which preserves classifier performance when the CNN is used as a feature extractor. This method aims to simplify the last subsampling layer of the CNN by cutting the number of feature maps with Linear Discriminant Analysis (LDA). It is shown that our method can stabilize the classification accuracy and achieve runtime reduction by removing the feature maps of the last subsampling layer that have the worst separability. The results also lay the foundation for further simplification of CNNs.
['Ting Rui', 'Junhua Zou', 'You Zhou', 'Jianchao Fei', 'Chengsong Yang']
Convolutional Neural Network Simplification Based on Feature Maps Selection
983,968
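A simplified sketch of separability-based feature map ranking, scoring each map with a one-dimensional Fisher criterion on pooled activations rather than a full LDA projection; the array shapes and function name are assumptions, not the authors' exact procedure.

```python
import numpy as np

def rank_feature_maps(activations, labels):
    """Rank feature maps by a Fisher-style separability score.

    activations: array (n_samples, n_maps) of per-map pooled responses.
    labels: array (n_samples,) of class ids.
    Maps with the lowest scores are candidates for removal.
    """
    classes = np.unique(labels)
    overall_mean = activations.mean(axis=0)
    between = np.zeros(activations.shape[1])
    within = np.zeros(activations.shape[1])
    for c in classes:
        Xc = activations[labels == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    score = between / np.maximum(within, 1e-12)
    return np.argsort(score)[::-1]   # best-separating maps first
```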
It is important for artificial agents to accurately infer human emotions in order to provide believable interactions. However, there is currently a lack of empirical results to support affective agents with effective computational models for this purpose based on individual profile information and interaction outcomes. In this paper, we bridge this gap with a game-based empirical study. We propose a general model for interactions between an agent and a user in competitive game settings. Based on results from over 450 players in over 2,500 game sessions, we construct a regression model using a player's education level, age, gender and the interaction outcome as explanatory factors to compute his/her composite emotions consisting of the six basic emotions.
['Xinjia Yu', 'Chunyan Miao', 'Cyril Leung', 'Charles T. Salmon']
Modelling Composite Emotions in Affective Agents
637,837
This paper proposes a new morphology-based approach for the interslice interpolation of computed tomography (CT) and MRI datasets composed of parallel slices. Our approach is object based and accepts as input binary slices belonging to the same anatomical structure. Such slices may contain one or more regions, since topological changes between two adjacent slices may occur. Our approach handles interslice topology changes explicitly by decomposing a many-to-many correspondence into three fundamental cases: one-to-one, one-to-many, and zero-to-one correspondences. The proposed interpolation process is iterative. One iteration of this process computes a transition sequence between a pair of corresponding input slices, and selects the element located at equal distance from the input slices. This algorithmic design yields a gradual, smooth change of shape between the input slices. Therefore, the main contribution of our approach is its ability to interpolate between two anatomic shapes by creating a smooth, gradual change of shape, without generating over-smoothed interpolated shapes.
['Alexandra Branzan Albu', 'Trevor Beugeling', 'Denis Laurendeau']
A Morphology-Based Approach for Interslice Interpolation of Anatomical Slices From Volumetric Images
382,264
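As a baseline illustration of morphology-based interslice interpolation, the following sketch blends signed distance transforms of two binary slices and thresholds at zero, yielding a gradual shape transition; the authors' explicit handling of topology changes via correspondence decomposition is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance field: positive inside the shape, negative outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slices(slice_a, slice_b, alpha=0.5):
    """Interpolate between two binary slices of the same structure.

    alpha=0.5 gives the mid-slice; varying alpha in (0, 1) produces a
    smooth sequence of intermediate shapes.
    """
    da, db = signed_distance(slice_a), signed_distance(slice_b)
    return ((1 - alpha) * da + alpha * db) > 0
```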
Efficient parallel evaluation of straight-line code and arithmetic circuits
['Gary L. Miller', 'Vijaya Ramachandran', 'Erich Kaltofen']
Efficient parallel evaluation of straight-line code and arithmetic circuits
300,363
Energy harvesting is an enabling technology for realizing an ambient power supply for wireless sensor nodes and mobile devices. By using flexible photovoltaic cells and piezoelectric films, we can readily harvest ambient energy if flexible energy harvesters can be realized. Conventional silicon circuits, however, are not best suited to realizing flexible large-area energy harvesters because they are not mechanically conformable to uneven surfaces such as shoes. To address this challenge, we propose an organic insole pedometer with a piezoelectric energy harvester in this paper as the first step toward ambient energy harvesting using organic flexible electronics.
['Koichi Ishida', 'Tsung-Ching Huang', 'Kentaro Honda', 'Yasuhiro Shinozuka', 'Hiroshi Fuketa', 'Tomoyuki Yokota', 'Ute Zschieschang', 'Hagen Klauk', 'Gregory Tortissier', 'Tsuyoshi Sekitani', 'Makoto Takamiya', 'Hiroshi Toshiyoshi', 'Takao Someya', 'Takayasu Sakurai']
Insole pedometer with piezoelectric energy harvester and 2V organic digital and analog circuits
321,892
This paper is concerned with the analysis of the kernel-based algorithm for gain function approximation in the feedback particle filter. The exact gain function is the solution of a Poisson equation involving a probability-weighted Laplacian. The kernel-based method -- introduced in our prior work -- allows one to approximate this solution using only particles sampled from the probability distribution. This paper describes new representations and algorithms based on the kernel-based method. Theory surrounding the approximation is improved and a novel formula for the gain function approximation is derived. A procedure for carrying out error analysis of the approximation is introduced. Certain asymptotic estimates for bias and variance are derived for the general nonlinear non-Gaussian case. Comparison with the constant gain function approximation is provided. The results are illustrated with the aid of some numerical experiments.
['Amirhossein Taghvaei', 'Prashant G. Mehta', 'Sean P. Meyn']
Error Estimates for the Kernel Gain Function Approximation in the Feedback Particle Filter
960,236
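For context, the gain function in the feedback particle filter is the gradient of a potential that solves a probability-weighted Poisson equation; as commonly stated in this literature (notation assumed):

```latex
\nabla \cdot \bigl(\rho(x)\,\nabla \phi(x)\bigr) = -\bigl(h(x) - \hat{h}\bigr)\,\rho(x),
\qquad \hat{h} = \int h(x)\,\rho(x)\,dx,
\qquad \mathsf{K}(x) = \nabla \phi(x),
```

with $\rho$ the (posterior) density and $h$ the observation function. The kernel-based method approximates $\phi$ using only particles sampled from $\rho$.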
Skies are common backgrounds in photos but are often less interesting due to the time of photographing. Professional photographers correct this by using sophisticated tools with painstaking efforts that are beyond the command of ordinary users. In this work, we propose an automatic background replacement algorithm that can generate realistic, artifact-free images with diverse styles of skies. The key idea of our algorithm is to utilize visual semantics to guide the entire process, including sky segmentation, search and replacement. First we train a deep convolutional neural network for semantic scene parsing, which is used as a visual prior to segment sky regions in a coarse-to-fine manner. Second, in order to find proper skies for replacement, we propose a data-driven sky search scheme based on the semantic layout of the input image. Finally, to re-compose the stylized sky with the original foreground naturally, an appearance transfer method is developed to match statistics locally and semantically. We show that the proposed algorithm can automatically generate a set of visually pleasing results. In addition, we demonstrate the effectiveness of the proposed algorithm with extensive user studies.
['Yi-Hsuan Tsai', 'Xiaohui Shen', 'Zhe Lin', 'Kalyan Sunkavalli', 'Ming-Hsuan Yang']
Sky is not the limit: semantic-aware sky replacement
829,893
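Only the very last step of such a pipeline is easy to show compactly; a minimal sketch of the final composite, assuming a soft sky mask and a size-matched replacement sky are already available (segmentation, sky search and appearance transfer are not shown):

```python
import numpy as np

def replace_sky(image, new_sky, sky_mask):
    """Composite a new sky into an image given a soft sky mask.

    image, new_sky: float arrays (H, W, 3) in [0, 1], same size.
    sky_mask: float array (H, W) in [0, 1], 1 where sky was segmented.
    """
    alpha = sky_mask[..., None]          # broadcast mask over color channels
    return alpha * new_sky + (1 - alpha) * image
```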
Dynamic voltage scaling (DVS) is a popular approach for energy reduction of integrated circuits. Current processors that use DVS typically have an operating voltage range from full to half of the maximum Vdd. However, there is no fundamental reason why designs cannot operate over a much larger voltage range: from full Vdd to subthreshold voltages. This possibility raises the question of whether a larger voltage range improves the energy efficiency of DVS. First, from a theoretical point of view, we show that, for subthreshold supply voltages, leakage energy becomes dominant, making "just-in-time computation" energy-inefficient at extremely low voltages. Hence, we introduce the existence of a so-called "energy-optimal voltage" which is the voltage at which the application is executed with the highest possible energy efficiency and below which voltage scaling reduces energy efficiency. We derive an analytical model for the energy-optimal voltage and study its trends with technology scaling and different application loads. Second, we compare several different low-power approaches including MTCMOS, standard DVS, and the proposed Insomniac (extended DVS into subthreshold operation). A study of real applications on commercial processors shows that Insomniac provides the best energy efficiency. From these results, we conclude that extending the voltage range below Vdd/2 will improve the energy efficiency for many processor designs.
['Bo Zhai', 'David Blaauw', 'Dennis Sylvester', 'Krisztián Flautner']
The limit of dynamic voltage scaling and insomniac dynamic voltage scaling
332,096
The paper contains a description of a Methodology and its supporting online tools, which allow for efficient identification of optimization problems in transport organizations and for fast development of prototype solutions. The proposed Methodology benefits from the agile approach to software development. The result of applying the Methodology, with the support of the Process Optimization Platform, in the context of an organization is a prototype of a decision support system. This prototype can be evaluated with a sample of live data representing real-life problems of the organization. This way the organization can easily verify the potential quality of the software development process and the benefits related to implementation of a decision support system.
['Grzegorz Kołaczek', 'Pawel Swiatek', 'Adam Grzech', 'Krzysztof Juszczyszyn', 'Paweł Stelmach', 'Lukasz Falas', 'Arkadiusz Sławek', 'Patryk Schauer']
Online platform to support business analysis and Process Optimization in transportation organizations
146,173
Often, we worry about outsiders attacking our systems and networks, breaking through the perimeter defenses we've established to keep bad actors out. However, we must also worry about "insider threats": people with legitimate access who behave in ways that put our data, systems, organizations, and even our businesses' viability at risk.
['Joel B. Predd', 'Shari Lawrence Pfleeger', 'Jeffrey Hunker', 'Carla Bulford']
Insiders Behaving Badly
352,357
MATLAB is a software simulator well suited to mathematical modeling and feedback control, while OPNET is a software tool well suited to simulating network communication behavior. However, simulating the communication behavior of a wireless ad hoc network within MATLAB is currently difficult, and the complex queuing models of OPNET are also difficult to create and manipulate within MATLAB. In this paper, we have created an interface between MATLAB and OPNET that allows MATLAB to contribute its strong mathematical functionality and OPNET its ability to manipulate network simulations.
['Christopher Harding', 'Alison Griffiths', 'Hongnian Yu']
An Interface between MATLAB and OPNET to Allow Simulation of WNCS with MANETs
394,908
A fundamental problem in distributed computing is the problem of cooperatively executing a given set of tasks in a dynamic setting. The challenge is to minimize the total work done and to maintain efficiency in the face of dynamically changing processor connectivity. In this setting, work is defined as the total number of tasks performed (counting multiplicities) by all the processors during the course of the computation. In this scenario, we are given a set of t tasks that must be completed in a distributed setting by a set of p processors where the communication medium is subject to failures. We assume that the t tasks are similar, in that they require the same number of computation steps to finish execution. We further assume that the tasks are idempotent - executing a task multiple times has the same effect as a single execution of the task. The tasks have a dependency relationship defined among them captured by a task dependency graph.
['Chadi Kari', 'Alexander Russell', 'Narasimha Shashidhar']
Randomized Work-Competitive Scheduling for Cooperative Computing on k-partite Task Graphs
343,787
Unbiased sampling of online social networks (OSNs) makes it possible to get accurate statistical properties of large-scale OSNs. However, the most used sampling methods, Breadth-First-Search (BFS) and Greedy, are known to be biased towards high degree nodes, yielding inaccurate statistical results. To give a general requirement for unbiased sampling, we model the crawling process as a Markov Chain and deduce a necessary and sufficient condition, which enables us to design various efficient unbiased sampling methods. To the best of our knowledge, we are among the first to give such a condition. Metropolis-Hastings Random Walk (MHRW) is an example which satisfies the condition. However, walkers in MHRW may stay at some low-degree nodes for a long time, resulting in considerable self-loops on these nodes, which adversely affect the crawling efficiency. Based on the condition, a new unbiased sampling method, called USRS, is proposed to reduce the probabilities of self-loops. We use the dataset of Renren, the largest OSN in China, to evaluate the performance of USRS. The results have demonstrated that USRS generates unbiased samples with low self-loop probabilities, and achieves higher crawling efficiency.
['Dong Wang', 'Zhenyu Li', 'Gaogang Xie']
Towards Unbiased Sampling of Online Social Networks
492,313
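A minimal MHRW sketch over an adjacency-list graph; the acceptance ratio deg(v)/deg(w) makes the stationary distribution uniform over nodes, i.e., degree-unbiased, and the rejected moves are exactly the self-loops whose probability USRS is designed to reduce. Names are illustrative.

```python
import random

def mhrw_sample(graph, start, n_steps):
    """Metropolis-Hastings random walk over an undirected graph.

    graph: dict mapping node -> list of neighbors.
    Returns the sequence of visited nodes (with repetitions from self-loops).
    """
    v, samples = start, []
    for _ in range(n_steps):
        w = random.choice(graph[v])
        if random.random() <= len(graph[v]) / len(graph[w]):
            v = w            # move accepted
        # otherwise: self-loop, stay at v (the inefficiency USRS targets)
        samples.append(v)
    return samples
```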
Weighted voting for operation dependent management of replicated data
['Mirjana Obradovic', 'Piotr Berman']
Weighted voting for operation dependent management of replicated data
260,608
Recognition of acoustic events using deep neural networks
['Oguzhan Gencoglu', 'Tuomas Virtanen', 'Heikki Huttunen']
Recognition of acoustic events using deep neural networks
752,415
For infants, early word learning is a chicken-and-egg problem. One way to learn a word is to observe that it co-occurs with a particular referent across different situations. Another way is to use the social context of an utterance to infer the intended referent of a word. Here we present a Bayesian model of cross-situational word learning, and an extension of this model that also learns which social cues are relevant to determining reference. We test our model on a small corpus of mother-infant interaction and find it performs better than competing models. Finally, we show that our model accounts for experimental phenomena including mutual exclusivity, fast-mapping, and generalization from social cues.
['Noah D. Goodman', 'Joshua B. Tenenbaum', 'Michael J. Black']
A Bayesian Framework for Cross-Situational Word-Learning
95,230
Buffer sizing is an important network configuration parameter that impacts the quality of service characteristics of data traffic. With falling memory costs and the fallacy that "more is better," network devices are being overprovisioned with large buffers. This may increase queueing delays experienced by a packet and subsequently impact stability of core protocols such as TCP. The problem has been studied extensively for wired networks. However, there is little work addressing the unique challenges of wireless environments such as time-varying channel capacity, variable packet inter-service time, and packet aggregation, among others. In this article we discuss these challenges, classify the current state-of-the-art solutions, discuss their limitations, and provide directions for future research in the area.
['Ahmad Showail', 'Kamran Jamshaid', 'Basem Shihada']
Buffer sizing in wireless networks: challenges, solutions, and opportunities
710,347
Delayed channel state information (CSI) degrades system performance, and a predictor can mitigate the effects of outdated CSI. In massive multiple-input multiple-output (MIMO) systems with large-dimensional channel vectors, low-complexity prediction can reduce operation time and processing latency. This study adopts a low-complexity channel predictor based on polynomial fitting for the massive MIMO system. Compared with the conventional Wiener predictor, it does not need statistical channel estimation and avoids matrix inversion. The authors derive the approximate signal-to-interference-plus-noise ratio (SINR) with predicted channel information, and the approximate gaps in the average rate per user between using perfect CSI and the predicted CSI provided by the Wiener predictor and by polynomial fitting, respectively, in the uplink massive MIMO system. The authors also analyse the normalised mean square error of prediction. The performance is investigated in a more practical and general angle-of-departure spectrum model with a concentration direction and a spreading factor. Simulations validate that the SINR approximations are tight, and show that polynomial fitting with a proper prediction order can achieve satisfying performance when the concentration direction and the spreading factor are small.
['Lixing Fan', 'Qi Wang', 'Yongming Huang', 'Luxi Yang']
Performance analysis of low-complexity channel prediction for uplink massive MIMO
814,222
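A minimal sketch of the polynomial-fitting predictor idea, assuming a window of past complex CSI samples for one channel coefficient; real and imaginary parts are fitted separately with numpy.polyfit and extrapolated, avoiding the channel statistics and matrix inversion a Wiener predictor needs. Parameters are illustrative.

```python
import numpy as np

def predict_channel(history, order=2, ahead=1):
    """Extrapolate one complex channel coefficient by polynomial fitting.

    history: 1-D complex array of past CSI samples (oldest first).
    ahead: number of sample intervals to predict into the future.
    """
    t = np.arange(len(history))
    t_future = len(history) - 1 + ahead
    re = np.polyval(np.polyfit(t, history.real, order), t_future)
    im = np.polyval(np.polyfit(t, history.imag, order), t_future)
    return re + 1j * im
```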
Will it or won't it? The 1999 IEEE 1149.4 Standard for a mixed-signal test bus is on the cusp of industrial acceptance, but it's not clear whether industry will pick it up. This study, by two leading European research institutes, delves into the details of hardware implementation and, in so doing, contributes to the growing literature on this topic.
['Uros Kac', 'Franc Novak', 'Florence Azaïs', 'Pascal Nouet', 'Michel Renovell']
Extending IEEE Std. 1149.4 analog boundary modules to enhance mixed-signal test
293,043
In this article we present the results of two empirical studies focusing on the structure and extent of the innovation activities of the German software industry and analyze distinctive features of software innovations. We distinguish the activities of the primary (core) software industry and secondary industries such as mechanical and electrical engineering, the motor industry and telecommunications. A special focus is put on the question of whether innovations in software are sequential, on the role of Open Source Software, and on the importance of interoperability.
['Michael Friedewald', 'Knut Blind', 'Jakob Edler']
The Innovation Activity of the German Software Industry
715,695
Performance Improvement via Bagging in Ensemble Prediction of Chaotic Time Series Using Similarity of Attractors and LOOCV Predictable Horizon.
['Mitsuki Toidani', 'Kazuya Matsuo', 'Shuichi Kurogi']
Performance Improvement via Bagging in Ensemble Prediction of Chaotic Time Series Using Similarity of Attractors and LOOCV Predictable Horizon.
897,929
Equivalent Keys in Multivariate Quadratic Public Key Systems.
['Christopher Wolf', 'Bart Preneel']
Equivalent Keys in Multivariate Quadratic Public Key Systems.
892,735
In the steganographic technique of "sum and difference covering set" (SDCS), an appropriate SDCS is used to embed data into pixel sequences of a cover image. This technique extends conventional LSB matching and matrix embedding to derive more secure steganography. In this paper, towards stego-security enhancement, a new SDCS-based content-adaptive steganography is proposed. The most noisy pixels are first determined according to an iterative noise-level estimation mechanism. Then, the secret data is embedded into these noisy pixels using SDCS steganography. In addition, a simple yet efficient SDCS construction is adopted in our method to improve the embedding efficiency and further enhance the stego-security. The experimental results show that our method provides better resistance to steganalysis compared with previous SDCS-based steganography.
['Bowen Xue', 'Xiaolong Li', 'Zongming Guo']
A New SDCS-based Content-adaptive Steganography Using Iterative Noise-Level Estimation
656,437
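SDCS constructions generalize embedding over groups of pixels; for orientation, here is the conventional ±1 LSB matching baseline that SDCS extends (a simplified sketch, not the proposed method):

```python
import random

def lsb_match_embed(pixels, bits):
    """Embed one bit per pixel by +/-1 LSB matching.

    pixels: list of 8-bit values; bits: iterable of 0/1 (len(bits) <= len(pixels)).
    If a pixel's LSB already equals the bit, it is untouched; otherwise the
    pixel is randomly incremented or decremented by one (clamped to [0, 255]).
    """
    out = list(pixels)
    for i, b in enumerate(bits):
        if out[i] & 1 != b:
            step = random.choice((-1, 1))
            out[i] = min(255, max(0, out[i] + step))
            if out[i] & 1 != b:          # clamped at 0 or 255: flip the other way
                out[i] -= step
    return out
```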
This brief addresses the problem of estimation of both the states and the unknown inputs of a class of systems that are subject to a time-varying delay in their state variables, to an unknown input, and also to an additive uncertain, nonlinear disturbance. Conditions are derived for the solvability of the design matrices of a reduced-order observer for state and input estimation, and for the stability of its dynamics. To improve computational efficiency, a delay-dependent asymptotic stability condition is then developed using the linear matrix inequality formulation. A design procedure is proposed and illustrated by a numerical example.
['Hieu Trinh', 'Quang Phuc Ha']
State and Input Simultaneous Estimation for a Class of Time-Delay Systems With Uncertainties
318,500
Ventricular intramyocardial electrograms are recorded with electrodes directly from the heart either in intraventricular or epimyocardial position and may be acquired either from the spontaneously beating or from the paced heart. The morphology of these signals differs significantly from that of body surface ECG recordings. Although the morphology shows general characteristics, it additionally depends on different individual impacts. This problem of individual evaluation is briefly discussed. As an appropriate methodology for its solution, personalized referencing based on similarity averaging has been employed. A more general approach may be model-based signal interpretation, which is still under investigation. The preliminary results reveal a promising potential of intramyocardial electrograms for cardiac risk surveillance, e.g., for arrhythmia detection, recognition of rejection events in transplanted hearts, and assessment of hemodynamic performance. Employing implants with telemetric capabilities may render possible permanent and even continuous cardiac telemonitoring. Furthermore, the signals can be utilized for supporting therapy management, e.g., in patients with different kinds of cardiomyopathies. This paper shall demonstrate some preliminary results and discuss the expected potential.
['H. Hutten']
Ventricular Intramyocardial Electrograms and Their Expected Potential for Cardiac Risk Surveillance, Telemonitoring, and Therapy Management
336,829
Meeting timing requirements and improving routability are becoming more challenging in modern design technologies. Most timing-driven placement approaches ignore routability concerns, which may lead to a gap in routing quality between the actual routing and what is expected. In this paper, we propose a routing-aware incremental timing-driven placement technique to reduce early and late negative slacks while considering global routing congestion. Our proposed flow considers both timing and routing metrics during detailed placement. We also present a comprehensive analysis of the timing quality score, the total number of routing overflows, and the trade-off between them, by modifying the International Conference on Computer-Aided Design (ICCAD) 2015 timing-driven contest benchmarks and the displacement constraints. Experimental results on the ICCAD 2015 Incremental Timing-Driven Contest benchmarks show the efficacy of our proposed routing-aware incremental timing-driven placement method. On average, we obtain 22% and 17% improvement in timing quality score and global routing overflows, respectively, compared to the first-placed team at the 2015 ICCAD contest.
['Jucemar Monteiro', 'Nima Karimpour Darav', 'Guilherme Flach', 'Mateus Fogaça', 'Ricardo Reis', 'Andrew A. Kennings', 'Marcelo de Oliveira Johann', 'Laleh Behjat']
Routing-Aware Incremental Timing-Driven Placement
866,148
This paper proposes a new framework that takes advantage of the computing capabilities provided by the Internet of Things (IoT) paradigm in order to support collaborative applications. It looks at the requirements needed to run a wide range of computing tasks on a set of devices in the user environment with limited computing resources. This approach contributes to building the social dimension of the IoT by enabling the addition of computing resources accessible to the user without harming the other activities for which the IoT devices are intended. The framework mainly includes a model of the computing load, a scheduling mechanism and a handover procedure for transferring tasks between available devices. The experiments show the feasibility of the approach and compare different implementation alternatives.
['José Francisco Colom', 'Higinio Mora Mora', 'David Gil', 'María Teresa Signes‐Pont']
Collaborative building of behavioural models based on internet of things
880,908
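The abstract does not spell out the scheduling mechanism; as a hypothetical stand-in, a greedy scheduler that places each task on the device with the most spare capacity illustrates the kind of load model and assignment such a framework involves:

```python
def assign_tasks(tasks, devices):
    """Greedy scheduler: place each task on the device with most spare capacity.

    tasks: list of (task_id, load); devices: dict device_id -> spare capacity.
    Returns {task_id: device_id}; tasks that fit nowhere are left unassigned.
    """
    placement = {}
    spare = dict(devices)
    for task_id, load in sorted(tasks, key=lambda t: -t[1]):   # biggest first
        best = max(spare, key=spare.get)
        if spare[best] >= load:
            spare[best] -= load
            placement[task_id] = best
    return placement
```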