abstract (stringlengths 8-10.1k) | authors (stringlengths 9-1.96k) | title (stringlengths 6-367) | __index_level_0__ (int64 13-1,000k) |
---|---|---|---|
Parallel computing is becoming mainstream with the advent of general purpose cost effective Shared-memory Multiprocessor (SMP) systems. At the same time, new developments in parallel programming environments allow more rapid and efficient programming of these systems. To this end, OpenMP has emerged as a flexible and fairly comprehensive set of compiler directives, library routines, and environment variables to facilitate parallel programming of SMP systems in Fortran and C/C++. The Standard Performance Evaluation Corporation (SPEC) has created a benchmark suite of eleven applications, named SPEC OMP2001, to be used for the performance evaluation and comparison of moderate size SMP systems. Each of the benchmarks in SPEC OMP2001 is either automatically or manually parallelized using OpenMP directives. In this paper, we present basic static and runtime characteristics of these benchmarks. We present data gathered using high resolution timers and the hardware counters available on our SMP system. We explain some of the benchmark performance characteristics with measured data and with a quantitative model. | ['Vishal Aslot', 'Rudolf Eigenmann'] | Performance characteristics of the SPEC OMP2001 benchmarks | 89,895 |
A Generic Approach for Analysis of White-Light Interferometry Data via User-Defined Algorithms | ['Max Schneider', 'Dietmar Fey', 'Kay Wenzel', 'Torsten Machleidt'] | A Generic Approach for Analysis of White-Light Interferometry Data via User-Defined Algorithms | 644,282 |
The study examined the effects of varying the accommodation-convergence conflict created by stereoscopic displays which are now commonly used for the viewing of virtual environments, television and cinema. These displays will dissociate the naturally co-varying accommodation (focusing) and convergence (eye position) demands by placing an image geometrically behind or in front of the screen, and it has been suggested that the unnatural conflict between these demands will cause discomfort. Commercially available stereoscopic equipment was used to create a stimulus with four different levels of conflict, one of which was a control condition of zero conflict. Sixteen participants, each with normal visual systems, were presented with all four conditions in a balanced experimental design. The changes in visual discomfort, near heterophoria, distance heterophoria and visual acuity were assessed. Clear changes in comfort were observed, although no significant associated physiological changes were observed. The model which best describes the relationship between the conflict and the discomfort is one in which a small amount of conflict does not cause visual discomfort, whereas a larger amount will do so. This finding is consistent with expectations based on historical optometric experiments, which indicate that the normal visual system can maintain comfortable vision whilst experiencing small discrepancies between the accommodation and convergence demands. Our results indicate that visual discomfort occurs beyond a given conflict threshold and continues to rise as the conflict increases. They are consistent with the idea that this threshold is idiosyncratic to the individual. The principal implication of these findings is that people with normal visual systems should not experience asthenopic symptoms as a consequence of the accommodation-convergence conflict if the difference between the stimulus to each system is small. | ['Peter A. Howarth', 'Pete Underwood'] | The impact of viewing stereoscopic displays on the visual system | 632,082 |
Electronic negotiation experiments provide a rich source of information about relationships between the negotiators, their individual actions, and the negotiation dynamics. This information can be effectively utilized by intelligent agents equipped with adaptive capabilities to learn from past negotiations and assist in selecting appropriate negotiation tactics. This paper presents an approach to modeling the negotiation process in a time-series fashion using an artificial neural network. In essence, the network uses information about past offers and the current proposed offer to simulate expected counter-offers. On the basis of the model's prediction, "what-if" analysis of counter-offers can be done with the purpose of optimizing the current offer. The neural network has been trained using the Levenberg-Marquardt algorithm with Bayesian Regularization. The simulation of the predictive model on a testing set shows very good and highly significant performance. The findings suggest that machine learning techniques may find useful applications in the context of electronic negotiations. These techniques can be effectively incorporated into an intelligent agent that can sense the environment and assist negotiators by providing predictive information, and possibly automating some negotiation steps. | ['Réal André Carbonneau', 'Gregory E. Kersten', 'Rustam M. Vahidov'] | Predicting opponent's moves in electronic negotiations using neural networks | 538,852 |
In a virtual machine (VM) consolidation environment, it has been observed that CPU sharing among multiple VMs will lead to I/O processing latency because of the CPU access latency experienced by each VM. In this paper, we present vTurbo, a system that accelerates I/O processing for VMs by offloading I/O processing to a designated core. More specifically, the designated core - called turbo core - runs with a much smaller time slice (e.g., 0.1ms) than the cores shared by production VMs. Most of the I/O IRQs for the production VMs will be delegated to the turbo core for more timely processing, hence accelerating the I/O processing for the production VMs. Our experiments show that vTurbo significantly improves the VMs' network and disk I/O throughput, which consequently translates into application-level performance improvement. | ['Cong Xu', 'Sahan Gamage', 'Hui Lu', 'Ramana Rao Kompella', 'Dongyan Xu'] | vTurbo: accelerating virtual machine I/O processing using designated turbo-sliced core | 274,954 |
This paper concerns the implementation of a self-organizing map (SOM) for intelligent machine fault diagnostics. The present study employs infrared images acquired by a thermography camera as the database of the machine diagnostics system. Image processing is carried out using thresholding for image segmentation and clustering by means of the k-means algorithm. Feature extraction from the images is conducted by calculating the area, perimeter and central moment of the region of interest (ROI). All data in this work were acquired by capturing images of rolling element bearings on a rotating machine fault simulator (MFS). The simulator can reproduce normal and seeded fault conditions such as outer and inner race defects of rolling element bearings, unbalance, misalignment and looseness. A pattern recognition technique is then employed to diagnose the machine conditions by mapping the image features through the SOM. The results show that SOM-based analysis of infrared thermography images can perform intelligent machine fault diagnostics with plausible accuracy. | ['Achmad Widodo', 'Djoeli Satrijo', 'Muhammad Huda', 'Gang-Min Lim', 'Bo-Suk Yang'] | Application of Self Organizing Map for Intelligent Machine Fault Diagnostics Based on Infrared Thermography Images | 46,947 |
Dance movements are a complex class of human behavior which convey forms of non-verbal and subjective communication that are performed as cultural vocabularies in all human cultures. The singularity of dance forms imposes fascinating challenges to computer animation and robotics, which in turn presents outstanding opportunities to deepen our understanding about the phenomenon of dance by means of developing models, analyses and syntheses of motion patterns. In this article, we formalize a model for the analysis and representation of popular dance styles of repetitive gestures by specifying the parameters and validation procedures necessary to describe the spatiotemporal elements of the dance movement in relation to its music temporal structure (musical meter). Our representation model is able to precisely describe the structure of dance gestures according to the structure of musical meter, at different temporal resolutions, and is flexible enough to convey the variability of the spatiotemporal relation between music structure and movement in space. It results in a compact and discrete mid-level representation of the dance that can be further applied to algorithms for the generation of movements in different humanoid dancing characters. The validation of our representation model relies upon two hypotheses: (i) the impact of metric resolution and (ii) the impact of variability towards fully and naturally representing a particular dance style of repetitive gestures. We numerically and subjectively assess these hypotheses by analyzing solo dance sequences of Afro-Brazilian samba and American Charleston, captured with a MoCap (Motion Capture) system. From these analyses, we build a set of dance representations modeled with different parameters, and re-synthesize motion sequence variations of the represented dance styles. For specifically assessing the metric hypothesis, we compare the captured dance sequences with repetitive sequences of a fixed dance motion pattern, synthesized at different metric resolutions for both dance styles. In order to evaluate the hypothesis of variability, we compare the same repetitive sequences with others synthesized with variability, by generating and concatenating stochastic variations of the represented dance pattern. The observed results validate the proposition that different dance styles of repetitive gestures might require a minimum and sufficient metric resolution to be fully represented by the proposed representation model. Yet, these also suggest that additional information may be required to synthesize variability in the dance sequences while assuring the naturalness of the performance. Nevertheless, we found evidence that supports the use of the proposed dance representation for flexibly modeling and synthesizing dance sequences from different popular dance styles, with potential developments for the generation of expressive and natural movement profiles onto humanoid dancing characters. | ['João Lobato Oliveira', 'Luiz Alberto Naveda', 'Fabien Gouyon', 'Luís Paulo Reis', 'Paulo Sousa', 'Marc Leman'] | A parameterizable spatiotemporal representation of popular dance styles for humanoid dancing characters | 170,711 |
Based upon the strategic information systems planning (SISP) and contingency theories, a mediating model was developed to investigate the direct and indirect effects of top management support on SISP success. The model was tested using partial least squares analysis on a sample of 57 information systems executives from US organisations. Top management support was found to influence SISP success both directly and indirectly through the information systems plan usefulness, but not through the information technology (IT) infrastructure flexibility. The former was found, as predicted, to be a significant mediator of the effects of top management support on SISP success. By contrast, IT infrastructure flexibility was not found to be a significant mediator. | ['Gerald Elysee'] | An empirical examination of a mediated model of strategic information systems planning success | 294,430 |
This study numerically evaluates the effect of secondary flow on the reaction performance in a heterogeneous gaseous spiral coiled reactor utilizing selective wall coatings. Laminar multispecies gas flow in spiral coiled reactors with circular and square cross-sections is investigated using a validated three-dimensional computational fluid dynamics (CFD) model. Various selective wall coating strategies are evaluated within a range of Reynolds numbers. The reactor performance is measured not only by the conversion rate but also in terms of a figure of merit (FoM) defined as reaction throughput per unit pumping power and catalyst coating active area. The results indicate that secondary flow enhances reaction performance and improves catalyst utilization, especially at the outer wall. By maximizing this effect, the requirement for expensive catalyst materials can be minimized. This study highlights the potential of selective catalyst coating in coiled reactors for process intensification and cost reduction in various applications. | ['Jundika C. Kurnia', 'Agus P. Sasmito', 'Arun S. Mujumdar'] | Potential catalyst savings in heterogeneous gaseous spiral coiled reactor utilizing selective wall coating – A computational study | 631,809 |
Retransmission ambiguity, arising from delay spikes in a wireless mobile environment, results in poor TCP performance. Eifel improves the performance of TCP by using the timestamp option, which requires additional header bytes, resulting in increased overhead in bandwidth constrained wireless networks. Moreover, the destination needs to support the timestamp option. In this paper, we propose a new algorithm, called DualRTT, which increases the performance of TCP in the presence of delay spikes, without requiring any additional header bytes. It requires changes only at the sender, and hence is easier to deploy in the existing Internet infrastructure. It also does not require the destination to support the TCP timestamp option. Results show that DualRTT increases the performance of TCP, and also achieves a higher transport layer efficiency than previous algorithms. | ['Shaojian Fu', 'Mohammed Atiquzzaman'] | DualRTT: detecting spurious timeouts in wireless mobile environments | 271,352 |
The formalism of active integrity constraints was introduced as a way to specify particular classes of integrity constraints over relational databases together with preferences on how to repair existing inconsistencies. The rule-based syntax of such integrity constraints also provides algorithms for finding such repairs that achieve the best asymptotic complexity. However, the different semantics that have been proposed for these integrity constraints all exhibit some counter-intuitive examples. In this work, we look at active integrity constraints using ideas from algebraic fixpoint theory. We show how database repairs can be modeled as fixpoints of particular operators on databases, and study how the notion of grounded fixpoint induces a corresponding notion of grounded database repair that captures several natural intuitions, and in particular avoids the problems of previous alternative semantics. In order to study grounded repairs in their full generality, we need to generalize the notion of grounded fixpoint to non-deterministic operators. We propose such a definition and illustrate its plausibility in the database context. | ['Luís Cruz-Filipe'] | Grounded Fixpoints and Active Integrity Constraints | 856,442 |
In this study, we experimentally investigated the adjacent channel leakage ratio (ACLR) performance degradation of the output signal of an RF HPA equipped with adaptive linearization, caused by RoF links placed in both the direct and feedback paths of the transmitting system. We show that an ACLR exceeding -57 dBc @ 5 MHz offset, which completely satisfies the requirement defined in the 3GPP technical specification, can be achieved for a 20 W class Doherty power amplifier linearized through 1 km of fiber with commercial RoF links. The experimental results show that the achieved ACLR strongly depends on the RoF link noise figure; meanwhile, most of the nonlinear distortions caused by RoF can be successfully suppressed with the proposed joint linearization approach. | ['Alexander N. Lozhkin', 'Kazuo Nagatani', 'Yasuyuki Oishi'] | Joint Linearization for Radio-over-Fiber Links Equipped with High Power Amplifiers | 824,347 |
For any positive integers $n\geq 3, r\geq 1$ we present formulae for the number of irreducible polynomials of degree $n$ over the finite field $\mathbb{F}_{2^r}$ where the coefficients of $x^{n-1}$, $x^{n-2}$ and $x^{n-3}$ are zero. Our proofs involve counting the number of points on certain algebraic curves over finite fields, a technique which arose from Fourier-analysing the known formulae for the $\mathbb{F}_2$ base field cases, reverse-engineering an economical new proof and then extending it. This approach gives rise to fibre products of supersingular curves and makes explicit why the formulae have period $24$ in $n$. | ['Omran Ahmadi', 'Faruk Göloğlu', 'Robert Granger', 'Gary McGuire', 'Emrah Sercan Yılmaz'] | Fibre Products of Supersingular Curves and the Enumeration of Irreducible Polynomials with Prescribed Coefficients | 731,691 |
This work extends the state of the art in modeling the lifetime of a sensor network. This is done by concentrating on the peculiarities exhibited by a specific but realistic scenario: surveillance sensor networks, i.e., sensor networks in charge of reporting the passage of targets across an area of interest. Focusing on surveillance sensor networks, we show that a useful lifetime definition can be directly derived from the targets' mobility model. In fact, instead of resorting to abstract lifetime or coverage indices, we derive the lifetime of a sensor network as the time when a target is first able to cross the area of interest without being detected, studying such a variable in terms of its stochastic model. To the best of our knowledge, this is the first contribution which approaches the problem of assessing the lifetime of a sensor network building upon the mobility model of its targets, while adopting a fully probabilistic approach to perform this type of analysis. | ['Lorenzo Donatiello', 'Gustavo Marfia'] | Leveraging on Mobility Models for Sensor Network Lifetime Modeling | 927,080 |
In this paper we prove a stability result for the reconstruction of the potential $q$ associated with the operator $\partial_t - \Delta + q$ in an infinite guide using a finite number of localized observations. | ['Laure Cardoulis', 'Michel Cristofol'] | An inverse problem for the heat equation in an unbounded guide | 828,217 |
This Is What’s Important – Using Speech and Gesture to Create Focus in Multimodal Utterance | ['Farina Freigang', 'Stefan Kopp'] | This Is What’s Important – Using Speech and Gesture to Create Focus in Multimodal Utterance | 910,937 |
A Preliminary Framework for a Social Robot “Sixth Sense” | ['Lorenzo Cominelli', 'Daniele Mazzei', 'Nicola Carbonaro', 'Roberto Garofalo', 'Abolfazl Zaraki', 'Alessandro Tognetti', 'Danilo De Rossi'] | A Preliminary Framework for a Social Robot “Sixth Sense” | 841,858 |
Educational Modeling Languages (EML) are currently used to design and specify complex learning processes that adapt to the demands of a particular learning setting. LPCEL (Learning Process Composition and Execution Language) is a formal language used to specify complex learning scenarios. From a software architectural point of view, a service-oriented architecture (SOA) provides an important approach for dynamically implementing learning environments from diverse, distributed and heterogeneous learning resources and services, which can hardly be synchronized and coordinated through an EML specification. Based upon SOA principles, grid computing offers a powerful environment for using resources such as processing power, disk storage, applications and data. This work presents a grid-based architectural framework that enables the dynamic and unanticipated composition of distributed learning services. | ['Jorge Torres', 'César Cárdenas', 'Juan Manuel Dodero', 'Ignacio Aedo'] | A Grid-based Architectural Framework for Composition and Execution of Complex Learning Processes | 216,229 |
Motivation: Pixel saturation occurs when the pixel intensity exceeds a threshold and the recorded pixel intensity is truncated. Microarray experiments are commonly afflicted with saturated pixels. As a result, estimators of gene expression are biased, with the amount of bias increasing as a function of the proportion of pixels saturated. Saturation is directly related to the photomultiplier tube (PMT) voltage settings and RNA abundance and is not necessarily associated with poor array or poor spot quality. When choosing PMT settings, higher PMT settings are desired because of improved signal-to-noise ratios of low-intensity spots. This improved signal is somewhat offset by saturation of high-intensity spots. In practice, spots with saturated pixels are discarded or the biased value is used. Neither of these approaches is appealing, particularly the former approach when a highly expressed gene is discarded because of saturation. Results: We present a method to correct for saturation using pixel-level data. The method is based on a censored regression model. Evaluations on several arrays indicate that the method performs well. Simulation studies suggest that the method is robust under certain model violations. Supplementary material: Supplementary tables and figures can be found at http://linus.nci.nih.gov/Data/doddl/saturation/extras.pdf | ['Lori E. Dodd', 'Edward L. Korn', 'Lisa M. McShane', 'Gadisetti V.R. Chandramouli', 'Eric Y. Chuang'] | Correcting log ratios for signal saturation in cDNA microarrays | 426,667 |
For patients with mental health problems, various treatments exist. Before a treatment is assigned to a patient, a team of clinicians must decide which of the available treatments has the best chance of succeeding. This is a difficult decision to make, as the effectiveness of a treatment might depend on various factors, such as the patient's diagnosis, background and social environment. Which factors are the predictors for successful treatment is mostly unknown. In this article, we present a case-based reasoning approach for predicting the effect of treatments for patients with anxiety disorders. We investigated which techniques are suitable for implementing such a system to achieve a high level of accuracy. For our evaluation, we used data from a professional mental healthcare centre. Our application correctly predicted the success factor of 65% of the cases, which is significantly higher than the prediction of the baseline of 55%. Under the condition that the prediction was based on only cases with a similarity of at least 0.62, the success rate of 80% of the cases was predicted correctly. These results warrant further development of the system. | ['Rosanne Janssen', 'Pieter Spronck', 'Arnoud Arntz'] | Case-based reasoning for predicting the success of therapy | 450,450 |
Predicting Learning-Related Emotions from Students' Textual Classroom Feedback. | ['Nabeela Altrabsheh', 'Mihaela Cocea', 'Sanaz Fallahkhair'] | Predicting Learning-Related Emotions from Students' Textual Classroom Feedback. | 996,882 |
This study investigated pulse transit time (PTT) variability and trends for a wide range of heart rates (HR). PTT is considered as a significant index for estimating vital signs such as blood pressure (BP) and arteriosclerosis. However, PTT is still not an accurate indicator of these vital parameters, because PTT changes over time and is influenced by several factors, such as BP, HR, and other cardiovascular variables. Previous research indicated a correlation between HR and PTT, but only under limited experimental conditions in which HR was adjusted through decreased respiration in a supine position, resulting in a relatively low HR. Hence, it is not clear whether correlations between PTT and vital parameters are maintained if HR increases. In this study, HR was increased before and after exercise, allowing analysis of PTT variability over a wide range of HRs. Results obtained from PTT and HR measurements indicated a high degree of correlation between these factors, with correlation coefficients ranging from -0.836 to -0.967. Moreover, the correlation trend between PTT and HR variability held even through changes in applied gravity achieved by shifting body positions. | ['Kenta Murakami', 'Mototaka Yoshioka'] | Pulse Transit Time Variability on a Range of Heart Rates between Resting and Elevated States | 603,548 |
Disasters affect not only the welfare of individuals and family groups, but also the well-being of communities, and can serve as a catalyst for innovative uses of information and communication technology (ICT). In this paper, we present evidence of ICT use for re-orientation toward the community and for the production of public goods in the form of information dissemination during disasters. Results from this study of information seeking practices by members of the public during the October 2007 Southern California wildfires suggest that ICT use provides a means for communicating community-relevant information especially when members become geographically dispersed, leveraging and even building community resources in the process. In the presence of pervasive ICT, people are developing new practices for emergency response by using ICT to address problems that arise from information dearth and geographical dispersion. In doing so, they find community by reconnecting with others who share their concern for the locale threatened by the hazard. | ['Irina Shklovski', 'Leysia Palen', 'Jeannette Sutton'] | Finding community through information and communication technology in disaster response | 221,233 |
A Steganographic Method Based on DCT and New Quantization Technique. | ['Mohamed Amin', 'Hatem M. Abdullkader', 'Hani M. Ibrahem', 'Ahmed S. Sakr'] | A Steganographic Method Based on DCT and New Quantization Technique. | 785,016 |
Obstacle Detection is a central problem for any robotic system, and critical for autonomous systems that travel at high speeds in unpredictable environments. This is often achieved through scene depth estimation, by various means. When fast motion is considered, the detection range must be long enough to allow for safe avoidance and path planning. Current solutions often make assumptions about the motion of the vehicle that limit their applicability, or work at very limited ranges due to intrinsic constraints. We propose a novel appearance-based Object Detection system that is able to detect obstacles at very long range and at a very high speed (∼300 Hz), without making assumptions about the type of motion. We achieve these results using a Deep Neural Network approach trained on real and synthetic images and trading some depth accuracy for fast, robust and consistent operation. We show how photo-realistic synthetic images are able to solve the problem of training set dimension and variety typical of machine learning approaches, and how our system is robust to massive blurring of test images. | ['Michele Mancini', 'Gabriele Costante', 'Paolo Valigi', 'Thomas A. Ciarfuglia'] | Fast robust monocular depth estimation for Obstacle Detection with fully convolutional networks | 849,953 |
We provide a game-theoretic analysis of consensus, assuming that processes are controlled by rational agents and may fail by crashing. We consider agents that care only about consensus: that is, (a) an agent's utility depends only on the consensus value achieved (and not, for example, on the number of messages the agent sends) and (b) agents strictly prefer reaching consensus to not reaching consensus. We show that, under these assumptions, there is no ex post Nash equilibrium, even with only one failure. Roughly speaking, this means that there must always exist a failure pattern (a description of who fails, when they fail, and which agents they do not send messages to in the round that they fail) and initial preferences for which an agent can gain by deviating. On the other hand, if we assume that there is a distribution π on the failure patterns and initial preferences, then under minimal assumptions on π, there is a Nash equilibrium that tolerates f failures (i.e., π puts probability 1 on there being at most f failures) if f + 1 < n (where n is the total number of agents). Moreover, we show that a slight extension of the Nash equilibrium strategy is also a sequential equilibrium (under the same assumptions about the distribution π). | ['Joseph Y. Halpern', 'Xavier Vilaça'] | Rational Consensus: Extended Abstract | 842,977 |
Diversified histone modifications (HMs) are essential epigenetic features. They play important roles in fundamental biological processes including transcription, DNA repair and DNA replication. Chromatin regulators (CRs), which are indispensable in epigenetics, can mediate HMs to adjust chromatin structures and functions. With the development of ChIP-Seq technology, there is an opportunity to study CR and HM profiles at the whole-genome scale. However, no specific resource for the integration of CR ChIP-Seq data or CR-HM ChIP-Seq linkage pairs is currently available. Therefore, we constructed the CR Cistrome database, available online at http://compbio.tongji.edu.cn/cr and http://cistrome.org/cr/, to further elucidate CR functions and CR-HM linkages. Within this database, we collected all publicly available ChIP-Seq data on CRs in human and mouse and categorized the data into four cohorts: the reader, writer, eraser and remodeler cohorts, together with curated introductions and ChIP-Seq data analysis results. For the HM readers, writers and erasers, we provided further ChIP-Seq analysis data for the targeted HMs and schematized the relationships between them. We believe CR Cistrome is a valuable resource for the epigenetics community. | ['Qixuan Wang', 'Jinyan Huang', 'Hanfei Sun', 'Jing Liu', 'Juan Wang', 'Qian Wang', 'Qian Qin', 'Shenglin Mei', 'Chengchen Zhao', 'Xiaoqin Yang', 'X. Shirley Liu', 'Yong Zhang'] | CR Cistrome: a ChIP-Seq database for chromatin regulators and histone modification linkages in human and mouse | 245,974 |
Identifying Helpful Online Reviews with Word Embedding Features | ['Jie Chen', 'Chunxia Zhang', 'Zhendong Niu'] | Identifying Helpful Online Reviews with Word Embedding Features | 903,685 |
Due to growing endurance, safety and non-invasivity, small drones can increasingly be experimented with in unstructured environments. Their moderate computing power can be assimilated into swarm coordination algorithms, performing tasks in a scalable manner. For this purpose, it is challenging to investigate the use of biologically-inspired mechanisms. In this paper the focus is on the coordination aspects between small drones required to perform target search. We show how this objective can be better achieved by combining stigmergic and flocking behaviors. Stigmergy occurs when a drone senses a potential target, by releasing digital pheromone on its location. Multiple pheromone deposits are aggregated, increasing in intensity, but also diffused, to be propagated to the neighborhood, and lastly evaporated, decreasing intensity in time. As a consequence, pheromone intensity creates a spatiotemporal attractive potential field coordinating a swarm of drones to visit a potential target. Flocking occurs when drones are spatially organized into groups, whose members have approximately the same heading and attempt to remain in range of each other, for each group. It is an emergent effect of individual rules based on alignment, separation and cohesion. In this paper, we present a novel and fully decentralized model for target search, and evaluate it empirically using a multi-agent simulation platform. The different combination strategies are reviewed, describing their performance on a number of synthetic and real-world scenarios. | ['Mario G. C. A. Cimino', 'A. Lazzeri', 'Gigliola Vaglini'] | Combining stigmergic and flocking behaviors to coordinate swarms of drones performing target search | 607,321 |
Bengali, Hindi and Telugu to English Ad-hoc Bilingual Task. | ['Sivaji Bandyopadhyay', 'Tapabrata Mondal', 'Sudip Kumar Naskar', 'Asif Ekbal', 'Rejwanul Haque', 'Srinivasa Rao Godhavarthy'] | Bengali, Hindi and Telugu to English Ad-hoc Bilingual Task. | 733,887 |
Cooperative communication is a promising technique for future wireless networks, which significantly improves link capacity and reliability by leveraging the broadcast nature of the wireless medium and exploiting cooperative diversity. However, most existing works investigate its performance theoretically or by simulation. It has been widely accepted that simulations often fail to faithfully capture many real-world radio signal propagation effects, which can be overcome through developing physical wireless network testbeds. In this work, we build a cooperative testbed based on GNU Radio and the Universal Software Radio Peripheral (USRP) platform, which is a promising open-source software-defined radio system. Both single-relay cooperation and multi-relay cooperation can be supported in our testbed. Some key techniques are provided to solve the main challenges during the testbed development: e.g., maximum ratio combining in single-relay transmission and synchronized transmission among multiple relays. Extensive experiments are carried out in the testbed to evaluate the performance of various cooperative communication schemes. The results show that cooperative transmission achieves significant performance enhancement in terms of link reliability and end-to-end throughput. | ['Jin Zhang', 'Juncheng Jia', 'Qian Zhang', 'Eric M. K. Lo'] | Implementation and Evaluation of Cooperative Communication Schemes in Software-Defined Radio Testbed | 289,462 |
In computational musicology research, clustering is a common approach to the analysis of expression. Our research uses mathematical model selection criteria to evaluate the performance of clustered and non-clustered models applied to intra-phrase tempo variations in classical piano performances. By applying different standardisation methods for the tempo variations and different types of covariance matrices, performances of multiple pieces are used to evaluate the performance of candidate models. The results of the tests suggest that the clustered models perform better than the non-clustered models and that the original tempo data should be standardised by the mean tempo within a phrase. | ['Shengchen Li', 'Dawn A. A. Black', 'Mark D. Plumbley'] | The Clustering of Expressive Timing Within a Phrase in Classical Piano Performances by Gaussian Mixture Models | 891,286 |
The primary purpose of this paper is to introduce and mathematically formulate the covering salesman problem (CSP). The CSP may be stated as follows: identify the minimum cost tour of a subset of n given cities such that every city not on the tour is within some predetermined covering distance standard, S , of a city that is on the tour. The CSP may be viewed as a generalization of the traveling salesman problem. A heuristic procedure for solving the CSP is presented and demonstrated with a sample problem. | ['John R. Current', 'David Schilling'] | THE COVERING SALESMAN PROBLEM | 166,348 |
A survey on the feasibility of surface electromyography (EMG) measurements in facial pacing is presented. Pacing for unilateral facial paralysis consists of the measurement of activity from the healthy side of the face and functional electrical stimulation to reanimate the paralyzed one. The goal of this study is to evaluate the feasibility of surface EMG as a measurement method to detect muscle activations and to determine their intensities. Prior work is discussed, and results from experiments where 12 participants carried out a set of facial movements are presented. EMG was registered from zygomaticus major (smile), orbicularis oris (lip pucker), orbicularis oculi (eye blink), corrugator supercilii (frown), and masseter (chew). The most important facial functions that are limited due to the paralysis are blinking, smiling, and puckering. With the majority of participants, crosstalk between the measured EMG channels was found to be small enough to pace smiling and puckering based on detecting their contraction intensities from the healthy side. However, pacing blinking based on orbicularis oculi EMG measurement does not seem possible due to crosstalk from other muscles, but the electro-oculographic (EOG) signals that couple to the same measurement channel could help to detect eye blinks and trigger stimuli. Furthermore, the masseter greatly disturbs EMG measurement of most facial muscles, which needs to be addressed in the pacing system to avoid falsely interpreting its activity as the activity of another muscle. | ['Ville Rantanen', 'Mirja Ilves', 'Antti Vehkaoja', 'Anton Kontunen', 'Jani Lylykangas', 'Eeva Mäkelä', 'Markus Rautiainen', 'Veikko Surakka', 'Jukka Lekkala'] | A survey on the feasibility of surface EMG in facial pacing | 911,845 |
Motivation: The computational identification of non-coding RNA regions on the genome is currently receiving much attention. However, it is essentially harder than gene-finding problems for protein-coding regions because non-coding RNA sequences do not have strong statistical signals. Since comparative sequence analysis is effective for non-coding RNA detection, efficient computational methods are expected for structural alignment of RNA sequences. Several methods have been proposed to accomplish the structural alignment tasks for RNA sequences, and we found that one of the most important points is to estimate an accurate score matrix for calculating structural alignments. Results: We propose a novel approach for RNA structural alignment based on conditional random fields (CRFs). Our approach has some specific features compared with previous methods in the sense that the parameters for structural alignment are estimated such that the model can most probably discriminate between correct alignments and incorrect alignments, and has the generalization ability so that a satisfiable score matrix can be obtained even with a small number of sample data without overfitting. Experimental results clearly show that the parameter estimation with CRFs can outperform all the other existing methods for structural alignments of RNA sequences. Furthermore, structural alignment search based on CRFs is more accurate for predicting non-coding RNA regions than the other scoring methods. These experimental results strongly support our discriminative method employing CRFs to estimate the score matrix parameters. Availability: The program which is implemented in C++ is available at http://phmmts.dna.bio.keio.ac.jp/ under the GNU public license. Contact: [email protected] | ['Kengo Sato', 'Yasubumi Sakakibara'] | RNA secondary structural alignment with conditional random fields | 394,006 |
Transmit-reference ultra-wideband (TR-UWB) systems are attractive due to their relatively low complexity at both the transmitter and the receiver. Partly, this is achieved by making restrictive assumptions such as a frame length which should be much larger than the channel length. This limits their use to low data rate applications. In this paper, we lift this restriction and allow inter-frame interference (IFI) to occur. We propose a suitable signal processing data model and corresponding receiver algorithms which take the IFI into account. The performance of the algorithms are verified using simulations | ['Quang Hieu Dang', 'A. van Veen'] | Resolving inter-frame interference in a transmit-reference ultra-wideband communication system | 537,906 |
Psychological Effects of a Synchronously Reliant Agent on Human Beings | ['Felix Jimenez', 'Teruaki Ando', 'Masayoshi Kanoh', 'Tsuyoshi Nakamura'] | Psychological Effects of a Synchronously Reliant Agent on Human Beings | 759,256 |
Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, PropBank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary. | ['Paola Merlo', 'Lonneke van der Plas'] | Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both? | 107,884 |
The logic $$\mathsf {PJ}$$ is a probabilistic logic defined by adding non-iterated probability operators to the basic justification logic $$\mathsf {J}$$. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic $$\mathsf {PJ}$$. The main result of the paper is that the complexity of the derivability problem in $$\mathsf {PJ}$$ remains the same as the complexity of the derivability problem in the underlying logic $$\mathsf {J}$$, which is $$\varPi _2^p$$-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language. | ['Ioannis Kokkinis'] | The Complexity of Non-Iterated Probabilistic Justification Logic | 841,822 |
In this paper we introduce libcrn, a multiplatform open-source document image processing library aimed at researchers and companies. It is written in C++11 and has a non-contaminating license that makes it available for use in any project without legal constraints. The features include low-level image processing (color format conversion, binarization, convolution, PDE…), document images specific tools (connected components extraction, recursive block description, PDF export…), maths (matrix arithmetics, linear algebra, GMMs, equation solvers…), classification and clustering (kNN, k-means, HMMs…). The API is comprehensively documented and libcrn's architecture follows modern C++ guidelines to facilitate the handling of the library and enforce its safe usage. A sample OCR, which is only 30 lines long, is described to illustrate libcrn's scope of possibilities. | ['Yann Leydier', 'Jean Duong', 'Stéphane Bres', 'Véronique Eglin', 'Frank Lebourgeois', 'Martial Tola'] | libcrn, an Open-Source Document Image Processing Library | 954,067 |
Repositorio abierto de locuciones de fórmulas matemáticas. | ['Maria Antonia Huertas', 'Mireia Pascual', 'César Pablo Córcoles', 'Laia Llorens', 'Roger Grises'] | Repositorio abierto de locuciones de fórmulas matemáticas. | 663,654 |
The main goal of this study is to identify the bridging reading strategy, a strategy that a reader uses to make a connection from the current sentence to previous sentences to help understand the meaning of the text. For a specific target sentence, there are two types of bridging: local and distal. Benchmarks were created to help represent each type of bridging. The two immediate prior sentences of each target sentence together created a benchmark for the local bridging. The benchmarks for distal bridging were those prior sentences, excluding the two immediate prior sentences. There were three ways that distal benchmarks were created: chunks based on paragraphs, chunks based on the target sentence, and the entire collection of prior sentences. The results showed that using a modified benchmark created by removing up to 4 words within a threshold of 0.4 significantly improved the identification of the distal bridging reading strategy by 14% over the original benchmark evaluation. On the other hand, to identify local bridging, using a modified benchmark created by removing 4 words significantly improved the identification by 19% over the original benchmark evaluation. | ['Martha Brhane', 'Chutima Boonthum-Denecke'] | Using Latent Semantic Analysis and Word Matching to Enhance Bridging Reading Strategy Identification | 584,819 |
An Empirical Analysis of the Perception of Mobile Website Interfaces and the Influence of Culture. | ['Kiemute Oyibo', 'Yusuf Sahabi Ali', 'Julita Vassileva'] | An Empirical Analysis of the Perception of Mobile Website Interfaces and the Influence of Culture. | 993,644 |
How to minimize the number of mirroring resources under a QoS constraint (resource minimization problem) is an important issue in content delivery networks. This paper proposes a novel approach that takes advantage of the parallelism of dynamically reconfigurable processors (DRPs) to solve the resource minimization problem, which is NP-hard. Our proposal obtains the optimal solution by running an exhaustive search algorithm suitable for DRP. Greedy algorithms, which have been widely studied for tackling the resource minimization problem, cannot always obtain the optimal solution. The proposed method is implemented on an actual DRP and in experiments reduces the execution time by a factor of 40 compared to the conventional exhaustive search algorithm on a Pentium 4 (2.8 GHz). | ['Sho Shimizu', 'Hiroyuki Ishikawa', 'Yutaka Arakawa', 'Naoaki Yamanaka', 'Kosuke Shiba'] | Resource Minimization Method Satisfying Delay Constraint for Replicating Large Contents | 286,536 |
The Indian folk tale recorded in the well-known John Saxe poem tells of six blind men, each grabbing a different part of an elephant, and describing their impression of the whole beast from a single part's perspective. So the elephant appears to each blind man to be like a snake, a fan, a tree, a rope, a wall, a spear. As the poem concludes: “And so these men of Indostan, Disputed loud and long, Each in his own opinion, exceeding stiff and strong. Though each was partly right, All were in the wrong.” Although this tale suggests a general metaphor for poor collaboration and social coordination, the insinuation of blindness indicates an inability to share the common information that is normally available through visual perception. When fundamental cognitive resources such as shared information or visual cues are missing, collaborative work practices may suffer from the “anti-cognition” suggested by the elephant metaphor. When individuals believe they are contributing to the whole, but are unable to verify the models that are held by other participants, continued progress might founder. We may find such “blind men” situations when organizations value and prefer independent individual cognition at the expense of supporting whole system coordination. Blindness to shared effects is practically ensured when those who work together are not able to share information. | ['Peter H. Jones', 'Christopher P. Nemeth'] | Cognitive artifacts in complex work | 546,258 |
In this paper, the development of an automatic signature classification system is proposed. We present an offline and online signature verification system based on signature invariants and dynamic features. The proposed system segments each signature based on its perceptually important points and then, for each segment, computes a number of features that are scale, rotation and displacement invariant. The normalized moments and the normalized Fourier descriptors are used for this invariance, while the speed of the pen is used as a dynamic feature of the signature. In both cases the data acquisition, pre-processing, feature extraction and comparison steps are analyzed and discussed. Both static and dynamic features were used as input to a neural network. The neural network used for classification is a multi-layer perceptron (MLP) with one input layer, one hidden layer and one output layer. The performance of the proposed system is presented through simulation examples. | ['Abdullah I. Al-Shoshan'] | Handwritten Signature Verification Using Image Invariants and Dynamic Features | 103,773 |
Employing the Balanced Scorecard for the Online Media Business - A Conceptual Framework. | ['Markus Anding', 'Thomas Hess'] | Employing the Balanced Scorecard for the Online Media Business - A Conceptual Framework. | 785,636 |
We introduce an interface for horror-themed entertainment experiences based on integrating breath sensors and WiFi into gas masks. Beyond enabling the practical breath control of entertainment systems, our design aims to heighten the intensity of the experience by amplifying the user's awareness of their breathing, as well as their feelings of isolation, claustrophobia and fear. More generally, this interface is intended to act as a technology probe for exploring an emerging research agenda around fearsome interactions. We describe the deployment of our gas masks in two events: as a control mechanism for an interactive ride, and to enhance a theme park horror maze. We identify six broad dimensions - cultural, visceral, control, social, performance and engineering - that frame an agenda for future research into fearsome interactions. | ['Joe Marshall', 'Brendan Walker', 'Steve Benford', 'George Tomlinson', 'Stefan Rennick Egglestone', 'Stuart Reeves', 'Patrick Brundell', 'Paul Tennent', 'Jo Cranwell', 'Paul Harter', 'Jo Longhurst'] | The gas mask: a probe for exploring fearsome interactions | 334,944 |
We consider the multi-rate retry (MRR) capability provided by current 802.11 implementations and carry out a simulation-based study of its impact on performance with state-of-the-art rate control mechanisms in typical indoor wireless LAN scenarios. We find that MRR is more effective in non-congested environments, necessitating a mechanism to differentiate between congested and non-congested situations to better exploit the MRR capability. We also observe that decoupling the long-term rate adaptation algorithm from the MRR mechanism is key to fully realizing the benefits of MRR. | ['Neda Koci', 'Mahesh K. Marina'] | Understanding the role of multi-rate retry mechanism for effective rate control in 802.11 wireless LANs | 314,443 |
The paper presents a parallel implementation of the membrane systems. We implement the simplest variant of P systems, which however defines the essential features of the membrane systems, and acts as a framework for other variants of P systems with advanced functionalities. The mechanisms used in this implementation could be easily adapted to other versions of P systems with minor changes. The implementation is designed for a cluster of computers; it is written in C++ and it makes use of Message Passing Interface as its communication mechanism. | ['Gabriel Ciobanu', 'Wenyuan Guo'] | P systems running on a cluster of computers | 825,544 |
The fast advection in rotating gaseous objects (FARGO; Masset 2000) algorithm has been widely used for simulating disk-type objects in computational astrophysics. In this paper, we revisit this algorithm and propose some improvements. We also propose a semi-Lagrangian adaptive mesh refinement scheme for this algorithm to enhance resolution locally near an embedded proto-planet. Numerical tests are provided to demonstrate the effectiveness of our method. | ['Shengtai Li', 'Hui Li'] | Modified FARGO algorithm and its combination with adaptive mesh refinement | 650,616 |
Speech Rhythm in Parkinson's Disease: A Study on Italian. | ['Massimo Pettorino', 'Maria Grazia Busa', 'Elisa Pellegrino'] | Speech Rhythm in Parkinson's Disease: A Study on Italian. | 879,602 |
The supervisory control theory is a conceptual framework to keep a discrete-event system in a desired state space by disabling controllable events. This paper introduces a new software tool to create supervisors for actual, physical plants in terms of a PLC code implementation. It points out the differences between SCT-synthesized supervisors and controllers and identifies a scenario in which practical applications can benefit from supervision. The presented tool provides a graphical user interface for the modeling and a template-based code generator. For safety specifications, a slightly altered automaton principle, called restricting specification, is used with the goal to decrease the manual modeling effort. Furthermore, event preemption is supported, which allows to permissively supervise controllers that react to uncontrollable events. This paper intends to give a brief overview of the key features of the tool. | ['Florian Gobe', 'Thomas Timmermanns', 'Oliver Ney', 'Stefan Kowalewski'] | Synthesis Tool for Automation Controller Supervision | 821,968 |
Due to increasing possibilities to create digital video, we are facing the emergence of large video archives that are made accessible either online or offline. Though a lot of research has been devoted to video retrieval tools and methods, which allow for automatic search in videos, the performance of automatic video retrieval is still far from optimal. At the same time, the organization of personal data is receiving increasing research attention due to the challenges that are faced in gathering, enriching, searching and visualizing this data. Given the increasing quantities of personal data being gathered by individuals, the concept of a heterogeneous personal digital library of rich multimedia and sensory content for every individual is becoming a reality. Despite the differences between video archives and personal lifelogging libraries, we are facing very similar challenges when accessing these multimedia repositories. For example, users will struggle to find the information they are looking for in either collection if they are not able to formulate their search needs through a query. In this tutorial we discussed (i) proposed solutions for improved video & lifelog content navigation, (ii) typical interaction of content-based querying features, and (iii) advanced content visualization methods. Moreover, we discussed and demonstrated interactive video & lifelog search systems and ways to evaluate their performance. | ['Frank Hopfgartner', 'Klaus Schoeffmann'] | Interactive Search in Video & Lifelogging Repositories | 964,549 |
In this paper, we apply an emerging method, online learning with dynamics, to deduce properties of distributed energy resources (DERs) from coarse measurements, e.g., measurements taken at distribution substations, rather than household-level measurements. Reduced sensing requirements can lower infrastructure costs associated with reliably incorporating DERs into the distribution network. We specifically investigate whether dynamic mirror descent (DMD), an online learning algorithm, can determine the real-time controllable demand served by a distribution feeder using feeder-level active power demand measurements. In our scenario, DMD incorporates various controllable demand and uncontrollable demand models to generate real-time controllable demand estimates. In a realistic scenario, these estimates have an RMS error of 8.34% of the average controllable demand, which improves to 5.53% by incorporating more accurate models. We propose topics for additional work in modeling, system identification, and the DMD algorithm itself that could improve the RMS errors. | ['Gregory S. Ledva', 'Laura Balzano', 'Johanna L. Mathieu'] | Inferring the behavior of distributed energy resources with online learning | 938,088 |
In some online labor markets, workers are paid by the task, choose what tasks to work on, and have little or no interaction with their (usually anonymous) buyer/employer. These markets look like true spot markets for tasks rather than markets for employment. Despite appearances, we find via a field experiment that workers act more like parties to an employment contract: workers quickly form wage reference points and react negatively to proposed wage cuts by quitting. However, they can be mollified with “reasonable” justifications for why wages are being cut, highlighting the importance of fairness considerations in their decision making. We find some evidence that “unreasonable” justifications for wage cuts reduce subsequent work quality. We also find that not explicitly presenting the worker with a decision about continuing to work eliminates “quits,” with no apparent reduction in work quality. One interpretation for this finding is that workers have a strong expectation that they are party to a quasi-em... | ['Daniel L. Chen', 'John J. Horton'] | Research Note—Are Online Labor Markets Spot Markets for Tasks? A Field Experiment on the Behavioral Response to Wage Cuts | 795,892 |
This paper proposes a dynamic model of DNA microarray hybridization properties in moving fluid. Prior experimental studies indicate hybridization efficiency is closely related to fluid dynamics, temperature, DNA probe density and microarray surface properties. Simulation results using the model proposed here agree well with practical observations. The model may be used to improve and manipulate performance of DNA microarray hybridization, and implement as a control model for hybridization automation to improve reliability and robustness of microarray hybridization process. | ['Tad Hogg', 'Mingjun Zhang', 'Ruoting Yang'] | Modeling and analysis of DNA hybridization dynamics at microarray surface in moving fluid | 137,505 |
iLU Preconditioning of the Anisotropic-Finite-Difference Based Solution for the EEG Forward Problem | ['Ernesto Cuartas-Morales', 'Carlos Daniel Daniel-Acosta', 'Germán Castellanos-Domínguez'] | iLU Preconditioning of the Anisotropic-Finite-Difference Based Solution for the EEG Forward Problem | 669,606 |
In this work we present a novel security architecture for MANETs that merges clustering and threshold key management techniques. The proposed distributed authentication architecture adapts to the frequently changing topology of the network and enhances the process of assigning a node's public key. In the proposed architecture, the overall network is divided into clusters whose clusterheads (CH) are connected by virtual networks and share the private key of the central authority (CA) using Lagrange interpolation. Experimental results show that almost 95.5% of all nodes within an ad-hoc network are able to communicate securely under the proposed architecture, which attains this result 9 times faster than other architectures. Moreover, the solution is fully decentralized so that it can operate in a large-scale mobile network. | ['Atef Z. Ghalwash', 'Aliaa A. A. Youssif', 'Sherif M. Hashad', 'Robin Doss'] | Self Adjusted Security Architecture for Mobile Ad Hoc Networks (MANETs) | 184,180
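Lagrange-interpolation sharing of a CA key, as mentioned above, is commonly realized with Shamir's (t, n) threshold scheme. The sketch below is a minimal prime-field version for illustration only; the field prime, threshold and share count are arbitrary choices, not values from the paper.

```python
import random

P = 2**127 - 1   # illustrative prime field; a real CA key would use its own parameters

def split_secret(secret, threshold, n_shares):
    """Shamir (threshold, n) sharing: evaluate a random polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

ca_key = 123456789
shares = split_secret(ca_key, threshold=3, n_shares=5)   # e.g. 5 clusterheads
assert recover_secret(shares[:3]) == ca_key               # any 3 shares suffice
```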
Mutual intelligibility of American, Chinese and Dutch-accented speakers of English tested by SUS and SPIN sentences. | ['Hongyan Wang', 'Vincent J. van Heuven'] | Mutual intelligibility of American, Chinese and Dutch-accented speakers of English tested by SUS and SPIN sentences. | 794,697 |
Closing the Loop with an Enhanced Referral Management System. | ['Harley Z. Ramelson', 'Amanda von Taube', 'Pamela M. Neri'] | Closing the Loop with an Enhanced Referral Management System. | 981,541 |
This paper presents an optimal global planner for autonomous tracked vehicles navigating in off-road terrain with uncertain slip, which acts on the vehicle as process noise. The paper draws on two fields of study: slip estimation and motion planning. For slip estimation, an experimental result from [9] is used to model the effect of slip on the vehicle in various soil types. For motion planning, a robust incremental sampling-based motion planning algorithm (CC-RRT*) is combined with the LQG-MP algorithm. CC-RRT* yields the optimal and probabilistically feasible trajectory by using a chance-constrained approach under the RRT* framework. LQG-MP provides the capability of considering the role of the compensator in the motion planning phase and bounds the degree of uncertainty to an appropriate size. In simulation, the planner successfully finds the optimal and robust solution. In addition, the planner is compared with an RRT* algorithm with dilated obstacles to show that it avoids being overly conservative. | ['Sang Uk Lee', 'Ramon Gonzalez', 'Karl Iagnemma'] | Robust sampling-based motion planning for autonomous tracked vehicles in deformable high slip terrain | 810,930
Due to their pliability, sensitivity and low cost, piezoresistive sensors can be usefully adopted to recover joint bend angles in human body movement tracking. After providing a quasi-static and dynamic electrical characterization of piezoresistive sensors, the authors develop a simple and accurate RLC model fitted to the sensor's electrical response under fast deformation and relaxation movements, which makes it possible to predict the actual device behavior when tracking fast body movements. | ['Giancarlo Orengo', 'Giovanni Saggio', 'Stefano Bocchetti', 'Franco Giannini'] | Advanced characterization of piezoresistive sensors for human body movement tracking | 368,735
The process of checking mobile notifications can be challenging when the user is engaged with another task that requires him/her to monitor the path ahead (e.g. running, driving). Developing expressive tactile feedback to communicate key components of the message would enable users to decide whether to attend to the notification, or to continue with the on-going activity. We describe the design of a paired-comparison task to determine how to map tactile parameters to characteristics of incoming messages. Early findings from a field study highlight the promise offered by multi-parameter tactile cues designed using mappings identified from the paired-comparison task, even when distracters are present. | ['Huimin Qian', 'Ravi Kuber', 'Andrew Sears'] | Supporting the mobile notification process through tactile cues selected using a paired comparison task | 490,510 |
Ambiguity in requirements isn't always a bad thing. In the right hands, it can be positively useful. | ['Neil A. M. Maiden'] | Cherishing Ambiguity | 685,991 |
ICT has become a crucial element in supporting energy management. It allows for the design and implementation of smart grids. In this paper we present a distributed solution for improving the efficiency of electricity distribution and for a more rational use of energy, minimizing overloads and voltage variations. It is implemented by a Virtual Market that has been built according to a distributed approach over a P2P server-less overlay. In our prototype, RetroShare is used to implement an F2F (Friend-To-Friend) network where intelligent agents can broker and negotiate energy autonomously on the user's behalf according to high-level policies. | ['Alba Amato', 'Beniamino Di Martino', 'Marco Scialdone', 'Salvatore Venticinque'] | A Virtual Market for Energy Negotiation and Brokering | 688,382
Anthropology has always included and continues to include the social aspects of technology and innovation. Despite early calls for the inclusion of this research in the field of science, technology, and society, anthropological research, particularly in the realm of prehistory, has by and large not been integrated into this new discipline. This is unfortunate, as anthropology draws data from a range of human experience much broader culturally and deeper historically than political scientists, historians, economists, and so forth, which can benefit areas of inquiry such as technology and gender. | ['Michael N. Geselowitz'] | Anthropology, archaeology, and the social study of technology: an overview | 346,419 |
We consider a large wireless network constituting a radio telescope. Each of the anticipated 3000 nodes is triggered to collect data for further analysis at a rate of more than 200 Hz, mostly caused by noisy environmental sources. However, relevant cosmic rays occur only a few times a day. As every trigger has an associated 12.5 KB of data, and considering the size of the telescope in number of nodes and covered area, centralised processing is not an option. We propose a fully decentralised event detection algorithm based on collaborative local data analysis, effectively filtering out only those triggers that need further centralised processing. As we show through performance evaluations, the crux in the design is finding the right balance between accuracy and efficient use of resources such as the communication bandwidth in the unreliable communication environment. | ['Suhail Yousaf', 'Rena Bakhshi', 'Maarten van Steen'] | Reliable localised event detection in a wireless distributed radio telescope | 128,205 |
PRIDE is one of the most efficient lightweight block cipher proposed so far for connected objects with high performance and low-resource constraints. In this paper we describe the first ever complete Differential Fault Analysis against PRIDE. We describe how fault attacks can be used against implementations of PRIDE to recover the entire encryption key. Our attack has been validated first through simulations, and then in practice on a software implementation of PRIDE running on a device that could typically be used in IoT devices. Faults have been injected using electromagnetic pulses during the PRIDE execution and the faulty ciphertexts have been used to recover the key bits. We also discuss some countermeasures that could be used to thwart such attacks. | ['Benjamin Lac', 'Marc Beunardeau', 'Anne Canteaut', 'Jacques Fournier', 'Renaud Sirdey'] | A First DFA on PRIDE: from Theory to Practice | 982,483 |
One type of information technology which has the potential to support learning is group support system (GSS) technology. This study investigates the opportunities and pitfalls associated with using GSS to support discussion in the college classroom. As the prior research in this area is not extensive, this study is exploratory in nature. The primary contribution of the paper is that it identifies important issues which should be considered by educators who wish to use GSS to support classroom discussions. In this study, the use of GSS improved participation and had favorable impacts on selected perceptions of the discussion experience (e.g., process losses, structure, and classroom climate). However, the use of GSS did not significantly increase the subjects' perceptions of synergy, quality of contributions, or learning. The implications of the findings for future research are discussed. | ['Craig K. Tyran'] | GSS to support classroom discussion: opportunities and pitfalls | 409,151 |
This paper contributes a new high quality dataset for person re-identification, named "Market-1501". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (closed environment). To tackle these problems, the proposed Market-1501 dataset is distinguished in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in the Market-1501 dataset are produced using the Deformable Part Model (DPM) as the pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiments, we show that the proposed descriptor yields competitive accuracy on the VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset. | ['Liang Zheng', 'Liyue Shen', 'Lu Tian', 'Shengjin Wang', 'Jingdong Wang', 'Qi Tian'] | Scalable Person Re-identification: A Benchmark | 577,423
Executable business process models build on the specification of process activities, their implemented business functions (e.g., Web services) and the control flow between these activities. Before deploying such a model, it is important to verify control-flow correctness. A process is sound if its control-flow guarantees proper completion and there are no deadlocks. However, a sound control flow is not sufficient to ensure that an executable process model indeed behaves as expected. This is due to business functions requiring certain preconditions to be fulfilled for execution and having an effect on the process (postconditions). Semantic annotations provide a means for taking such further aspects into account. Inspired by OWL-S and WSMO, we consider process models in which the individual activities are annotated with logical preconditions and postconditions specified relative to an ontology that axiomatizes the underlying business domain. Verification then means to determine whether the interaction of control flow and logical states of the process is correct. To this end, we formalize the semantics of annotated processes and point out which kinds of flaws may arise. We then identify a class of processes with restricted semantic annotations where correctness can be verified in polynomial time; and we prove that the semantic annotations cannot be generalized without losing computational efficiency. The paper is written at a semi-formal level using an illustrative example; details can be looked up in a longer technical report. | ['Ingo Weber', 'Jörg Hoffmann', 'Jan Mendling'] | Beyond Soundness: On the Semantic Consistency of Executable Process Models | 334,934
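The kind of consistency check described above can be illustrated, in a heavily simplified propositional setting that ignores the ontology axioms and the branching control flow of real process models, with a few lines of code; the activity names and literals are invented for illustration and do not come from the paper.

```python
def check_sequence(activities, initial_state):
    """Check a linear sequence of semantically annotated activities.

    Each activity carries a precondition set and an effect (postcondition) set of
    propositional literals; an activity is executable only if its precondition
    holds in the current logical state.
    """
    state = set(initial_state)
    for name, pre, add, delete in activities:
        missing = pre - state
        if missing:
            return False, f"activity '{name}' misses precondition(s): {missing}"
        state = (state - delete) | add
    return True, "sequence is executable"

process = [
    # (name, precondition, added facts, deleted facts)
    ("receive_order", set(),               {"order_received"}, set()),
    ("check_credit",  {"order_received"},  {"credit_checked"}, set()),
    ("ship_goods",    {"credit_checked"},  {"goods_shipped"},  {"order_received"}),
]
print(check_sequence(process, initial_state=set()))
```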
In the world of the Internet of Things (IoT), a huge number of resource-constrained devices are directly accessible over the Internet. For allowing the constrained devices to exchange information, the IETF standards group has specified CoAP, which works on top of UDP/IP. Also, a Datagram TLS (DTLS) binding is recommended to make CoAP secure. When DTLS is enabled, a device can select one of three security modes: PreSharedKey, RawPublicKey and Certificate mode. Especially, the RawPublicKey mode, which uses an asymmetric key pair without a certificate, is mandatory to implement for CoAP over DTLS. But there are several challenges in using the asymmetric-key based secure mode in resource-constrained devices. This paper compares the RawPublicKey mode and the PreSharedKey mode, which uses a symmetric key, to discuss DTLS performance in resource-constrained devices and networks. For the comparison, we implemented an experimental environment based on IEEE 802.15.4 wireless networks consisting of resource-constrained devices in the Cooja simulator and in a real test-bed as well. Then we analyze the comparison results with regard to code size, energy consumption, and processing and receiving time. | ['Hyeokjin Kwon', 'Jiye Park', 'Namhi Kang'] | Challenges in Deploying CoAP Over DTLS in Resource Constrained Environments | 841,884
Existing distributed transactional system execution models based on globally-consistent contention management policies may abort many transactions that could potentially commit without violating correctness. To reduce unnecessary aborts and increase concurrency, we propose the distributed dependency-aware (DDA) model, which adopts different conflict resolution strategies for different transactions. In the DDA model, the concurrency of transactions is enhanced by ensuring that read-only and write-only transactions never abort, through established precedence relations with other transactions. Non-write-only update transactions are handled through a contention management policy. We identify the inherent limitations in establishing precedence relations in distributed transactional systems and propose their solutions. We present a set of algorithms to support the DDA model, then we prove the correctness and permissiveness of the DDA model and show that it supports invisible reads and efficiently garbage collects useless object versions. | ['Bo Zhang', 'Binoy Ravindran', 'Roberto Palmieri'] | Reducing Aborts in Distributed Transactional Systems through Dependency Detection | 526,295
Wireless Sensor Networks are becoming an ever more key element in networking and telecommunications, especially with the advent of the Internet of Things paradigm, where each single device obtains a unique IPv6 address and is potentially reachable from everywhere through the Internet. Such technologies can be applied in many application fields with great success in terms of optimization of costs and resources, variety of implemented features, level of customization and expandability of each solution. Ambient Assisted Living is definitely one of the most interesting areas for WSN and IoT application. In this scenario we propose an applicative example of the use of an IPv6 self-configuring WSN with mesh topology. The network is formed by low-cost and low-power sensor nodes that integrate sub-GHz radio connectivity, belonging to the so-called LLN (Low Power and Lossy Network) class. The network stack is fully compatible with the major RFCs about Internet and wireless communications (CSMA, 6LoWPAN, uIP, UDP, CoAP, etc.). The case study is a localization system that uses the RSSI feature and does not need additional expensive hardware to be integrated into the nodes. The system proves to be fundamental in indoor environments (houses, clinics, nursing homes) for AAL systems that require localization or tracking of patients and medical equipment. Our tests are performed in an extremely complex environment and show good results, localizing targets with an average error of about 2 meters, which allows the room where targets are located to be properly detected. | ['Paola Pierleoni', 'Luca Pernini', 'Alberto Belli', 'Lorenzo Palma', 'Lorenzo Maurizi', 'Simone Valenti'] | Indoor localization system for AAL over IPv6 WSN | 963,043
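A minimal sketch of the kind of RSSI-based localization such systems rely on: the log-distance path-loss model converts RSSI readings to range estimates, and a least-squares fit over several anchor nodes yields a position. The path-loss exponent, reference power and anchor coordinates below are illustrative values, not parameters reported by the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-55.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI = P0 - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Fixed anchor nodes (e.g. mains-powered mesh routers) with known positions.
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])
rssi_readings = np.array([-68.0, -72.0, -80.0, -76.0])   # dBm from the target
ranges = rssi_to_distance(rssi_readings)

def residuals(pos):
    # Difference between geometric distances to the anchors and RSSI ranges.
    return np.linalg.norm(anchors - pos, axis=1) - ranges

estimate = least_squares(residuals, x0=anchors.mean(axis=0)).x
print("estimated position (m):", np.round(estimate, 2))
```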
Data intelligence on the Internet of Things | ['Zhangbing Zhou', 'Kim Fung Tsang', 'Zhuofeng Zhao', 'Walid Gaaloul'] | Data intelligence on the Internet of Things | 721,143 |
Does Sexting Improve Adult Sexual Relationships | ['Brenda K. Wiederhold'] | Does Sexting Improve Adult Sexual Relationships | 605,855 |
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated. | ['Marta Kersten-Oertel', 'Pierre Jannin', 'D.L. Collins'] | DVV: A Taxonomy for Mixed Reality Visualization in Image Guided Surgery | 296,127 |
We present a lossless data hiding method for JPEG2000 compressed data based on the reversible information hiding for binary images we have previously proposed. In JPEG2000 compression, full color images with three RGB components are transformed to the YCrCb color space, and then, for each color component, wavelet transform, quantization and entropy coding are performed independently. Since the wavelet coefficients of each color component are quantized, a least significant bit (LSB) plane can be extracted. The proposed method embeds additional information to be hidden into the quantized wavelet coefficients of the Y color component in a reversible way. To realize this, we embed not only secret data and a JBIG2 bit-stream of a part of the LSB plane but also the bit-depth of the quantized coefficients on some code-blocks. Experimental results demonstrate the feasibility of an application of the proposed method to image alteration detection for JPEG2000 compressed data. | ['Shogo Ohyama', 'Michiharu Niimi', 'Kazumi Yamawaki', 'Hideki Noda'] | Reversible data hiding of full color JPEG2000 compressed bit-stream preserving bit-depth information | 482,839
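The sketch below illustrates only the plain LSB-substitution step on toy quantized coefficients and the idea that keeping a copy of the original LSBs makes the embedding reversible; the paper's actual scheme additionally JBIG2-compresses part of the LSB plane and embeds bit-depth information, which is not reproduced here.

```python
import numpy as np

def embed_lsb(coeffs, payload_bits):
    """Replace the LSBs of the first len(payload_bits) coefficients."""
    out = coeffs.copy()
    idx = np.arange(len(payload_bits))
    out[idx] = (out[idx] & ~1) | payload_bits
    return out

def extract_lsb(coeffs, n_bits):
    return coeffs[:n_bits] & 1

coeffs = np.array([14, 7, 22, 9, 31, 16, 5, 12], dtype=np.int64)  # toy quantized coefficients
original_lsbs = coeffs & 1          # in the real scheme this plane is compressed and embedded too
payload = np.array([1, 0, 1, 1], dtype=np.int64)

stego = embed_lsb(coeffs, payload)
assert np.array_equal(extract_lsb(stego, 4), payload)
# Reversibility: restoring the saved LSBs recovers the original coefficients exactly.
restored = (stego & ~1) | original_lsbs
assert np.array_equal(restored, coeffs)
```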
Intelligent tutoring systems (ITS) provide individualized instruction. They offer many advantages over the traditional classroom scenario: they are always available, nonjudgmental and provide tailored feedback resulting in increased and effective learning. However, they are still not as effective as one-on-one human tutoring. The next generation of intelligent tutors is expected to be able to take into account the cognitive and emotional state of students. We present a proposed contribution of affect to student modeling and report on the progress made in the development of a facial expression analysis component for intelligent tutoring systems. | ['Abdolhossein Sarrafzadeh', 'Hamid Hosseini', 'Chao Fan', 'Scott P. Overmyer'] | Facial expression analysis for estimating learner's emotional state in intelligent tutoring systems | 216,853
Built-in self-repair (BISR) techniques are widely used to enhance the yield of memories in a system-on-chip (SOC). A SOC typically consists of hundreds of memories. Cost-efficient BISR schemes for repairing those memories thus are imperative. In this paper, we propose a memory BISR automatic generation (MBAG) framework for designing memory BISR circuits in a SOC. The MBAG framework consists of a test scheduling engine and a memory grouping engine for the minimization of test time and area cost of the BISR circuits. The test scheduling algorithm has been presented in our previous work [1]. In this paper, therefore, we focus on the introduction of the grouping algorithm determining the memories which can share a BISR circuit under the constraints of distance and scheduling results. Simulation results show that the proposed MBAG can generate reconfigurable BISR circuits for 20 memories such that 50% area reduction is achieved in comparison with a dedicated BISR scheme if the distance constraint is 3mm and the test power constraint is 80mW. | ['Tsu-Wei Tseng', 'Chih-Sheng Hou', 'Jin-Fu Li'] | Automatic generation of memory built-in self-repair circuits in SOCs for minimizing test time and area cost | 203,952 |
The authors develop a method based on the premise that optimal state assignment corresponds to finding an optimal general decomposition of a finite state machine (FSM). They discuss the use of this approach for encoding state transition graphs extracted from logic-level descriptions. The notion of transition pairing is used to decompose a given FSM into several submachines such that the state assignment problem for the submachines is simpler than the original problem, while attempting to avoid compromising the optimality of the solution. A novel decomposition algorithm that can decompose an FSM into an arbitrary number of submachines and a novel constraint satisfaction algorithm to encode the different submachines are given. Experimental results validate the use of decomposition-based techniques to solve the encoding problem. | ['James H. Kukula', 'Srinivas Devadas'] | Finite state machine decomposition by transition pairing | 156,840
Selected web search engines provide statistics of user activities by topic, time and location. Utilizing them requires well-prepared search phrases and a search range. A system of etalons for calibrating the search frequencies provided by Google Trends is proposed. It was applied to evaluate searches for the names of Czech towns. The regression analysis proved a high correlation with population. Highlighted anomalies were explored. K-means cluster analysis enabled a categorization of the selected towns. The geographical network analysis of relationships among towns suffers from the low quality of locations provided by Google. The discussion includes an overview of the main pros and cons of Google Trends and provides recommendations. | ['Jiří Horák', 'Igor Ivan', 'Pavel Kukuliaă', 'Tomáš Inspektor', 'Branislav Deveăka', 'Markéta Návratová'] | Google Trends for Data Mining. Study of Czech Towns | 560,332
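A minimal sketch of the kind of analysis described above: regressing Google Trends search frequency against town population and clustering towns with k-means. The data values, feature choice and number of clusters are illustrative assumptions, not figures from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Hypothetical (town, population, calibrated search frequency) records.
towns = ["TownA", "TownB", "TownC", "TownD", "TownE", "TownF"]
population = np.array([400_000, 100_000, 95_000, 50_000, 20_000, 5_000])
search_freq = np.array([820.0, 230.0, 400.0, 95.0, 60.0, 12.0])

# Regression of search frequency on population (a log-log fit is often a better model).
X = np.log10(population).reshape(-1, 1)
y = np.log10(search_freq)
reg = LinearRegression().fit(X, y)
print("R^2:", round(reg.score(X, y), 3))

# Cluster towns by population size and population-normalized search activity.
features = np.column_stack([np.log10(population), search_freq / population * 1e3])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for town, label in zip(towns, labels):
    print(town, "-> cluster", label)
```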
Tight Bounds for Keyed Sponges and Truncated CBC. | ['Peter Gazi', 'Krzysztof Pietrzak', 'Stefano Tessaro'] | Tight Bounds for Keyed Sponges and Truncated CBC. | 787,751 |
In this paper, we will discuss the problem of optimal model order reduction of bilinear control systems with respect to the generalization of the well-known ${\cal H}_2$-norm for linear systems. We revisit existing first order necessary conditions for ${\cal H}_2$-optimality based on the solutions of generalized Lyapunov equations arising in bilinear system theory and present an iterative algorithm which, upon convergence, will yield a reduced system fulfilling these conditions. While this approach relies on the solution of certain generalized Sylvester equations, we will establish a connection to another method based on generalized rational interpolation. This will lead to another way of computing the ${\cal H}_2$-norm of a bilinear system and will extend the pole-residue optimality conditions for linear systems, also allowing for an adaption of the successful iterative rational Krylov algorithm to bilinear systems. By means of several numerical examples, we will then demonstrate that the new techniques ... | ['Peter Benner', 'Tobias Breiten'] | Interpolation-Based H2-Model Reduction of Bilinear Control Systems | 200,206 |
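As background for the objects mentioned above, the ${\cal H}_2$-type norm of a bilinear system is commonly expressed through a generalized (bilinear) Lyapunov equation. The LaTeX sketch below states the standard form used in this line of work for a bilinear system with drift matrix $A$, bilinear terms $N_j$, input matrix $B$ and output matrix $C$; it is generic background notation rather than the paper's exact formulation.

```latex
% Bilinear system: \dot{x} = A x + \sum_{j=1}^{m} N_j x u_j + B u, \qquad y = C x.
% The generalized controllability Gramian P solves the generalized Lyapunov equation
\[
  A P + P A^{T} + \sum_{j=1}^{m} N_j P N_j^{T} + B B^{T} = 0 ,
\]
% and, whenever such a P exists, the bilinear H2-norm is given by
\[
  \| \Sigma \|_{\mathcal{H}_2}^{2} = \operatorname{tr}\!\left( C P C^{T} \right).
\]
```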
By implementing a dynamic, iterative development process, technology transfer offices can earn additional value and recognition for their institution, bring commercially viable and valuable technologies to the marketplace, support local economic development, and earn funding to support future projects. | ['Steven A Fontana'] | Technology Development as an Alternative to Traditional Technology Transfer Models | 164,346 |
Wireless sensor networks have the potential to monitor environments for both military and civil applications. Due to inhospitable conditions, these sensors are not always deployed uniformly in the area of interest. Since sensors are generally constrained in on-board energy supply, efficient management of the network is crucial to extend the life of the sensors. A sensor's energy cannot support long-haul communication to reach a remote command site, which thus requires many levels of hops or a gateway to forward the data on behalf of the sensor. In this paper, we propose an algorithm to organize these sensors into well-defined clusters with less energy-constrained gateway nodes acting as cluster-heads, and to balance the load among these gateways. Simulation results show how our approach can balance the load and improve the lifetime of the system. | ['Gaurav Gupta', 'Mohamed F. Younis'] | Load-balanced clustering of wireless sensor networks | 286,410
We show that convex optimization methods have fundamental properties that complicate performing signal segmentation based on sparsity assumptions. We review the recently introduced overcomplete sparse segmentation model, we perform experiments revealing the limits, and we explain this behaviour. We also propose modifications and alternatives. | ['Pavel Rajmic', 'Michaela Novosadova'] | On the limitation of convex optimization for sparse signal segmentation | 947,704 |
An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links where nodes not in direct range communicate via intermediary nodes. Routing in ad hoc networks is a challenging problem as a result of highly dynamic topology as well as bandwidth and energy constraints. In addition, security is critical in these networks due to the accessibility of the shared wireless medium and the cooperative nature of ad hoc networks. However, none of the existing routing algorithms can withstand a dynamic proactive adversarial attack. The routing protocol presented in this work attempts to provide throughput-competitive route selection against an adaptive adversary. A proof of the convergence time of our algorithm is presented as well as preliminary simulation results. | ['Baruch Awerbuch', 'David Holmer', 'Herbert Rubens', 'Robert Kleinberg'] | Provably competitive adaptive routing | 8,196 |
We examined age-related changes in the interactions among brain regions in children performing rhyming judgments on visually presented words. The difficulty of the task was manipulated by including a conflict between task-relevant (phonological) information and task-irrelevant (orthographic) information. The conflicting conditions included pairs of words that rhyme despite having different spelling patterns (jazz-has), or words that do not rhyme despite having similar spelling patterns (pint-mint). These were contrasted with nonconflicting pairs that have similar orthography and phonology (dime-lime) or different orthography and phonology (press-list). Using fMRI, we examined effective connectivity among five left hemisphere regions of interest: fusiform gyrus (FG), inferior frontal gyrus (IFG), intraparietal sulcus (IPS), lateral temporal cortex (LTC), and medial frontal gyrus (MeFG). Age-related increases were observed in the influence of the IFG and FG on the LTC, but only in conflicting conditions. These results reflect a developmental increase in the convergence of bottom-up and top-down information on the LTC. In older children, top-down control process may selectively enhance the sensitivity of the LTC to bottom-up information from the FG. This may be evident especially in situations that require selective enhancement of task-relevant versus task-irrelevant information. Altogether these results provide a direct evidence for a developmental increase in top-down control processes in language processing. The developmental increase in bottom-up processing may be secondary to the enhancement of top-down processes. | ['Tali Bitan', 'Jimmy Cheon', 'Dong Lu', 'Douglas D. Burman', 'James R. Booth'] | Developmental increase in top-down and bottom-up processing in a phonological task: An effective connectivity, fmri study | 238,769 |
In this paper, an IP core is designed with the ALTERA NIOS II soft-core processor as the core and the Cyclone II FPGA series as the digital platform; SOPC technology is used to place soft-cores such as the microprocessor and the PS2 keyboard I/O interface controller on a single FPGA chip. The NIOS II IDE is used to carry out the software testing of the system, and the hardware test is completed on an ALTERA Cyclone II EP2C35 FPGA experimental platform. The results show that the functions of this IP core are correct and, furthermore, that it can be conveniently reused in SOPC systems. | ['Sujuan Li', 'Fei Xiang', 'Juwei Zhang'] | Design of PS2 keyboard controller IP core based on SOPC | 292,562
Contextualizing Concepts | ['Liane Gabora', 'Diederik Aerts'] | Contextualizing Concepts | 707,117 |
Nonlinear Stabilization of a DC-Bus Supplying a Constant Power Load | ['Ahmed-Bilal Awan', 'Babak Nahid-Mobarakeh', 'Serge Pierfederici', 'Farid Meibody-Tabar'] | Nonlinear Stabilization of a DC-Bus Supplying a Constant Power Load | 539,730 |
Personality Influences on Etiquette Requirements for Social Media in the Work Context When Jaunty Juveniles Communicate with Serious Suits | ['André Calero Valdez', 'Anne Kathrin Schaar', 'Martina Ziefle'] | Personality Influences on Etiquette Requirements for Social Media in the Work Context When Jaunty Juveniles Communicate with Serious Suits | 761,141
Motivated by the waiting lines at the U.S.-Canadian border crossings, we investigate a security-check system with both security and customer-service goals. In such a system, every customer has to be inspected by the first-stage inspector, but only a proportion of customers need to go through the second stage for further inspection. This "further inspection proportion," affecting both security screening and system congestion, becomes a key decision variable for the security-check system. Using a stylized two-stage queueing model, we established the convexity of the expected waiting cost function. With such a property, the optimal further inspection proportion can be determined to achieve a balance of the two goals, and the service capacities can be classified into "security-favorable," "security-unfavorable," or "security-infeasible" categories. A specific capacity category indicates whether the security and customer-service goals are consistent or in conflict. In addition, we have verified that the properties discovered in the stylized model also hold approximately in a more general multiserver setting. Numerical results are presented to demonstrate the accuracy and robustness of the approximations and the practical value of the model. This paper was accepted by Assaf Zeevi, stochastic models and simulation. | ['Zhe George Zhang', 'Hsing Luh', 'Chia-Hung Wang'] | Modeling Security-Check Queues | 215,909
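To make the trade-off concrete, here is a back-of-the-envelope sketch that treats each stage as an M/M/1 queue and sweeps the further-inspection proportion; the arrival rate, service rates and cost weights are invented for illustration, and the M/M/1 assumption is ours, not the paper's stylized model.

```python
import numpy as np

lam = 10.0             # arrivals per hour
mu1, mu2 = 14.0, 6.0   # stage-1 and stage-2 service rates
wait_cost = 1.0        # cost per customer-hour of expected waiting
security_benefit = 4.0 # reward per unit of further-inspection proportion

def expected_cost(p):
    """Total cost for further-inspection proportion p (M/M/1 at both stages)."""
    if lam >= mu1 or p * lam >= mu2:
        return np.inf                       # an unstable queue is not allowed
    w1 = 1.0 / (mu1 - lam)                  # expected sojourn time at stage 1
    w2 = p * (1.0 / (mu2 - p * lam))        # only a fraction p visits stage 2
    return wait_cost * lam * (w1 + w2) - security_benefit * p

ps = np.linspace(0.0, 0.59, 60)             # keep p*lam below mu2 for stability
costs = np.array([expected_cost(p) for p in ps])
best = ps[np.argmin(costs)]
print(f"best further-inspection proportion ~ {best:.2f}")
```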
In wireless environments, video quality can be severely degraded due to channel errors. Improving error robustness against the impact of packet loss in error-prone networks is considered a critical concern in wireless video networking research. Data partitioning (DP) is an efficient error-resilient tool in video codecs that is capable of reducing the effect of transmission errors by reorganizing the coded video bitstream into different partitions with different levels of importance. Significant video performance improvement can be achieved if DP is jointly optimized with unequal error protection (UEP). This paper proposes a fast and accurate frame-recursive block-based distortion estimation model for the DP tool in H.264/AVC. The accuracy of our model comes from appropriately approximating the error-concealment cross-correlation term (which is neglected in earlier work in order to reduce the computation burden) as a function of the first moment of decoded pixels. Without increasing computation complexity, our proposed distortion model can be applied to both fixed and variable block size intra-prediction and motion compensation. Extensive simulation results are presented to show the accuracy of our estimation algorithm. | ['Werayut Saesue', 'Jian Zhang', 'Chun Tung Chou'] | Hybrid frame-recursive block-based distortion estimation model for wireless video transmission | 448,879
This paper introduces the observability radius of network systems, which measures the robustness of a network to perturbations of the edges. We consider linear networks, where the dynamics are described by a weighted adjacency matrix, and dedicated sensors are positioned at a subset of nodes. We allow for perturbations of certain edge weights, with the objective of preventing observability of some modes of the network dynamics. Our work considers perturbations with a desired sparsity structure, thus extending the classic literature on the controllability and observability radius of linear systems. We propose an optimization framework to determine a perturbation with smallest Frobenius norm that renders a desired mode unobservable from a given set of sensor nodes. We derive optimality conditions and a heuristic optimization algorithm, which we validate through an example. | ['Gianluca Bianchin', 'Paolo Frasca', 'Andrea Gasparri', 'Fabio Pasqualetti'] | The observability radius of network systems | 861,516 |
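The optimization the abstract refers to can be stated, in a generic form that omits the paper's structural (sparsity) constraint set for brevity, as the smallest Frobenius-norm edge perturbation that makes a chosen mode $\lambda$ unobservable from the sensor matrix $C$; the LaTeX below is a standard PBH-style formulation rather than the paper's exact notation.

```latex
% Network dynamics x_{k+1} = A x_k, outputs y_k = C x_k (C selects the sensor nodes).
% Smallest structured perturbation rendering the mode \lambda unobservable:
\[
  \min_{\Delta \in \mathcal{S}} \; \| \Delta \|_F
  \quad \text{subject to} \quad
  \operatorname{rank}
  \begin{bmatrix} A + \Delta - \lambda I \\ C \end{bmatrix} < n ,
\]
% i.e. the PBH observability test fails for the perturbed matrix A + \Delta at \lambda,
% where \mathcal{S} encodes which edge weights are allowed to be perturbed.
```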
Can automatically generated questions scaffold reading comprehension? We automated three kinds of multiple-choice questions in children's assisted reading: 1. Wh- questions: ask a generically worded What/Where/When question. 2. Sentence prediction: ask which of three sentences belongs next. 3. Cloze: ask which of four words best fills in a blank in the next sentence. A within-subject experiment in the spring 2003 version of Project LISTEN's Reading Tutor randomly inserted all three kinds of questions during stories as it helped children read them. To compare their effects on story-specific comprehension, we analyzed 15,196 subsequent cloze test responses by 404 children in grades 1-4. ○ Wh- questions significantly raised children's subsequent cloze performance. ○ This effect was cumulative over the story rather than a recency effect. ○ Sentence prediction questions probably helped (p =.07). ○ Cloze questions did not improve performance on later questions. ○ The rate of hasty responses rose over the year. ○ Asking a question less than 10 seconds after the previous question increased the likelihood of the student giving a hasty response. The results show that a computer can scaffold a child's comprehension of a given text without understanding the text itself, provided it avoids irritating the student. | ['Joseph E. Beck', 'Jack Mostow', 'Juliet Bey'] | Can automated questions scaffold children's reading comprehension? | 895,496 |
The Critical Role of External Validity in Advancing Organizational Theorizing | ['Ghiyoung Im', 'Detmar W. Straub'] | The Critical Role of External Validity in Advancing Organizational Theorizing | 549,937 |
This paper discusses a spatio-temporal object query language (OQL) which treats spatial data and temporal data in the same way. We address spatial, temporal and spatio-temporal predicates and operators, and then show queries in the spatio-temporal OQL together with their internal expressions in the spatio-temporal database system Hawks. This language is going to be implemented in INADA/ODMG, which is a database programming language based on C++ and which provides the C++ bindings of the ODMG-93 Object Database Standard. The expressions, in which spatio-temporal objects are defined as figures in the 4D topological space resulting from the direct product of 3D space and time, are used to retrieve those spatio-temporal objects which satisfy a condition. | ['S. Kuroki', 'A. Makinouchi', 'K. Ishizuka'] | Towards a spatio-temporal OQL for the four dimensional spatial database system Hawks | 55,043