abstract | authors | title | __index_level_0__
---|---|---|---|
Unequal but Fair? Weights in the Serial Integration of Haptic Texture Information | ['Alexandra Lezkan', 'Knut Drewing'] | Unequal but Fair? Weights in the Serial Integration of Haptic Texture Information | 618,839 |
In this paper, linear precoding for non-orthogonal space time block codes (STBC) is investigated. A theoretical model of spatial correlation with a Laplacian distribution of the angle of arrival (AOA) is first derived. The design of the precoder is based on the choice of the codeword error matrix according to a criterion. We propose here a new criterion based on the system outage probability to select the suitable codeword error matrix, allowing the system to move rapidly from one diversity order to the next. Codeword selection points out the importance of the determinant and the eigenvalues of the error matrices. The proposed method is applied to the non-orthogonal optimal STBCs: the 2×2 Golden code and the 4×4 Perfect code. | ['Ali Ameur Haj Salah', 'Ahmed Saadani', 'G.R. Ben Othman'] | On the Linear Precoding of Non-Orthogonal STBC for Correlated MIMO Channel | 392,554 |
Mobile hosts typically have scarce energy due to short battery lifetimes. We propose a scheduling algorithm which is suitable for battery-constrained multihop packet radio networks. The proposed algorithm, called ECTS (energy conserving transmission scheduling), focuses on conserving battery power while preserving topology transparency, guaranteed minimum throughput, bounded maximum delay, and fair transmission policy. The ECTS algorithm conserves the power using strategies that allow the network interface to use the low power sleep mode instead of the idle mode, and eliminates data collisions using RTS (request-to-send) and CTS (clear-to-send) control slots. As observed in previous experiments, the cost of mode transition is quite expensive. To relieve this unnecessary power consumption, the ECTS algorithm significantly reduces the number of mode transitions. For low-power hosts, the ECTS protocol reduces the number of mode transitions further. We have simulated and compared the energy efficiency of our protocol with the IEEE 802.11 and GRAND (Galois radio network design) algorithms. Simulation results show our protocol is very efficient in terms of power conservation. | ['Jong-Hoon Youn', 'Bella Bose'] | An energy conserving medium access control protocol for multihop packet radio networks | 477,734 |
Semi-rotation invariant feature descriptors using Zernike moments for MLP classifier. | ['Yu-Bin Yoon', 'Lae-Kyoung Lee', 'Se-Young Oh'] | Semi-rotation invariant feature descriptors using Zernike moments for MLP classifier. | 939,888 |
Short-Time Fourier Transform-Brillouin Optical Time-Domain Reflectometry (STFT-BOTDR) implements STFT over the full frequency spectrum to measure the distributed temperature and strain along the optic fiber, providing new research advances in dynamic distributed sensing. The spatial and frequency resolution of the dynamic sensing are limited by the Signal to Noise Ratio (SNR) and the Time-Frequency (T-F) localization of the input pulse shape. T-F localization is fundamentally important for the communication system, which suppresses interchannel interference (ICI) and intersymbol interference (ISI) to improve the transmission quality in multicarrier modulation (MCM). This paper demonstrates that the T-F localized input pulse shape can enhance the SNR and the spatial and frequency resolution in STFT-BOTDR. Simulation and experiments of T-F localized different pulses shapes are conducted to compare the limitation of the system resolution. The result indicates that rectangular pulse should be selected to optimize the spatial resolution and Lorentzian pulse could be chosen to optimize the frequency resolution, while Gaussian shape pulse can be used in general applications for its balanced performance in both spatial and frequency resolution. Meanwhile, T-F localization is proved to be useful in the pulse shape selection for system resolution optimization. | ['Linqing Luo', 'Bo Li', 'Yifei Yu', 'Xiaomin Xu', 'Kenichi Soga', 'Jize Yan'] | Time and Frequency Localized Pulse Shape for Resolution Enhancement in STFT-BOTDR | 630,393 |
This research presents a working methodology for developing an automatic planning system for the scanning process of free-form surfaces. The surface has been modelled using the STL format, which permits automatic recognition of any type of surface. This work considers only collision-free orientations that guarantee the visibility of the zone to scan and that are compatible with the constraints imposed by the process parameters. In order to speed up the calculation of these orientations, different methods like back-face culling and space partitioning techniques, such as kd-trees, are applied. Once the space occupied by the part is partitioned into regions, recursive ray traversal algorithms are used to check for intersection exclusively with the part triangles (STL) that can potentially be traversed by each laser beam direction. | ['P. Fernández', 'B. J. Álvarez', 'J.C. Rico', 'D. Blanco', 'Gonzalo Valiño'] | Constraints Evaluation and Working Methodology for Laser Scanning of Free-Form Surfaces | 14,331 |
In this paper we consider linear fractional programming problem and look at its linear complementarity formulation. In the literature, uniqueness of solution of a linear fractional programming problem is characterized through strong quasiconvexity. We present another characterization of uniqueness through complementarity approach and show that the solution set of a fractional programming problem is convex. Finally we formulate the complementarity condition as a set of dynamical equations and prove certain results involving the neural network model. A computational experience is also reported. | ['S. K. Neogy', 'A. K. Das', 'P. K. Das'] | On Linear Fractional Programming Problem and its Computation Using a Neural Network Model | 181,662 |
Composing Swarm Robot Formations Based on Their Distributions Using Mobile Agents | ['Ryotaro Oikawa', 'Munehiro Takimoto', 'Yasushi Kambayashi'] | Composing Swarm Robot Formations Based on Their Distributions Using Mobile Agents | 857,409 |
In order to avoid stress conditions in information systems, the use of a simple admission control (SAC) mechanism is widely adopted by systems' administrators. Most SAC approaches limit the number of concurrent transactions, redirecting to a waiting FCFS queue all transactions that exceed that number. The introduction of such a policy can be very useful when the most important metric for the system is the total throughput. But such a simple AC approach may not be sufficient when transactions have deadlines to meet, since in stressed scenarios a transaction may spend a lot of time only waiting for execution. This paper presents 2 enhancements that help keep the number of transactions executed within the deadline close to the throughput. The enhancements are DiffServ, in which short transactions have priority, and a 2-Phase Admission Control (2PAC) mechanism, which tries to avoid the previously mentioned problem by limiting the queue size dynamically using information provided by a feedback control. It also introduces the QoS-Broker --- a tool which implements both SAC and 2PAC --- and uses it to compare their performance when submitted to the TPC-C benchmark. Our results show that both total throughput and throughput within deadline increase when the 2 enhancements are used, although it becomes clear that 2PAC has a much bigger impact on performance than DiffServ. | ['Luís Fernando Orleans', 'Geraldo Zimbrão', 'Pedro Furtado'] | Controlling the Behaviour of Database Servers with 2PAC and DiffServ | 484,613 |
It has been suggested that both the posterior parietal cortex (PPC) and the extrastriate occipital cortex (OC) participate in the spatial processing of sounds. However, the precise time-course of their contribution remains unknown, which is of particular interest, considering that it could give new insights into the mechanisms underlying auditory space perception. To address this issue, we have used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right PPC or right OC at different delays in subjects performing a sound lateralization task. Our results confirmed that these two areas participate in the spatial processing of sounds. More precisely, we found that TMS applied over the right OC 50 msec after the stimulus onset significantly impaired the localization of sounds presented either to the right or to the left side. Moreover, right PPC virtual lesions induced 100 and 150 msec after sound presentation led to a rightward bias for stimuli delivered on the center and on the left side, reproducing transiently the deficits commonly observed in hemineglect patients. The finding that the right OC is involved in sound processing before the right PPC suggests that the OC exerts a feedforward influence on the PPC during auditory spatial processing. | ['Olivier Collignon', 'Marco Davare', 'Anne De Volder', 'Colline Poirier', 'Etienne Olivier', 'Claude Veraart'] | Time-course of Posterior Parietal and Occipital Cortex Contribution to Sound Localization | 480,299 |
This paper discusses the integration of user studies into computer graphics-related courses. Computer graphics and visualization are essentially about producing images for a target audience, be it the millions watching a new CG-animated movie or the small group of researchers trying to gain insight into the large amount of numerical data resulting from a scientific experiment. To ascertain the final images' effectiveness for their intended audience or the designed visualizations' accuracy and expressiveness, formal user studies are often essential. In human-computer interaction (HCI), such user studies play a similar fundamental role in evaluating the usability and applicability of interaction methods and metaphors for the various devices and software systems we use. | ['Beatriz Sousa Santos', 'Paulo Dias', 'Samuel S. Silva', 'Carlos Ferreira', 'Joaquim Madeira'] | Integrating User Studies into Computer Graphics-Related Courses | 47,132 |
We devised and tested two new visual guides to help users comprehend distorted sketched information in magnification lenses. Distortion techniques, such as fisheye lenses, have the advantage of magnifying information without occluding the surrounding content. However, distorted information in the transition region requires extra mental workload to understand: this can lead to frustration and rejection of magnification lenses. Our evaluation shows any visual guide is better than none and identifies strengths and weaknesses of the new guides. We tested for the four visual properties important for understanding distorted information: scale, alignment, distance and direction. Surprisingly, grids are not as effective in many contexts as our new lenses. | ['Paul Schmieder', 'Andrew Luxton-Reilly', 'Beryl Plimmer', 'John G. Hosking'] | Visual guides for comprehending digital ink in distortion lenses | 609,731 |
Towards Activity Recognition of Learners in On-line Lecture. | ['Hiromichi Abe', 'Takuya Kamizono', 'Kazuya Kinoshita', 'Kensuke Baba', 'Shigeru Takano', 'Kazuaki Murakami'] | Towards Activity Recognition of Learners in On-line Lecture. | 736,135 |
Various claims have been made regarding the benefits that Enterprise Architecture (EA) delivers for both individual systems development projects and the organization as a whole. This paper presents the statistical findings of a survey study (n=293) carried out to empirically test these claims. First, we investigated which techniques are used in practice to stimulate conformance to EA. Secondly, we studied which benefits are actually gained. Thirdly, we verified whether EA creators (e.g. enterprise architects) and EA users (e.g. project members) differ in their perceptions regarding EA. Finally, we investigated which of the applied techniques most effectively increase project conformance to and effectiveness of EA. A multivariate regression analysis demonstrates that three techniques have a major impact on conformance: carrying out compliance assessments, management propagation of EA and providing assistance to projects. Although project conformance plays a central role in reaping various benefits at both the organizational and the project level, it is shown that a number of important benefits have not yet been fully achieved. | ['Ralph Foorthuis', 'Marlies van Steenbergen', 'Nino Mushkudiani', 'Wiel Bruls'] | ON COURSE, BUT NOT THERE YET: ENTERPRISE ARCHITECTURE CONFORMANCE AND BENEFITS IN SYSTEMS DEVELOPMENT | 522,709 |
Multi-Sorted Inverse Frequent Itemsets Mining: On-Going Research | ['Domenico Saccà', 'Edoardo Serra', 'Antonio Piccolo'] | Multi-Sorted Inverse Frequent Itemsets Mining: On-Going Research | 904,763 |
Many techniques in the social sciences and graph theory deal with the problem of examining and analyzing patterns found in the underlying structure and associations of a group of entities. However, much of this work assumes that this underlying structure is known or can easily be inferred from data, which may often be an unrealistic assumption for many real-world problems. Below we consider the problem of learning and querying a graph-based model of this underlying structure. The model is learned from noisy observations linking sets of entities. We explicitly allow different types of links (representing different types of relations) and temporal information indicating when a link was observed. We quantitatively compare this representation and learning method against other algorithms on the task of predicting future links and new “friendships” in a variety of real world data sets. | ['Jeremy Kubica', 'Andrew W. Moore', 'David Cohn', 'Jeff G. Schneider'] | Finding Underlying Connections: A Fast Graph-Based Method for Link Analysis and Collaboration Queries | 358,989 |
In order to apply the results of formal studies of real-time task models, a practitioner must account for the effects of phenomena present in the implementation but not present in the formal model. We study the feasibility and schedulability problems for periodic tasks that must compete for the processor with interrupt handlers - tasks that are assumed to always have priority over application tasks. The emphasis in the analysis is on deadline-driven scheduling methods. We develop conditions that solve the feasibility and schedulability problems and demonstrate that our solutions are computationally feasible. Lastly, we compare our analysis with others developed for static priority task systems. | ['Kevin Jeffay', 'Donald L. Stone'] | Accounting for interrupt handling costs in dynamic priority task systems | 456,034 |
An Efficient and Simple Graph Model for Scientific Article Cold Start Recommendation | ['Tengyuan Cai', 'Hongrong Cheng', 'Jiaqing Luo', 'Shijie Zhou'] | An Efficient and Simple Graph Model for Scientific Article Cold Start Recommendation | 902,313 |
We give deterministic and stochastic models of the traffic on a circular road without overtaking. From this model the mean speed is derived as an eigenvalue of the min-plus matrix describing the dynamics of the system in the deterministic case and as the Lyapunov exponent of a min-plus stochastic matrix in the stochastic case. The eigenvalue and the Lyapunov exponent are computed explicitly. From these formulas, we derive the fundamental law that links the flow to the density of vehicles on the road. Numerical experiments using the MAXPLUS toolbox of SCILAB confirm the theoretical results obtained. | ['Pablo Lotito', 'E. Mancinelli', 'Jean-Pierre Quadrat'] | A min-plus derivation of the fundamental car-traffic law | 542,120 |
The generalized method of time and transfer constants is introduced. It can be used to determine the transfer function to the desired level of accuracy in terms of time and transfer constants of first-order systems using exclusively low frequency calculations. This method can be used to determine the poles and zeros of circuits with both inductors and capacitors. An inductive proof of this generalized method is given which subsumes special cases, such as methods of zero- and infinite-value time constants. Several important and useful corollaries of this method are discussed and several examples are analyzed. | ['Ali Hajimiri'] | Generalized Time- and Transfer-Constant Circuit Analysis | 429,596 |
Besides the spinal deformity, scoliosis notably modifies the general appearance of the trunk, resulting in trunk rotation, imbalance, and asymmetries that constitute patients' major concern. Existing classifications of scoliosis, based on the type of spinal curve as depicted on radiographs, are currently used to guide treatment strategies. Unfortunately, even when a perfect correction of the spinal curve is achieved, some trunk deformities remain, making patients dissatisfied with the treatment received. The purpose of this study is to identify possible shape patterns of trunk surface deformity associated with scoliosis. First, trunk surface is represented by a multivariate functional trunk shape descriptor based on 3-D clinical measurements computed on cross sections of the trunk. Then, the classical formulation of hierarchical clustering is adapted to the case of multivariate functional data and applied to a set of 236 trunk surface 3-D reconstructions. The highest internal validity is obtained when considering 11 clusters that explain up to 65% of the variance in our dataset. Our clustering result shows a concordance with the radiographic classification of spinal curves in 68% of the cases. As opposed to radiographic evaluation, the trunk descriptor is 3-D, and its functional nature offers a compact and elegant description of not only the type, but also the severity and extent of the trunk surface deformity along the trunk length. In future work, new management strategies based on the resulting trunk shape patterns could be devised in order to improve the esthetic outcome after treatment, and thus patients' satisfaction. | ['Lama Seoud', 'Jean Dansereau', 'Hubert Labelle', 'Farida Cheriet'] | Noninvasive Clinical Assessment of Trunk Deformities Associated With Scoliosis | 239,520 |
Marx's argument on 'the form of value' is based on state-oriented physics (SOP) and has 'the problem of transitivity.' This problem inevitably follows from an assumption of SOP that an observer is able to identify a rule of exchange, so we cannot solve this problem directly. If we hold a standpoint of measurement-oriented physics (MOP), this problem is not a paradox but can be treated as the possibility of gaining a new outlook for understanding an aspect of exchange as a movement. The problem of transitivity arises because an observer who can describe individual exchanges only as specific parts of the whole exchange tries to describe a general rule. If we take it as a measurement problem, we can positively use the problem of transitivity and construct an internal measurement model in which exchange has the duality of operator and operand. We construct an internal measurement model of exchange as an interaction between cone-relation and equivalent-relation. Then we obtain patterns named 'particles' that can be interpreted as a rule that can be regarded as adaptable for the whole process of exchange; the particle also has the duality of stability and instability. | ['Y. Nakajima', 'Yukio Pegio Gunji'] | The dynamically changing model of exchange as interaction between cone-relation and equivalent-relation | 8,454 |
Our aim was to prove the feasibility of the remote interpretation of real-time transmitted ultrasound videos of dynamic and static organs using a smartphone with control of the image quality given a limited internet connection speed. For this study, 100 cases of echocardiography videos (dynamic organ)—50 with an ejection fraction (EF) of ≥50 % and 50 with EF <50 %—and 100 cases of suspected pediatric appendicitis (static organ)—50 with signs of acute appendicitis and 50 with no findings of appendicitis—were consecutively selected. Twelve reviewers reviewed the original videos using the liquid crystal display (LCD) monitor of an ultrasound machine and using a smartphone, to which the images were transmitted from the ultrasound machine. The resolution of the transmitted echocardiography videos was reduced by approximately 20 % to increase the frame rate of transmission given the limited internet speed. The differences in diagnostic performance between the two devices when evaluating left ventricular (LV) systolic function by measuring the EF and when evaluating the presence of acute appendicitis were investigated using a five-point Likert scale. The average areas under the receiver operating characteristic curves for each reviewer's interpretations using the LCD monitor and smartphone were respectively 0.968 (0.949–0.986) and 0.963 (0.945–0.982) (P = 0.548) for echocardiography and 0.972 (0.954–0.989) and 0.966 (0.947–0.984) (P = 0.175) for abdominal ultrasonography. We confirmed the feasibility of remotely interpreting ultrasound images using smartphones, specifically for evaluating LV function and diagnosing pediatric acute appendicitis; the images were transferred from the ultrasound machine using image quality-controlled telesonography. | ['Changsun Kim', 'Hyunmin Cha', 'Bo Seung Kang', 'Hyuk Joong Choi', 'Tae Ho Lim', 'Jaehoon Oh'] | A Feasibility Study of Smartphone-Based Telesonography for Evaluating Cardiac Dynamic Function and Diagnosing Acute Appendicitis with Control of the Image Quality of the Transmitted Videos | 619,361 |
Cooperative relaying has recently been recognized as an alternative to MIMO in a typical multi-cellular environment. Inserting random delays at the non-regenerative fixed relays further improves the system performance. However, random delay results in limited performance gain from multipath diversity. In this paper, two promising delay optimization schemes are introduced for a multi-cellular OFDM system with cooperative relaying, with multiple stationary users and fixed relays. Both of the schemes basically aim to take the most advantage of the potential frequency selectivity by inserting pre-determined delays at the relays, in order to further improve the system performance (coverage and throughput). Evaluation results for different multipath fading environments show that the system performance with delay optimization increases tremendously compared with the case of random delay. | ['S. Ben Slimane', 'Xuesong Li', 'Bo Zhou', 'Nauroze Syed', 'Mohammad Abu Dheim'] | Delay Optimization in Cooperative Relaying with Cyclic Delay Diversity | 270,049 |
Constraint satisfaction arises in many domains in different forms. Search and inference compete for solving constraint satisfaction problems (CSPs) but the most successful approaches are those which benefit from both techniques. Based on this idea, this article introduces a new scheme for solving the general Max-CSP problem. The new approach exploits the simplicity and efficiency of a modified Particle Swarm Optimization and the advantage of adaptable inference levels offered by the Mini-Bucket Elimination algorithm. Experiments conducted on binary CSPs using different levels of inference are illustrative for the inference/search trade-off. Comparative studies highlight the differences between our stochastic population-based method and the systematic search performed by a Branch and Bound algorithm. | ['Mihaela Breaban', 'Madalina Ionita', 'Cornelius Croitoru'] | A new PSO approach to constraint satisfaction | 41,078 |
Basis und Überbau | ['Claudia Eckert', 'Helmut Reimer'] | Basis und Überbau | 573,439 |
This paper presents an evaluation of the benefits and user acceptance of a multimodal interface in which the user interacts with a game-like interactive virtual reality application "The Enigma of the Sphinx". The interface consists of a large projection screen as the main display, a "magic wand", a stereo sound system and the user's voice for "casting spells". We present our conclusions concerning "friendliness" and sense of presence, based on observations of more than 150 users in a public event. | ['Tolga Abaci', 'R. de Bondeli', 'Jan Ciger', 'Mireille Clavien', 'Fatih Erol', 'Mario Gutiérrez', 'Stéphanie Noverraz', 'Olivier Renault', 'Frédéric Vexo', 'Daniel Thalmann'] | The enigma of the sphinx | 291,000 |
In the context of the DCMI RDF Application Profile task group and the W3C Data Shapes Working Group, solutions for the proper formulation of constraints and the validation of RDF data against these constraints are being developed. Several approaches and constraint languages exist, but there is no clear favorite and none of the languages is able to meet all requirements raised by data practitioners. To support the work, a comprehensive, community-driven database has been created where case studies, use cases, requirements and solutions are collected. Based on this database, we have hitherto published 81 types of constraints that are required by various stakeholders for data applications. We are using this collection of constraint types to gain a better understanding of the expressiveness of existing solutions and gaps that still need to be filled. Regarding the implementation of constraint languages, we have already proposed to use high-level languages to describe the constraints, but map them to SPARQL queries in order to execute the actual validation; we have demonstrated this approach for the Web Ontology Language in its current version 2 and Description Set Profiles. In this paper, we generalize from the experience of implementing OWL 2 and DSP by introducing an abstraction layer that is able to describe constraints of any constraint type in a way that mappings from high-level constraint languages to this intermediate representation can be created more or less straightforwardly. We demonstrate that using another layer on top of SPARQL helps to implement validation consistently across constraint languages, simplifies the actual implementation of new languages, and supports the transformation of semantically equivalent constraints across constraint languages. | ['Thomas Bosch', 'Kai Eckert'] | Guidance, please! towards a framework for RDF-based constraint languages | 568,935 |
This paper studies cluster load balancing policies and system support for fine-grain network services. Load balancing on a cluster of machines has been studied extensively in the literature, mainly focusing on coarse-grain distributed computation. Fine-grain services introduce additional challenges because system states fluctuate rapidly for those services and system performance is highly sensitive to various overheads. The main contribution of our work is to identify effective load balancing schemes for fine-grain services through simulations and empirical evaluations on synthetic workloads and real traces. Another contribution is the design and implementation of a load balancing system in a Linux cluster that strikes a balance between acquiring enough load information and minimizing system overhead. Our study concludes that: 1) random polling based load-balancing policies are well-suited for fine-grain network services; 2) a small poll size provides sufficient information for load balancing, while an excessively large poll size may in fact degrade the performance due to polling overhead; 3) discarding slow-responding polls can further improve system performance. | ['Kai Shen', 'Tao Yang', 'Lingkun Chu'] | Cluster load balancing for fine-grain network services | 73,901 |
Storage is an important research direction in the data management of the Internet of Things. The massive and heterogeneous data of the Internet of Things brings huge challenges to storage. Based on an analysis of IoT data characteristics, this paper proposed a storage management solution called IOTMDB based on NoSQL, as current storage solutions do not support storing massive and heterogeneous IoT data well. Besides, storage strategies for expressing and organizing IoT data in a uniform manner were proposed, and some evaluations were carried out. Furthermore, we were concerned not only with describing the data itself, but also with sharing of the data, so a data sharing mechanism based on ontology was proposed. Finally, indexing was studied and a set of query syntaxes based on NoSQL to meet the needs of different kinds of IoT queries was given. | ['Tingli Li', 'Yang Liu', 'Ye Tian', 'Shuo Shen', 'Wei Mao'] | A Storage Solution for Massive IoT Data Based on NoSQL | 931,336 |
It is very well known in computer science that partially ordered files are easier to search. In the worst case, for example, a totally unordered file requires no preprocessing, but Θ(n) time to search, while a totally ordered file requires Θ(n log n) preprocessing time to sort, but can be searched in O(log n) time. Behind the casual observation, then, lurks the notion of a computational tradeoff between sorting and searching. We analyze this tradeoff in the average case, using the decision tree model. Let P be a preprocessing algorithm that produces partial orders given a set U of n elements, and let S be a searching algorithm for these partial orders. Assuming any of the n! permutations of the elements of U are equally likely, and that we search for any y ∈ U with equal probability (in unsuccessful search, all "gaps" are considered equally likely), the average costs P(n) of preprocessing and S(n) of searching may be computed. We demonstrate a tradeoff of the form P(n) + n log S(n) = Θ(n log n), for both successful and unsuccessful search. The bound is tight up to a constant factor. In proving this tradeoff, we show a lower bound on the average case of searching a partial order. Let A be a partial order on n elements consistent with Π permutations. We show S(n) = Ω(Π^{3/n}/n^2) for successful search of A, and S(n) = Ω(Π^{2/n}/n) for unsuccessful search. These lower bounds show, for example, that heaps require linear time to search on the average. | ['Harry G. Mairson'] | Average case lower bounds on the construction and searching of partial orders | 460,994 |
Human communication takes many forms, including speech, text and instructional videos. It typically has an underlying structure, with a starting point, ending, and certain objective steps between them. In this paper, we consider instructional videos, of which there are tens of millions on the Internet. We propose a method for parsing a video into such semantic steps in an unsupervised way. Our method is capable of providing a semantic "storyline" of the video composed of its objective steps. We accomplish this using both visual and language cues in a joint generative model. Our method can also provide a textual description for each of the identified semantic steps and video segments. We evaluate our method on a large number of complex YouTube videos and show that our method discovers semantically correct instructions for a variety of tasks. | ['Ozan Sener', 'Amir Roshan Zamir', 'Chenxia Wu', 'Silvio Savarese', 'Ashutosh Saxena'] | Unsupervised Semantic Action Discovery from Video Collections | 729,201 |
Understanding how a pathogen's proteins interact with its host's proteins is the key to understanding the pathogen's infection mechanism, which can lead to the discovery of improved therapeutics for treating infectious diseases. Several studies suggest that proteins from various pathogens tend to interact with human proteins involved in the same biological pathway. This implies that pathogens are inclined to target the host's proteins with similar function. In addition, conservation between a protein's function and its local topological structure in a protein-protein interaction network (PIN) has been previously characterized. This leads to the hypothesis that pathogens target the host's proteins with a similar local topological structure in a PIN. In this work, this hypothesis is examined by adding a graphlet degree vector of a protein in the human PIN as a feature in the prediction model and using that model to predict the protein-protein interaction between human and four pathogens. The results show that this graphlet degree vector increases the performance significantly for all pathogens. This suggests that the intraspecies protein-protein interactions should be taken into consideration when developing prediction methods for host-pathogen protein interaction. The results also support the hypothesis that there exists a relationship between a protein's function and the local topology of the PIN. | ['Jira Jindalertudomdee', 'Morihiro Hayashida', 'Jiangning Song', 'Tatsuya Akutsu'] | Host-Pathogen Protein Interaction Prediction Based on Local Topology Structures of a Protein Interaction Network | 966,017 |
This paper presents a control and acquisition platform design method for a travelling wave dielectrophoresis microchip based on the SOPC technique. The direct digital frequency synthesis (DDS) model is built using DSP Builder, the digital signal processing (DSP) development tool for FPGAs. Under the control of a NIOS II processor embedded in an Altera FPGA (Cyclone II EP2C35), the four-phase DDS module outputs four channels of AC signals with the same magnitude and phases of 0°, 90°, 180° and 270°, respectively. These supply the control signals for the travelling wave ITO electrode array. The travelling wave dielectrophoresis electric field created by the four-channel AC signals drives the directional movement of biological particles, separating different biological particles in the microchannel of the dielectrophoresis microchip. A custom I2C module configures the 3-megapixel CMOS image sensor, MT9T001, enabling contactless detection of the different biological particles. The design of the DDS IP core based on DSP Builder, the design of the custom I2C module, and the hardware framework and software design of the control system are presented. The correctness and practicality of the design method is verified by simulation results. | ['Honghua Liao', 'Jun Yu', 'Jun Wang', 'Jianjun Chen', 'Yu Liao', 'Jinqiao Yi'] | A System Platform Design of Travelling Wave Dielectrophoresis Microchip Based on SOPC | 268,657
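The four-phase DDS mechanism described in this abstract (one phase accumulator driving sine lookups offset by a quarter period for 0°, 90°, 180° and 270°) can be sketched in software. The tuning word, accumulator width and LUT size below are illustrative assumptions, not the values used on the Cyclone II hardware:

```python
import numpy as np

def dds_four_phase(tuning_word, acc_bits=32, lut_bits=10, n_samples=1024):
    """Sketch of a four-phase DDS: a single phase accumulator feeds four
    sine-LUT reads offset by quarter-period (0, 90, 180, 270 degrees)."""
    lut_size = 2**lut_bits
    lut = np.sin(2 * np.pi * np.arange(lut_size) / lut_size)
    # Phase accumulator: wraps modulo 2^acc_bits each sample.
    acc = (tuning_word * np.arange(n_samples)) % 2**acc_bits
    idx = acc >> (acc_bits - lut_bits)      # truncate phase to LUT address
    quarter = lut_size // 4                 # 90-degree offset in LUT samples
    return [lut[(idx + k * quarter) % lut_size] for k in range(4)]
```

Because all four channels share one accumulator, their phase relationship is exact: channel 1 is the cosine of channel 0, and channels 2 and 3 are their negations.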
Almost all approaches to model-based diagnosis presume that the system being diagnosed behaves non-intermittently and analyze behavior over a small number (often only one) of time instants. In this paper we show how existing approaches to model-based diagnosis can be extended to diagnose intermittent failures as they manifest themselves over time. In addition, we show where to insert probe points to best distinguish among the intermittent faults those that best explain the symptoms and isolate the fault in minimum expected cost. | ['Johan de Kleer'] | Diagnosing multiple persistent and intermittent faults | 495,676 |
English is the only language available for global communication. Due to the influence of speakers’ mother tongue, however, those from different regions often have different accents in their pronunciation of English. The ultimate goal of our project is automatic creation of a global pronunciation map of World Englishes on an individual basis, for speakers to use to locate similar English pronunciations. Creating the map mathematically requires a matrix of pronunciation distances among all the speakers considered. Our previous study proposed a good algorithm for that purpose [1], where, using phonetic reference pronunciation distances calculated from labeled data, a pronunciation distance predictor was trained and built for unlabeled data. Due to space limit in [1], the procedure for calculating the reference distances was not described in detail. Then in this paper, detailed descriptions are given and 498 world-wide native and non-native speakers in the Speech Accent Archive [2] are clustered using the phonetic reference distances. Results show high validity of using the calculated distances as reference distances for training a distance predictor. | ['Han-Ping Shen', 'Nobuaki Minematsu', 'Takehiko Makino', 'Steven H. Weinberger', 'Teeraphon Pongkittiphan', 'Chung-Hsien Wu'] | Speaker-based Accented English Clustering Using a World English Archive | 673,540 |
For their scalability needs, data-intensive Web applications can use a database scalability service (DBSS), which caches applications' query results and answers queries on their behalf. One way for applications to address their security/privacy concerns when using a DBSS is to encrypt all data that passes through the DBSS. Doing so, however, causes the DBSS to invalidate large regions of its cache when data updates occur. To invalidate more precisely, the DBSS needs help in order to know which results to invalidate; such help inevitably reveals some properties about the data. In this paper, we present invalidation clues, a general technique that enables applications to reveal little data to the DBSS, yet limit the number of unnecessary invalidations. Compared with previous approaches, invalidation clues provide applications significantly improved tradeoffs between security/privacy and scalability. Our experiments using three Web application benchmarks, on a prototype DBSS we have built, confirm that invalidation clues are indeed a low-overhead, effective, and general technique for applications to balance their privacy and scalability needs. | ['Amit Manjhi', 'Phillip B. Gibbons', 'Anastassia Ailamaki', 'Charles Garrod', 'Bruce M. Maggs', 'Todd C. Mowry', 'Christopher Olston', 'Anthony Tomasic', 'Haifeng Yu'] | Invalidation Clues for Database Scalability Services | 312,433 |
In this paper, we proposed personalized guided walking holidays in the city with wearable devices, which aim to provide a personalized service based on one's interest [Figure 1]. We firstly hypothesize that one's heart rate rises when he/she sees something he/she is curious about, and then test this using our developing prototype device. We conducted an experiment with four participants, in popular holiday walking areas such as Akihabara and Asakusa area in Tokyo. The data suggests that heart rate is significantly higher when participants see what they consider an interesting spot when compared with spots they are indifferent towards, implying that our concept is supported by quantitative physiological data responses. Perspectives of this research direction are discussed in terms of the relationship between city and human emotions. | ['Feng Liang', 'Masashi Nakatani', 'Kai Kunze', 'Kouta Minamizawa'] | Personalized record of the city wander with a wearable device: a pilot study | 889,430 |
An interesting problem which has been widely investigated is under what circumstances a society of rational agents will realize particular stable situations, and whether those situations are socially efficient. This crucially depends on how the agents interact and what information they have when they interact. For instance, when strategic interactions are modeled as coordination games, it is known that the evolutionary process selects the risk-dominant equilibrium, which is not necessarily efficient. We consider networks of agents in which each agent faces several types of strategic decision problems. We investigate the dynamics of collective decisions when each agent adapts its strategy of interaction to its neighbors. We are interested in showing how society gropes its way towards an equilibrium situation. We show that society selects the most efficient equilibrium among multiple equilibria when the agents composing it learn from each other through collective learning and co-evolve their strategies over time. We also investigate the mechanism that leads society to an equilibrium of social efficiency. | ['Hiroshi Sato', 'Akira Namatame'] | Co-evolution in social interactions | 413,291
The spectral efficiency results for different adaptive transmission schemes over correlated diversity branches with unequal average signal to noise ratio (SNR) obtained so far in the literature are not applicable to Nakagami-0.5 fading channels. In this paper, we investigate the effect of fade correlation and the level of imbalance in the branch average received SNR on the spectral efficiency of Nakagami-0.5 fading channels in conjunction with dual-branch selection combining (SC). This paper derives the expressions for the spectral efficiency over correlated Nakagami-0.5 fading channels with unequal average received SNR. This spectral efficiency is evaluated under different adaptive transmission schemes using a dual-branch SC diversity scheme. The corresponding expressions for Nakagami-0.5 fading are considered to be the expressions under worst fading conditions. Finally, numerical results are provided to illustrate the spectral efficiency degradation due to channel correlation and unequal average received SNR between the different combined branches under different adaptive transmission schemes. It has been observed that the optimal simultaneous power and rate adaptation (OPRA) scheme provides improved spectral efficiency as compared to the truncated channel inversion with fixed rate (TIFR) and optimal rate adaptation with constant transmit power (ORA) schemes under the worst-case fading scenario. It is very interesting to observe that the TIFR scheme is always a better choice than the ORA scheme under correlated Nakagami-0.5 fading channels with unequal average received SNR. | ['Mohammad Irfanul Hasan', 'Sanjay Kumar'] | Spectral efficiency of dual diversity selection combining schemes under correlated Nakagami-0.5 fading with unequal average received SNR | 699,682
We propose a dynamic rate allocation scheme based on a power-rate-distortion (PRD) optimization model among multiple video sources over ad hoc networks. This work is an extension of the PRD model for single-source video streaming. With a total rate constraint and different power consumption constraints for each node, our optimization algorithm minimizes the average video distortion for all sources. The optimization is performed at the receiver of the video streams. Experimental results for a video surveillance scenario demonstrate that the proposed scheme outperforms a simple fixed-QP scheme. The proposed scheme improves the average PSNR by 0.32-0.45 dB without shortening the system lifetime, or prolongs system lifetime by more than 20% without reducing the overall PSNR. | ['Juntao Ouyang', 'Lifeng Sun', 'Yuzhuo Zhong', 'Shiqiang Yang'] | Power-Rate-Distortion Optimization for Multi-Source Video Streaming under Energy Constraints over Ad Hoc Networks | 30,190
This paper presents an enhanced stochastic mapping technique in the discriminative feature (fMPE) space that exploits stereo data for noise robust LVCSR. Both MMSE and MAP estimates of the mapping are given and the performance of the two is investigated. Due to the iterative nature of the MAP estimate, we show that combining MMSE and MAP estimates is possible and yields superior performance than each individual estimate. A multi-style discriminative training with minimum phone error (MPE) criterion is further applied to the compensated features and obtains significant performance improvement on real-world noisy test sets. | ['Xiaodong Cui', 'Mohamed Afify', 'Yuqing Gao'] | Stereo-based stochastic mapping with discriminative training for noise robust speech recognition | 354,590 |
Sport science is a research discipline that aims to understand exercise and apply scientific methods in support of increasing an athlete's performance. In this paper, we present initial results on modeling, managing and analyzing an athlete's data gathered by sport scientists. An Olympic data warehouse is designed initially to support the monitoring of an athlete's biochemical data. A trajectory data model is extended to represent the athlete's measurements along his/her training states, referred to here as metaphoric trajectories. Furthermore, a data warehouse for metaphoric trajectories is designed and two analysis approaches — a relational and a multidimensional one — are evaluated. We compare both approaches and discuss their benefits to the athlete's follow-up analyses applied by sport scientists. Copyright © 2011 John Wiley & Sons, Ltd. | ['Fábio Porto', 'Ana Maria de Carvalho Moura', 'Frederico C. da Silva', 'Adriana Bassini', 'Daniele Palazzi', 'Maira Poltosi', 'Luis Eduardo Viveiros de Castro', 'Luiz Claudio Cameron'] | A metaphoric trajectory data warehouse for Olympic athlete follow-up | 516,155 |
Grid-based computing environments are becoming increasingly popular for scientific computing. One of the key issues for scientific computing is the efficient transfer of large amounts of data across the Grid. In this paper we present a reliable file transfer (RFT) service that significantly improves the efficiency of large-scale file transfer. RFT can detect a variety of failures and restart the file transfer from the point of failure. It also has capabilities for improving transfer performance through TCP tuning. | ['Ravi K. Madduri', 'Cynthia S. Hood', 'William E. Allcock'] | Reliable file transfer in Grid environments | 55,010 |
In the past years there has been a substantial amount of research focused on transition mechanisms to allow the Internet to gradually evolve into a fully fledged IPv6 network. With the prevalence of smartphones, internet tablets and netbooks, a large portion of end users have become truly mobile. Now that all of the remaining IPv4 addresses have been allocated, the transition from IPv4 to IPv6 is becoming reality for mobility management as well. In this paper we examine the mobility aspect of the transition using real-world measurements with a tunneling-based transition scheme on our network-aware Mobile IPv6 testbed. | ['Markus Luoto', 'Teemu Rautio', 'Jukka Mäkelä'] | Providing Support for Legacy IPv4 Applications in IPv6 Network with Network Aware Mobility | 67,125
This essay discusses the use of big data analytics (BDA) as a strategy of enquiry for advancing information systems (IS) research. In broad terms, we understand BDA as the statistical modelling of large, diverse, and dynamic data sets of user-generated content and digital traces. BDA, as a new paradigm for utilising big data sources and advanced analytics, has already found its way into some social science disciplines. Sociology and economics are two examples that have successfully harnessed BDA for scientific enquiry. Often, BDA draws on methodologies and tools that are unfamiliar for some IS researchers (e.g., predictive modelling, natural language processing). Following the phases of a typical research process, this article is set out to dissect BDA’s challenges and promises for IS research, and illustrates them by means of an exemplary study about predicting the helpfulness of 1.3 million online customer reviews. In order to assist IS researchers in planning, executing, and interpreting their own studies, and evaluating the studies of others, we propose an initial set of guidelines for conducting rigorous BDA studies in IS. | ['Oliver Müller', 'Iris A. Junglas', 'Jan vom Brocke', 'Stefan Debortoli'] | Utilizing big data analytics for information systems research: challenges, promises and guidelines | 621,971 |
Build systems are responsible for transforming static source code artifacts into executable software. While build systems play such a crucial role in software development and maintenance, they have been largely ignored by software evolution researchers. With a firm understanding of build system aging processes, project managers could allocate personnel and resources to build system maintenance tasks more effectively, reducing the build maintenance overhead on regular development activities. In this paper, we study the evolution of ANT build systems from two perspectives: (1) a static perspective, where we examine the build system specifications using software metrics adopted from the source code domain; and (2) a dynamic perspective where representative sample build runs are conducted and their output logs are analyzed. Case studies of four open source ANT build systems with a combined history of 152 releases show that not only do ANT build systems evolve, but also that they need to react in an agile manner to changes in the source code. | ['Shane McIntosh', 'Bram Adams', 'Ahmed E. Hassan'] | The evolution of ANT build systems | 325,649 |
Since synchronized flow is observed in real traffic, its mechanism has been investigated in a great deal of traffic models, and various mechanisms have been proposed to explain it. In this paper, we propose a new mechanism, i.e. speed-variation-dependent randomization. Based on this mechanism, a new cellular automata model is built. The fundamental diagram and spatiotemporal diagrams are studied. We also perform a microscopic analysis of time series data, which demonstrates that the new model can reproduce synchronized flow. Moreover, both a spontaneous transition and a disturbance-induced transition from synchronized flow to jam can be observed in the new model. We expect our work will be helpful in understanding the real mechanism of synchronized flow. | ['Junfang Tian', 'Bin Jia', 'Xin-Gang Li', 'Ziyou Gao'] | Synchronized Flow in a Cellular Automata Model with Speed Variation Dependent Randomization | 415,899
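The abstract does not give the model's exact update rules, but the named mechanism, randomization probability depending on speed variation, can be sketched as a Nagel-Schreckenberg-style ring CA in which a car whose speed changed on the previous step brakes with a higher probability. All parameter values and the specific rule below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def nasch_svdr(n_cars=30, road_len=100, vmax=5, steps=500,
               p_base=0.15, p_var=0.6, seed=1):
    """Nagel-Schreckenberg ring with speed-variation-dependent randomization:
    a car whose speed changed in the previous step brakes with probability
    p_var, otherwise with p_base. Returns the mean flow (cars*cells/step/cell)."""
    rng = np.random.default_rng(seed)
    pos = np.sort(rng.choice(road_len, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    prev_vel = vel.copy()
    flow = 0
    for _ in range(steps):
        # Gap to the leader; cyclic order is preserved since cars never overtake.
        gaps = (np.roll(pos, -1) - pos - 1) % road_len
        new_vel = np.minimum(vel + 1, vmax)          # acceleration
        new_vel = np.minimum(new_vel, gaps)          # safety deceleration
        p = np.where(vel != prev_vel, p_var, p_base) # speed-variation-dependent p
        brake = rng.random(n_cars) < p
        new_vel = np.where(brake, np.maximum(new_vel - 1, 0), new_vel)
        prev_vel, vel = vel, new_vel
        pos = (pos + vel) % road_len
        flow += vel.sum()
    return flow / (steps * road_len)
```

Sweeping the density (n_cars/road_len) and plotting the returned flow gives the fundamental diagram the abstract refers to.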
Circuit Lower Bounds for Heuristic MA. | ['Alexander Knop'] | Circuit Lower Bounds for Heuristic MA. | 738,595 |
Questionnaire tools from open source learning management systems (LMS) present a great number of features to customize the creation and assignment of question-based online tests, but the statistical analysis featured is very limited. This paper introduces stats engine, a tool to improve test reporting in online tests. Stats engine allows the teacher to use reports as a feedback to improve test planning and assessment; automatically label questions under skills a student can acquire and automatically classify students by taking into account their performance on previously found skills. | ['Xavier Gumara', 'Lluis Vicent', 'Marc Segarra'] | QTI Result Reporting Stats Engine for Question-Based Online Tests | 462,399 |
The paper deals with the problem of modeling and implementation of distributed automation systems for manufacturing applications. In particular, a novel approach has been conceived. A suitable meta-model of the production system has been derived, compliant with state-of-the-art industrial standards. After the compilation and validation stages, an ontology is derived containing all the necessary definitions of modeling entities and constraints, to be used directly and explicitly in the execution of the control code. This enables the control application to be described as "model-driven", or better, "ontology-driven". The results are obtained within the running EU project PABADIS'PROMISE (P2). | ['Athanasios P. Kalogeras', 'Luca Ferrarini', 'A. Lueder', 'John V. Gialelis', 'Christos E. Alexakos', 'Jörn Peschke', 'Carlo Veber'] | Ontology-driven control application design methodology | 369,153
We report our image-based static facial expression recognition method for the Emotion Recognition in the Wild Challenge (EmotiW) 2015. We focus on the sub-challenge of the SFEW 2.0 dataset, where one seeks to automatically classify a set of static images into 7 basic emotions. The proposed method contains a face detection module based on the ensemble of three state-of-the-art face detectors, followed by a classification module with the ensemble of multiple deep convolutional neural networks (CNN). Each CNN model is initialized randomly and pre-trained on a larger dataset provided by the Facial Expression Recognition (FER) Challenge 2013. The pre-trained models are then fine-tuned on the training set of SFEW 2.0. To combine multiple CNN models, we present two schemes for learning the ensemble weights of the network responses: by minimizing the log likelihood loss, and by minimizing the hinge loss. Our proposed method generates state-of-the-art result on the FER dataset. It also achieves 55.96% and 61.29% respectively on the validation and test set of SFEW 2.0, surpassing the challenge baseline of 35.96% and 39.13% with significant gains. | ['Zhiding Yu', 'Cha Zhang'] | Image based Static Facial Expression Recognition with Multiple Deep Network Learning | 607,764
Healthcare and supply chain management have recently been the two most active areas for RFID applications. The healthcare environment is a natural fit for generating and utilizing instance-level data for decision support. We consider a scenario from the French healthcare environment involving tracking and tracing of surgical equipment within and among hospitals and develop a knowledge-based system for decision support that helps improve the overall performance of the surgical instrument management process while reducing errors. We illustrate the process through the developed healthcare knowledge-based system and evaluate its performance. | ['Yannick Meiller', 'Sylvain Bureau', 'Wei Zhou', 'Selwyn Piramuthu'] | RFID-Embedded Decision Support for Tracking Surgical Equipment | 69,891 |
The ProteomeXchange consortium in 2017: supporting the cultural change in proteomics public data deposition | ['Eric W. Deutsch', 'Attila Csordas', 'Zhi Sun', 'Andrew F. Jarnuczak', 'Yasset Perez-Riverol', 'Tobias Ternent', 'David S. Campbell', 'Manuel Bernal-Llinares', 'Shujiro Okuda', 'Shin Kawano', 'Robert L. Moritz', 'Jeremy J. Carver', 'Mingxun Wang', 'Yasushi Ishihama', 'Nuno Bandeira', 'Henning Hermjakob', 'Juan Antonio Vizcaíno'] | The ProteomeXchange consortium in 2017: supporting the cultural change in proteomics public data deposition | 915,461 |
Wearable sensor network to study laterality of brain functions. | ['Gabriela Postolache', 'Pedro Silva Girao', 'Octavian Postolache'] | Wearable sensor network to study laterality of brain functions. | 592,285 |
Watermarking algorithms are used for image copyright protection. The algorithms proposed select certain blocks in the image based on a Gaussian network classifier. The pixel values of the selected blocks are modified such that their discrete cosine transform (DCT) coefficients fulfil a constraint imposed by the watermark code. Two different constraints are considered. The first approach consists of embedding a linear constraint among selected DCT coefficients and the second one defines circular detection regions in the DCT domain. A rule for generating the DCT parameters of distinct watermarks is provided. The watermarks embedded by the proposed algorithms are resistant to JPEG compression. | ['Adrian G. Bors', 'Ioannis Pitas'] | Image watermarking using DCT domain constraints | 139,074 |
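The general idea this abstract describes, modifying a block's pixels so that selected DCT coefficients satisfy a constraint imposed by the watermark, can be sketched as follows. This is a hedged illustration, not the paper's exact constraint, block-selection classifier, or detection regions: it embeds one bit as a sign constraint on the difference of two mid-frequency coefficients of an 8×8 block, using a hand-built orthonormal DCT:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2 / N)
    C[0] /= np.sqrt(2)
    return C

def embed_bit(block, bit, i=(2, 3), j=(3, 2), margin=5.0):
    """Force coeff[i] - coeff[j] >= margin for bit 1 (<= -margin for bit 0)."""
    C = dct_matrix(block.shape[0])
    F = C @ block @ C.T                      # forward 2D DCT
    diff = F[i] - F[j]
    target = margin if bit else -margin
    if (bit and diff < margin) or (not bit and diff > -margin):
        adjust = (target - diff) / 2
        F[i] += adjust                       # split the correction symmetrically
        F[j] -= adjust
    return C.T @ F @ C                       # inverse (orthonormal) 2D DCT

def detect_bit(block, i=(2, 3), j=(3, 2)):
    C = dct_matrix(block.shape[0])
    F = C @ block @ C.T
    return int(F[i] - F[j] > 0)
```

The margin is what buys robustness: JPEG quantization perturbs each coefficient, and the sign of the difference survives as long as the perturbation stays below the margin.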
The last decade has experienced an exponential growth of popularity in online social networks. This growth in popularity has also paved the way for the threat of cyberbullying to grow to an extent that was never seen before. Online social network users are now constantly under the threat of cyberbullying from predators and stalkers. In our research paper, we perform a thorough investigation of cyberbullying instances in Vine, a video-based online social network. We collect a set of media sessions (shared videos with their associated meta-data) and then label those using CrowdFlower, a crowd-sourced website for cyberaggression and cyberbullying. We also perform a second survey that labels the videos’ contents and emotions exhibited. After the labeling of the media sessions, we provide a detailed analysis of the media sessions to investigate the cyberbullying and cyberaggression behavior in Vine. After the analysis, we train different classifiers based upon the labeled media sessions. We then investigate, evaluate and compare the classifers’ performances to detect instances of cyberbullying. | ['Rahat Ibn Rafiq', 'Homa Hosseinmardi', 'Sabrina Arredondo Mattson', 'Richard Han', 'Qin Lv', 'Shivakant Mishra'] | Analysis and detection of labeled cyberbullying instances in Vine, a video-based social network | 899,745 |
Extensive research has been carried out on quantization techniques for secret key generation, but little consideration has been given to the causes of quantization discrepancy. In fact, non-reciprocal channel measurements and unavoidable quantization noise can result in discrepancies between the quantized bits at the two intended nodes, which has an important effect on quantizer design. Moreover, these studies ignore the fact that the quantization level should be bounded by the mutual information between the two intended nodes. In this paper, we first analyze the factors behind quantization discrepancy and then propose the Entropy-Constrained-like Quantization Scheme (ECQS) to minimize it. This scheme mainly focuses on two segments: non-reciprocity balancing and Entropy-Constrained-like optimal quantization. Simulation results show that, with this scheme, the non-reciprocal channel components can be well balanced and the bit mismatch ratio (BMR) is reduced significantly. | ['Xuanxuan Wang', 'Lars Thiele', 'Thomas Haustein', 'Yongming Wang'] | Secret key generation using entropy-constrained-like quantization scheme | 832,783
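The source of discrepancy the abstract analyzes, imperfect reciprocity plus measurement noise, is easy to reproduce numerically. The sketch below uses a plain 1-bit median quantizer (not the proposed ECQS) with illustrative noise levels, and measures the resulting bit mismatch ratio:

```python
import numpy as np

def bit_mismatch_ratio(n=20000, noise_std=0.1, seed=2):
    """BMR of a 1-bit median quantizer on imperfectly reciprocal measurements."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=n)                   # common (reciprocal) channel gain
    a = h + noise_std * rng.normal(size=n)   # Alice's noisy measurement
    b = h + noise_std * rng.normal(size=n)   # Bob's: same h, independent noise
    bits_a = a > np.median(a)                # 1-bit quantization at the median
    bits_b = b > np.median(b)
    return np.mean(bits_a != bits_b)
```

Raising `noise_std` (i.e. worsening reciprocity) raises the BMR, which is exactly the discrepancy that the non-reciprocity balancing stage of the proposed scheme targets.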
A new generic model-based segmentation algorithm is presented, which can be trained from examples akin to the active shape model (ASM) approach in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. Whereas ASM alternates between shape and intensity information during search, the proposed approach optimizes for shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points extracted from feature images is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized noniteratively using dynamic programming, without the need for initialization. The algorithm was validated for segmentation of anatomical structures in chest and hand radiographs. In each experiment, the presented method had a significant higher performance when compared to the ASM schemes. As the method is highly effective, optimally suited for pathological cases and easy to implement, it is highly useful for many medical image segmentation tasks. | ['Dieter Seghers', 'Dirk Loeckx', 'Frederik Maes', 'Dirk Vandermeulen', 'Paul Suetens'] | Minimal Shape and Intensity Cost Path Segmentation | 322,072 |
A polynomial source of randomness over F^n is a random variable X = f(Z) where f is a polynomial map and Z is a random variable distributed uniformly over F^r for some integer r. The three main parameters of interest associated with a polynomial source are the order q of the field, the (total) degree D of the map f, and the base-q logarithm of the size of the range of f over inputs in F^r, denoted by k. For simplicity we call X a (q, D, k)-source. | ['Eli Ben-Sasson', 'Ariel Gabizon'] | Extractors for Polynomial Sources over Fields of Constant Order and Small Characteristic | 558,178
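To make the parameters concrete, here is a tiny illustrative (q, D, k)-source over F_2; the map f below is an arbitrary degree-2 example chosen for this sketch, not one from the paper. The parameter k is computed directly as the base-q logarithm of the size of f's range:

```python
import itertools
import math

q, r, n, D = 2, 3, 4, 2   # field order, input dim, output dim, total degree

def f(z):
    """An illustrative degree-2 polynomial map F_2^3 -> F_2^4."""
    z0, z1, z2 = z
    return ((z0 * z1) % q, (z1 + z2) % q, (z0 * z2 + z1) % q, z0)

# X = f(Z) with Z uniform over F_2^r is a (q, D, k)-source with
# k = log_q |range(f)|.
range_of_f = {f(z) for z in itertools.product(range(q), repeat=r)}
k = math.log(len(range_of_f), q)
```

For this particular f the map happens to be injective on its 2^3 inputs, so the range has 8 elements and k = 3; a non-injective map would give a smaller k, i.e. a less random source.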
Aviation spare parts provisioning is a highly complex problem. Traditionally, provisioning has been carried out using a conventional Poisson-based approach where inventory quantities are calculated separately for each part number and demands from different operations bases are consolidated into one single location. In an environment with multiple operations bases, however, such simplifications can lead to situations in which spares -- although available at another airport -- first have to be shipped to the location where the demand actually arose, leading to flight delays and cancellations. In this paper we demonstrate how simulation-based optimisation can help with the multi-location inventory problem by quantifying synergy potential between locations and how total service lifecycle cost can be further reduced without increasing risk right away from the Initial Provisioning (IP) stage onwards by taking into account advanced logistics policies such as pro-active re-balancing of spares between stocking locations. | ['Peter Lendermann', 'Annamalai Thirunavukkarasu', 'Malcolm Yoke Hean Low', 'Leon F. McGinnis'] | Initial provisioning and spare parts inventory network optimisation in a multi maintenance base environment | 284,162 |
Interactive Visual Analysis of Lumbar Back Pain - What the Lumbar Spine Tells About Your Life. | ['Paul Klemm', 'Sylvia Glaßer', 'Kai Lawonn', 'Marko Rak', 'Henry Völzke', 'Katrin Hegenscheid', 'Bernhard Preim'] | Interactive Visual Analysis of Lumbar Back Pain - What the Lumbar Spine Tells About Your Life. | 749,188 |
This paper introduces an approach that reduces the size of the state and maximizes the sparsity of the information matrix in exactly sparse delayed-state SLAM. We propose constant time procedures to measure the distance between a given pair of poses, the mutual information gain for a given candidate link, and the joint marginals required for both measures. Using these measures, we can readily identify non redundant poses and highly informative links and use only those to augment and to update the state, respectively. The result is a delayed-state SLAM system that reduces both the use of memory and the execution time and that delays filter inconsistency by reducing the number of linearization introduced when adding new loop closure links. We evaluate the advantage of the proposed approach using simulations and data sets collected with real robots. | ['Viorela Ila', 'Josep M. Porta', 'Juan Andrade-Cetto'] | Reduced state representation in delayed-state SLAM | 535,264 |
In this paper, performance of binary pulse position modulation-time hopping (BPPM-TH) and the binary pulse amplitude modulation-direct sequence (BPAM-DS) ultra wide band (UWB) systems with a double binary turbo code is analyzed and simulated in an indoor wireless channel. The indoor wireless channel is modeled as a modified Saleh and Valenzuela (SV) channel. The performance is evaluated in terms of bit error probability (BER). From the simulation results, it is seen that double binary turbo coding offers considerable coding gain with reasonable encoding complexity. It is also demonstrated that the performance of the UWB system can be substantially improved by increasing the number of iterations. | ['Eun Cheol Kim', 'Jin Young Kim'] | Double binary turbo coding for BPPM-TH and BPAM-DS UWB systems | 285,527 |
Scale spaces allow us to organize, compare and analyse differently sized structures of an object. The linear scale space of a monochromatic image is the solution of the heat equation using that image as an initial condition. Alternatively, this linear scale space can also be obtained by applying Gaussian filters of increasing variances to the original image. The authors compare (by looking at theoretical properties, running time and output differences) five ways of discretizing this Gaussian scale-space: sampling Gaussian distributions; recursively calculating Gaussian approximations; using splines; approximating by first-order generators; and finally, by a new method we call "Crossed Convolutions". In particular, we explicitly present a correct way of initializing the recursive method to approximate Gaussian convolutions. | ['Anderson F. Cunha', 'Ralph Teixeira', 'Luiz Velho'] | Discrete scale spaces via heat equation | 464,018 |
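The first discretization the authors list, sampling Gaussian distributions, is simple to sketch, and the semigroup property of the linear scale space (smoothing with σ₁ then σ₂ approximates a single smoothing at scale sqrt(σ₁² + σ₂²)) gives a quick sanity check. The kernel radius and tolerance below are illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Sampled-Gaussian discretization: sample the continuous kernel, renormalize."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(signal, sigma):
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

# Semigroup check: two smoothings vs. one smoothing at the combined scale.
rng = np.random.default_rng(0)
f = rng.normal(size=256)
two_step = smooth(smooth(f, 2.0), 1.5)
one_step = smooth(f, np.hypot(2.0, 1.5))
interior_err = np.max(np.abs(two_step[32:-32] - one_step[32:-32]))
```

The comparison is restricted to the interior because `mode="same"` truncates the convolution at the boundaries; away from them, the sampled-Gaussian scale space satisfies the semigroup property almost exactly.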
The recent Digital Video Satellite Broadcast Standard (DVB-S2) has adopted a powerful FEC scheme based on the serial concatenation of Bose-Chaudhuri-Hocquenghem (BCH) and low-density parity-check (LDPC) codes. The high-speed requirements, long block lengths and adaptive encoding defined in the DVB-S2 standard present complex challenges in the design of an efficient codec hardware architecture. In this paper, synthesizable, high throughput, scalable and parallel HDL models supporting the 21 different BCH+LDPC DVB-S2 code configurations are presented. For BCH decoding, an efficient Chien search circuit for shortened BCH codes is proposed. The LDPC codec architecture exploits the periodicity M = 360 of the special LDPC-IRA codes adopted by the standard. Synthesis results for an FPGA device from Xilinx show a throughput above the required minimum of 90 Mbps. | ['Manuel Gomes', 'Gabriel Falcao', 'V. Silva', 'Vitor Ferreira', 'Alexandre Sengo', 'L. Silva', 'N. Marques', 'Miguel Falcão'] | Scalable and parallel codec architectures for the DVB-S2 FEC system | 63,540
A feedback power control approach that allows power commands to be updated at a higher rate than the rate of multipath fading is investigated. The signal and interference statistics as received at the base stations after power control are obtained for a simulated direct-sequence code-division multiple-access (CDMA) system, which includes multiple base stations with diversity receivers and a large number of power-controlled users continuously moving at various speeds. It is shown that often-used analyses based on perfect average power control lead to optimistic capacity results (by 25% to 60%) because interference is underestimated by 1 to 2 dB. | ['Sirikiat Ariyavisitakul', 'Li Fung Chang'] | Signal and interference statistics of a CDMA system with feedback power control | 842,835
A novel method of attaching the extra stages in multistage interconnection networks (MINs) that allows for the bypassing of the extra stages is presented. Using a novel routing scheme, messages adaptively select to take the shortest path toward their destination or use one of the longer paths going through the extra stages. This results in significant performance improvement under many traffic patterns, including hot spots, as shown by simulation results. | ['Smaragda Konstantinidou'] | The selective extra-stage butterfly | 64,743
We present a method for the removal of noise including non-Gaussian impulses from a signal. Impulse noise is removed jointly with a homogeneous Gaussian noise floor using a Gabor regression model [1]. The problem is formulated in a joint Bayesian framework and we use a Gibbs MCMC sampler to estimate parameters. We show how to deal with variable magnitude impulses using a shifted inverse gamma distribution for their variance. Our results show improved signal to noise ratios and perceived audio quality by explicitly modelling impulses with a discrete switching process and a new heavy-tailed amplitude model. | ['James K. Murphy', 'Simon J. Godsill'] | Joint Bayesian removal of impulse and background noise | 440,196
A combined detection-estimation scheme is proposed for state estimation in linear systems with random Markovian noise statistics. The optimal MMSE estimator requires exponentially increasing memory and computations with time. The proposed approach is an attempt to circumvent this problem. Simulation results are presented which show the advantages of the proposed scheme over some of the existing suboptimal approaches. | ['Jitendra K. Tugnait', 'Abraham H. Haddad'] | Brief paper: A detection-estimation scheme for state estimation in switching environments | 611,646 |
MicroRNAs (miRNA) are 21 nucleotide-long non-coding small RNAs, which function as posttranscriptional regulators in eukaryotes. miRNAs play essential roles in regulating plant growth and development. In recent years, research into the mechanism and consequences of miRNA action has made great progress. With whole genome sequence available in such plants as Arabidopsis thaliana, Oryza sativa, Populus trichocarpa, Glycine max, etc., it is desirable to develop a plant miRNA database through the integration of large amounts of information about publicly deposited miRNA data. The plant miRNA database (PMRD) integrates available plant miRNA data deposited in public databases, gleaned from the recent literature, and data generated in-house. This database contains sequence information, secondary structure, target genes, expression profiles and a genome browser. In total, there are 8433 miRNAs collected from 121 plant species in PMRD, including model plants and major crops such as Arabidopsis, rice, wheat, soybean, maize, sorghum, barley, etc. For Arabidopsis, rice, poplar, soybean, cotton, medicago and maize, we included the possible target genes for each miRNA with a predicted interaction site in the database. Furthermore, we provided miRNA expression profiles in the PMRD, including our local rice oxidative stress related microarray data (LC Sciences miRPlants_10.1) and the recently published microarray data for poplar, Arabidopsis, tomato, maize and rice. The PMRD database was constructed by open source technology utilizing a user-friendly web interface, and multiple search tools. The PMRD is freely available at http://bioinformatics.cau.edu.cn/PMRD. We expect PMRD to be a useful tool for scientists in the miRNA field in order to study the function of miRNAs and their target genes, especially in model plants and major crops. | ['Zhenhai Zhang', 'Jingyin Yu', 'Daofeng Li', 'Zu-Yong Zhang', 'Fengxia Liu', 'Xin Zhou', 'Tao Wang', 'Yi Ling', 'Zhen Su'] | PMRD: plant microRNA database | 28,414
This paper introduces an adaptive calibration structure for the blind calibration of frequency response mismatches in a two-channel time-interleaved analog-to-digital converter (TI-ADC). By representing frequency response mismatches as polynomials, we can exploit slight oversampling to estimate the coefficients of the polynomials by using the filtered-X least-mean square (FxLMS) algorithm. Utilizing the coefficients in an adaptive structure, we can compensate frequency response mismatches including time offset and bandwidth mismatches. We develop an analytical framework for the calibration structure and analyze its performance. We show the efficiency of the calibration structure by simulations, where we include examples from the literature. | ['Shahzad Saleem', 'Christian Vogel'] | Adaptive Blind Background Calibration of Polynomial-Represented Frequency Response Mismatches in a Two-Channel Time-Interleaved ADC | 101,451 |
An irregular joint source and channel coding (JSCC) scheme is proposed, which we refer to as the irregular unary error correction (IrUEC) code. This code operates on the basis of a single irregular trellis, instead of employing a set of separate regular trellises, as in previous irregular trellis-based codes. Our irregular trellis is designed with consideration of the UEC free distance, which we characterize for the first time in this paper. We conceive the serial concatenation of the proposed IrUEC code with an irregular unity rate code (IrURC) and propose a new EXtrinsic Information Transfer (EXIT) chart matching algorithm for parametrizing these codes. This facilitates the creation of a narrow EXIT tunnel at a low $E_\text{b}/N_0$ value and provides near-capacity operation. Owing to this, our scheme is found to offer a low symbol error ratio (SER), which is within 0.4 dB of the discrete-input continuous-output memoryless channel (DCMC) capacity bound in a particular practical scenario, where gray-mapped quaternary phase shift keying (QPSK) modulation is employed for transmission over an uncorrelated narrowband Rayleigh-fading channel with an effective throughput of $0.508 \,\text{bit}\,{\text{s}}^{-1}\,{\text{Hz}}^{-1}$ . Furthermore, the proposed IrUEC–IrURC scheme offers a SER performance gain of 0.8 dB, compared to the best of several regular and irregular separate source and channel coding (SSCC) benchmarkers, which is achieved without any increase in transmission energy, bandwidth, transmit duration, or decoding complexity. | ['Wenbo Zhang', 'Matthew F. Brejza', 'Tao Wang', 'Robert G. Maunder', 'Lajos Hanzo'] | Irregular Trellis for the Near-Capacity Unary Error Correction Coding of Symbol Values From an Infinite Set | 578,803
An Approach to the Relationship between Efficiency and Process Management. | ['Inés González-González', 'Enric Serradell-López', 'David Castillo-Merino'] | An Approach to the Relationship between Efficiency and Process Management. | 806,956 |
This paper deals with a memetic algorithm for the reconstruction of binary images, by using their projections along four directions. The algorithm generates by network flows a set of initial images according to two of the input projections and lets them evolve toward a solution that can be optimal or close to the optimum. Switch and compactness operators improve the quality of the reconstructed images which belong to a given generation, while the selection of the best image addresses the evolution to an optimal output. | ['Vito Di Gesù', 'Giosuè Lo Bosco', 'Filippo Millonzi', 'Cesare Valenti'] | A memetic algorithm for binary image reconstruction | 544,288 |
Video-based person re-identification (re-id) is an important application in practice. However, only a few methods have been presented for this problem. Since large variations exist between different pedestrian videos, as well as within each video, it's challenging to conduct re-identification between pedestrian videos. In this paper, we propose a simultaneous intra-video and inter-video distance learning (SI2DL) approach for video-based person re-id. Specifically, SI2DL simultaneously learns an intra-video distance metric and an inter-video distance metric from the training videos. The intra-video distance metric makes each video more compact, and the inter-video one ensures that the distance between two truly matching videos is smaller than that between two wrongly matched videos. To enhance the discriminability of learned metrics, we design a video relationship model, i.e., video triplet, for SI2DL. Experiments on the public iLIDS-VID and PRID 2011 image sequence datasets show that our approach achieves the state-of-the-art performance. | ['Xiaoke Zhu', 'Xiao-Yuan Jing', 'Fei Wu', 'Hui Feng'] | Video-based person re-identification by simultaneously learning intra-video and inter-video distance metrics | 982,434
In the framework of our Reactive Virtual Trainer (RVT) project, we are developing an Intelligent Virtual Agent (IVA) capable of acting similarly to a real trainer. Besides presenting the physical exercises to be performed, she keeps an eye on the user. She provides feedback whenever appropriate, to introduce and structure the exercises, to make sure that the exercises are performed correctly, and also to motivate the user. In this paper we talk about the corpora we collected, serving as a basis to model repetitive exercises at a high level. Then we discuss in detail how the actual performance of the user is compared to what he should be doing, what strategy is used to provide feedback, and how it is generated. We provide preliminary feedback from users and outline further work. | ['Zsófia Ruttkay', 'Herwin van Welbergen'] | Elbows Higher! Performing, Observing and Correcting Exercises by a Virtual Trainer | 111,449
Performance comparison of position based routing protocols using different mobility models | ['Adam Macintosh', 'Mohammad Ghavami', 'Ming Fei Siyau'] | Performance comparison of position based routing protocols using different mobility models | 636,974 |
Classification of imbalanced data is pervasive, but it is a difficult problem to solve. In order to improve the classification of imbalanced data, this letter proposes a new error function for the error back-propagation algorithm of multilayer perceptrons. The error function intensifies weight-updating for the minority class and weakens weight-updating for the majority class. We verify the effectiveness of the proposed method through simulations on mammography and thyroid data sets. | ['Sang-Hoon Oh'] | Letters: Error back-propagation algorithm for classification of imbalanced data | 507,298
In this paper, we study the result of applying a lowpass variant filtering using scaling-rotating kernels to both the spatial and spatial-frequency representations of a two-dimensional (2-D) signal (image). It is shown that if we apply this transformation to a Fourier pair, the two resulting signals can also form a Fourier pair when the filters used in each domain maintain a dual relationship. For a large class of "self-dual" filters, a perfect symmetry exists, so that the lowpass scaling-rotating variant filtering (SRVF) is the same in both domains, thus commuting with the Fourier transform operator. The lowpass SRVF of an image is often referred to as a "foveated" image, whereas its Fourier pair (the lowpass SRVF of its spectrum) can be realized as a local spectrum estimation around the point of attention. This lowpass SRVF is equivalent to a log-polar warping of the image representation followed by a lowpass invariant filtering and the corresponding inverse warping. The use of the log-polar warped representation allows us to extend the one-dimensional (1-D) scale transform to higher dimensions, in particular to images, for which we have defined a scale-rotation invariant representation. We also present an efficient implementation using steerable filters to compute both the foveated image and the local spectrum. | ['Antonio Tabernero', 'Javier Portilla', 'Rafael Navarro'] | Duality of log-polar image representations in the space and spatial-frequency domains | 520,223 |
Efforts to derive maximum value from data have led to an expectation that this is "just the cost of living in the modern world." Ultimately this form of data exploitation will not be sustainable either due to customer dissatisfaction or government intervention to ensure private information is treated with the same level of protection that we currently find in paper-based systems. Legal, technical, and moral boundaries need to be placed on how personal information is used and how it can be combined to create inferences that are often highly accurate but not guaranteed to be correct. Agrawal's initial call-to-arms in 2002 has generated a large volume of work but the analytics and privacy communities are not truly communicating with the goal of providing high utility from the data collected but in such a way that it does not violate the intended purpose for which it was initially collected [2]. This paper describes the current state of the art and makes a call to open a true dialog between these two communities. Ultimately, this may be the only way current analytics will be allowed to continue without severe government intervention and/or without severe actions on behalf of the people from whom the data is being collected and analyzed by either refusing to work with exploitative corporations or litigation to address the harms arising from the current practices. | ['Ken Barker'] | Privacy Protection or Data Value: Can We Have Both? | 598,759
Encouraging the Learning of Written Language by Deaf Users: Web Recommendations and Practices | ['Marta Angélica Montiel Ferreira', 'Juliana Bueno', 'Rodrigo Bonacin'] | Encouraging the Learning of Written Language by Deaf Users: Web Recommendations and Practices | 851,088 |
Arbitrarily tight upper and lower bounds on the pairwise error probability (PEP) of a trellis-coded or convolutional-coded direct-sequence spread-spectrum multiple-access (DS/SSMA) communication system over a Rayleigh fading channel are derived. A new set of probability density functions (PDFs) and cumulative distribution functions (CDFs) of the multiple-access interference (MAI) statistic is derived, and a modified bounding technique is proposed to obtain the bounds. The upper bounds and lower bounds together specify the accuracy of the resulting estimation of the PEP, and give an indication of the system error performance. Several suboptimum decoding schemes are proposed and their performances are compared to that of the optimum decoding scheme by the average pairwise error probability (APEP) values. The approach can be used to accurately study the multiple-access capability of the coded DS/SSMA system without numerical integrations. | ['Tsao-Tsen Chen', 'James S. Lehnert'] | Bounds on the pairwise error probability of coded DS/SSMA communication systems in Rayleigh fading channels | 255,712 |
Sudden Unexpected Death in Epilepsy (SUDEP) is the leading mode of epilepsy-related death and is most common in patients with intractable, frequent, and continuing seizures. A statistically significant cohort of patients for SUDEP study requires meticulous, prospective follow up of a large population that is at an elevated risk, best represented by the Epilepsy Monitoring Unit (EMU) patient population. Multiple EMUs need to collaborate, share data for building a larger cohort of potential SUDEP patients using a state-of-the-art informatics infrastructure. To address the challenges of data integration and data access from multiple EMUs, we developed the Multi-Modality Epilepsy Data Capture and Integration System (MEDCIS) that combines retrospective clinical free text processing using NLP, prospective structured data capture using an ontology-driven interface, interfaces for cohort search and signal visualization, all in a single integrated environment. A dedicated Epilepsy and Seizure Ontology (EpSO) has been used to streamline the user interfaces, enhance its usability, and enable mappings across distributed databases so that federated queries can be executed. MEDCIS contained 936 patient data sets from the EMUs of University Hospitals Case Medical Center (UH CMC) in Cleveland and Northwestern Memorial Hospital (NMH) in Chicago. Patients from UH CMC and NMH were stored in different databases and then federated through MEDCIS using EpSO and our mapping module. More than 77GB of multi-modal signal data were processed using the Cloudwave pipeline and made available for rendering through the web-interface. About 74% of the 40 open clinical questions of interest were answerable accurately using the EpSO-driven VISual AGgregator and Explorer (VISAGE) interface. Questions not directly answerable were either due to their inherent computational complexity, the unavailability of primary information, or the scope of concept that has been formulated in the existing EpSO terminology system. | ['Guo Qiang Zhang', 'Licong Cui', 'Samden D. Lhatoo', 'Stephan U. Schuele', 'Satya S. Sahoo'] | MEDCIS: Multi-Modality Epilepsy Data Capture and Integration System | 816,617
Computer science Olympiad: community project for disadvantaged schools | ['Donald A. Cook'] | Computer science Olympiad: community project for disadvantaged schools | 471,101 |
This letter presents a novel unsupervised competitive learning rule called the boundary adaptation rule (BAR), for scalar quantization. It is shown both mathematically and by simulations that BAR converges to equiprobable quantizations of univariate probability density functions and that, in this way, it outperforms other unsupervised competitive learning rules. | ['M.M. Van Hulle', 'David R. Martinez'] | On a novel unsupervised competitive learning algorithm for scalar quantization | 237,459
This document contains improved and updated proofs of convergence for the sampling method presented in our paper "Free-configuration Biased Sampling for Motion Planning" [2]. The following is the abstract of the original paper: In sampling-based motion planning algorithms the initial step at every iteration is to generate a new sample from the obstacle-free portion of the configuration space. This is usually accomplished via rejection sampling, i.e., repeatedly drawing points from the entire space until an obstacle-free point is found. This strategy is rarely questioned because the extra work associated with sampling (and then rejecting) useless points contributes at most a constant factor to the planning algorithm's asymptotic runtime complexity. However, this constant factor can be quite large in practice. We propose an alternative approach that enables sampling from a distribution that provably converges to a uniform distribution over only the obstacle-free space. Our method works by storing empirically observed estimates of obstacle-free space in a point-proximity data structure, and then using this information to generate future samples. Both theoretical and experimental results validate our approach. | ['Joshua Bialkowski', 'Michael W. Otte', 'Emilio Frazzoli'] | Free-configuration Biased Sampling for Motion Planning: Errata | 594,756
Understanding the main trends in human behavior is fundamental to developing effective adaptive treatments. Inspired by this insight, this paper presents a mathematical quantification of the change of human behavior following external stimuli. In particular, statistical methods are applied to real physical activity data collected intensively using mobile wearable technologies. We explain the setup of the study conducted with multiple participants. Then, a preprocessing of the collected measurements, required to overcome the hurdles associated with behavioral data, is briefly discussed. Furthermore, we identify a dynamical affine model that approximates humans' sedentary behavior. The affine model is simple yet insightful. We show results of fitting time-invariant as well as switched models along with a quantification of the prediction errors. Moreover, the effect of various types of treatments on the sedentary behavior of several subjects is investigated. As expected, the results show that people react differently to external stimuli. However, common tendencies are clearly observed. Our findings emphasize the necessity of the application of personalized adaptive intervention. Future research directions are discussed accordingly. | ['Mahmoud Ashour', 'Korkut Bekiroglu', 'Chih-Hsiang Yang', 'Constantino M. Lagoa', 'David E. Conroy', 'Joshua M. Smyth', 'Stephanie T. Lanza'] | On the mathematical modeling of the effect of treatment on human physical activity | 902,449 |
Security measures taken in isolation and without reference to a concrete and relevant assessment and evaluation of actual risks are doomed to be inefficient. At best they do not address the real issues facing an organization and simply waste resources, at worst they provide management with inappropriate comfort over the level of security management that is in place. This paper reviews the key points of some relevant international standards, discusses the links between effective risk management and optimized security measures, and provides a case study illustrating the benefits to be obtained from a structured and integrated approach. | ['Solange Ghernaouti-Hélie', 'Igli Tashi', 'David Simms'] | Optimizing Security Efficiency through Effective Risk Management | 223,535 |
In this paper, we propose a multi-node differential amplify-and-forward scheme for cooperative communications. The proposed scheme efficiently combines signals from the direct and multiple relay links to improve communication reliability. Bit-error-rate (BER) analysis for M-ary differential phase shift keying is provided as a performance measure of the proposed scheme, and optimum power allocation is investigated. While the exact BER formulation of the proposed scheme is not available currently, we provide as a performance benchmark a tight BER formulation based on optimum combining weights. A simple BER upper bound and a tight BER approximation show that the proposed scheme can achieve the full diversity, which equals the number of cooperating nodes. We further provide a simple BER approximation in order to obtain an analytical result on the power allocation scheme. A closed-form optimum power allocation based on the tight simple BER approximation is obtained for the single-relay scenario. An approximate optimum power allocation scheme is provided for multi-relay systems. The provided BER formulations are shown to closely match the simulation results. Moreover, simulation results show that the optimum power allocation scheme achieves up to 2 dB performance gain over the equal power allocation scheme. | ['T. Himsoon', 'Weifeng Su', 'K.J.R. Liu'] | Differential modulation for multi-node amplify-and-forward wireless relay networks | 230,428
E-fraud is an e-crime that affects society as a whole, impacting upon individuals, businesses and governments. Recent studies suggest that e-fraud is on the increase and that a lack of awareness, and inappropriate, limited or absent countermeasures have only exacerbated the negative impact of e-fraud on society. The response to e-fraud has concentrated on context specific technical solutions being narrowly focused, and typically dealing with only a few of the numerous factors and dimensions that may be seen to be constituent to e-fraud. A review of the literature suggests that e-fraud is, at a very basic level, poorly understood, which may go some way to explain the above-mentioned difficulties in addressing e-fraud. Fundamental to this poor understanding is the lack of a theoretical basis upon which a comprehensive understanding may be built and from which a co-ordinated response may be made. This study seeks to redress this situation through the development of a model of the process of e-fraud, using the existing literature as a guide. Based on a broad definition of both e-crime and e-fraud, the resultant model describes the five key elements of e-fraud: perpetrator, mode of attack, target system, target entity and impact. It is envisaged that the model will allow the mechanics and context of e-fraud to be more fully understood, thus assisting in the development and implementation of effective countermeasures. | ['Pattama Malakedsuwan', 'Ken Stevens'] | A Model of E-Fraud | 370,244
Undergraduate ethics instruction in engineering can be broadly divided into two models — disciplinary ethics (integrated within a course) and standalone semester-long ethics course. While both these models have educational value, they insufficiently prepare students in dealing with everyday routine ethical decision making that they might encounter in the workplace. This is because both these models barely consider the organizational or cultural context in their discussions. This study investigates the merits of using cognitive apprenticeship — a situated learning model, to enculturate millennial undergraduate students to everyday workplace ethical decision making. This quasi-experimental study was conducted at a large public Midwestern university. The study participants included a cohort of graduating seniors from the mechanical engineering class, 90% of whom had full/part time relevant work experience. In addition, 55% of them had prior standalone or disciplinary ethics instruction. Despite their work experience and prior ethics instruction, data analysis revealed that there were marked improvements in students' organizational ethical decision making. The quantitative comparative analysis indicated significant improvements in scenario-based scores between the pre and post tests. The nature and depth of the qualitative responses revealed that students appreciated situated ethics instruction because it informed them on how to deal with everyday ethical decision making in the workplace. | ['Wilkistar Otieno', 'Nisha Kumar'] | A situated learning ethics instruction model for engineering undergraduate students | 564,029 |
Object proposal is essential for current state-of-the-art object detection pipelines. However, the existing proposal methods generally fail in producing results with satisfying localization accuracy. The case is even worse for small objects, which, however, are quite common in practice. In this paper, we propose a novel scale-aware pixelwise object proposal network (SPOP-net) to tackle the challenges. The SPOP-net can generate proposals with high recall rate and average best overlap, even for small objects. In particular, in order to improve the localization accuracy, a fully convolutional network is employed which predicts locations of object proposals for each pixel. The produced ensemble of pixelwise object proposals enhances the chance of hitting the object significantly without incurring heavy extra computational cost. To solve the challenge of localizing objects at small scale, two localization networks, which are specialized for localizing objects with different scales are introduced, following the divide-and-conquer philosophy. Location outputs of these two networks are then adaptively combined to generate the final proposals by a large-/small-size weighting network. Extensive evaluations on PASCAL VOC 2007 and COCO 2014 show the SPOP network is superior over the state-of-the-art models. The high-quality proposals from SPOP-net also significantly improve the mean average precision of object detection with Fast-Regions with CNN features framework. Finally, the SPOP-net (trained on PASCAL VOC) shows great generalization performance when testing it on ILSVRC 2013 validation set. | ['Zequn Jie', 'Xiaodan Liang', 'Jiashi Feng', 'Wen Feng Lu', 'Eng Hock Francis Tay', 'Shuicheng Yan'] | Scale-Aware Pixelwise Object Proposal Networks | 636,592 |
In spite of the success of the standard wavelet transform (WT) in image processing, the efficiency of its representation is limited by the spatial isotropy of its basis functions built in only horizontal and vertical directions. One-dimensional (1-D) discontinuities in images (edges and contours), which are very important elements in visual perception, intersect too many wavelet basis functions and reduce the sparsity of the representation. To capture efficiently these anisotropic geometrical structures, a more complex multi-directional (M-DIR) and anisotropic transform is required. We present a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT (with the corresponding basis functions called directionlets) that retains the separable filtering and simple filter design from the standard two-dimensional (2-D) WT and imposes directional vanishing moments (DVM). Further-more, we show that this novel transform has non-linear approximation efficiency competitive to the other previously proposed over-sampled transform constructions. | ['Vladan Velisavljevic', 'Baltasar Beferull-Lozano', 'Martin Vetterli', 'Pier Luigi Dragotti'] | Approximation power of directionlets | 208,006 |
The DASH (directory architecture for shared-memory) multiprocessor, which combines the programmability of shared-memory machines with the scalability of message-passing machines, is described. Hardware-supported coherent caches provide for low-latency access of shared data and ease of programming. Caches are kept coherent by means of a distributed directory-based protocol. Shared memory in the machine is distributed among the processing nodes, and scalable memory bandwidth is provided by connecting the nodes through a general interconnection network. The prototype DASH machine will consist of 64 high-performance microprocessors, with an aggregate performance of over 1200 MIPS and 250 scalar MFLOPS. The fundamental premise in DASH is that it is possible to build a scalable shared-memory machine with hardware-supported coherent caches by using a distributed directory-based cache coherence protocol. The mechanisms for providing scalable memory bandwidth, reducing and tolerating memory latency, and supporting efficient synchronization are described. A brief description of the machine's implementation is given. | ['Daniel E. Lenoski', 'Kourosh Gharachorloo', 'James Laudon', 'Anoop Gupta', 'John L. Hennessy', 'Mark Horowitz', 'Monica S. Lam'] | Design of scalable shared-memory multiprocessors: the DASH approach | 917,787
The problem of handling emergency situations (e.g., earthquakes, tornados, hurricanes, political rebellions, etc.) is very challenging since the fixed network infrastructures can become unusable, and it is fundamental to build a peer-to-peer network intended to spread information among people involved in the emergency. The goal of this paper is to propose an approach that enables an adaptive behavior of ungoverned communication networks with the purpose of maximizing the information diffusion and minimizing the energy consumption of the communication devices (e.g., smartphones). We introduce an energy-aware gossip algorithm to adapt the message passing methodology among the involved devices on the basis of their battery level, while guaranteeing the information diffusion within a certain geographical area and minimizing the overall energy consumption. The approach is implemented in a simulation context that allows to quantify the percentage of area coverage within a certain interval of time by adapting the process of message passing on the basis of devices' battery level. Experimental results demonstrate that our approach outperforms classic broadcast algorithms up to 85.58% in terms of energy consumption. | ['Lorenzo Pagliari', 'Raffaela Mirandola', 'Diego Perez-Palacin', 'Catia Trubiani'] | Energy-Aware Adaptive Techniques for Information Diffusion in Ungoverned Peer-to-Peer Networks | 842,314 |
A separation algorithm for achieving color constancy and theorems concerning its accuracy are presented. The algorithm requires extra information, over and above the usual three values mapping human cone responses, from the optical system. However, with this additional information-specifically, a sampling across the visible range of the reflected, color-signal spectrum impinging on the optical sensor-the authors are able to separate the illumination spectrum from the surface reflectance spectrum contained in the color-signal spectrum which is, of course, the product of these two spectra. At the heart of the separation algorithm is a general statistical method for finding the best illumination and reflectance spectra, within a space represented by finite-dimensional linear models of statistically typical spectra, whose product closely corresponds to the spectrum of the actual color signal. Using this method, the authors are able to increase the dimensionality of the finite-dimensional linear model for surfaces to a realistic value. One method of generating the spectral samples required for the separation algorithm is to use the chromatic aberration effects of a lens. An example of this is given. The accuracy achieved in a wide range of tests is detailed, and it is shown that agreement with actual surface reflectance is excellent. | ['Jian Ho', 'Brian V. Funt', 'Mark S. Drew'] | Separating a color signal into illumination and surface reflectance components: theory and applications | 437,819 |
We demonstrate that a generative model for object shapes can achieve state-of-the-art results on challenging scene text recognition tasks, and with orders of magnitude fewer training images than required for competing discriminative methods. In addition to transcribing text from challenging images, our method performs fine-grained instance segmentation of characters. We show that our model is more robust to both affine transformations and non-affine deformations compared to previous approaches. | ['Xinghua Lou', 'Ken Kansky', 'Wolfgang Lehrach', 'Christopher Laan', 'Bhaskara Marthi', 'D. Scott Phoenix', 'Dileep George'] | Generative Shape Models: Joint Text Recognition and Segmentation with Very Little Training Data | 947,624 |
We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular we can learn a subclass of nominal non-deterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments. | ['Joshua Moerman', 'Matteo Sammartino', 'Alexandra Silva', 'Bartek Klin', 'Michał Szynwelski'] | Learning nominal automata | 858,012 |
This paper examines some issues that affect the efficiency and fairness of the Transmission Control Protocol (TCP), the backbone of Internet protocol communication, in multi-hop satellite network systems. It proposes a scheme that allows satellite systems to automatically adapt to any change in the number of active TCP flows due to handover occurrence, the free buffer size, and the bandwidth-delay product of the network. The proposed scheme has two major design goals: increasing the system efficiency, and improving its fairness. The system efficiency is controlled by matching the aggregate traffic rate to the sum of the link capacity and total buffer size. On the other hand, the system min-max fairness is achieved by allocating bandwidth among individual flows in proportion with their RTTs. The proposed scheme is dubbed Recursive, Explicit, and Fair Window Adjustment (REFWA). Simulation results elucidate that the REFWA scheme substantially improves the system fairness, reduces the number of packet drops, and makes better utilization of the bottleneck link. The results also demonstrate that the proposed scheme works properly in more complicated environments where connections traverse multiple bottlenecks and the available bandwidth may change over data transmission time. | ['Tarik Taleb', 'Nei Kato', 'Yoshiaki Nemoto'] | REFWA: an efficient and fair congestion control scheme for LEO satellite networks | 19,985 |