abstract: string (lengths 0–11.1k)
authors: string (lengths 9–1.96k)
title: string (lengths 4–353)
__index_level_0__: int64 (3–1,000k)
['Chenchen Ding', 'Takashi Inui', 'Mikio Yamamoto']
Long-distance hierarchical structure transformation rules utilizing function words.
780,632
This paper empirically investigates the behavior of three variants of covariance matrix adaptation evolution strategies (CMA-ES) for dynamic optimization: the elitist (1+1)-CMA-ES, the non-elitist (μ,λ)-CMA-ES and sep-CMA-ES. To better understand the influence of the covariance matrix adaptation methods and of the selection methods on the strategies in dynamic environments, we use state-of-the-art dynamic optimization benchmark problems to evaluate the performance. We compare these CMA-ES variants with the traditional (1+1)-ES with the one-fifth success rule. Our experimental results show that the simple elitist strategies, including the (1+1)-ES and the (1+1)-CMA-ES, generally outperform the non-elitist CMA-ES variants on one out of the six dynamic functions. We also investigate the performance when the dynamic environments change with different severity and when the problems are in higher dimensions. The elitist strategies are robust to different severities of dynamic change, but their performance degrades as the problem dimension increases. In high dimensions, the performance of the elitist and the non-elitist versions of CMA-ES is marginally the same.
['Chun-Kit Au', 'Ho-fung Leung']
An empirical comparison of CMA-ES in dynamic environments
635,451
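The baseline compared against in the abstract above, the (1+1)-ES with the one-fifth success rule, is simple enough to sketch. Below is a minimal Python sketch for the classical static-optimization form; the sphere objective, step-size constants and iteration budget are illustrative assumptions, and this is not the dynamic-environment setup or the CMA-ES variants studied in the paper.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=0):
    """(1+1)-ES with the one-fifth success rule on a static objective f."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)   # mutate the single parent
        fy = f(y)
        success = fy < fx
        if success:
            x, fx = y, fy                              # elitist: keep the better point
        # one-fifth rule: grow sigma on success, shrink (more slowly) otherwise
        sigma *= np.exp(0.2) if success else np.exp(-0.05)
    return x, fx

# usage: minimize a 10-dimensional sphere function
sphere = lambda v: float(np.dot(v, v))
best_x, best_f = one_plus_one_es(sphere, np.ones(10), sigma=0.5)
print(best_f)
```

The decrease factor is the increase factor raised to -1/4, which drives the empirical success rate toward the target value of one fifth.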
To understand the utilization of clinical resources and improve the efficiency of healthcare, it is often necessary to accurately locate patients and doctors in a healthcare facility. However, existing tracking methods, such as GPS, Wi-Fi and RFID, have technological drawbacks or impose significant costs, thus limiting their applications in many clinical environments, especially those with indoor enclosures. This paper proposes a low-cost and flexible tracking system that is well suited for operating in an indoor environment. Based on readily available RF transceivers and microcontrollers, our wearable sensor system can facilitate locating users (e.g., patients or doctors) or objects (e.g., medical devices) in a building. The strategic construction of the sensor system, along with a suitably designed tracking algorithm, together provide for reliability and dispatch in localization performance. For demonstration purposes, several simplified experiments, with different configurations of the system, are implemented in two testing rooms to assess the baseline performance. From the obtained results, our system exhibits immense promise in acquiring a user location and corresponding time-stamp, with high accuracy and rapid response. This capability is conducive to both short- and long-term data analytics, which are crucial for improving healthcare management.
['Yuzhe Ouyang', 'Kai Shan', 'Francis Minhthang Bui']
An RF-based wearable sensor system for indoor tracking to facilitate efficient healthcare management
908,518
The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the network-wide average time delay, is addressed. Under appropriate assumptions, the optimum routing algorithm relies heavily on shortest path computations that have to be carried out in real time. For this purpose an efficient neural network shortest path algorithm that is an improved version of previously suggested Hopfield models is proposed. The general principles involved in the design of the proposed neural network are discussed in detail. Its computational power is demonstrated through computer simulations. One of the main features of the proposed model is that it will enable the routing algorithm to be implemented in real time and also to be adaptive to changes in link costs and network topology.
['Mustafa K. Mehmet Ali', 'Faouzi Kamoun']
Neural networks for shortest path computation and routing in computer networks
101,875
In this paper, a new method of inter-cell interference (ICI) cancellation in the downlink is proposed for users at the cell boundary in OFDM-based cellular systems, by using the concept of a subcarrier-based virtual multiple-input multiple-output (SV-MIMO) where multiple antenna techniques are performed on a set of subcarriers, not on the actual antenna array. In the proposed SV-MIMO, "subcarrier signatures," obtained at the mobile station (MS) with a single antenna, are exploited to separate the desired signal from ICIs. The concept of a virtual signature randomizer (VSR) is introduced to improve channel separability in the SV-MIMO approach. Also, the concept of a virtual smart antenna, in which the steering vector is formed by a set of subcarriers with the estimated symbol timing offset (STO) between BSs, is compared with conventional ICI reduction methods. It is shown by simulation that the proposed method is effective in reducing ICI and inter-sector interference for fully-loaded OFDM cellular systems with a frequency reuse factor equal to 1, compared with conventional methods.
['Kyu In Lee', 'Kyung Soo Woo', 'Yo Han Ko', 'Jae Young Ahn', 'Yong Soo Cho']
An Inter-Cell Interference Cancellation Method for OFDM Cellular Systems Using a Subcarrier-Based Virtual MIMO
222,701
The complexity of automated negotiation in a multi-issue, incomplete-information and continuous-time environment poses severe challenges, and in recent years many strategies have been proposed in response to this challenge. In the traditional evaluation, strategies are studied in games under the assumption that each agent negotiates "globally" with all other participants. This evaluation, however, is not suited for negotiation settings that are primarily characterized by "local" interactions among the participating agents, that is, settings in which each of possibly many participating agents negotiates only with its local neighbors rather than all other agents. A new class of negotiation games is therefore introduced that takes negotiation locality (hence spatial information about the agents) into consideration. It is shown how spatial evolutionary game theory can be used to interpret bilateral negotiation results among state-of-the-art strategies.
['Siqi Chen', 'Jianye Hao', 'Gerhard Weiss', 'Karl Tuyls', 'Ho-fung Leung']
Spatial evolutionary game-theoretic perspective on agent-based complex negotiations
736,740
['Hien Duong', 'Linglong Zhu', 'Yutao Wang', 'Neil T. Heffernan']
A prediction model that uses the sequence of attempts and hints to better predict knowledge: "Better to attempt the problem first, rather than ask for a hint".
795,569
['Andi Drebes', 'Jean-Baptiste Bréjon', 'Antoniu Pop', 'Karine Heydemann', 'Albert Cohen']
Language-Centric Performance Analysis of OpenMP Programs with Aftermath
889,478
In integrated all-digital FPGA-based communication systems, bit synchronization is a fundamental operation for optimal symbol detection. In this paper a highly flexible early-late gate implementation is proposed, optimized for low resource consumption in FPGA implementations.
['Paolo Zicari', 'Pasquale Corsonello', 'Stefania Perri']
An Efficient Bit-Detection and Timing Recovery Circuit for FPGAs
251,684
['Abhishek K. Gupta', 'Harpreet S. Dhillon', 'Sriram Vishwanath', 'Jeffrey G. Andrews']
Downlink MIMO HetNets with Load Balancing
775,023
Malware poses a significant threat to commerce and banking systems. Specifically, the Zeus banking botnet is reported to have caused more than 100 million dollars in damages. This type of malware has been around for over ten years, and in 2013 alone was responsible for compromising over one million computers. The impact of banking botnets (i.e., typically Zeus or its derivatives) can be lessened by exploiting the inherent vulnerabilities of their command and control (C&C); however, we do not discourage traditional malware removal and clean-up processes. As a complement to traditional processes, we offer our approach to organizations with the proper authority for an active defense (i.e., offensive measures). We demonstrate the feasibility of this approach by using the leaked Zeus 2.0.8.9 toolkit that included the C&C web application. The following security flaws exist in the Zeus 2.0.8.9 C&C web application: (1) no authentication between the zbot (i.e., client-side malware) and the C&C, (2) a lack of proper access control in the web application folders, and (3) simple clear text authentication between the C&C and the remote bot-herder. Our results suggest that because of these security flaws, a range of offensive measures are viable against the Zeus C&C, including Buffer-Overflow, Denial-of-Service, and Dictionary or Brute Force Attacks.
['Lanier Watkins', 'Christina Kawka', 'Cherita L. Corbett', 'William H. Robinson']
Fighting banking botnets by exploiting inherent command and control vulnerabilities
926,208
We consider the design of space-time constellations when the channel state information that is available at the receiver is an estimate of the channel coefficients with known error covariance. This setup encompasses the well-studied scenarios of perfect and no channel knowledge and allows a smooth transition between these two cases. We perform an asymptotic pairwise error probability analysis and derive a criterion to design constellations matched to the level of channel knowledge available at the receiver. Moreover, we use the criterion to assess the power tradeoff between data transmission and channel coefficient acquisition for any given specific set of constellations. Simulation results illustrate the benefit of the proposed criterion.
['Jochen Giese', 'Mikael Skoglund']
Space-time constellation design for partial CSI at the receiver
959,109
['Wolfgang Günther', 'Nicole Drechsler', 'Rolf Drechsler', 'Bernd Becker']
Verification of Designs Containing Black Boxes.
745,215
Currently, there exist many challenges in the transportation domain that researchers are trying to resolve, and one of them is transportation planning. The main contribution of this paper is the design and implementation of an ITS smart sensor prototype that combines the Internet of Things (IoT) and Big Data approaches in order to produce ITS cloud services that support transportation planning for Bus Rapid Transit (BRT) systems. The ITS smart sensor prototype is capable of detecting the Bluetooth signals of the devices (for instance, mobile phones) that people use in the BRT system (for instance, in Bogota). From that information, the ITS smart sensor prototype can create the O/D (origin/destination) matrix for several BRT routes, and this information can be used by the Administrator Authorities (AA) to produce suitable transportation planning for the BRT systems. In addition, that information can be delivered to the AA as cloud services.
['Luis Felipe Herrera-Quintero', 'Klaus Banse', 'Julian Camilo Vega-Alfonso', 'Andres Venegas-Sanchez']
Smart ITS sensor for the transportation planning using the IoT and Bigdata approaches to produce ITS cloud services
846,195
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200× speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs where our algorithms show good scaling on up to 16,384 cores.
['Ariful Azad', 'Aydin Buluç']
A matrix-algebraic formulation of distributed-memory maximal cardinality matching algorithms in bipartite graphs
800,005
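For readers unfamiliar with maximal (as opposed to maximum) matching, the serial greedy construction below is a minimal Python/SciPy sketch of the concept only; it is not the paper's parallel matrix-algebraic algorithm, and the example adjacency matrix and function name are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix

def greedy_maximal_matching(A):
    """Greedy maximal (not maximum) matching of a bipartite graph.
    A is a row-by-column adjacency matrix; returns {row_vertex: col_vertex}."""
    A = csr_matrix(A)
    matched_cols, matching = set(), {}
    for r in range(A.shape[0]):
        for c in A.indices[A.indptr[r]:A.indptr[r + 1]]:
            if c not in matched_cols:          # first free neighbour wins
                matching[r] = c
                matched_cols.add(c)
                break
    return matching

# usage: 3 "row" vertices, 3 "column" vertices
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(greedy_maximal_matching(A))   # e.g. {0: 0, 1: 2, 2: 1}
```

The result is maximal because no remaining edge joins two unmatched vertices, although a maximum matching could in general be larger.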
['Chandreyee Chowdhury', 'Sarmistha Neogy']
Reliability Estimation of Delay Tolerant QoS Mobile Agent System in MANET
578,689
We describe our systems for Tasks 1 and 2 of the WMT15 Shared Task on Quality Estimation. Our submissions use (i) a continuous space language model to extract additional features for Task 1 (SHEF-GP, SHEF-SVM), (ii) a continuous bag-of-words model to produce word embeddings as features for Task 2 (SHEF-W2V) and (iii) a combination of features produced by QuEst++ and a feature produced with word embedding models (SHEF-QuEst++). Our systems outperform the baseline as well as many other submissions. The results are especially encouraging for Task 2, where our best performing system (SHEF-W2V) only uses features learned in an unsupervised fashion.
['Kashif Shah', 'Varvara Logacheva', 'Gustavo Paetzold', 'Frédéric Blain', 'Daniel Beck', 'Fethi Bougares', 'Lucia Specia']
SHEF-NN: Translation Quality Estimation with Neural Networks
613,369
We study congestion periods in a finite fluid buffer when the net input rate depends upon a recurrent Markov process; congestion occurs when the buffer content is equal to the buffer capacity. Similarly to O’Reilly and Palmowski (2013), we consider the duration of congestion periods as well as the associated volume of lost information. While these quantities are characterized by their Laplace transforms in that paper, we presently derive their distributions in a typical stationary busy period of the buffer. Our goal is to compute the exact expression of the loss probability in the system, which is usually approximated by the probability that the occupancy of the infinite buffer is greater than the buffer capacity under consideration. Moreover, by using general results of the theory of Markovian arrival processes, we show that the duration of congestion and the volume of lost information have phase-type distributions.
['Fabrice Guillemin', 'Bruno Sericola']
Volume and duration of losses in finite buffer fluid queues
404,391
Gary Shore's "Dracula Untold" is not your typical vampire story. It's the untold story of Vlad Tepes, also known as Vlad the Impaler, or Dracula, and how a man with a famous name prevented another, Mehmed the Conqueror, from living up to his own.
['Gemma Samuell']
Dracula untold
664,273
The persistent storage options in smartphones employ journaling or double-write to enforce atomicity, consistency and durability, which introduces significant overhead to system performance. Our in-depth examination of the issue leads us to believe that much of the overhead would be unnecessary if we rethink the volatility of memory considering the battery-backed characteristics of DRAM in modern-day smartphones. With this rethinking, we propose quasi Non-Volatile Memory (qNVRAM), a new design that makes the DRAM in smartphones quasi non-volatile, to help remove the performance overhead of enforcing persistency. We assess the feasibility and effectiveness of our design by implementing a persistent page cache in SQLite. Our evaluation on a real Android smartphone shows that qNVRAM speeds up the insert, update and delete transactions by up to 16.33×, 15.86× and 15.76× respectively.
['Hao Luo', 'Lei Tian', 'Hong Jiang']
qNVRAM: quasi non-volatile RAM for low overhead persistency enforcement in smartphones
581,118
One of the disadvantages of statically typed languages is the programming overhead caused by writing all the necessary type information: both type declarations and type definitions are typically required. Traditional type inference aims at relieving the programmer from the former. We present a rule-based constraint rewriting algorithm that reconstructs both type declarations and type definitions, allowing the programmer to effectively program type-less in a strictly typed language. This effectively combines strong points of dynamically typed languages (rapid prototyping) and statically typed ones (documentation, optimized compilation). Moreover, it allows code to be quickly ported from a statically untyped to a statically typed setting. Our constraint-based algorithm reconstructs uniform polymorphic definitions of algebraic data types and simultaneously infers the types of all expressions and functions (supporting polymorphic recursion) in the program. The declarative nature of the algorithm allows us to easily show that it has a number of highly desirable properties such as soundness, completeness and various optimality properties. Moreover, we show how to easily extend and adapt it to suit a number of different language constructs and type system features.
['Tom Schrijvers', 'Maurice Bruynooghe']
Polymorphic algebraic data type reconstruction
490,384
In concurrency theory, various semantic equivalences on labelled transition systems are based on traces enriched or decorated with some additional observations. They are generally referred to as decorated traces, and examples include ready, failure, trace and complete trace equivalence. Using the generalized powerset construction, recently introduced by a subset of the authors [Silva, A., F. Bonchi, M.M. Bonsangue and J.J.M.M. Rutten, Generalizing the powerset construction, coalgebraically, in: K. Lodaya and M. Mahajan, editors, FSTTCS 2010, LIPIcs 8, 2010, pp. 272-283. URL http://drops.dagstuhl.de/opus/volltexte/2010/2870], we give a coalgebraic presentation of decorated trace semantics. This yields a uniform notion of canonical, minimal representatives for the various decorated trace equivalences, in terms of final Moore automata. As a consequence, proofs of decorated trace equivalence can be given by coinduction, using different types of (Moore-) bisimulation (up-to), which is helpful for automation.
['Filippo Bonchi', 'Marcello M. Bonsangue', 'Georgiana Caltais', 'Jan Rutten', 'Alexandra Silva']
Final Semantics for Decorated Traces
30,520
In this paper, we investigate the impact of feedback in LT codes to guarantee unequal recovery time (URT) for different message segments. We analyze the URT-LT codes using the AND-OR tree for two scenarios: complete and partial feedback. We derive the necessary conditions for these two feedback schemes to achieve the required recovery time. We validate the analysis by simulation and highlight the cases where feedback is advantageous.
['Rana Zamin Abbas', 'Mahyar Shirvanimoghaddam', 'Yonghui Li', 'Branka Vucetic']
Analysis on LT codes for unequal recovery time with complete and partial feedback
883,006
['Attila Budai', 'Georg Michelson', 'Joachim Hornegger']
Multiscale Blood Vessel Segmentation in Retinal Fundus Images.
763,869
Due to the opportunities provided by the Internet, people are taking advantage of e-learning courses, and during the last few years enormous research efforts have been dedicated to the development of e-learning systems. So far, many e-learning systems have been proposed and used in practice. However, in these systems the e-learning completion rate is low. One of the reasons is low study desire and motivation. In our previous work, we implemented an e-learning system that is able to increase learning efficiency by stimulating learners' motivation. In this work, we designed and implemented new functions to improve the system performance.
['Keita Matsuo', 'Leonard Barolli', 'Fatos Xhafa', 'Akio Koyama', 'Arjan Durresi', 'Makoto Takizawa']
Implementation and Design of New Functions for a Web-Based E-learning System to Stimulate Learners Motivation
245,416
['Halvor S. Hansen', 'Philippe H. Hünenberger']
Using the local elevation method to construct optimized umbrella sampling potentials: Calculation of the relative free energies and interconversion barriers of glucopyranose ring conformers in water
384,603
['Christine L. Lisetti']
Believable Agents, Engagement, and Health Interventions
636,427
Background: Glioma is the most common brain tumor and it has a very high mortality rate due to its infiltration and heterogeneity. Precise classification of glioma subtype is essential for proper therapeutic treatment and better clinical prognosis. However, the molecular mechanism of glioma is far from clear, and the classical classification methods based on traditional morphologic and histopathologic knowledge are subjective and inconsistent. Recently, classification methods based on molecular characteristics have been developed with the rapid progress of high-throughput technology.
['Sujuan Wu', 'Junyi Li', 'Mushui Cao', 'Jing Yang', 'Yixue Li', 'Yuanyuan Li']
A novel integrated gene coexpression analysis approach reveals a prognostic three-transcription-factor signature for glioma molecular subtypes
866,176
The critical cyber-infrastructure of the United States is under a constant barrage of attacks. Adversaries foreign and domestic attack the nation's systems in order to test their design and limits; to steal information (spying); to damage the system; and to embed malware which can be deployed at a later time. The ability of the United States' military and federal civilian departments to detect, delay, and respond to these attacks is essential to our national security. Identifying the best personnel to place in these critical occupations requires understanding the knowledge, skills, abilities and other factors (KSAOs) necessary to successfully complete important job tasks. It is also beneficial to understand the cognitive aspects of the job: when cognitive load is too high, when cognitive fatigue is setting in, and how these affect job performance. These factors are identified and measured by Industrial-Organizational (I-O) psychologists using the methods of job analysis and cognitive task analysis.
['Robert Kittinger', 'Liza Kittinger', 'Glory Emmanuel Avina']
Job Analysis and Cognitive Task Analysis in National Security Environments
863,010
Write-invalidate and write-broadcast coherency protocols have been criticized for being unable to achieve good bus performance across all cache configurations. In particular, write-invalidate performance can suffer as block size increases; and large cache sizes will hurt write-broadcast. Read-broadcast and competitive snooping extensions to the protocols have been proposed to solve each problem. Our results indicate that the benefits of the extensions are limited. Read-broadcast reduces the number of invalidation misses, but at a high cost in processor lockout from the cache. The net effect can be an increase in total execution cycles. Competitive snooping benefits only those programs with high per-processor locality of reference to shared data. For programs characterized by inter-processor contention for shared addresses, competitive snooping can degrade performance by causing a slight increase in bus utilization and total execution time.
['Susan J. Eggers', 'Randy H. Katz']
Evaluating The Performance Of Four Snooping Cache Coherency Protocols
294,251
Determining the correct structure of coordinating conjunctions and the syntactic constituents that they coordinate is a difficult task. This subtask of syntactic parsing is explored here for biomedical scientific literature. In particular, the intuition that sentences containing coordinating conjunctions can often be rephrased as two or more smaller sentences derived from the coordination structure is exploited. Generating candidate sentences corresponding to different possible coordination structures and comparing them with a language model is employed to help determine which coordination structure is best. This strategy is used to augment a simple baseline system for coordination resolution which outperforms both the baseline system and a constituent parser on the same task.
['Philip V. Ogren']
Improving Syntactic Coordination Resolution using Language Modeling
285,060
['Maciej Majewski', 'W. Kacalak']
Human-Machine Speech-Based Interfaces with Augmented Reality and Interactive Systems for Controlling Mobile Cranes
872,686
Time-domain analysis is a powerful tool for testing A/D converters (ADCs), particularly with incoherent sampling. This type of sampling is encountered, for instance, in real applications and systems where the clock is generated by an on-board crystal oscillator. In the case of the sine-wave model for ADC output samples, fitting a sine-wave function to the data record is nonlinear with respect to the frequency. To overcome the convergence problem, the authors propose some alternative methods based on frequency estimation by means of the fast Fourier transform and eigenanalysis of the autocorrelation matrix. This leads to a simple linear problem. The effectiveness of these methods is proven on both simulation and experimental results.
['Dominique Dallet', 'David Slepicka', 'Yannick Berthoumieu', 'Djamel Haddadi', 'Philippe Marchegay']
[ADC Characterization in Time Domain] Frequency Estimation to Linearize Time-Domain Analysis of A/D Converters
457,297
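The key idea above, fixing the frequency from an FFT peak so that the remaining sine fit becomes linear, can be sketched as follows. This is a simplified three-parameter fit in the spirit of IEEE Std 1057; the windowing, test-signal parameters and function name are assumptions, and the eigenanalysis-based estimator mentioned in the abstract is not shown.

```python
import numpy as np

def three_param_sine_fit(x, fs):
    """Estimate frequency from the FFT peak, then solve the now-linear
    least-squares fit for amplitude, phase and offset."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    f_hat = k * fs / n                       # coarse frequency estimate
    t = np.arange(n) / fs
    # with f fixed, x ≈ A*cos(2πft) + B*sin(2πft) + C is linear in (A, B, C)
    M = np.column_stack([np.cos(2 * np.pi * f_hat * t),
                         np.sin(2 * np.pi * f_hat * t),
                         np.ones(n)])
    (A, B, C), *_ = np.linalg.lstsq(M, x, rcond=None)
    amplitude, phase = np.hypot(A, B), np.arctan2(-B, A)
    return f_hat, amplitude, phase, C

# usage: a noisy 997 Hz tone sampled at 48 kHz
fs = 48000
t = np.arange(4096) / fs
x = 1.2 * np.cos(2 * np.pi * 997 * t + 0.3) + 0.01 * np.random.randn(4096)
print(three_param_sine_fit(x, fs))
```

In practice the coarse FFT estimate is refined (e.g., by interpolation or a four-parameter fit), which is where the convergence issues discussed in the abstract arise.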
Wind speed forecasting is essential for environment-related reasons and many applications, e.g., estimating the short-term energy production of wind farm operations. This work investigates and compares different approaches to the problem of wind forecasting in a simplified manner, introducing and considering the prediction of the range within which the mean wind value will fall in the next time step. The proposed forecasting approach treats the forecasting task as a classification problem, which is suitable for this specific application. It takes advantage of the fact that there is a natural (hidden) ordering of the predefined classes; thus, in this work ordinal classification (also known as ordinal regression) approaches are tested and compared with conventional nominal classifiers. The preliminary results indicate that considering the natural ordering of the classes yields the best performance for the specific test site involved in this study.
['George Georgoulas', 'Stavros Kolios', 'Petros S. Karvelis', 'Chrysostomos D. Stylios']
Examining nominal and ordinal classifiers for forecasting wind speed
934,970
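A common way to exploit the class ordering mentioned above is the Frank-and-Hall decomposition into cumulative binary problems. The sketch below illustrates that generic idea on synthetic lagged wind-speed features; the bin edges, features, classifier choice and class count are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class OrdinalClassifier:
    """Frank & Hall style ordinal classifier: one binary model per
    threshold P(y > k), combined into ordered class probabilities."""
    def __init__(self, n_classes):
        self.n_classes = n_classes
        self.models = {}

    def fit(self, X, y):
        for k in range(self.n_classes - 1):
            self.models[k] = LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
        return self

    def predict(self, X):
        p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models.values()])
        p = np.empty((len(X), self.n_classes))
        p[:, 0] = 1 - p_gt[:, 0]
        p[:, 1:-1] = p_gt[:, :-1] - p_gt[:, 1:]
        p[:, -1] = p_gt[:, -1]
        return p.argmax(axis=1)

# usage: lagged wind speeds as features, next-step speed binned into 4 ordered classes
rng = np.random.default_rng(0)
speeds = np.abs(rng.normal(8, 3, 500))
X = np.column_stack([speeds[:-3], speeds[1:-2], speeds[2:-1]])
y = np.digitize(speeds[3:], bins=[5, 8, 11])     # ordered ranges 0..3
print(OrdinalClassifier(4).fit(X, y).predict(X[:5]))
```

A nominal classifier would treat the four ranges as unrelated labels, whereas the cumulative decomposition lets every binary model see the ordering.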
['Kinzang Chhogyal', 'Abhaya C. Nayak', 'Abdul Sattar']
Probabilistic Belief Contraction: Considerations on Epistemic Entrenchment, Probability Mixtures and KL Divergence
591,940
In many audio applications, digital all-pass filters are of central importance; a key property of such filters is energy (l2 norm) preservation. In audio effect and sound synthesis algorithms, it is desirable to have filters that behave as all-passes with time-varying characteristics, but direct generalizations of time-invariant designs can lose the important norm-preserving property; for fast parameter variation, large gain increases are possible. We here call attention to some simple time-varying filter structures, based on wave digital filter designs, that do preserve signal energy and that reduce to simple first- and second-order all-pass filters in the time-invariant case.
['Stefan Bilbao']
Time-varying generalizations of all-pass filters
225,418
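For context, the time-invariant first-order all-pass that the abstract says the structures reduce to can be written directly; the sketch below checks its norm-preserving behavior numerically. The coefficient value and test signal are illustrative assumptions, and the energy-preserving time-varying wave digital structures themselves are not reproduced here.

```python
import numpy as np

def first_order_allpass(x, a):
    """Classical first-order all-pass: y[n] = a*x[n] + x[n-1] - a*y[n-1].
    With a constant coefficient a (|a| < 1) the magnitude response is exactly 1;
    the paper's point is that naive time-varying versions of this form are not
    norm-preserving."""
    y = np.zeros_like(x, dtype=float)
    x1 = y1 = 0.0                      # unit-delay states
    for n, xn in enumerate(x):
        y[n] = a * xn + x1 - a * y1
        x1, y1 = xn, y[n]
    return y

# usage: white noise in, check that the l2 norm is (approximately) preserved
rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
y = first_order_allpass(x, a=0.7)
print(np.linalg.norm(x), np.linalg.norm(y))   # nearly equal for a long signal
```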
['Hao-Teng Fan', 'Wen-hsiang Tu', 'Jeih-weih Hung']
A Study of Sub-band Feature Statistics Compensation Techniques Based on a Discrete Wavelet Transform for Robust Speech Recognition [In Chinese].
791,249
['Gholamreza Alirezaei', 'Johannes Schmitz']
Geometrical sensor selection in large-scale high-density sensor networks
697,439
Estimating emotions while reading enables new services such as comic recommendation. Most existing emotion estimation systems employ bulky devices. Furthermore, few applications have been developed for analyzing emotions while reading. The purpose of our research is to develop a method for estimating emotions while reading. As the reading material, we select comics, which stimulate emotions often more than other types of documents. As we want our system to be easily usable, we selected sensors embedded in a wristband and an eye tracker. Emotions can be described along two dimensions called emotional valence and arousal. As a first step, we propose in this paper to estimate the emotional arousal. We analyze the electrodermal activity, blood volume pulse, heart rate, skin temperature and pupil diameter of a subject to estimate whether the reader feels high or low arousal while reading. Our experiment shows that for some participants, the arousal can be estimated accurately.
['Mizuki Matsubara', 'Olivier Augereau', 'Charles Lima Sanches', 'Koichi Kise']
Emotional arousal estimation while reading comics based on physiological signal analysis
980,798
Models of rectangular grid structures were constructed in the form of a colored Petri net. The basic model consists of a matrix of switching nodes that deliver packets to computing nodes which are attached to the matrix borders and produce and consume packets. Since grid structures are often employed to solve boundary value problems, square and torus surfaces were studied and generalized to hypercube and hypertorus in multidimensional space using a grid node that aggregates switching and computing nodes. Traffic guns were added to the models to represent traffic attacks. Simulation in CPN Tools revealed simple and dangerous traffic gun configurations, such as a traffic duel, focus, crossfire, and side shot, which bring the grid to complete deadlock at less than 5 % of the grid peak load. Comparably low gun intensity targeted to induce deadlock areas within a grid (network) is a key characteristic of disguised traffic attacks. The aim of future work will be to develop counter-measures for these attacks.
['D. A. Zaitsev', 'Tatiana R. Shmeleva', 'Werner Retschitzegger', 'Birgit Pröll']
Security of grid structures under disguised traffic attacks
817,680
['Kesami Sano', 'Mariko Matsuki', 'Satoko Tsuru', 'Fumiko Wako', 'Junko Yamasaki', 'Satoko Yamaji', 'Satsuki Tanahashi', 'Sawako Kawamura']
The Nursing Care Contents for the Visiting Nursing using PCAPS.
751,500
['Christoph Berkholz', 'Oleg Verbitsky']
Bounds for the quantifier depth in two-variable logic
799,540
['Fabio Leuzzi', 'Stefano Ferilli']
Reasoning by Analogy Using Past Experiences.
738,312
A novel perspective on fuzzy control is introduced, combining a continuous Takagi-Sugeno fuzzy system (TSFS) with a discrete-time recurrent fuzzy system (RFS). The developed hybrid dynamic recurrent Takagi-Sugeno fuzzy approach enables a recurrent rule base, which leads to a dynamical interpolation law between linear subsystems. The formalism is applicable to switched systems and hybrid automaton models. The stability of the switched systems is ensured by a common quadratic Lyapunov function. Additionally, two multiple-Lyapunov-function-based stability relaxation conditions are shown. For the hybrid automaton case, practical stability is analyzed. The performance of the approach is validated by simulating a car distance control system (hybrid automaton) and by an experimental application to an inverted pendulum (switched control).
['Klaus Diepold', 'Sebastian J. Pieczona']
Recurrent Takagi-Sugeno fuzzy interpolation for switched linear systems and hybrid automata
65,995
Acceleration signals have a powerful disturbance rejection potential in rigid body motion control, as they carry a measure proportional to the resulting force. Yet, they are seldom used, since measuring, decoupling, and utilizing the dynamic acceleration in the control design is not trivial. This paper discusses these topics and presents a solution for marine vessels building on conventional methods together with a novel control law design, where the dynamic acceleration signals are used to form a dynamic referenceless disturbance feedforward compensation. This replaces conventional integral action and enables unmeasured external loads and unmodeled dynamics to be counteracted with low time lag. A case study shows the feasibility of the proposed design using experimental data and closed-loop high fidelity simulations of dynamic positioning in a harsh cold climate environment with sea-ice.
['Øivind Kåre Kjerstad', 'Roger Skjetne']
Disturbance Rejection by Acceleration Feedforward for Marine Surface Vessels
714,460
We study an online job scheduling problem arising in networks with aggregated links. The goal is to schedule n jobs, divided into k disjoint chains, on m identical machines, without preemption, so that the jobs within each chain complete in the order of release times and the maximum flow time is minimized. We present a deterministic online algorithm $\mathsf{Block}$ with competitive ratio $O(\sqrt{n/m})$, and show a matching lower bound, even for randomized algorithms. The performance bound for $\mathsf{Block}$ we derive in the paper is, in fact, more subtle than a standard competitive ratio bound, and it shows that in overload conditions (when many jobs are released in a short amount of time), $\mathsf{Block}$'s performance is close to the optimum. We also show how to compute an offline solution efficiently for k=1, and that minimizing the maximum flow time for k,m≥2 is $\mathcal{NP}$-hard. As by-products of our method, we obtain two offline polynomial-time algorithms for minimizing makespan: an optimal algorithm for k=1, and a 2-approximation algorithm for any k.
['Wojciech Jawor', 'Marek Chrobak', 'Christoph Dürr']
Competitive Analysis of Scheduling Algorithms for Aggregated Links
37,808
['Laszlo Blazovics', 'Tamás Lukovszki', 'Bertalan Forstner']
Surrounding Robots – A Discrete Localized Solution for the Intruder Problem –
741,844
2D Echocardiography is an important diagnostic aid for morphological and functional assessment of the heart. The transducer position is varied during an echo exam to elicit important information about the heart function and its anatomy. The knowledge of the transducer viewpoint is important in automatic cardiac echo interpretation to understand the regions being depicted as well as in the quantification of their attributes. In this paper, we address the problem of inferring the transducer viewpoint from the spatio-temporal information in cardiac echo videos. Unlike previous approaches, we exploit motion of the heart within a cardiac cycle in addition to spatial information to discriminate between viewpoints. Specifically, we use an active shape model (ASM) to model shape and texture information in an echo frame. The motion information derived by tracking ASMs through a heart cycle is then projected into the eigen-motion feature space of the viewpoint class for matching. We report comparison with a re-implementation of state-of-the-art view recognition methods in echos on a large database of patients with various cardiac diseases.
['David Beymer', 'Tanveer Fathima Syeda-Mahmood', 'Fei Wang']
Exploiting spatio-temporal information for view recognition in cardiac echo videos
153,729
In traditional distributed computing systems a few user types are found, having rather "flat" profiles, mainly because the users belong to the same administrative domain. This is quite different in Computational Grids (CGs), in which several user types co-exist and make use of resources according to the hierarchical nature of the system and the presence of multiple administrative domains. One implication of the existence of different hierarchical levels in CGs is that it imposes different access and usage policies on resources. In this paper we first highlight the most common Grid user types and their relationships and access scenarios in CGs, corresponding to old (e.g., performance) and new (e.g., security) requirements. Then, we identify and analyze new features arising in users' behavior in Grid scheduling, such as dynamic, selfish, cooperative, trustful, symmetric and asymmetric behaviors. We also discuss how computational economy-based approaches, such as market mechanisms, and computational paradigms, such as Neural Networks, can be used to model user requirements and predict users' behaviors in CGs. As a result of this study we provide a comprehensive analysis of Grid user scenarios that can serve as a basis for application designers in CGs.
['Joanna Kolodziej', 'Fatos Xhafa']
Modelling of User Requirements and Behaviors in Computational Grids
274,680
['Abe Davis', 'Michael Rubinstein', 'Neal Wadhwa', 'Gautham J. Mysore', 'William T. Freeman', 'Frédo Durand']
The visual microphone: Passive recovery of sound from video
598,832
We describe our efforts to create infrastructure to enable web interfaces for robotics. Such interfaces will enable researchers and users to remotely access robots through the internet as well as expand the types of robotic applications available to users with web-enabled devices. This paper centers on rosjs, a lightweight Javascript binding for ROS, Willow Garage's robot middleware framework. rosjs exposes many of the capabilities of ROS, allowing application developers to write controllers that are executed through a web browser. We discuss how rosjs extends ROS and briefly overview some of the features it provides. rosjs has been instrumental in the creation of remote laboratories featuring the iRobot Create and the PR2. These facilities will be available to the community as experimental resources. We describe the overall goals of this project as well as provide a brief description of how rosjs was used to help create web interfaces for these facilities.
['Sarah Osentoski', 'Graylin Jay', 'Christopher Crick', 'Benjamin Pitzer', 'Charles DuHadway', 'Odest Chadwicke Jenkins']
Robots as web services: Reproducible experimentation and application development using rosjs
92,131
['K. Watanabe', 'Masaru Fukushi', 'Michitaka Kameyama']
Adaptive Group-Based Job Scheduling for High Performance and Reliable Volunteer Computing
230,260
In this paper, a causal optimal controller based on Nonlinear Model Predictive Control (NMPC) is developed for a power-split Hybrid Electric Vehicle (HEV). The global fuel minimization problem is converted to a finite horizon optimal control problem with an approximated cost-to-go, using the relationship between the Hamilton-Jacobi-Bellman (HJB) equation and the Pontryagin's minimum principle. A nonlinear MPC framework is employed to solve the problem online. Different methods for tuning the approximated minimum cost-to-go as a design parameter of the MPC are discussed. Simulation results on a validated high-fidelity closed-loop model of a power-split HEV over multiple driving cycles show that with the proposed strategy, the fuel economies are improved noticeably with respect to those of an available controller in the commercial Powertrain System Analysis Toolkit (PSAT) software and a linear time-varying MPC controller previously developed by the authors.
['H. Ali Borhan', 'Chen Zhang', 'Ardalan Vahidi', 'Anthony Mark Phillips', 'Ming Kuang', 'S. Di Cairano']
Nonlinear Model Predictive Control for power-split Hybrid Electric Vehicles
123,783
The software product line aims at the effective utilization of software assets, reducing the time required to deliver a product, improving the quality, and decreasing the cost of software products. Organizations trying to incorporate this concept require an approach to assess the current maturity level of the software product line process in order to make management decisions. A decision support tool for assessing the maturity of the software product line process is developed to implement the fuzzy logic approach, which handles the imprecise and uncertain nature of software process variables. The proposed tool can be used to assess the process maturity level of a software product line. Such knowledge will enable an organization to make crucial management decisions. Four case studies were conducted to validate the tool, and the results of the studies show that the software product line decision support tool provides a direct mechanism to evaluate the current software product line process maturity level within an organization.
['Faheem Ahmed', 'Luiz Fernando Capretz']
A Decision Support Tool for Assessing the Maturity of Software Product Line Process
617,910
Higher order iterative learning control (HO-ILC) algorithms use past system control information from more than one past iterative cycle. This class of ILC algorithms has been proposed with the aim of improving learning efficiency and performance. This paper addresses the optimality of HO-ILC in the sense of minimizing the trace of the control error covariance matrix in the presence of a class of uncorrelated random disturbances. It is shown that the optimal weighting matrices corresponding to the control information associated with more than one cycle preceding the current cycle are zero. That is, an optimal HO-ILC does not add to the optimality of standard first-order ILC in the sense of minimizing the trace of the control error covariance matrix. The system under consideration is a linear discrete time-varying system with a different relative degree between the input and each output.
['Samer S. Saab']
Optimality of first-order ILC among higher order ILC
364,743
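A standard first-order (P-type) ILC update, the scheme whose optimality the paper establishes, looks like the toy sketch below on a scalar discrete-time plant; the plant parameters, learning gain and reference trajectory are illustrative assumptions chosen so that the error contracts monotonically from trial to trial.

```python
import numpy as np

def simulate(u, a=0.3, b=1.0):
    """One trial of the toy plant x[t+1] = a*x[t] + b*u[t], measured output y[t] = x[t+1]."""
    x, y = 0.0, np.zeros(len(u))
    for t in range(len(u)):
        x = a * x + b * u[t]
        y[t] = x
    return y

# first-order (P-type) ILC: u_{k+1}[t] = u_k[t] + gamma * e_k[t]
T, gamma = 50, 0.5
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))
u = np.zeros(T)
for k in range(30):
    e = y_ref - simulate(u)             # trial-k tracking error over the whole trajectory
    u = u + gamma * e                   # update the entire input trajectory for the next trial
    if k % 10 == 0:
        print(f"iteration {k:2d}, max tracking error = {np.abs(e).max():.2e}")
```

A higher-order variant would also add weighted inputs and errors from trials k-1, k-2, and so on; the paper's result is that, for the disturbance class it considers, the optimal weights on those older trials are zero.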
We present a scalable and incremental approach for creating interactive image-based walkthroughs from a dynamically growing collection of photographs of a scene. Prior approaches, such as [16], perform a global scene reconstruction as they require the knowledge of all the camera poses. These are recovered via batch processing involving pairwise image matching and structure from motion (SfM), on collections of photographs. Both steps can become computational bottlenecks for large image collections. Instead of computing a global reconstruction and all the camera poses, our system utilizes several partial reconstructions, each of which is computed from only a small subset of overlapping images. These subsets are efficiently determined using a Bag of Words-based matching technique. Our framework easily allows an incoming stream of new photographs to be incrementally inserted into an existing reconstruction. We demonstrate that an image-based rendering framework based on only partial scene reconstructions can be used to navigate large collections containing thousands of images without sacrificing the navigation experience. As our system is designed for incremental construction from a stream of photographs, it is well suited for processing ever-growing photo collections.
['Kumar Srijan', 'Syed Shabib Ahsan', 'C. V. Jawahar', 'Sudipta N. Sinha']
Image-based walkthroughs from incremental and partial scene reconstructions
250,169
['Yanka Todorova', 'Petko Ruskov', 'Elissaveta Gourova', 'Mark Harris']
Applied Pattern for Strategy Management of Technology Entrepreneurship and Innovation MSc Program.
790,656
Energy-aware traffic engineering (ETE) has been gaining increasing research attention due to the cost reduction benefits that it can offer to network operators and for environmental reasons. While numerous approaches exist which attempt to provide energy reduction benefits by intelligently manipulating network devices and their configurations, most of them suffer from one fundamental shortcoming: even minor adaptations to a given IP network topology configuration lead to temporary service disruptions incurred by routing reconvergence, which makes these schemes less appealing to network operators. The more frequently the IP topology reconfigurations take place in order to optimize the network performance against dynamic traffic demands, the more frequently service disruptions will occur to end users. Motivated by the essential requirement for network operators to enable seamless service assurance, we put forward a framework for disruption-free ETE, which leverages selective link sleeping and wake-up operations in a disruption-free manner. The framework allows for maximizing the opportunities for disruption-free reconfigurations based on intelligent IGP link weight settings, assisted by a dynamic scheme that optimizes the reconfigurations in response to changing traffic conditions. As our simulation-based evaluation shows, the framework is capable of achieving significant energy saving gains while at the same time ensuring robustness in terms of disruption avoidance and resilience to congestion.
['Obinna Okonor', 'Ning Wang', 'Stylianos Georgoulas', 'Zhili Sun']
Green Link Weights for Disruption-Free Energy-Aware Traffic Engineering
868,122
A vital extension to partial least squares (PLS) path modeling is introduced: consistency. While maintaining all the strengths of PLS, the consistent version provides two key improvements. Path coefficients, parameters of simultaneous equations, construct correlations, and indicator loadings are estimated consistently. The global goodness-of-fit of the structural model can also now be assessed, which makes PLS suitable for confirmatory research. A Monte Carlo simulation illustrates the new approach and compares it with covariance-based structural equation modeling.
['Theo K. Dijkstra', 'Jörg Henseler']
Consistent and asymptotically normal PLS estimators for linear structural equations
329,552
The pervasive use of pointers with complicated patterns in C programs often constrains compiler alias analysis to yield conservative register allocation and promotion. Speculative register promotion with hardware support has the potential to more aggressively promote memory references into registers in the presence of aliases. This paper studies the use of the advanced load address table (ALAT), a data speculation feature defined in the IA-64 architecture, for speculative register promotion. An algorithm for speculative register promotion based on partial redundancy elimination is presented. The algorithm is implemented in Intel's open research compiler (ORC). Experiments on SPEC CPU2000 benchmark programs are conducted to show that speculative register promotion can improve performance of some benchmarks by 1% to 7%.
['Jin Lin', 'Tong Chen', 'Wei-Chung Hsu', 'Pen-Chung Yew']
Speculative register promotion using advanced load address table (ALAT)
384,387
A web service choreography describes a global protocol of interactions among a set of cooperating services. For dynamic composition, changing interconnections by channel passing between services is necessary. In this paper we use model checking techniques to verify problems related to channel passing in choreography. We develop a framework: for each kind of property to be verified, we define an abstraction function based on it, which maps each basic interaction into a pair of pre- and post-conditions; we then propose a compositional approach to translate choreographies into models for model checkers. A number of examples are presented to show how the verification is carried out.
['Liyang Peng', 'Chao Cai', 'Qiu Zongyan', 'Geguang Pu']
Verification of channel passing in choreography with model checking
441,522
['Richard D. Lindsley', 'David G. Long']
Enhanced-Resolution Reconstruction of ASCAT Backscatter Measurements
692,483
Networks with homogeneous routing nodes are constantly at risk as any vulnerability found against a node could be used to compromise all nodes. Introducing diversity among nodes can be used to address this problem. With few variants, the choice of assignment of variants to nodes is critical to the overall network resiliency. We present the Diversity Assignment Problem (DAP), the assignment of variants to nodes in a network, and we show how to compute the optimal solution in medium-size networks. We also present a greedy approximation to DAP that scales well to large networks. Our solution shows that a high level of overall network resiliency can be obtained even from variants that are weak on their own. For real-world systems that grow incrementally over time, we provide an online version of our solution. Lastly, we provide a variation of our solution that is tunable for specific applications (e.g., BFT).
['Andrew Newell', 'Daniel Obenshain', 'Thomas Tantillo', 'Cristina Nita-Rotaru', 'Yair Amir']
Increasing network resiliency by optimally assigning diverse variants to routing nodes
213,693
Conway and Sloane (1990) have listed the possible weight enumerators for extremal self-dual codes up to length 72. In this correspondence, we construct extremal singly-even self-dual [60,30,12] codes whose weight enumerator does not appear in this list. In addition, we present the possible weight enumerators for extremal self-dual codes of length 60.
['T.A. Gulliver', 'Masaaki Harada']
Weight enumerators of extremal singly-even [60,30,12] codes
378,959
In high-order, long-dead-time processes, inferential control systems will generally outperform conventional feedback control systems. To implement inferential control the user must know how to (i) tune the inferential controller on-line, (ii) avoid control degradation due to saturation of the control effort, (iii) smooth manual-automatic switching, and (iv) design a cascade control system. In addition, the user should be able to determine whether the benefits of inferential control justify its selection over a PID control system. This report presents ways to meet these requirements and applies the methods to an industrial autoclave plant and to a laboratory heat exchange system.
['Judy Parrish', 'Coleman B. Brosilow']
Paper: Inferential control applications
637,346
The Whistler text-to-speech engine was designed so that we can automatically construct the model parameters from training data. This paper describes in detail the design issues in constructing the synthesis unit inventory automatically from speech databases. The automatic process includes (1) determining the scalable synthesis units, which can reflect spectral variations of different allophones; (2) segmenting the recorded sentences into phonetic segments; and (3) selecting good instances of each synthesis unit to generate the best synthetic sentences at run time. These processes are all derived through the use of probabilistic learning methods which are aimed at the same optimization criteria. Through this automatic unit generation, Whistler can automatically produce synthetic speech that sounds very natural and resembles the acoustic characteristics of the original speaker.
['Hsiao-Wuen Hon', 'Alex Acero', 'Xuedong Huang', 'Jingsong Liu', 'Mike Plumpe']
Automatic generation of synthesis units for trainable text-to-speech systems
230,781
Graphs are popularly used to model structural relationships between objects. In many application domains such as social networks, sensor networks and telecommunication, graphs evolve over time. In this paper, we study a new problem of discovering the subgraphs that exhibit significant changes in evolving graphs. This problem is challenging since it is hard to define changing regions that are closely related to the actual changes (i.e., additions/deletions of edges/nodes) in graphs. We formalize the problem, and design an efficient algorithm that is able to identify the changing subgraphs incrementally. Our experimental results on real datasets show that our solution is very efficient and the resultant subgraphs are of high quality.
['Zheng Liu', 'Jeffrey Xu Yu', 'Yiping Ke', 'Xuemin Lin', 'Lei Chen']
Spotting Significant Changing Subgraphs in Evolving Graphs
98,422
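As a rough illustration of the underlying notion of change between graph snapshots, the sketch below groups added and deleted edges into connected "changing regions" using networkx. It is a crude stand-in for intuition only, not the paper's incremental algorithm or its significance measure; the example snapshots and the function name are assumptions.

```python
import networkx as nx

def changing_regions(g_old, g_new):
    """Edges added or deleted between two snapshots, grouped into connected
    'changing regions' (a crude stand-in for changing subgraphs)."""
    added = {e for e in g_new.edges() if not g_old.has_edge(*e)}
    deleted = {e for e in g_old.edges() if not g_new.has_edge(*e)}
    delta = nx.Graph()
    delta.add_edges_from(added | deleted)
    return [delta.subgraph(c).copy() for c in nx.connected_components(delta)]

# usage: two snapshots of a small graph
g1 = nx.Graph([(1, 2), (2, 3), (4, 5), (6, 7)])
g2 = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (6, 8)])
for region in changing_regions(g1, g2):
    print(sorted(region.edges()))
```

The hard part the paper addresses is doing this incrementally and deciding which of these regions are significant, rather than simply diffing two snapshots.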
Radio resource management (RRM) plays an important role in wireless communication systems, especially in more advanced systems with more constraint conditions. In this paper, we first propose a generalized water-filling approach to solve the power allocation problem of minimizing sum power while meeting the target sum rate constraint with weights. Based on this sum power objective function, we extend the proposed method to more complicated RRM problems with more stringent constraints. The proposed algorithms with this generalized approach possess several distinguished features. They provide exact optimal solutions based on non-derivative methods, as the implementation of the proposed algorithms invokes neither the derivative nor the gradient. With geometric interpretation, the proposed algorithms provide more insights into and intuitions of the problems and could be used to efficiently solve a family of the sum power minimization problems. Optimality of the proposed algorithms is strictly proved. Numerical results that illustrate the steps and demonstrate efficiency of the proposed algorithms are presented.
['Peter He', 'Lian Zhao']
Solving a Class of Sum Power Minimization Problems by Generalized Water-Filling
569,210
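The basic, unweighted version of the sum-power-minimization problem described above has the textbook water-filling solution p_i = max(0, mu - n_i) over parallel Gaussian channels. The sketch below finds the water level for a sum-rate target; the channel noise values and target rate are illustrative assumptions, and the paper's generalized, weighted formulation is not reproduced.

```python
import numpy as np

def waterfill_min_power(noise, target_rate):
    """Minimize total power over parallel Gaussian channels subject to
    sum_i log2(1 + p_i / n_i) >= target_rate, via p_i = max(0, mu - n_i)."""
    n = np.sort(np.asarray(noise, dtype=float))
    for k in range(len(n), 0, -1):              # try the k least-noisy channels
        mu = 2.0 ** ((target_rate + np.sum(np.log2(n[:k]))) / k)
        if mu > n[k - 1]:                       # all k channels really get power
            break
    p = np.maximum(0.0, mu - np.asarray(noise, dtype=float))
    return p, mu

# usage: total target of 6 bits per channel use over four channels
noise = [0.1, 0.5, 1.0, 2.0]
p, mu = waterfill_min_power(noise, target_rate=6.0)
print("water level", mu, "powers", p,
      "achieved rate", np.sum(np.log2(1 + p / np.array(noise))))
```

Because each active channel contributes log2(mu/n_i) to the rate, the water level for a candidate active set follows in closed form, which is why no derivative or gradient is needed for this basic case either.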
Scan design has been widely used to ease the test generation process for digital circuits. Although the full scan approach results in high fault coverage while reducing ATPG effort, it introduces area and performance overheads that are often unacceptable. Hence, partial scan is a commonly used technique to improve the testability of sequential circuits while respecting design constraints. In this paper, we present a method to select sequential elements (flip-flops) to compose a partial scan chain. We use a software engineering technique to identify internal variables or signals of the circuit's behavioral description that have low observability. Experiments demonstrate that our approach achieves high fault coverage while including few flip-flops in the scan chain. Moreover, comparative results show that, for complex circuits, the proposed technique is more efficient than some classical methods in selecting flip-flops to compose partial scan.
['Margrit Reni Krug', 'Marcelo de Souza Moraes', 'Marcelo Lubaszewski']
Using a software testing technique to identify registers for partial scan implementation
378,449
['Ngoc Khuyen Le', 'Anaïs Vergne', 'Philippe Martins', 'Laurent Decreusefond']
Distributed Computation of the Cech Complex and Applications in Wireless Networks
968,372
Our goal is to decompose whole slide images (WSI) of histology sections into distinct patches (e.g., viable tumor, necrosis) so that statistics of distinct histopathology can be linked with the outcome. Such an analysis requires a large cohort of histology sections that may originate from different laboratories, which may not use the same protocol in sample preparation. We have evaluated a method based on a variation of the restricted Boltzmann machine (RBM) that learns intrinsic features of the image signature in an unsupervised fashion. Computed code, from the learned representation, is then utilized to classify patches from a curated library of images. The system has been evaluated against a dataset of small image blocks of 1k-by-1k that have been extracted from glioblastoma multiforme (GBM) and clear cell kidney carcinoma (KIRC) from the cancer genome atlas (TCGA) archive. The learned model is then projected on each whole slide image (e.g., of size 20k-by-20k pixels or larger) for characterizing and visualizing tumor architecture. In the case of GBM, each WSI is decomposed into necrotic, transition into necrosis, and viable. In the case of the KIRC, each WSI is decomposed into tumor types, stroma, normal, and others. Evaluation of 1400 and 2500 samples of GBM and KIRC indicates a performance of 84% and 81%, respectively.
['Nandita M. Nayak', 'Hang Chang', 'Alexander D. Borowsky', 'Paul T. Spellman', 'Bahram Parvin']
Classification of tumor histopathology via sparse feature learning
138,394
As the number of cores per chip increases, maintaining cache coherence becomes prohibitive for both power and performance. Non Coherent Cache (NCC) architectures do away with hardware-based cache coherence, but become difficult to program. Some existing architectures provide a middle ground by providing some shared memory in the hardware. Specifically, the 48-core Intel Single-chip Cloud Computer (SCC) provides some off-chip (DRAM) shared memory and some on-chip (SRAM) shared memory. We call such architectures Hybrid Shared Memory, or HSM, manycore architectures. However, how to efficiently execute multi-threaded programs on HSM architectures is an open problem. To be able to execute a multi-threaded program correctly on HSM architectures, the compiler must: i) identify all the shared data and map it to the shared memory, and ii) map the frequently accessed shared data to the on-chip shared memory. In this paper, we present a source-to-source translator written using CETUS (Dave et al. [1]) that identifies a conservative superset of all the shared data in a multi-threaded application, and maps it to the off-chip shared memory such that it enables execution on HSM architectures. This improves the performance of our benchmarks by 32x. Following, we identify and map the frequently accessed shared data to the on-chip shared memory. This further improves the performance of our benchmarks by 8x on average.
['Tushar Rawat', 'Aviral Shrivastava']
Enabling multi-threaded applications on hybrid shared memory manycore architectures
136,466
Clustering based on sparse representation is an important technique in machine learning and data mining. However, it is time-consuming because it constructs an l1-graph by solving an l1-minimization problem for each sample, with all other samples as the dictionary. This paper focuses on improving the efficiency of clustering based on sparse representation. Specifically, the Spectral Clustering Algorithm based on Local Sparse Representation (SCAL) is proposed. For a given sample, the algorithm solves the l1-minimization problem with the local k-nearest neighborhood as the dictionary, constructs the similarity matrix by computing the sparsity-induced similarity (SIS) of the sparse coefficient solutions, and then applies spectral clustering with this similarity matrix to cluster the samples. Experiments on the face recognition data sets ORL and Extended Yale B demonstrate that the proposed SCAL achieves better clustering performance with lower time consumption. (A simplified sketch of this procedure follows this entry.)
['Sen Wu', 'Min Quan', 'Xiaodong Feng']
Spectral Clustering Algorithm Based on Local Sparse Representation
682,145
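The SCAL procedure summarized in the entry above can be illustrated with a short, hypothetical sketch. It is not the authors' code: it assumes scikit-learn's Lasso as the l1 solver, uses normalized absolute coefficients as a stand-in for the paper's sparsity-induced similarity, and runs on toy data.

# Minimal sketch of spectral clustering with local sparse representation (SCAL-style).
# Assumptions (not from the paper): sklearn's Lasso as the l1 solver and
# normalized |coefficient| values as a stand-in for sparsity-induced similarity (SIS).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import SpectralClustering

def local_sparse_similarity(X, k=10, alpha=0.01):
    n = X.shape[0]
    W = np.zeros((n, n))
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    for i in range(n):
        neighbors = idx[i, 1:]                # exclude the sample itself
        D = X[neighbors].T                    # dictionary: local k nearest neighbors
        coef = Lasso(alpha=alpha, max_iter=5000).fit(D, X[i]).coef_
        w = np.abs(coef)
        if w.sum() > 0:
            w /= w.sum()                      # normalized sparse weights as similarity
        W[i, neighbors] = w
    return 0.5 * (W + W.T)                    # symmetrize for spectral clustering

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(4, 1, (50, 20))])
    S = local_sparse_similarity(X, k=10)
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(S)
    print(labels)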
In this paper the connectivity problem in wireless ad-hoc sensor networks is discussed. Asymptotic cases are considered. A relationship between coverage and connectivity in these networks is stated, generalized, and used to find upper bounds for the transmitting capability of sensors. We introduce concepts such as multiconnectivity and infinite-connectivity and discuss their relation to fault tolerance in a sensor network. A sufficient condition for the order of connectivity to grow faster than the number of nodes is stated and proven.
['M.A. Khajehnejad', 'Anoosheh Heidarzadeh', 'Said Nader Esfahani']
A sufficient condition for infinite connectivity in two dimensional ad-hoc wireless sensor networks
376,133
This paper is devoted to efficient algorithms for real-time rendering of a seashore using the programmable Graphics Processing Unit (GPU). The seashore scene is a usual component of virtual environments in simulators or games and should be both realistic and real-time. We realize the real-time seashore simulation in three steps: first, ocean wave generation, using a simple but efficient model that can describe both shallow and deep ocean; second, imitation of optical effects; and third, the interaction of ocean waves with the coast, including a mathematically designed coastline and breaking waves modeled by 3D Bézier curved surfaces via metamorphosis and key-frame animation. Scenes under different atmospheric conditions are also presented in this paper.
['Minzhi Luo', 'Guanghong Gong', 'Abdelkader El Kamel']
GPU-based real-time virtual reality modeling and simulation of seashore
390,309
In a radio frequency (RF) energy harvesting (EH) cognitive radio network (CRN), the EH secondary users (SUs) access an idle channel to transmit data and an occupied channel to harvest energy. Therefore, the EH SUs can reliably transmit data only if sufficient energy and an idle channel are available. In this paper, we analyze the probability that the EH SUs completely run out of energy, and the achievable throughput of the EH SUs is derived accordingly. To improve the throughput of the SUs, we consider a 2-channel sensing scheme in which the EH SUs are allowed to sequentially sense up to 2 channels to further search for data transmission opportunities. Consequently, the opportunities for data transmission increase, while fewer opportunities are available for energy harvesting. To validate the proposed analysis, we use Monte Carlo simulation, and the simulated values agree reasonably well with the analytical ones. (A toy Monte Carlo sketch in the same spirit follows this entry.)
['Shanai Wu', 'Yoan Shin', 'Jin Young Kim', 'Dong In Kim']
Energy outage and achievable throughput in RF energy harvesting cognitive radio networks
963,716
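As a rough illustration of the Monte Carlo style of validation mentioned above, the toy model below simulates a single EH SU that harvests one unit of energy when the sensed channel is busy and spends one unit to transmit when it is idle, then estimates an empirical energy-outage probability and throughput. The unit energy quanta, i.i.d. channel occupancy, and single-channel sensing are simplifying assumptions and do not reproduce the paper's 2-channel system model.

# Toy Monte Carlo model of an RF-energy-harvesting secondary user (illustrative only).
# Assumptions: unit energy per harvest/transmission, i.i.d. channel occupancy,
# single-channel sensing per slot (a simplification of the paper's 2-channel scheme).
import random

def simulate(p_idle=0.4, battery_cap=5, slots=100_000, seed=1):
    random.seed(seed)
    energy, outages, transmissions = 0, 0, 0
    for _ in range(slots):
        channel_idle = random.random() < p_idle
        if channel_idle:
            if energy >= 1:
                energy -= 1          # spend energy to transmit on the idle channel
                transmissions += 1
            else:
                outages += 1         # idle channel found, but no energy left
        else:
            energy = min(battery_cap, energy + 1)  # harvest from the busy channel
    return outages / slots, transmissions / slots

if __name__ == "__main__":
    outage_prob, throughput = simulate()
    print(f"energy outage prob ~ {outage_prob:.3f}, throughput ~ {throughput:.3f} pkt/slot")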
Applications that require stringent reliability with delay constraints, such as industrial process automation and patient monitoring, have emerged as a new application area of wireless sensor networks (WSNs). This paper presents a per-flow reliability- and delay-aware scheduling algorithm for industrial WSNs. The proposed method adds time slots for retransmission to each link in order to satisfy the required reliability. Furthermore, by sharing retransmission slots among different flows, the proposed method achieves the required reliability and forwarding delay while limiting the increase in the number of allocated slots. Through simulation-based evaluations, we show that the proposed method can reduce the number of slots allocated to the neighbor links of a sink by up to 70% compared with conventional methods, which helps shorten the slotframe size, while satisfying the requirements on end-to-end reliability and forwarding delay.
['Masafumi Hashimoto', 'Naoki Wakamiya', 'Masayuki Murata', 'Yasutaka Kawamoto', 'Kiyoshi Fukui']
End-to-end reliability- and delay-aware scheduling with slot sharing for wireless sensor networks
695,639
A theoretical basis for evaluating the efficiency of quarantine measures is developed in an SIR model with time delay. In this model, the effectiveness of the closure of public places such as schools in disease control, modeled as a high-degree node in a social network, is evaluated by considering the effect of the time delay in the identification of the infected. In the context of the SIR model, the relation between the number of infectious individuals who are identified with time delay and then quarantined and those who are not identified and continue spreading the virus is investigated numerically. The social network for the simulation is modeled by a scale-free network. Closure measures are applied to those infected nodes with high degrees. The effectiveness of the measure can be controlled by the preset value of the critical degree K_C: only those nodes with degree higher than K_C will be quarantined. The cost C_Q incurred by the closure measure is assumed to be proportional to the total number of links rendered inactive as a result of the measure, and generally decreases with K_C, while the medical cost C_M incurred by virus spreading increases with K_C. The total social cost (C_M + C_Q) therefore has a minimum at a critical value of K_C, which depends on the ratio of the medical cost coefficient and the closure cost coefficient. Our simulation results demonstrate a mathematical procedure to evaluate the efficiency of quarantine measures. Although the numerical work is based on a scale-free network, the procedure can be readily generalized and applied to a more realistic social network to determine the proper closure measure in future epidemics. (A simplified simulation of this cost trade-off follows this entry.)
['Zhenggang Wang', 'Kwok Yip Szeto', 'Frederick C. Leung']
Effectiveness of closure of public places with time delay in disease control.
316,412
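The cost trade-off described above can be mimicked with a small simulation: build a scale-free (Barabasi-Albert) graph with networkx, run a discrete-time SIR process in which infected nodes of degree above K_C are quarantined after a fixed identification delay, and scan K_C for the value minimizing C_M + C_Q. All parameters, the cost definitions, and the epidemic dynamics below are illustrative assumptions rather than the paper's calibrated model.

# Illustrative scan of the closure threshold K_C on a scale-free network.
# Costs: C_M proportional to total infections, C_Q proportional to links
# deactivated by quarantine. Parameters are arbitrary, not from the paper.
import random
import networkx as nx

def run_epidemic(G, k_c, beta=0.05, gamma=0.1, delay=2, seed=7):
    random.seed(seed)
    nodes = list(G.nodes)
    state = {v: "S" for v in nodes}
    infected_since = {}
    seed_node = max(nodes, key=G.degree)          # start the outbreak at a hub
    state[seed_node] = "I"
    infected_since[seed_node] = 0
    total_infected, links_closed = 1, 0
    for t in range(200):
        infected = [v for v in nodes if state[v] == "I"]
        if not infected:
            break
        for v in infected:
            # quarantine high-degree nodes once the identification delay has passed
            if G.degree(v) > k_c and t - infected_since[v] >= delay:
                state[v] = "Q"
                links_closed += G.degree(v)
                continue
            for u in G.neighbors(v):
                if state[u] == "S" and random.random() < beta:
                    state[u] = "I"
                    infected_since[u] = t
                    total_infected += 1
            if random.random() < gamma:
                state[v] = "R"
    return total_infected, links_closed

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2000, 3, seed=3)
    alpha_m, alpha_q = 1.0, 0.5                    # cost coefficients (assumed)
    for k_c in (5, 10, 20, 40, 80):
        inf, closed = run_epidemic(G, k_c)
        print(f"K_C={k_c:3d}  C_M={alpha_m*inf:8.1f}  C_Q={alpha_q*closed:8.1f}  "
              f"total={alpha_m*inf + alpha_q*closed:8.1f}")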
Designing and implementing civic technologies can be hard within public organizations such as government and nonprofit organizations. My dissertation research attempts to contribute toward understanding the socio-technical factors that influence the design and implementation of civic technologies, using a practice lens. By studying three different cases of the civic technology design and implementation process (nonprofit social media use, civic hacking, and data dive events), I aim to identify opportunities and challenges in designing and using civic technologies to support civic engagement.
['Youyang Hou']
Understand the Design and Implementation of Civic Technologies in Public Organizations
687,343
If W_i is the i-th recursively enumerable set in a standard enumeration, then Kleene's theorem asserts that, for every recursive function f, there exists an i that is a fixed point of f in the sense that W_i = W_f(i). Here we are interested in the degrees of fixed-point-free functions, also considering fixed points modulo various equivalence relations on recursively enumerable sets.
['G Carl Jockusch', 'Manuel Lerman', 'Robert I. Soare', 'Robert M. Solovay']
Recursively Enumerable Sets Modulo Iterated Jumps and Extensions of Arslanov's Completeness Criterion
513,372
Many evidence-based trust models require the adjustment of parameters such as aging or exploration factors. What the literature often does not address is the systematic choice of these parameters. In our work, we propose a generic procedure for finding trust model parameters that maximize the expected utility to the trust model user. The procedure is based on game-theoretic considerations and uses a genetic algorithm to cope with the vast number of possible attack strategies. To demonstrate the feasibility of the approach, we apply our procedure to a concrete trust model and optimize the parameters of this model. (An illustrative sketch of such a tuning loop follows this entry.)
['Eugen Staab', 'Thomas Engel']
Tuning Evidence-Based Trust Models
43,252
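In the spirit of the procedure above, the sketch below runs a plain genetic algorithm over two hypothetical trust-model parameters (an aging factor and an exploration factor), scoring each candidate by its average utility against a sampled set of attacker betrayal probabilities. The utility model, parameter names, and GA settings are placeholders, not the concrete trust model evaluated in the paper.

# Toy genetic algorithm for tuning trust-model parameters (aging/exploration factors).
# The utility model and attacker strategies below are illustrative placeholders.
import random

def expected_utility(params, attacks, trials=200, seed=0):
    aging, explore = params
    rng = random.Random(seed)
    total = 0.0
    for p_betray in attacks:                      # sampled attacker strategies
        trust, utility = 0.5, 0.0
        for _ in range(trials):
            interact = rng.random() < max(trust, explore)
            if interact:
                betrayed = rng.random() < p_betray
                utility += -1.0 if betrayed else 1.0
                outcome = 0.0 if betrayed else 1.0
                trust = aging * trust + (1 - aging) * outcome   # evidence aging
        total += utility / trials
    return total / len(attacks)

def genetic_search(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    attacks = [rng.random() for _ in range(10)]
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: expected_utility(p, attacks), reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = tuple(min(1.0, max(0.0, (x + y) / 2 + rng.gauss(0, 0.05)))
                          for x, y in zip(a, b))   # crossover plus mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: expected_utility(p, attacks))

if __name__ == "__main__":
    aging, explore = genetic_search()
    print(f"best aging factor ~ {aging:.2f}, exploration factor ~ {explore:.2f}")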
['Stanislav Bulygin']
Algebraic cryptanalysis of the round-reduced and side channel analysis of the full PRINTCipher-48.
753,338
The adequateness of IIR models for acoustic echo cancellation is a long-standing question, and the answers found in the literature are conflicting. We use the results from rational Hankel norm and least-squares approximation, and we recall a test that provides a priori performance levels for FIR and IIR models. We apply this test to the measured acoustic impulse responses. Upon comparing the performance levels of FIR and IIR models with the same number of free parameters, we do not observe any significant gain from the use of IIR models. We attribute this phenomenon to the shape of the energy spectra of the acoustic impulse responses so tested, which possess many strong and sharp peaks. Faithful modeling of these peaks requires many parameters, irrespective of the type of the model.
['Athanasios P. Liavas', 'Phillip A. Regalia']
Acoustic echo cancellation: do IIR models offer better modeling capabilities than their FIR counterparts?
65,940
Active and semi-supervised learning are combined to reduce the amount of manual labeling needed when training a spoken language understanding classifier. The classifier may be trained with human-labeled utterance data. Some of a group of unselected utterance data may be selected for manual labeling via active learning. The classifier may then be updated, via semi-supervised learning, based on the selected utterance data.
['Gokhan Tur', 'Dilek Hakkani-Tür', 'Robert E. Schapire']
Combining active and semi-supervised learning for spoken language understanding
436,246
One of the most challenging problems in mining gene expression data is to identify how the expression of any particular gene affects the expression of other genes. To elucidate the relationships between genes, association rule mining (ARM) methods have been applied to microarray gene expression data. A conventional ARM method, however, is limited in extracting temporal dependencies between genes, even though such temporal information is indispensable for discovering the underlying regulation mechanisms in biological pathways. In this paper, therefore, we propose a novel method, referred to as temporal association rule mining (TARM), which can extract temporal dependencies among related genes. A temporal association rule has the form [gene A↑, gene B↓] → (7 min) [gene C], which represents that a high expression level of gene A together with significant repression of gene B is followed by significant expression of gene C after 7 minutes. The proposed TARM method is tested with a Saccharomyces cerevisiae cell cycle time-series microarray gene expression data set. In the parameter fitting phase of TARM, the best parameter set [threshold = ±0.8, support cutoff = 3 transactions, confidence cutoff = 90%], which extracted the largest number of correct associations in the KEGG cell cycle pathway, was chosen for the rule mining phase. Furthermore, comparing the precision scores of TARM (0.38) and a Bayesian network (0.16), the TARM method showed better accuracy. With the best parameter set, temporal association rules with five transcriptional time delays (0, 7, 14, 21, 28 minutes) are extracted from the gene expression data of 799 pre-identified cell cycle relevant genes, while a comparably small number of rules is extracted from randomly shuffled expression data of the same genes. From the extracted temporal association rules, associated genes that play the same role in biological processes within a short transcriptional time delay, as well as some temporal dependencies between genes with specific biological processes, are identified. (A toy sketch of the rule counting follows this entry.)
['Hojung Nam', 'Ki Young Lee', 'Doheon Lee']
Identification of temporal association rules from time-series microarray data set: temporal association rules
143,697
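A toy version of the temporal rule extraction described above: expression profiles are discretized into up/down events with a ±0.8 threshold, and a rule of the form [gene A up, gene B down] -> (lag) [gene C up] is scored by counting support and confidence over time-shifted transactions. The data and gene names are synthetic, and the paper's parameter fitting and KEGG-based validation are not reproduced.

# Toy temporal association rule counting over discretized time-series expression data.
# Threshold, lags, and data are illustrative; this is not the paper's TARM pipeline.
import numpy as np

def discretize(series, threshold=0.8):
    """Map expression values to +1 (up), -1 (down) or 0 (no event)."""
    return np.where(series >= threshold, 1, np.where(series <= -threshold, -1, 0))

def rule_stats(events, antecedent, consequent, lag):
    """antecedent: dict gene -> +1/-1; consequent: (gene, +1/-1); lag in time steps."""
    T = next(iter(events.values())).shape[0]
    antecedent_hits = joint_hits = 0
    for t in range(T - lag):
        if all(events[g][t] == s for g, s in antecedent.items()):
            antecedent_hits += 1
            g_c, s_c = consequent
            if events[g_c][t + lag] == s_c:
                joint_hits += 1
    confidence = joint_hits / antecedent_hits if antecedent_hits else 0.0
    return joint_hits, confidence          # support count, confidence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = {g: rng.normal(0, 1, 30) for g in ("geneA", "geneB", "geneC")}
    events = {g: discretize(x) for g, x in raw.items()}
    # rule: [geneA up, geneB down] -> (lag = 1 step) [geneC up]
    sup, conf = rule_stats(events, {"geneA": 1, "geneB": -1}, ("geneC", 1), lag=1)
    print(f"support={sup}, confidence={conf:.2f}")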
This paper presents DARX, our framework for building applications that provide adaptive fault tolerance. It relies on the fact that multi-agent platforms constitute a very strong basis for decentralized software that is both flexible and scalable, and makes the assumption that the relative importance of each agent varies during the course of the computation. DARX regroups solutions which facilitate the creation of multi-agent applications in a large-scale context. Its most important feature is adaptive replication: replication strategies are applied on a per-agent basis with respect to transient environment characteristics such as the importance of the agent for the computation, the network load or the mean time between failures. Firstly, the interwoven concerns of multi-agent systems and fault-tolerant solutions are put forward. An overview of the DARX architecture follows, as well as an evaluation of its performances. We conclude, after outlining the promising outcomes, by presenting prospective work.
['Olivier Marin', 'Marin Bertier', 'Pierre Sens']
DARX - a framework for the fault-tolerant support of agent software
475,185
['Hansang Lee', 'Jiwhan Kim', 'Junmo Kim']
Automatic Salient Object Detection Using Principal Component Analysis
652,369
['Antonio Faonio', 'Jesper Buus Nielsen', 'Daniele Venturi']
Mind Your Coins: Fully Leakage-Resilient Signatures with Graceful Degradation.
738,545
['Anatol W. Holt', 'Thomas W. Malone', 'Ronald K. Stamper', 'Terry Winograd', 'Paul M. Cashman']
From theories to systems
286,457
As ever growing mobile data traffic challenges the economic viability and performance of cellular networks, innovative solutions that harvest idle user-owned network resources are gaining increasing interest. In this work, we propose leasing the wireless bandwidth and cache space of residential 802.11 (WiFi) access points (APs) for offloading mobile data. This solution not only reduces cellular network congestion but, due to caching, also improves the user-perceived network performance without overloading the backhaul links of the APs. To encourage residential users to contribute their bandwidth and cache resources, we design monetary incentive (reimbursement) schemes. The offered reimbursements directly determine the amounts of available bandwidth and cache space in every AP, which in turn affect the caching policy (where to cache each content file) and the routing policy (where to route each mobile data request). In order to reduce the operator's total cost for serving mobile data requests and leasing resources, we introduce a framework for the joint optimization of incentive, caching, and routing policies. Using a novel WiFi usage dataset collected from 167 residences, we show that in densely populated areas with relatively costly network capacity upgrades, our proposal can halve the operator's total cost, while reimbursing each residential user up to 9€ per month.
['Konstantinos Poularakis', 'George Iosifidis', 'Ioannis Pefkianakis', 'Leandros Tassiulas', 'Martin May']
Mobile Data Offloading Through Caching in Residential 802.11 Wireless Networks
671,342
In spaceborne synthetic aperture radar, undersampling at the rate of the pulse repetition frequency causes azimuth ambiguity, which induces ghost into the images. This paper introduces compressed sensing for azimuth ambiguity suppression and presents two novel methods from the perspectives of system design and image formation, known as azimuth random sampling and ambiguity separation, respectively. The first method makes the imaging results for the ambiguity zones as disperse as possible while ensuring that the imaging results for the main scene are affected as little as possible. The second method separates the ambiguity signals from the echoes and achieves imaging results without the ambiguity effect. Simulation results show that the two methods can reduce the ambiguity levels by about 16 dB and 99.37%, respectively.
['Ze Yu', 'Min Liu']
Suppressing azimuth ambiguity in spaceborne SAR images based on compressed sensing
630,301
An algorithm for joint depth estimation and segmentation from multi-view images is presented. The distribution of the luminance of each image pixel is modeled as a random variable, which is approximated by a mixture-of-Gaussians model. After recovering the 3D motion, a reference image is segmented into a fixed number of regions, each characterized by a distinct affine depth model with three parameters. The depth parameters and segmentation masks are iteratively estimated using an expectation-maximization algorithm, similar to that proposed in Sawhney et al. (1996). In addition, the proposed algorithm is extended to cases where more than two images are available. (A simplified mixture-model segmentation sketch follows this entry.)
['Nikos Grammalidis', 'Leonidas Bleris', 'Michael G. Strintzis']
Using the expectation-maximization algorithm for depth estimation and segmentation of multi-view images
18,486
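As a heavily simplified stand-in for the joint depth/segmentation EM described above, the sketch below fits a generic Gaussian mixture (estimated via EM) to per-pixel features with scikit-learn and uses the component assignments as a segmentation. It does not estimate affine depth models or use multi-view motion; it only illustrates the EM-based clustering step.

# Generic EM (Gaussian mixture) segmentation on per-pixel features: a simplified
# stand-in for the joint depth/segmentation EM with affine depth models in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def em_segment(image, n_regions=3, seed=0):
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # features per pixel: luminance plus (scaled) coordinates for spatial coherence
    feats = np.stack([image.ravel(), xs.ravel() / w, ys.ravel() / h], axis=1)
    gmm = GaussianMixture(n_components=n_regions, covariance_type="full",
                          random_state=seed).fit(feats)
    return gmm.predict(feats).reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(0, 0.1, (64, 64))
    img[:, 32:] += 1.0                      # two synthetic "regions"
    labels = em_segment(img, n_regions=2)
    print(np.unique(labels, return_counts=True))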
Numerous conceptual frameworks and different tools and methodologies for estimating service-oriented architecture (SOA) business value have been developed to assess IT/SOA investments effectively, but most of them substantially lack accuracy. Conventional investment valuation methods need to be combined with other modern techniques to reflect SOA's nature as a long-term strategic investment, its inherent uncertainty, and managerial discretion. Thus, in this paper, we perform a comparative analysis of industrial, software-based SOA value assessment applications/tools developed by selected leading IT vendors. We expose their similarities and potential strengths/weaknesses. The goal is achieved through a systematic examination of the available SOA/IT "business value" assessment tools provided by key IT vendors. We also present a consolidated view of the alternatives in the form of feature matrices.
['Lukas Auer', 'Natalia Kryvinska', 'Christine Strauss', 'E.B. Belov']
Software-based Business Applications/Tools to Assess Complex SOA Investments - The Cross-Vendor Comparative Analysis
325,797
Building on an analysis of the Sarbanes-Oxley Act, literature, as well as private and public interviews, this paper hopes to shed light on the potential impact of Sarbanes-Oxley for IT governance, IT budgets, and relationships with vendors and outsourcers. Findings have implications for research as well as practical lessons-learned for American firms, and for IT vendors or other companies doing business with American companies.
['Michelle L. Kaarst-Brown', 'Shirley Kelly']
IT Governance and Sarbanes-Oxley: The Latest Sales Pitch or Real Challenges for the IT Function?
331,924
Moore's law continues to be the engine of growth for the global electronics industry. The understanding of IC degradation mechanisms has resulted in rapid reliability improvements that have enabled the rapid technology progression we have experienced. Going forward, it is clear that the reliability margins the industry has enjoyed in the past will shrink. The question is now whether reliability will pose a constraint on Moore's law. In this talk we discuss reliability issues that can most directly impact the industry's capability to maintain the pace of technology progression required by Moore's law.
['Anthony S. Oates']
Will reliability limit Moore's law?
211,870
Emerging flexible hybrid electronics paradigm integrates traditional rigid integrated circuits and printed electronics on a flexible substrate. This hybrid approach aims to combine the physical benefits of flexible electronics with the computational advantages of the silicon technology. In this paper, we discuss the possibility to implement a physically flexible system capable of sensing, computation and communication. We argue that this capability can transform personalized computing by enabling the next big leap forward in the form factor design, similar to the shift from desktop and laptop computers to hand-held devices. Designing this type of a comprehensive system requires integrating many flexible and rigid resources on the same substrate. As a result, efficient interconnection network design rises as one of the major challenges similar to the system-on-chip experience. Therefore, we also discuss the interconnect design challenges and promising solutions for flexible hybrid systems.
['Ujjwal Gupta', 'Umit Y. Ogras']
Extending networks from chips to flexible and stretchable electronics
901,036
In this paper, the bulge test is used to determine the mechanical properties of very thin dielectric membranes. Commonly, this experimental method permits determination of the residual stress (σ0) and the biaxial Young's modulus (E/(1−ν)). By combining square and rectangular membranes with different length-to-width ratios, Poisson's ratio (ν) can also be determined. LPCVD Si3N4 monolayer and Si3N4/SiO2 bilayer membranes, with thicknesses down to 104 nm, have been characterized, giving results in agreement with the literature for Si3N4: E = 212 ± 114 GPa, σ0 = 420 ± 8, and ν = 0.29. (A sketch of the pressure-deflection fit underlying this kind of extraction follows this entry.)
['Philippe Martins', 'C. Malhaire', 'S. Brida', 'D. Barbier']
On the determination of poisson’s ratio of stressed monolayer and bilayer submicron thick films
286,816
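For a square membrane, the bulge-test pressure-deflection relation is commonly written as P = A*h + B*h^3, where A scales with σ0*t/a^2 and B with E*t/((1−ν)*a^4) through geometry-dependent coefficients; fitting measured (h, P) pairs therefore yields the residual stress and the biaxial modulus. The sketch below performs such a fit on synthetic data; the coefficient values c1 and c2 are placeholders that depend on membrane geometry and are not taken from the paper.

# Bulge-test parameter extraction sketch: fit P = A*h + B*h^3 to pressure-deflection
# data, then map A -> residual stress and B -> biaxial modulus using assumed
# geometry coefficients c1, c2 (placeholders; they depend on membrane shape).
import numpy as np

def fit_bulge(h, P, thickness, half_width, c1=3.393, c2=1.981):
    # Linear least squares on P/h = A + B*h^2
    B, A = np.polyfit(h**2, P / h, 1)              # polyfit returns [slope, intercept]
    sigma0 = A * half_width**2 / (c1 * thickness)
    biaxial_modulus = B * half_width**4 / (c2 * thickness)   # E / (1 - nu)
    return sigma0, biaxial_modulus

if __name__ == "__main__":
    # Synthetic data generated from assumed "true" values, plus measurement noise.
    t, a = 104e-9, 0.5e-3                          # 104 nm thick, 1 mm wide membrane
    sigma_true, mod_true = 420e6, 300e9            # assumed, for illustration only
    h = np.linspace(2e-6, 20e-6, 20)
    P = c1h = 3.393 * sigma_true * t / a**2 * h + 1.981 * mod_true * t / a**4 * h**3
    P = P * (1 + np.random.default_rng(0).normal(0, 0.01, h.size))
    s0, Eb = fit_bulge(h, P, t, a)
    print(f"residual stress ~ {s0/1e6:.0f} MPa, biaxial modulus ~ {Eb/1e9:.0f} GPa")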
To unveil the potential of reconfigurable systems, strong tool support is required. In particular, the modeling and implementation of designs to be executed using the partial reconfiguration capability of modern FPGAs is challenging and hardly supported by current design tools. A key problem is the generation of partial bitstreams. The presented design tool Part-E tackles these challenges in a comprehensive manner. It is embedded as several plug-ins into the Eclipse platform and thus can exploit the strength and openness of Eclipse. Part-E assembles a design into one single model, making the modeling and iterative design of partially reconfigurable systems easy and transparent. Moreover, several extensions have been implemented, for example a facility to generate several partial bitstreams in parallel on several remote computers. Finally, the modeling framework used for the implementation of Part-E facilitates accepting input from various other design environments, offering a well-defined interface to the automated partial bitstream generation of Part-E.
['Elmar Weber', 'Florian Dittmann', 'Norma Montealegre']
Part-E - A Tool for Reconfigurable System Design
145,376
This paper describes a new method for the architectural synthesis of timed asynchronous systems. Due to the variable delays associated with asynchronous resources, implicit schedules are created by the addition of supplementary constraints between resources. Since the number of schedules grows exponentially with respect to the size of the given data flow graph, pruning techniques are introduced which dramatically improve the run-time without significantly affecting the quality of the results. Using a combination of data and resource constraints, as well as an analysis of bounded delay information, our method determines the minimum number of resources and registers needed to implement a given schedule. Results are demonstrated using high-level synthesis benchmark circuits and an industrial example.
['Brandon M. Bachman', 'Hao Zheng', 'Chris J. Myers']
Architectural synthesis of timed asynchronous systems
504,491
Objective: Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Methods: Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied when building the prediction models to deal with the imbalanced class dataset. Results: The RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflect the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766–0.825] for the GBDT, 0.784 [95% CI: 0.752–0.815] for the RF, and 0.785 [95% CI: 0.752–0.819] for the LR approaches. No significant differences were observed between pairs of approaches. Small changes occurred in the AUCs after applying undersampling to build the models. Conclusions: We developed a virtual health check-up that predicts the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power. (A simplified scikit-learn sketch of this pipeline follows this entry.)
['Daisuke Ichikawa', 'Toki Saito', 'Waka Ujita', 'Hiroshi Oyama']
How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach
892,595
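The model-building pipeline described above can be approximated with scikit-learn: undersample the majority class, train GBDT, RF, and LR classifiers, and compare their AUCs on a held-out test set. The synthetic features below stand in for the (non-public) health check-up variables, so the numbers produced are not comparable to the paper's results.

# Sketch of the GBDT / RF / LR comparison with majority-class undersampling.
# Synthetic features stand in for the (non-public) health check-up variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def undersample(X, y, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep_neg = rng.choice(neg, size=len(pos), replace=False)  # balance the classes
    idx = np.concatenate([pos, keep_neg])
    return X[idx], y[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 20000
    X = rng.normal(0, 1, (n, 10))
    logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] - 2.5        # imbalanced positive class
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                              stratify=y)
    X_bal, y_bal = undersample(X_tr, y_tr)
    models = {
        "GBDT": GradientBoostingClassifier(random_state=0),
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
        "LR": LogisticRegression(max_iter=1000),
    }
    for name, model in models.items():
        model.fit(X_bal, y_bal)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")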