Dataset Viewer
Auto-converted to Parquet
Columns (name: type and observed range):
abstract: string (length 186 to 9.79k)
title: string (length 6 to 274)
authors: string (length 3 to 1.88k)
venue: string (length 3 to 106)
year: string date (1953-01-01 to 2017-01-01)
domain: string (1 class)
link: string (1 class)
prompt: string (length 56 to 8.9k)
prompt_type: string (4 classes)
prompt_comparison: string (9 classes)
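The schema above can be inspected programmatically once the Parquet conversion has been downloaded. The snippet below is a minimal sketch, assuming the file has been saved locally as research_papers.parquet (a hypothetical filename, not part of the dataset metadata); it loads the table with pandas and recomputes the per-column summaries shown above.

```python
# Minimal sketch for inspecting the auto-converted Parquet file with pandas.
# The filename "research_papers.parquet" is an assumption for illustration;
# substitute the path of the Parquet shard you actually downloaded.
import pandas as pd

df = pd.read_parquet("research_papers.parquet")

# Free-text columns: reproduce the min/max string-length summary.
for col in ["abstract", "title", "authors", "venue", "prompt"]:
    lengths = df[col].str.len()
    print(f"{col}: length {lengths.min()} to {lengths.max()}")

# Categorical columns: count distinct values (classes).
for col in ["domain", "link", "prompt_type", "prompt_comparison"]:
    print(f"{col}: {df[col].nunique()} distinct value(s)")

# Year column: stored as a string date; parse it to get the covered range.
years = pd.to_datetime(df["year"])
print(f"year: {years.min().date()} to {years.max().date()}")
```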
The new video coding standard HEVC (High Efficiency Video Coding) offers the desired compression performance in the era of HDTV and UHDTV, as it achieves nearly 50% bit rate saving compared to H.264/AVC. To leverage the involved computational overhead, HEVC offers three parallelization potentials, namely wavefront, tile-based and slice-based parallelization. In this paper we study slice-based parallelization of HEVC using OpenMP on the encoding part. In particular we delve into the problem of proper slice sizing to reduce load imbalances among threads. Capitalizing on existing ideas for H.264/AVC we develop a fast dynamic approach to decide on load distribution and compare it against an alternative in the HEVC literature. Through experiments with commonly used video sequences, we highlight the merits and drawbacks of the tested heuristics. We then improve upon them for the case of Low-Delay by exploiting GOP structure. The resulting algorithm is shown to clearly outperform its counterparts, achieving less than 10% load imbalance in many cases.
Slice-based parallelization in HEVC encoding: Realizing the potential through efficient load balancing
['Maria G. Koziri', 'Panos Papadopoulos', 'Nikos Tziritas', 'Antonios N. Dadaliaris', 'Thanasis Loukopoulos', 'Samee Ullah Khan']
multimedia signal processing
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 161-words sample abstract on the following topic based on following title 'Slice-based parallelization in HEVC encoding: Realizing the potential through efficient load balancing'; multimedia signal processing
gen_full_metadata
abstract
One of the most difficult tasks in the design of information systems is how to control the behaviour of the back-end storage engine, usually a relational database. As the load on the database increases, issued transactions take longer to execute, mainly because of the presence of a high number of locks required to provide isolation and concurrency. In this paper we present MIDAS, a middleware designed to manage the behaviour of database servers, focusing primarily on guaranteeing transaction execution within a specified amount of time (deadline). MIDAS was developed for Java applications that connect to storage engines through JDBC. It provides a transparent QoS layer and can be adopted with very few code modifications. All transactions issued by the application are captured, forcing them to pass through an Admission Control (AC) mechanism. To accomplish such QoS constraints, we propose a novel AC strategy, called 2-Phase Admission Control (2PAC), that minimizes the amount of transactions that exceed the established maximum time by accepting only those transactions that are not expected to miss their deadlines. We also implemented an enhancement over 2PAC, called diffserv - which gives priority to small transactions and can be adopted when their occurrences are infrequent.
MIDAS: A Middleware for Information Systems with QoS Concerns
['Luís Fernando Orleans', 'Geraldo Zimbrão']
international conference on enterprise information systems
2009
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 84-words of the given paper text with the title 'MIDAS: A Middleware for Information Systems with QoS Concerns': All transactions issued by the application are captured, forcing them to pass through an Admission Control (AC) mechanism. To accomplish such QoS constraints, we propose a novel AC strategy, called 2-Phase Admission Control (2PAC), that minimizes the amount of transactions that exceed the established maximum time by accepting only those transactions that are not expected to miss their deadlines. We also implemented an enhancement over 2PAC, called diffserv - which gives priority to small transactions and can be adopted when their occurrences are infrequent.
continue
2
An intelligent, automatically controlled camera based on visual feedback. A system for acquisition, processing, image analysis and a camera driver are implemented in the FPGA Xilinx Spartan-6 device. The FPGA device is connected directly to the eight independently operated SRAM memory banks. A prototype device has been constructed with a real-time tracking algorithm. The camera is able to keep a tracked object close to the center of its field of view. This paper presents an intelligent, automatically controlled camera based on visual feedback. The camera housing contains actuators that change the orientation of the camera - enabling a full rotation around the vertical axis (pan) and 90° around the horizontal axis (tilt). A system for acquisition, processing, image analysis and a camera driver are implemented in the FPGA Xilinx Spartan-6 device. An original, innovative reconfigurable system architecture has been developed. The FPGA device is connected directly to the eight independently operated SRAM memory banks. A prototype device has been constructed with a real-time tracking algorithm, enabling automatic control of the position of the camera. The device has been tested indoors and outdoors. The camera is able to keep a tracked object close to the center of its field of view. The power consumption of the control system is 2 W. A reconfigurable part reaches the computing performance of 3200 MOPS.
Automatically controlled pan-tilt smart camera with FPGA based image analysis system dedicated to real-time tracking of a moving object
['Artur Zawadzki', 'Marek Gorgon']
Journal of Systems Architecture
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 113-words sample abstract on the following topic based on following title 'Automatically controlled pan-tilt smart camera with FPGA based image analysis system dedicated to real-time tracking of a moving object'; Journal of Systems Architecture
gen_full_metadata
abstract
This article proposes an approach for visual tracking using multiple cameras with overlapping fields of view. A spatial and temporal recursive Bayesian filtering approach using particle filter is proposed to fuse image sequences of multiple cameras to optimally estimate the state of the system, i.e., the target's location. An approximation method for importance sampling function and weight update function is also proposed. Our results show that our algorithm is effective when complete occlusions occur. This method can be used for data fusion for multiple measurements in dynamic systems.
Particle Filter for Visual Tracking Using Multiple Cameras.
['Yadong Wang', 'Jiankang Wu', 'Ashraf A. Kassim']
Journal of Machine Vision and Applications
2005
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more academic in tone: This article proposes an approach for visual tracking using multiple cameras with overlapping fields of view. A spatial and temporal recursive Bayesian filtering approach using particle filter is proposed to fuse image sequences of multiple cameras to optimally estimate the state of the system, i.e., the target's location. An approximation method for importance sampling function and weight update function is also proposed. Our results show that our algorithm is effective when complete occlusions occur. This method can be used for data fusion for multiple measurements in dynamic systems.
enhance
0
Online and mobile crowdsourcing services call for thorough definition of components and attributes. As cloud-based services have become widely adopted, a cloudified reference model has been emergent for crowdsourcing platforms and applications. This paper introduces for the first time a cloudified four-phase reference model for crowdsourcing along with a generic workflow for crowdsourcing development utilizing the facilities offered by cloud service providers. Moreover, useful insights are presented for the evolution of today's online crowdsourcing applications and platforms towards the concept of crowdsourcing as a service.
A reference model for crowdsourcing as a service
['Arber Murturi', 'Burak Kantarci', 'Sema Oktug']
nan
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 85-words of the given paper text with the title 'A reference model for crowdsourcing as a service': Online and mobile crowdsourcing services call for thorough definition of components and attributes. As cloud-based services have become widely adopted, a cloudified reference model has been emergent for crowdsourcing platforms and applications. This paper introduces for the first time a cloudified four-phase reference model for crowdsourcing along with a generic workflow for crowdsourcing development utilizing the facilities offered by cloud service providers. Moreover, useful insights are presented for the evolution of today's online crowdsourcing applications and platforms towards the concept of crowdsourcing as a service.
continue
1
A new Landweber algorithm for 3D microscopy deconvolution is introduced in this paper. The algorithm is formulated from the Fredholm equation of the first kind. Artificial 3D images are used to test this algorithm and the restored results are compared with a nonlinear iterative deconvolution algorithm (IDA). The experimental results show that the Landweber algorithm can effectively suppress background noise and remove asymmetric point spread function (PSF) degradation. Finally, a typical real 3D confocal image is restored by the Landweber algorithm and the results are compared with IDA.
A Landweber algorithm for 3D confocal microscopy restoration
['Daan Zhu', 'Moe Razaz', 'Richard A. Lee']
international conference on pattern recognition
2004
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 88-words sample abstract on the following topic based on following title 'A Landweber algorithm for 3D confocal microscopy restoration'; international conference on pattern recognition
gen_full_metadata
abstract
We derive an algorithm to directly solve logistic regression based on cardinality constraint, group sparsity and use it to classify intra-subject MRI sequences (e.g., cine MRIs) of healthy from diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. Avoiding weighing features, we propose to directly solve the group cardinality constraint logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum to the smooth and convex logistic regression problem is determined via gradient descent while we derive a closed form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients that received reconstructive surgery of Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significant higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints.
Computing group cardinality constraint solutions for logistic regression problems.
['Yong Zhang', 'Dongjin Kwon', 'Kilian M. Pohl']
Medical Image Analysis
2017
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 169-words sample abstract on the following topic based on following title 'Computing group cardinality constraint solutions for logistic regression problems.'; Medical Image Analysis
gen_full_metadata
abstract
One charge of the United States Environmental Protection Agency is to study the risk of infection for microbial agents that can be disseminated through drinking water systems, and to recommend water treatment policy to counter that risk. Recently proposed dynamical system models quantify indirect risks due to secondary transmission, in addition to primary infection risk from the water supply considered by standard assessments. Unfortunately, key parameters that influence water treatment policy are unknown, in part because of lack of data and effective inference methods. This paper develops inference methods for those parameters by using stochastic process models to better incorporate infection dynamics into the inference process. Our use of endemic data provides an alternative to waiting for, identifying, and measuring an outbreak. Data both from simulations and from New York City illustrate the approach.
Inferring Infection Transmission Parameters That Influence Water Treatment Decisions
['Stephen E. Chick', 'Sada Soorapanth', 'James S. Koopman']
Management Science
2003
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 134-words of the given paper text with the title 'Inferring Infection Transmission Parameters That Influence Water Treatment Decisions': One charge of the United States Environmental Protection Agency is to study the risk of infection for microbial agents that can be disseminated through drinking water systems, and to recommend water treatment policy to counter that risk. Recently proposed dynamical system models quantify indirect risks due to secondary transmission, in addition to primary infection risk from the water supply considered by standard assessments. Unfortunately, key parameters that influence water treatment policy are unknown, in part because of lack of data and effective inference methods. This paper develops inference methods for those parameters by using stochastic process models to better incorporate infection dynamics into the inference process. Our use of endemic data provides an alternative to waiting for, identifying, and measuring an outbreak. Data both from simulations and from New York City illustrate the approach.
continue
1
In this paper, we propose a scheduling framework and related algorithms for processing large-scale, computation-intensive divisible loads. The framework is organized into a two-level tree architecture. Based on this framework, admission test and load partitioning and distribution algorithms are designed to ensure that the multi-dimensional QoS requirements, i.e., processing deadline, security and reliability, of admitted loads can be satisfied. We take a novel approach to incorporate resource reservation and time step-size adaptive scheduling schemes into the optimal solution that makes computation nodes finish computing at the same time instant. We provide an implementation of the framework atop a distributed communication middleware extended with QoS-aware resource management facilities. Prototype implementation and preliminary experimental results demonstrate the engineering feasibility and good performance of the proposed framework and algorithms.
Scheduling Framework and Algorithms for Large-Scale Divisible Load Processing with Multi-Dimensional QoS Constraints
['Kaibo Wang', 'Xingshe Zhou', 'Shandan Zhou']
international conference for young computer scientists
2008
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on international conference for young computer scientists with title 'Scheduling Framework and Algorithms for Large-Scale Divisible Load Processing with Multi-Dimensional QoS Constraints', write a 129-words section 'Introduction': In this paper, we propose a scheduling framework and related algorithms for processing large-scale, computation-intensive divisible loads. The framework is organized into a two-level tree architecture. Based on this framework, admission test and load partitioning and distribution algorithms are designed to ensure that the multi-dimensional QoS requirements, i.e., processing deadline, security and reliability, of admitted loads can be satisfied. We take a novel approach to incorporate resource reservation and time step-size adaptive scheduling schemes into the optimal solution that makes computation nodes finish computing at the same time instant. We provide an implementation of the framework atop a distributed communication middleware extended with QoS-aware resource management facilities. Prototype implementation and preliminary experimental results demonstrate the engineering feasibility and good performance of the proposed framework and algorithms.
gen_section
0
Mobile ad hoc networks (MANET) consist of some mobile nodes, but do not require infrastructure or centralized management. As a result, many critical issues may arise in mobile ad hoc networks, such as the security of data transmission, routing, and so on. The misbehavior of nodes, whether due to selfishness or malice, may lead to the loss of packets, the rejection of services, etc. In this paper, a new scheme based on reputation, bandwidth and hop count is proposed to solve these problems. This scheme employs the fuzzy logic to perform the routing path decision in order to choose the best and feasible routing path. Finally, the SMPL, a simulation tool, is utilized to estimate the efficiency of the proposed scheme. From the results, the proposed scheme is superior to other existing schemes.
Fuzzy Logic Based Reputation System for Mobile Ad Hoc Networks
['Jin-Long Wang', 'Shih-Ping Huang']
nan
2007
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 69-words of the given paper text with the title 'Fuzzy Logic Based Reputation System for Mobile Ad Hoc Networks': In this paper, a new scheme based on reputation, bandwidth and hop count is proposed to solve these problems. This scheme employs the fuzzy logic to perform the routing path decision in order to choose the best and feasible routing path. Finally, the SMPL, a simulation tool, is utilized to estimate the efficiency of the proposed scheme. From the results, the proposed scheme is superior to other existing schemes.
continue
2
Augmented Reality (AR) applications are becoming popular recently. Among other things, AR requires a precise real-time tracking to work properly. Simultaneous Localization and Mapping (SLAM) is one way to perform this task. Commonly used in robotics applications, SLAM creates a map of the environment to use it as input to compute the pose while uses the pose to increment the map. At the same time, mobile devices are evolving faster lately. Along with more processing power and memory capabilities, they are being embedded with several powerful resources, such as depth sensors. Taking this into account, this work introduces STAM, a Simple Tracking and Mapping system that was developed in desktop and evaluated in a challenging scenario. Additionally, STAM was ported to a mobile version, using the Android platform and Google's Tango tablet device. Finally, the system was evaluated concerning its desktop version. The desktop version presented better tracking performance in simple scenarios with respect to reprojection error, but it presented a few drawbacks when dealing with the most complex ones. Regarding the mobile version, it proved to be slower than its desktop counterpart. However, it was more precise.
Life Cycle of a SLAM System: Implementation, Evaluation and Port to the Project Tango Device
['Thulio Araujo', 'Rafael Alves Roberto', 'João Marcelo X. N. Teixeira', 'Francisco Simões', 'Veronica Teichrieb', 'João Paulo Silva do Monte Lima', 'Ermano Arruda']
nan
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on nan with title 'Life Cycle of a SLAM System: Implementation, Evaluation and Port to the Project Tango Device', write a 142-words section 'Conclusion': Augmented Reality (AR) applications are becoming popular recently. Among other things, AR requires a precise real-time tracking to work properly. Simultaneous Localization and Mapping (SLAM) is one way to perform this task. Commonly used in robotics applications, SLAM creates a map of the environment to use it as input to compute the pose while uses the pose to increment the map. At the same time, mobile devices are evolving faster lately. Along with more processing power and memory capabilities, they are being embedded with several powerful resources, such as depth sensors. Taking this into account, this work introduces STAM, a Simple Tracking and Mapping system that was developed in desktop and evaluated in a challenging scenario. Additionally, STAM was ported to a mobile version, using the Android platform and Google's Tango tablet device. Finally, the system was evaluated concerning its desktop version. The desktop version presented better tracking performance in simple scenarios with respect to reprojection error, but it presented a few drawbacks when dealing with the most complex ones. Regarding the mobile version, it proved to be slower than its desktop counterpart. However, it was more precise.
gen_section
0
This paper proposes an object-level rate control algorithm to jointly control the bit rates of multiple video objects. Utilizing noncooperative game theory, the proposed rate control algorithm mimics the behaviors of players representing video objects. Each player competes for available bits to optimize its visual quality. The algorithm finds an "optimal solution" in that it conforms to the mixed strategy Nash equilibrium, which is the probability distribution of the actions carried by the players that maximizes their expected payoffs (the number of bits). The game is played iteratively, and the expected payoff of each play is accumulated. The game terminates when all of the available bits for the specific time instant have been distributed to video object planes (VOPs). The advantage of the proposed scheme is that the bidding objects divide the bits among themselves automatically and fairly, according to their encoding complexity, and with an overall solution that is strategically optimal under the given circumstances. To minimize buffer fluctuation and avoid buffer overflow and underflow, a proportional-integral-derivative (PID) control based buffer policy is utilized.
Controlling the Bit Rate of Multi-Object Videos With Noncooperative Game Theory
['Jiancong Luo', 'Ishfaq Ahmad', 'Yu Sun']
IEEE Transactions on Multimedia
2010
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on IEEE Transactions on Multimedia with title 'Controlling the Bit Rate of Multi-Object Videos With Noncooperative Game Theory', write a 175-words section 'Introduction': This paper proposes an object-level rate control algorithm to jointly control the bit rates of multiple video objects. Utilizing noncooperative game theory, the proposed rate control algorithm mimics the behaviors of players representing video objects. Each player competes for available bits to optimize its visual quality. The algorithm finds an "optimal solution" in that it conforms to the mixed strategy Nash equilibrium, which is the probability distribution of the actions carried by the players that maximizes their expected payoffs (the number of bits). The game is played iteratively, and the expected payoff of each play is accumulated. The game terminates when all of the available bits for the specific time instant have been distributed to video object planes (VOPs). The advantage of the proposed scheme is that the bidding objects divide the bits among themselves automatically and fairly, according to their encoding complexity, and with an overall solution that is strategically optimal under the given circumstances. To minimize buffer fluctuation and avoid buffer overflow and underflow, a proportional-integral-derivative (PID) control based buffer policy is utilized.
gen_section
0
In this paper, we have investigated the fusion of surface data obtained by two different surface recovery methods. In particular, we have fused the depth data obtainable by shape from contours and local surface orientation data obtainable by photometric stereo. It has been found that the surface obtained by fusing orientation and depth data is able to yield more precision when compared with the surfaces obtained by either type of data alone.
Integration of photometric stereo and shape from occluding contours by fusing orientation and depth data
['Chia-Yen Chen', 'Radim Sara']
Lecture Notes in Computer Science
2001
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 72-words sample abstract on the following topic based on following title 'Integration of photometric stereo and shape from occluding contours by fusing orientation and depth data'; Lecture Notes in Computer Science
gen_full_metadata
abstract
Agent-based models are a powerful tool for explaining the emergence of social phenomena in a society. In such models, individual agents typically have little cognitive ability. In this paper, we model agents with the cognitive ability to make use of theory of mind. People use this ability to reason explicitly about the beliefs, desires, and goals of others. They also take this ability further, and expect other people to have access to theory of mind as well. To explain the emergence of this higher-order theory of mind, we place agents capable of theory of mind in a particular negotiation game known as Colored Trails, and determine to what extent theory of mind is beneficial to computational agents. Our results show that the use of first-order theory of mind helps agents to offer better trades. We also find that second-order theory of mind allows agents to perform better than first-order colleagues, by taking into account competing offers that other agents may make. Our results suggest that agents experience diminishing returns on orders of theory of mind higher than level two, similar to what is seen in people. These findings corroborate those in more abstract settings.
Agent-Based Models for Higher-Order Theory of Mind
['Harmen de Weerd', 'Rineke Verbrugge', 'Bart Verheij']
Spring
2014
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on Spring with title 'Agent-Based Models for Higher-Order Theory of Mind', write a 194-words section 'Literature Review': Agent-based models are a powerful tool for explaining the emergence of social phenomena in a society. In such models, individual agents typically have little cognitive ability. In this paper, we model agents with the cognitive ability to make use of theory of mind. People use this ability to reason explicitly about the beliefs, desires, and goals of others. They also take this ability further, and expect other people to have access to theory of mind as well. To explain the emergence of this higher-order theory of mind, we place agents capable of theory of mind in a particular negotiation game known as Colored Trails, and determine to what extent theory of mind is beneficial to computational agents. Our results show that the use of first-order theory of mind helps agents to offer better trades. We also find that second-order theory of mind allows agents to perform better than first-order colleagues, by taking into account competing offers that other agents may make. Our results suggest that agents experience diminishing returns on orders of theory of mind higher than level two, similar to what is seen in people. These findings corroborate those in more abstract settings.
gen_section
0
Inkball models provide a tool for matching and comparison of spatially structured markings such as handwritten characters and words. Hidden Markov models offer a framework for decoding a stream of text in terms of the most likely sequence of causal states. Prior work with HMM has relied on observation of features that are correlated with underlying characters, without modeling them directly. This paper proposes to use the results of inkball-based character matching as a feature set input directly to the HMM. Experiments indicate that this technique outperforms other tested methods at handwritten word recognition on a common benchmark when applied without normalization or text deslanting.
Inkball Models as Features for Handwriting Recognition
['Nicholas R. Howe', 'Andreas Fischer', 'Baptiste Wicht']
international conference on frontiers in handwriting recognition
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on international conference on frontiers in handwriting recognition with title 'Inkball Models as Features for Handwriting Recognition', write a 105-words section 'Literature Review': Inkball models provide a tool for matching and comparison of spatially structured markings such as handwritten characters and words. Hidden Markov models offer a framework for decoding a stream of text in terms of the most likely sequence of causal states. Prior work with HMM has relied on observation of features that are correlated with underlying characters, without modeling them directly. This paper proposes to use the results of inkball-based character matching as a feature set input directly to the HMM. Experiments indicate that this technique outperforms other tested methods at handwritten word recognition on a common benchmark when applied without normalization or text deslanting.
gen_section
0
Project AURORA aims at the development of an unmanned airship capable of autonomous flight over user-defined locations for aerial inspection and imagery acquisition. In this article the authors report a successful autonomous flight achieved through a set of pre-defined points, one of the first of its kind in the literature. The guidance control strategy is based on a path tracking error generation methodology that takes into account both the distance and the angular errors of the airship with respect to the desired trajectory. The control strategy uses a PI controller for the tail surfaces' deflection.
Autonomous flight experiment with a robotic unmanned airship
['Josué J. G. Ramos', 'E.C. de Paiva', 'José Raul Azinheira', 'Samuel Siqueira Bueno', 'Silvio M. Maeta', 'Luiz G. B. Mirisola', 'Marcel Bergerman', 'Bruno G. Faria']
international conference on robotics and automation
2001
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on international conference on robotics and automation with title 'Autonomous flight experiment with a robotic unmanned airship', write a 95-words section 'Methodology': Project AURORA aims at the development of an unmanned airship capable of autonomous flight over user-defined locations for aerial inspection and imagery acquisition. In this article the authors report a successful autonomous flight achieved through a set of pre-defined points, one of the first of its kind in the literature. The guidance control strategy is based on a path tracking error generation methodology that takes into account both the distance and the angular errors of the airship with respect to the desired trajectory. The control strategy uses a PI controller for the tail surfaces' deflection.
gen_section
0
In a context of economic crisis and ageing population arises the idea of social investment as the preparation of citizens throughout their lives to confront risks. It supposes a better use of public resources, so it can result in considerable savings later on. In this context, a number of innovations are emerging, most enabled by Information and Communication Technologies (ICTs). The aim of this article is to characterize the phenomenon and its underlying elements so as to advance a proposal of a conceptual and analytical framework to analyse initiatives promoting social investment with regard to integrated approaches to social services provision.
ICT-Enabled Social Innovation in Support of Public Sector Reform: the Potential of Integrated Approaches to Social Services Delivery to Promote Social Investment Policies in Europe
['Gianluca Misuraca', 'Clelia Colombo']
nan
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 100-words sample abstract on the following topic based on following title 'ICT-Enabled Social Innovation in Support of Public Sector Reform: the Potential of Integrated Approaches to Social Services Delivery to Promote Social Investment Policies in Europe'; nan
gen_full_metadata
abstract
We develop an improved grey QFD method by integrating interval grey numbers, QFD and TRIZ techniques. The proposed TRIZ method can effectively resolve contradiction problems between conflicting EC pairs. The input values and output results of the proposed method are interval grey numbers. A new grey ranking method is designed to precisely rate interval grey numbers. Quality function deployment (QFD) can simultaneously consider both product functions and consumer needs during the product design and manufacturing stages. Traditional QFD often relies on market research or customer questionnaires to collect customer opinions in order to establish customer requirements. However, market research results (or those of customer questionnaires) usually contain a good deal of uncertain and incomplete information. Moreover, there is a practical problem in implementing QFD as experts in specific fields are often rare and difficult to find. In order to resolve these issues, this study integrated interval grey numbers, QFD and TRIZ techniques to develop an improved grey quality function deployment (GQFD) method. GQFD can assist product developers in identifying important engineering characteristics and can provide suggestions for possible improvements in engineering characteristics. Furthermore, this study developed a new grey ranking method to determine the ranking order of interval grey numbers. Finally, a real-world case study in Taiwan was used to explain the research process of the GQFD method and validate the practicality of the proposed method.
An improved grey quality function deployment approach using the grey TRIZ technique
['Hao-Tien Liu', 'Hung-Sheng Cheng']
Computers & Industrial Engineering
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 44-words sample abstract on the following topic based on following title 'An improved grey quality function deployment approach using the grey TRIZ technique'; Computers & Industrial Engineering
gen_full_metadata
abstract
Static rankings of papers play a key role in the academic search setting. Many features are commonly used in the literature to produce such rankings, some examples are citation-based metrics, distinct applications of PageRank, among others. More recently, learning to rank techniques have been successfully applied to combine sets of features producing effective results. In this work, we propose the metric S-RCR, which is a simplified version of a metric called Relative Citation Ratio --- both based on the idea of a co-citation network. When compared to the classical version, our simplification S-RCR leads to improved efficiency with a reasonable effectiveness. We use S-RCR to rank over 120 million papers in the Microsoft Academic Graph dataset. By using this single feature, which has no parameters and does not need to be tuned, our team was able to reach the 3rd position in the first phase of the WSDM Cup 2016.
Simplified Relative Citation Ratio for Static Paper Ranking: UFMG/LATIN at WSDM Cup 2016
['Sabir Ribas', 'Alberto Ueda', 'Rodrygo L. T. Santos', 'Berthier A. Ribeiro-Neto', 'Nivio Ziviani']
arXiv: Information Retrieval
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 66-words sample abstract on the following topic based on following title 'Simplified Relative Citation Ratio for Static Paper Ranking: UFMG/LATIN at WSDM Cup 2016'; arXiv: Information Retrieval
gen_full_metadata
abstract
We present a perception-based paradigm for image retrieval. The central component of this paradigm is a query-concept learner, which can learn users' subjective query concepts through an intelligent sampling process. We show that the learner can collect user feedback and use it to perform collaborative image annotation in addition to learning subjective query concepts. On the one hand, the improved annotation can help provide better initial keyword-search results to seed perception-based image retrieval. On the other hand, the more effective image-search results can further refine annotation quality. The users of the system collaboratively help improve search quality through the query-concept learner. Our empirical results show that an image retrieval system powered by this perception-based paradigm performs significantly better than traditional systems in search accuracy, in multimodal integration, and in capability for personalization.
An Architecture of a Web-Based Collaborative Image Search Engine
['Wei-Cheng Lai', 'Gerard Sychay', 'Edward Y. Chang']
cooperative information systems
2002
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 45-words sample abstract on the following topic based on following title 'An Architecture of a Web-Based Collaborative Image Search Engine'; cooperative information systems
gen_full_metadata
abstract
Network slicing is a major trend in the design of future 5G networks that will enable operators to effectively service multiple industry verticals with a single network infrastructure. Thus, network slicing will shape all segments of the future 5G networks, including the radio access, the transport network and the core network. In this paper we introduce the control plane design produced by the 5G-XHaul project. 5G-XHaul envisions a future 5G transport network composed of heterogeneous technology domains, including wireless and optical segments, which will be able to transport end user and operational services. Consequently, 5G-XHaul proposes a hierarchical SDN control plane where each controller is responsible for a limited network domain, and proposes a multi-technology virtualization framework that enables a scalable slicing of the transport network by operating at the edge of the network.
5G-XHaul: Enabling Scalable Virtualization for Future 5G Transport Networks
['Daniel Camps Mur', 'Paris Flegkas', 'Dimitris Syrivelis', 'Qing Wei', 'Jesus Gutierrez']
ubiquitous computing
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more academic in tone: Network slicing is a major trend in the design of future 5G networks that will enable operators to effectively service multiple industry verticals with a single network infrastructure. Thus, network slicing will shape all segments of the future 5G networks, including the radio access, the transport network and the core network. In this paper we introduce the control plane design produced by the 5G-XHaul project. 5G-XHaul envisions a future 5G transport network composed of heterogeneous technology domains, including wireless and optical segments, which will be able to transport end user and operational services. Consequently, 5G-XHaul proposes a hierarchical SDN control plane where each controller is responsible for a limited network domain, and proposes a multi-technology virtualization framework that enables a scalable slicing of the transport network by operating at the edge of the network.
enhance
0
This paper investigates the network completion problem, where it is assumed that only a small sample of a network (e.g., a complete or partially observed subgraph of a social graph) is observed and we would like to infer the unobserved part of the network. In this paper, we assume that besides the observed subgraph, side information about the nodes such as the pairwise similarity between them is also provided. In contrast to the original network completion problem where standard methods such as matrix completion are inapplicable due to the non-uniform sampling of observed links, we show that by effectively exploiting the side information, it is possible to accurately predict the unobserved links. In contrast to existing matrix completion methods with side information such as shared subspace learning and matrix completion with transduction, the proposed algorithm decouples the completion from transduction to effectively exploit the similarity information. This crucial difference greatly boosts the performance when appropriate similarity information is used. The recovery error of the proposed algorithm is theoretically analyzed based on the richness of the similarity information and the size of the observed submatrix. To the best of our knowledge, this is the first algorithm that addresses the network completion with similarity of nodes with provable guarantees. Experiments on synthetic and real networks from Facebook and Google+ show that the proposed two-stage method is able to accurately reconstruct the network and outperforms other methods.
Network Completion with Node Similarity: A Matrix Completion Approach with Provable Guarantees
['Farzan Masrour', 'Iman Barjesteh', 'Rana Forsati', 'Abdol Hossein Esfahanian', 'Hayder Radha']
advances in social networks analysis and mining
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more casual in tone: In contrast to existing matrix completion methods with side information such as shared subspace learning and matrix completion with transduction, the proposed algorithm decouples the completion from transduction to effectively exploit the similarity information. This crucial difference greatly boosts the performance when appropriate similarity information is used. The recovery error of the proposed algorithm is theoretically analyzed based on the richness of the similarity information and the size of the observed submatrix. To the best of our knowledge, this is the first algorithm that addresses the network completion with similarity of nodes with provable guarantees. Experiments on synthetic and real networks from Facebook and Google+ show that the proposed two-stage method is able to accurately reconstruct the network and outperforms other methods.
enhance
1
Particle swarm optimization has become a common heuristic technique in the optimization community, with many researchers exploring the concepts, issues, and applications of the algorithm. In spite of this attention, there has as yet been no standard definition representing exactly what is involved in modern implementations of the technique. A standard is defined here which is designed to be a straightforward extension of the original algorithm while taking into account more recent developments that can be expected to improve performance on standard measures. This standard algorithm is intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community
Defining a Standard for Particle Swarm Optimization
['Daniel Bratton', 'James Kennedy']
nan
2007
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 113-words sample abstract on the following topic based on following title 'Defining a Standard for Particle Swarm Optimization'; nan
gen_full_metadata
abstract
We first discuss roles of universities in promoting the innovation in services through research and education. As examples of successful collaboration of university and industry, we present a university-originated venture company for health care service and a project for innovating the fan service of a pro baseball team. We also present an educational course on services science in the MBA Program of the University of Tsukuba.
University-Initiated Services Innovation
['Hideaki Takagi']
congress on evolutionary computation
2007
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more casual in tone: We first discuss roles of universities in promoting the innovation in services through research and education. As examples of successful collaboration of university and industry, we present a university-originated venture company for health care service and a project for innovating the fan service of a pro baseball team. We also present an educational course on services science in the MBA Program of the University of Tsukuba.
enhance
0
We propose new heuristic procedures for the maximally diverse grouping problem (MDGP). This NP-hard problem consists of forming maximally diverse groups, of equal or different size, from a given set of elements. The most general formulation, which we address, allows for the size of each group to fall within specified limits. The MDGP has applications in academics, such as creating diverse teams of students, or in training settings where it may be desired to create groups that are as diverse as possible. Search mechanisms, based on the tabu search methodology, are developed for the MDGP, including a strategic oscillation that enables search paths to cross a feasibility boundary. We evaluate construction and improvement mechanisms to configure a solution procedure that is then compared to state-of-the-art solvers for the MDGP. Extensive computational experiments with medium and large instances show the advantages of a solution method that includes strategic oscillation.
Tabu Search with Strategic Oscillation for the Maximally Diverse Grouping Problem
['Micael Gallego', 'Manuel Laguna', 'Rafael Martí', 'Abraham Duarte']
Journal of the Operational Research Society
2013
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 146-words sample abstract on the following topic based on following title 'Tabu Search with Strategic Oscillation for the Maximally Diverse Grouping Problem'; Journal of the Operational Research Society
gen_full_metadata
abstract
We describe a novel application for sonic events, namely their generation via mathematical functions implemented on a universal, all-purpose Java platform. Their design is driven by a set of requirements that arise in recognition-based authentication systems. We show that our approach has potential advantages as compared with traditional alphanumeric and other password systems. Our intention is to demonstrate that by leveraging familiar musical dimensions and aesthetics, human memorability, pleasure and pragmatics are enhanced. We demonstrate and briefly discuss one exemplar generative approach that has been specifically designed in order to fulfill the requirements implied by authentication systems. It is hoped that this work serves to stimulate debate and further activity in the field of computer generated sonics.
A Pragmatic and Musically Pleasing Production System for Sonic Events
['Marc Conrad', 'Tim French', 'Marcia Gibson']
nan
2006
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more professional in tone: It is hoped that this work serves to stimulate debate and further activity in the field of computer generated sonics.
enhance
1
We study the problem of synopsis construction of massive graph streams arriving in real-time. Many graphs such as those formed by the activity on social networks, communication networks, and telephone networks are defined dynamically as rapid edge streams on a massive domain of nodes. In these rapid and massive graph streams, it is often not possible to estimate the frequency of individual items (e.g., edges, nodes) with complete accuracy. Nevertheless, sketch-based stream summaries such as Count-Min can preserve frequency information of high-frequency items with a reasonable accuracy. However, these sketch summaries lose the underlying graph structure unless one keeps information about start and end nodes of all edges, which is prohibitively expensive. For example, the existing methods can identify the high-frequency nodes and edges, but they are unable to answer more complex structural queries such as reachability defined by high-frequency edges. To this end, we design a 3-dimensional sketch, gMatrix, that summarizes massive graph streams in real-time, while also retaining information about the structural behavior of the underlying graph dataset. We demonstrate how gMatrix, coupled with a one-time reverse hash mapping, is able to estimate important structural properties, e.g., reachability over high-frequency edges in an online manner and with theoretical performance guarantees. Our experimental results using large-scale graph streams attest that gMatrix is capable of answering both frequency-based and structural queries with high accuracy and efficiency.
Query-friendly compression of graph streams
['Arijit Khan', 'Charu C. Aggarwal']
advances in social networks analysis and mining
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 114-words sample abstract on the following topic based on following title 'Query-friendly compression of graph streams'; advances in social networks analysis and mining
gen_full_metadata
abstract
In this talk I will provide a description of recent uses Intel has made of cryptography in our platforms, including providing a hardware random number generator, using anonymous signatures, and improving performance of cryptographic algorithms. I will discuss how processor capabilities could be used more effectively by cryptographic algorithms. I will then discuss research questions in cryptographic protocols and platform security that are motivated by our goals.
Recent Advances and Existing Research Questions in Platform Security
['Ernie Brickell']
international cryptology conference
2012
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 67-words sample abstract on the following topic based on following title 'Recent Advances and Existing Research Questions in Platform Security'; international cryptology conference
gen_full_metadata
abstract
As Information Technologies (IT) play an increasingly strategic role in several industries, Small and Medium Enterprises (SMEs) adopt IT solutions to trigger Digital Innovation supporting their processes and improving their products and services. SMEs' scarce resources and inadequate IT competencies force them to demand support from IT suppliers in the IT adoption and Digital Innovation journey; however, little attention was paid to the business models and strategies of IT suppliers in the academic and professional literature, and SMEs find it difficult to assess and select IT suppliers that best respond to their needs and aims. This study's goal is to provide a detailed picture of the IT Sales Channel and its players in the European market. A classification framework is proposed and eleven different business models are identified. The study leverages multiple case studies relying on semi-standardized interviews with Chief Executive Officers and Marketing Managers of leading European IT suppliers.
Disclosing the Role of IT Suppliers as Digital Innovation Enablers for SMEs: A Strategy Analysis of the European IT Sales Channel
['Antonio Ghezzi', 'Raffaello Balocco']
hawaii international conference on system sciences
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 150-words sample abstract on the following topic based on following title 'Disclosing the Role of IT Suppliers as Digital Innovation Enablers for SMEs: A Strategy Analysis of the European IT Sales Channel'; hawaii international conference on system sciences
gen_full_metadata
abstract
This paper addresses bird song analysis based on semi-automatic annotation. Research in animal behavior, especially with birds, would be aided by automated (or semiautomated) systems that can localize sounds, measure their timing, and identify their source. This is difficult to achieve in real environments where several birds may be singing from different locations and at the same time. Analysis of recordings from the wild has in the past typically required manual annotation. Such annotation is not always accurate or even consistent, as it may vary both within or between observers. Here we propose a system that uses automated methods from robot audition, including sound source detection, localization, separation and identification. In robot audition these technologies have typically been studied separately; combining them often leads to poor performance in real-time application from the wild. We suggest that integration is aided by placing a primary focus on spatial cues, then combining other features within a Bayesian framework. A second problem has been that supervised machine learning methods typically require a pre-trained model that may require a large training set of annotated labels. We have employed a semi-automatic annotation approach that requires much less pre-annotation. Preliminary experiments with recordings of bird songs from the wild revealed that for identification accuracy our system outperformed a method based on conventional robot audition.
Semi-automatic bird song analysis by spatial-cue-based integration of sound source detection, localization, separation, and identification
['Ryosuke Kojima', 'Osamu Sugiyama', 'Reiji Suzuki', 'Kazuhiro Nakadai', 'Charles E. Taylor']
intelligent robots and systems
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 155-words of the given paper text with the title 'Semi-automatic bird song analysis by spatial-cue-based integration of sound source detection, localization, separation, and identification': This paper addresses bird song analysis based on semi-automatic annotation. Research in animal behavior, especially with birds, would be aided by automated (or semiautomated) systems that can localize sounds, measure their timing, and identify their source. This is difficult to achieve in real environments where several birds may be singing from different locations and at the same time. Analysis of recordings from the wild has in the past typically required manual annotation. Such annotation is not always accurate or even consistent, as it may vary both within or between observers. Here we propose a system that uses automated methods from robot audition, including sound source detection, localization, separation and identification. In robot audition these technologies have typically been studied separately; combining them often leads to poor performance in real-time application from the wild. We suggest that integration is aided by placing a primary focus on spatial cues, then combining other features within a Bayesian framework.
continue
1
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on partial least squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called sparse maximal alignment (SMA) and sparse maximal covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a support vector machine.
Efficient Sparse Kernel Feature Extraction Based on Partial Least Squares
['Charanpal Dhanjal', 'Steve R. Gunn', 'John Shawe-Taylor']
IEEE Transactions on Pattern Analysis and Machine Intelligence
2009
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 128-words sample abstract on the following topic based on following title 'Efficient Sparse Kernel Feature Extraction Based on Partial Least Squares'; IEEE Transactions on Pattern Analysis and Machine Intelligence
gen_full_metadata
abstract
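The record above describes feature extraction built on partial least squares (PLS). As a rough illustration of the underlying idea — choosing projection directions that maximize covariance with the target and deflating the data between directions — here is a minimal NumPy sketch of classic PLS1-style extraction. It is not the paper's sparse kernel variants (SMA/SMC); the function name `pls_directions`, the tolerance constants, and the toy data are illustrative assumptions only.

```python
import numpy as np

def pls_directions(X, y, k):
    """Extract k directions by maximizing covariance with y, deflating X
    after each one (classic PLS1-style iteration, not SMA/SMC)."""
    X = X - X.mean(axis=0)                 # centre the data (local copy)
    y = y - y.mean()
    directions = []
    for _ in range(k):
        w = X.T @ y                        # direction of maximal covariance with y
        w /= np.linalg.norm(w) + 1e-12
        t = X @ w                          # scores along the new direction
        p = X.T @ t / (t @ t + 1e-12)      # loadings used for deflation
        X = X - np.outer(t, p)             # remove the explained part of X
        directions.append(w)
    return np.array(directions)            # shape (k, n_features)

# toy usage; note that exact PLS scores for new data also need the loadings,
# so the simple projection below is only approximate
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=100)
W = pls_directions(X, y, k=3)
Z = (X - X.mean(axis=0)) @ W.T             # 3 extracted features per example
```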
Studies have shown that device drivers and extensions contain 3-7 times more bugs than other operating system code and thus are more likely to fail. Therefore, we present a failure-resilient operating system design that can recover from dead drivers and other critical components - primarily through monitoring and replacing malfunctioning components on the fly - transparent to applications and without user intervention. This paper focuses on the post-mortem recovery procedure. We explain the working of our defect detection mechanism, the policy-driven recovery procedure, and post-restart reintegration of the components. Furthermore, we discuss the concrete steps taken to recover from network, block device, and character device driver failures. Finally, we evaluate our design using performance measurements, software fault-injection experiments, and an analysis of the reengineering effort.
Failure Resilience for Device Drivers
['Jorrit N. Herder', 'Herbert Bos', 'Ben Gras', 'Philip Homburg', 'Andrew S. Tanenbaum']
dependable systems and networks
2007
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on dependable systems and networks with title 'Failure Resilience for Device Drivers', write a 18-words section 'Methodology': Studies have shown that device drivers and extensions contain 3-7 times more bugs than other operating system code and thus are more likely to fail. Therefore, we present a failure-resilient operating system design that can recover from dead drivers and other critical components - primarily through monitoring and replacing malfunctioning components on the fly - transparent to applications and without user intervention. This paper focuses on the post-mortem recovery procedure. We explain the working of our defect detection mechanism, the policy-driven recovery procedure, and post-restart reintegration of the components. Furthermore, we discuss the concrete steps taken to recover from network, block device, and character device driver failures. Finally, we evaluate our design using performance measurements, software fault-injection experiments, and an analysis of the reengineering effort.
gen_section
0
When using triangle meshes in numerical simulations or other sophisticated downstream applications, we have to guarantee that no degenerate faces are present since they have, e. g. , no well defined normal vectors. In this paper we present a simple but effective algorithm to remove such artifacts from a given triangle mesh. The central problem is to make this algorithm numerically robust because degenerate triangles are usually the source for all kinds of numerical instabilities. Our algorithm is based on a slicing technique that cuts a set of planes through the given polygonal model. The mesh slicing operator only uses numerically stable predicates and therefore is able to split faces in a controlled manner. In combination with a custom-tailored mesh decimation scheme we are able to remove the degenerate faces from meshes like those typically generated by tessellation units in CAD systems.
A Robust Procedure to Eliminate Degenerate Faces from Triangle Meshes
['Mario Botsch', 'Leif Kobbelt']
vision modeling and visualization
2001
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 143-words sample abstract on the following topic based on following title 'A Robust Procedure to Eliminate Degenerate Faces from Triangle Meshes'; vision modeling and visualization
gen_full_metadata
abstract
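The abstract above concerns removing degenerate triangles, i.e. faces with (near-)zero area and hence no well-defined normal. As a minimal sketch of what "degenerate" means in practice — not the paper's numerically robust slicing and decimation procedure, whose whole point is that naive area thresholds like the one below are fragile — the following flags near-zero-area faces via the cross product; `eps` and the helper name are illustrative assumptions.

```python
import numpy as np

def degenerate_faces(vertices, faces, eps=1e-12):
    """Flag triangles whose area (half the cross-product norm) is ~zero,
    i.e. faces with no well-defined normal vector."""
    v = np.asarray(vertices, dtype=float)
    bad = []
    for i, (a, b, c) in enumerate(faces):
        n = np.cross(v[b] - v[a], v[c] - v[a])   # unnormalised face normal
        if 0.5 * np.linalg.norm(n) < eps:        # cap, needle or repeated vertex
            bad.append(i)
    return bad

# a collinear (degenerate) triangle and a proper one
verts = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 1, 0)]
print(degenerate_faces(verts, [(0, 1, 2), (0, 1, 3)]))   # -> [0]
```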
In HSDPA, mobile terminals measure their own radio channel status and report the quality to base station using a channel quality indicator (CQI). The main purpose of the CQI feedback is to utilize this information for scheduling and adaptive modulation and coding (AMC) in a base station. However, since CQI is another version of signal-to-interference and noise ratio (SINR), we can extract more information about radio channel states of mobile terminals. In this paper, we propose an effective two-dimensional radio channel estimation scheme from the current and past CQI information for each mobile terminal. We show the effectiveness of the proposed scheme by evaluating the performance through mathematical analysis and simulations.
A Radio Channel Estimation Scheme Using the CQI Feedback Information in High Speed Downlink Packet Access
['Junsu Kim', 'Young-Jun Hong', 'Dan Keun Sung']
international conference on communications
2006
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on international conference on communications with title 'A Radio Channel Estimation Scheme Using the CQI Feedback Information in High Speed Downlink Packet Access', write a 111-words section 'Literature Review': In HSDPA, mobile terminals measure their own radio channel status and report the quality to base station using a channel quality indicator (CQI). The main purpose of the CQI feedback is to utilize this information for scheduling and adaptive modulation and coding (AMC) in a base station. However, since CQI is another version of signal-to-interference and noise ratio (SINR), we can extract more information about radio channel states of mobile terminals. In this paper, we propose an effective two-dimensional radio channel estimation scheme from the current and past CQI information for each mobile terminal. We show the effectiveness of the proposed scheme by evaluating the performance through mathematical analysis and simulations.
gen_section
0
Optimal scheduling of steelmaking production contributes to boosting productivity, reducing costs and achieving sustainable manufacturing for an integrated steel company. However, the optimal schedule is always difficult to implement in the real-world production system, because its optimality and feasibility are affected by various uncertain factors. In this paper, we study an uncertain scheduling problem arising from the steelmaking-continuous casting (SCC) production process which considers the cost and penalty objectives. To solve this problem, we propose a prediction-based online soft scheduling (OLSS) algorithm which belongs to predictive-reactive approach. In the proposed algorithm, a surrogate model named Gaussian process regression (GPR) is used to predict the characteristic index, slack ratio, which is able to trade off the objectives between the cost and the penalty of cast-breaks. When new batches are released to the shop floor, the soft schedule including critical decisions and characteristic indexes is determined by a dynamic optimization algorithm based on the predicted value. In the reactive phase, a heuristic method is presented to determine other non-critical decisions. Finally, the computational results show that the OLSS outperforms other algorithms in penalty objective, and obtains approximate effects in cost objective.
A prediction-based online soft scheduling algorithm for the real-world steelmaking-continuous casting production
['Shenglong Jiang', 'Min Liu', 'Jianhua Lin', 'Huaxing Zhong']
Knowledge Based Systems
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 35-words sample abstract on the following topic based on following title 'A prediction-based online soft scheduling algorithm for the real-world steelmaking-continuous casting production'; Knowledge Based Systems
gen_full_metadata
abstract
The fundamental concepts used to evaluate multipath effects date back over 50 years. Today's technology can support wide-bandwidth communications and radar systems that were not available or considered when these multipath concepts were being formulated. This paper presents a slightly modified version of the original analytical approaches for evaluating multipath effects and compares the predicted multipath with data collected from a wideband instrumentation radar. The multipath model presented herein covers both the specular (coherent) and diffuse (noncoherent) components of multipath. The test data were collected for conditions strongly favoring diffuse multipath, but the experimental technique supported detection of any unanticipated specular contributions. Because the purpose of this validation effort was to perform an in-depth examination of multipath effects, the demanding test conditions revealed a couple of real-world effects that had to be addressed. After incorporating these effects into the analytical multipath formulations, we were able to show very close agreement between the predicted and observed multipath.
Comparison of Predicted and Measured Multipath Impulse Responses
['Kent Haspert', 'Michael Tuley']
IEEE Transactions on Aerospace and Electronic Systems
2011
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 133-words sample abstract on the following topic based on following title 'Comparison of Predicted and Measured Multipath Impulse Responses'; IEEE Transactions on Aerospace and Electronic Systems
gen_full_metadata
abstract
In the past, although mobile devices were equipped with multiple radio interfaces, for the sake of power saving, only one was activated for data transmission. The idea of concurrent transmission via multiple radio interfaces has not been seriously studied. However, nowadays, power consumption no longer is a problem in many application scenarios, e. g. VANETs. In this work, we investigate the performance improvement of concurrent file transfer over two heterogeneous radio networks, e. g. WiFi and 3G. The traditional File Transfer Protocol is modified to utilize two heterogeneous radio connections and experiments are executed over the Internet and a 3G data network to measure the latency and average bandwidth. Our results show that it is possible to integrate the bandwidth of both radio networks.
File Transfer for Mobile Devices in Heterogeneous Radio Networks
['Chih-Wei Yi', 'Shau-Shiuan Yang', 'Yi-Bing Lin', 'Yi-Ta Chuang', 'Pin-Chuan Liu']
vehicular technology conference
2010
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more professional in tone: In the past, although mobile devices were equipped with multiple radio interfaces, for the sake of power saving, only one was activated for data transmission. The idea of concurrent transmission via multiple radio interfaces has not been seriously studied. However, nowadays, power consumption no longer is a problem in many application scenarios, e. g. VANETs. In this work, we investigate the performance improvement of concurrent file transfer over two heterogeneous radio networks, e. g. WiFi and 3G. The traditional File Transfer Protocol is modified to utilize two heterogeneous radio connections and experiments are executed over the Internet and a 3G data network to measure the latency and average bandwidth.
enhance
0
In this letter, we consider the problem of maximizing the number of lightpaths that may be established in a wavelength routed optical network (WRON), given a connection matrix, i. e. , a static set of demands, and the number of wavelengths the fiber supports. The problem of establishing all the connections of the connection matrix using the fewest number of wavelengths has been investigated in Banerjee and Mukherjee (1996) and Baroni et al. (1998). We call the former problem max-RWA (problem of maximizing the number of lightpaths) and the latter problem min-RWA (minimizing the number of wavelengths). In this letter, we only consider WRONs with no wavelength conversion capabilities. We formulate the max-RWA problem when no wavelength conversion is allowed as an integer linear programme (ILP) which may be solved to obtain an optimum solution. We hope to solve the ILP exactly for small size networks (few nodes). For moderately large networks (tens of nodes) we develop algorithms based on solutions obtained by solving the LP-relaxation of the ILP formulation. Results obtained for networks such as NSFNET and EONNET are presented.
Algorithms for routing and wavelength assignment based on solutions of LP-relaxations
['Rajesh M. Krishnaswamy', 'Kumar N. Sivarajan']
IEEE Communications Letters
2001
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more casual in tone: In this letter, we consider the problem of maximizing the number of lightpaths that may be established in a wavelength routed optical network (WRON), given a connection matrix, i. e. , a static set of demands, and the number of wavelengths the fiber supports. The problem of establishing all the connections of the connection matrix using the fewest number of wavelengths has been investigated in Banerjee and Mukherjee (1996) and Baroni et al. (1998). We call the former problem max-RWA (problem of maximizing the number of lightpaths) and the latter problem min-RWA (minimizing the number of wavelengths). In this letter, we only consider WRONs with no wavelength conversion capabilities.
enhance
0
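For the max-RWA record above, one common link-path style ILP is sketched below as an illustration; the paper's exact formulation may differ. Here x_{p,w} indicates that candidate path p is set up on wavelength w, P(d) is the set of candidate paths for demand d with requirement t_d, and W is the number of wavelengths per fiber. Relaxing the integrality constraints yields the LP whose solution the rounding heuristics work from.

```latex
% Illustrative link-path ILP for max-RWA (no wavelength conversion):
% a path keeps one wavelength end-to-end, and no two accepted paths may
% share a link on the same wavelength.
\begin{align}
  \max \;& \sum_{d} \sum_{p \in P(d)} \sum_{w=1}^{W} x_{p,w} \\
  \text{s.t. }
  & \sum_{p \in P(d)} \sum_{w=1}^{W} x_{p,w} \le t_d && \forall d \\
  & \sum_{p \,\ni\, \ell} x_{p,w} \le 1 && \forall \text{ link } \ell,\; \forall w \\
  & x_{p,w} \in \{0,1\} \quad (\text{relaxed to } 0 \le x_{p,w} \le 1 \text{ in the LP})
\end{align}
```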
Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere, and because many are domain specific, not to be found in bilingual dictionaries. We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual resources. We report on the application and evaluation of this algorithm in translating Arabic named entities to English. We also compare our results with the results obtained from human translations and a commercial system for the same task.
Translating Named Entities Using Monolingual and Bilingual Resources
['Yaser Al-Onaizan', 'Kevin Knight']
meeting of the association for computational linguistics
2002
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on meeting of the association for computational linguistics with title 'Translating Named Entities Using Monolingual and Bilingual Resources', write a 86-words section 'Methodology': Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere, and because many are domain specific, not to be found in bilingual dictionaries. We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual resources. We report on the application and evaluation of this algorithm in translating Arabic named entities to English. We also compare our results with the results obtained from human translations and a commercial system for the same task.
gen_section
0
Language classification is a preliminary step for most natural-language related processes. The significant quantity of multilingual documents poses a problem for traditional language-classification schemes and requires segmentation of the document to monolingual sections. This phenomenon is characteristic of classical and medieval Jewish literature, which frequently mixes Hebrew, Aramaic, Judeo-Arabic and other Hebrew-script languages. We propose a method for classification and segmentation of multi-lingual texts in the Hebrew character set, using bigram statistics. For texts, such as the manuscripts found in the Cairo Genizah, we are also forced to deal with a significant level of noise in OCR-processed text.
Language Classification and Segmentation of Noisy Documents in Hebrew Scripts
['Alex Zhicharevich', 'Nachum Dershowitz']
nan
2012
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 98-words sample abstract on the following topic based on following title 'Language Classification and Segmentation of Noisy Documents in Hebrew Scripts'; nan
gen_full_metadata
abstract
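The record above classifies and segments Hebrew-script text using character-bigram statistics. A minimal sketch of that general idea — bigram frequency profiles compared by cosine similarity, applied to fixed-size windows — is given below. The corpora, window size, and function names are placeholders, and the paper's handling of OCR noise and of segment boundaries is not reproduced.

```python
from collections import Counter

def bigram_profile(text):
    """Relative frequencies of character bigrams in a text."""
    pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(pairs.values()) or 1
    return {bg: c / total for bg, c in pairs.items()}

def cosine(p, q):
    dot = sum(p[k] * q[k] for k in p if k in q)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / ((norm(p) * norm(q)) or 1.0)

def classify(snippet, profiles):
    """Assign the language whose bigram profile is closest to the snippet's."""
    return max(profiles, key=lambda lang: cosine(bigram_profile(snippet), profiles[lang]))

# hypothetical monolingual training corpora (Hebrew-script languages in practice)
corpora = {"hebrew": "...", "aramaic": "...", "judeo-arabic": "..."}
profiles = {lang: bigram_profile(text) for lang, text in corpora.items()}
# segmentation idea: classify fixed-size windows of a document, e.g.
# labels = [classify(doc[i:i + 200], profiles) for i in range(0, len(doc), 200)]
```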
This version includes new aspects that improve the computation of the counting efficiency for each one of the three available atomic rearrangement detection models (i. e. , KLM, KL1L2L3M and KLMN). The first modification involves a correction algorithm that simulates the non-linear response of the detector to photoionization for low-energy X-ray photons. Although this correction has the inconvenience of substantially increasing the number of atomic rearrangement detection pathways, the computed counting efficiency for low-Z nuclides is reduced by 2% for moderate quenching, in agreement with experiment. The program also simulates how the addition of extra components, such as a quencher or aqueous solutions, affects the counting efficiency. Since the CIEMAT/NIST method requires identical ionization quench functions for the electron-capture nuclide and the tracer, the computation of the counting efficiency for 3H, the low-energy beta-ray emitter commonly used as tracer, is included in the program as an option. Program summary: Title of program: EMILIA. Catalogue identifier: ADWK. Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computer for which the program is designed: any IBM PC compatible with 80386 or higher Intel processors. Operating systems under which the program has been tested: MS-DOS and higher systems. Programming language used: FORTRAN 77. Memory required to execute with typical data: 253 kword. No. of bits in a word: 32. No. of lines in distributed program, including test data, etc.: 7147. No. of bytes in distributed program, including test data, etc.: 74 776. Distribution format: tar.gz. Nature of the physical problem: The determination of radioactivity in liquid samples of electron-capture nuclides is demanded in radiation physics, radiation protection, dosimetry, radiobiology and nuclear medicine. The CIEMAT/NIST method has proved to be suitable for radionuclide standardizations when the counting efficiency of the liquid-scintillation spectrometer is sufficiently high. Although the method has widely been applied to beta-ray nuclides, its applicability to electron-capture nuclides nowadays has not the required degree of accuracy. The inaccuracies of the method are mainly induced by the huge number of low-energy electrons and X-ray photons emitted by the atomic rearrangement cascade after the electron-capture process, which are efficiently detected by liquid-scintillation counting, but are of difficult modeling due to the inherent complexity of the atomic rearrangement process and the non-linear response of the spectrometer in the low-energy range. Solution method: A detailed simulation of the non-linear response of the spectrometer to photoionization must include the radiation emitted by the atomic rearrangement cascade. However, a model considering all possible scintillator de-excitations at atomic level increases exponentially the number of atomic rearrangement detection pathways subsequent to capture. Since the contribution of the non-linear effects to the counting efficiency are only corrective, we can approximate the reduced energy involved in the photoionization process to a sum of only three terms: the photoelectron energy, the mean energy of the KXY Auger electrons and the global energy contribution of the remaining radiation (electrons and X-rays) emitted by the atomic rearrangement cascade. The value of each term depends on the nature of the atom in which photoabsorption is produced and on the atom shell from where the photoelectron is ejected.
The non-linear correction required to simulate low-energy X-ray photoabsorption is basically important when the scintillator cocktail contains elements of high atomic numbers. For heavy atoms, the K- and L-shell binding energies can be slightly less than the energy of the colliding photon. For such cases, the non-linear effects can play an important role. Restrictions on the complexity of the problem: The simulation of all possible detection pathways of atomic rearrangement that follow photoionization complicates the problem unnecessarily. To correct the non-linear effects we consider only significant photoelectric interactions with the K- and L-shells. Also we assume that K-shell photoionizations only generate three types of entities: the photoelectron itself, KXY Auger electrons and a radiation group that includes the remaining emitted particles. For L-shell photoelectrons the radiation emission subsequent to photoionization is considered as a whole.
EMILIA, the LS counting efficiency for electron-capture and capture-gamma emitters
['A. Grau Carles']
Computer Physics Communications
2006
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on Computer Physics Communications with title 'EMILIA, the LS counting efficiency for electron-capture and capture-gamma emitters', write a 14-words section 'Introduction': This version includes new aspects that improve the computation of the counting efficiency for each one of the three available atomic rearrangement detection models (i. e. , KLM, KL1L2L3M and KLMN). The first modification involves a correction algorithm that simulates the non-linear response of the detector to photoionization for low-energy X-ray photons. Although this correction has the inconvenience of substantially increasing the number of atomic rearrangement detection pathways, the computed counting efficiency for low-Z nuclides is reduced by 2 for moderate quenching in agreement with experiment. The program also simulates how the addition of extra components, such as a quencher or aqueous solutions, affects the counting efficiency. Since the CIEMAT/NIST method requires identical ionization quench functions for the electron-capture nuclide and the tracer, the computation of the counting efficiency for 3H, the low-energy beta-ray emitter commonly used as tracer, is included in the program as an option. summary of program:EMILIA identifier:ADWK summary URL: obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland previsions: none: revisions: any IBM PC compatible with 80386 or higher Intel processors systems under which the program has been tested:MS-DOS and higher systems language used:FORTRAN 77 required to execute with typical data: 253 kword. of bits in a word: 32. of lines in distributed program, including test data, etc. :7147. of bytes in distributed program, including test data, etc. :74â776 format:tar. gz of the physical problem: The determination of radioactivity in liquid samples of electron-capture nuclides is demanded in radiation physics, radiation protection, dosimetry, radiobiology and nuclear medicine. The CIEMAT/NIST method has proved to be suitable for radionuclide standardizations when the counting efficiency of the liquid-scintillation spectrometer is sufficiently high. Although the method has widely been applied to beta-ray nuclides, its applicability to electron-capture nuclides nowadays has not the required degree of accuracy. The inaccuracies of the method are mainly induced by the huge number of low-energy electrons and X-ray photons emitted by the atomic rearrangement cascade after the electron-capture process, which are efficiently detected by liquid-scintillation counting, but are of difficult modeling due to the inherent complexity of the atomic rearrangement process and the non-linear response of the spectrometer in the low-energy range. method: A detailed simulation of the non-linear response of the spectrometer to photoionization must include the radiation emitted by the atomic rearrangement cascade. However, a model considering all possible scintillator de-excitations at atomic level increases exponentially the number of atomic rearrangement detection pathways subsequent to capture. Since the contribution of the non-linear effects to the counting efficiency are only corrective, we can approximate the reduced energy involved in the photoionization process to a sum of only three terms: the photoelectron energy, the mean energy of the KXY Auger electrons and the global energy contribution of the remaining radiation (electrons and X-rays) emitted by the atomic rearrangement cascade. 
The value of each term depends on the nature of the atom in which photoabsorption is produced and on the atom shell from where the photoelectron is ejected. The non-linear correction required to simulate low-energy X-ray photoabsorption is basically important when the scintillator cocktail contains elements of high atomic numbers. For heavy atoms, the K- and L-shell binding energies can be slightly less than the energy of the colliding photon. For such cases, the non-linear effects can play an important role. on the complexity of the problem: The simulation of all possible detection pathways of atomic rearrangement that follow to photoionization complicates the problem unnecessary. To correct the non-linear effects we consider only significative photoelectric interactions with the K- and L-shells. Also we assume that K-shell photoionizations only generate three types of entities: the photoelectron itself, KXY Auger electrons and a radiation group that includes the remaining emitted particles. For L-shell photoelectrons the radiation emission subsequent to photoionization is considered as a whole.
gen_section
0
Emerging 3D die-stacked DRAM technology is one of the most promising solutions for future memory architectures to satisfy the ever-increasing demands on performance, power, and cost. This paper introduces CACTI-3DD, the first architecture-level integrated power, area, and timing modeling framework for 3D die-stacked off-chip DRAM main memory. CACTI-3DD includes TSV models, improves models for 2D off-chip DRAM main memory over current versions of CACTI, and includes 3D integration models that enable the analysis of a full spectrum of 3D DRAM designs from coarse-grained rank-level 3D stacking to bank-level 3D stacking. CACTI-3DD enables an in-depth study of architecture-level tradeoffs of power, area, and timing for 3D die-stacked DRAM designs. We demonstrate the utility of CACTI-3DD in analyzing design trade-offs of emerging 3D die-stacked DRAM main memories. We find that a coarse-grained 3D DRAM design that stacks canonical DRAM dies can only achieve marginal benefits in power, area, and timing compared to the original 2D design. To fully leverage the huge internal bandwidth of TSVs, DRAM dies must be re-architected, and system implications must be considered when building 3D DRAMs with redesigned 2D planar DRAM dies. Our results show that the 3D DRAM with re-architected DRAM dies achieves significant improvements in power and timing compared to the coarse-grained 3D die-stacked DRAM.
CACTI-3DD: architecture-level modeling for 3D die-stacked DRAM main memory
['Ke Chen', 'Sheng Li', 'Naveen Muralimanohar', 'Jung Ho Ahn', 'Jay B. Brockman', 'Norman P. Jouppi']
design, automation, and test in europe
2012
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more academic in tone: Emerging 3D die-stacked DRAM technology is one of the most promising solutions for future memory architectures to satisfy the ever-increasing demands on performance, power, and cost. This paper introduces CACTI-3DD, the first architecture-level integrated power, area, and timing modeling framework for 3D die-stacked off-chip DRAM main memory. CACTI-3DD includes TSV models, improves models for 2D off-chip DRAM main memory over current versions of CACTI, and includes 3D integration models that enable the analysis of a full spectrum of 3D DRAM designs from coarse-grained rank-level 3D stacking to bank-level 3D stacking. CACTI-3DD enables an in-depth study of architecture-level tradeoffs of power, area, and timing for 3D die-stacked DRAM designs. We demonstrate the utility of CACTI-3DD in analyzing design trade-offs of emerging 3D die-stacked DRAM main memories. We find that a coarse-grained 3D DRAM design that stacks canonical DRAM dies can only achieve marginal benefits in power, area, and timing compared to the original 2D design. To fully leverage the huge internal bandwidth of TSVs, DRAM dies must be re-architected, and system implications must be considered when building 3D DRAMs with redesigned 2D planar DRAM dies. Our results show that the 3D DRAM with re-architected DRAM dies achieves significant improvements in power and timing compared to the coarse-grained 3D die-stacked DRAM.
enhance
0
It is usual to consider data protection and learnability as conflicting objectives. This is not always the case: we show how to jointly control causal inference --- seen as the attack --- and learnability by a noise-free process that mixes training examples, the Crossover Process (cp). One key point is that the cp is typically able to alter joint distributions without touching on marginals, nor altering the sufficient statistic for the class. In other words, it saves (and sometimes improves) generalization for supervised learning, but can alter the relationship between covariates --- and therefore fool statistical measures of (nonlinear) independence and causal inference into misleading ad-hoc conclusions. Experiments on a dozen readily available domains validate the theory.
The Crossover Process: Learnability meets Protection from Inference Attacks
['Richard Nock', 'Giorgio Patrini', 'Finnian Lattimore', 'Tibério S. Caetano']
arXiv: Learning
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more casual in tone: It is usual to consider data protection and learnability as conflicting objectives. This is not always the case: we show how to jointly control causal inference --- seen as the attack --- and learnability by a noise-free process that mixes training examples, the Crossover Process (cp). One key point is that the cp is typically able to alter joint distributions without touching on marginals, nor altering the sufficient statistic for the class. In other words, it saves (and sometimes improves) generalization for supervised learning, but can alter the relationship between covariates --- and therefore fool statistical measures of (nonlinear) independence and causal inference into misleading ad-hoc conclusions. Experiments on a dozen readily available domains validate the theory.
enhance
0
Data privacy is a stringent need when sharing and processing data on a distributed environment or in Internet of Things. Collaborative privacy-preserving data mining based on secured multiparty computation incur high communication and computational cost. Data anonymization is a promising technique in the field of privacy-preserving data mining used to protect the data against identity disclosure. Information loss and common attacks possible on the anonymized data are serious challenges of anonymization. Recently, data anonymization using data mining techniques has showed significant improvement in data utility. Still the existing techniques lack in effective handling of attacks. Hence in this paper, an anonymization algorithm based on clustering and resilient to similarity attack and probabilistic inference attack is proposed. The anonymized data is distributed on Hadoop Distributed File System. The method achieves a better trade-off between privacy and utility. In our work the data utility is measured in terms of accuracy and FMeasure with respect to different classifiers. Experiments show that the accuracy, FMeasure and the execution time of the classification algorithms on the privacy-preserved data sets formed by the proposed clustering algorithms are better than the existing algorithms.
Privacy and utility preserving data clustering for data anonymization and distribution on Hadoop
['J. Jesu Vedha Nayahi', 'V. Kavitha']
Future Generation Computer Systems
2017
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 126-words sample abstract on the following topic based on following title 'Privacy and utility preserving data clustering for data anonymization and distribution on Hadoop'; Future Generation Computer Systems
gen_full_metadata
abstract
NodeTrix representations are a popular way to visualize clustered graphs; they represent clusters as adjacency matrices and inter-cluster edges as curves connecting the matrix boundaries. We study the complexity of constructing NodeTrix representations focusing on planarity testing problems, and we show several NP-completeness results and some polynomial-time algorithms. Building on such algorithms we develop a JavaScript library for NodeTrix representations aimed at reducing the crossings between edges incident to the same matrix.
Computing NodeTrix Representations of Clustered Graphs
['Giordano Da Lozzo', 'Giuseppe Di Battista', 'Fabrizio Frati', 'Maurizio Patrignani']
graph drawing
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 72-words sample abstract on the following topic based on following title 'Computing NodeTrix Representations of Clustered Graphs'; graph drawing
gen_full_metadata
abstract
An important condition that has to be satisfied when implementing the gain-transfer method is that the fields in the test zone of the measurement facility should be as close to the plane wave as possible. In this paper, the effect of amplitude and phase deviation from a perfect plane wave, on gain measurements for microwave aperture antennas, conducted via the gain-transfer method, is determined and quantified. The pyramidal horn antenna is used as a basis for all calculations as it is the universal standard for microwave antenna gain measurements. Coupling, between the antenna being illuminated and the test zone fields, is evaluated by means of the reciprocity theorem. Test zone field variations are simulated and the effect thereof, on the predicted measured gain, is illustrated.
Horn antenna analysis as applied to the evaluation of the gain-transfer method
['Gordon Mayhew-Ridgers', 'Johann W. Odendaal', 'Johan Joubert']
IEEE Transactions on Instrumentation and Measurement
2000
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 108-words sample abstract on the following topic based on following title 'Horn antenna analysis as applied to the evaluation of the gain-transfer method'; IEEE Transactions on Instrumentation and Measurement
gen_full_metadata
abstract
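The gain-transfer (gain-comparison) method evaluated in the record above rests on a standard relation: if the antenna under test (AUT) and a standard-gain horn are illuminated by the same, ideally plane-wave, test-zone field, the unknown gain follows from the received-power difference in dB. The sketch below encodes that textbook relation under the assumption of identical test-zone conditions — precisely the assumption whose violation the paper quantifies; the numbers and function name are made up for illustration.

```python
def gain_transfer_dB(gain_std_dBi, p_received_aut_dBm, p_received_std_dBm):
    """Gain-comparison method: G_AUT = G_std + (P_AUT - P_std) in dB,
    assuming both antennas see the same test-zone field."""
    return gain_std_dBi + (p_received_aut_dBm - p_received_std_dBm)

# example: standard horn of 22.1 dBi, AUT receives 3.4 dB less power
print(gain_transfer_dB(22.1, -41.2, -37.8))   # -> 18.7 dBi
```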
The Chien search process is the most complex block in the decoding of Bose-Chaudhuri-Hocquenghem (BCH) codes. Since the BCH codes conduct the bit-by-bit error correction, they often need a parallel implementation for high throughput applications. The parallel implementation obviously needs much increased hardware. In this paper, we propose a strength-reduced architecture for the parallel Chien search process. The proposed method transforms the expensive modulo-f(x) multiplications into shift operations, by which not only the hardware for multiplications but also that for additions are much reduced. One example shows that the hardware complexity is reduced by 90% in the implementation of binary BCH (8191, 7684, 39) code with the parallel factor of 64. Consequently, it is possible to achieve a speedup of 64 with only 13 times the hardware complexity when compared with the serial processing.
Strength-Reduced Parallel Chien Search Architecture for Strong BCH Codes
['Junho Cho', 'Wonyong Sung']
IEEE Transactions on Circuits and Systems Ii-express Briefs
2008
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on IEEE Transactions on Circuits and Systems Ii-express Briefs with title 'Strength-Reduced Parallel Chien Search Architecture for Strong BCH Codes', write a 24-words section 'Introduction': The Chien search process is the most complex block in the decoding of Bose-Chaudhuri-Hocquenghem (BCH) codes. Since the BCH codes conduct the bit-by-bit error correction, they often need a parallel implementation for high throughput applications. The parallel implementation obviously needs much increased hardware. In this paper, we propose a strength-reduced architecture for the parallel Chien search process. The proposed method transforms the expensive modulo-f(x) multiplications into shift operations, by which not only the hardware for multiplications but also that for additions are much reduced. One example shows that the hardware complexity is reduced by 90% in the implementation of binary BCH (8191, 7684, 39) code with the parallel factor of 64. Consequently, it is possible to achieve a speedup of 64 with only 13 times the hardware complexity when compared with the serial processing.
gen_section
0
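For context on the record above: the Chien search simply evaluates the error-locator polynomial at every nonzero element of the Galois field and reports the roots, which mark the error locations. The sketch below is a plain serial search over the small field GF(2^4) (primitive polynomial assumed to be x^4 + x + 1); it illustrates the computation being parallelized, not the paper's strength-reduced parallel architecture.

```python
# Serial Chien search over GF(2^4), primitive polynomial x^4 + x + 1 (0b10011).
M, PRIM, N = 4, 0b10011, 15           # field GF(2^4), multiplicative order 15

exp = [1] * N                          # exp[i] = alpha^i
for i in range(1, N):
    v = exp[i - 1] << 1                # multiply previous power by alpha (= x)
    exp[i] = v ^ PRIM if v & 0x10 else v
log = {exp[i]: i for i in range(N)}    # discrete log table

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % N]

def chien_search(lam):
    """lam[j] is the GF(2^4) coefficient of x^j in the error locator Lambda(x)."""
    roots = []
    for i in range(N):                 # test x = alpha^i
        acc = 0
        for j, c in enumerate(lam):
            acc ^= gf_mul(c, exp[(i * j) % N])
        if acc == 0:
            roots.append(i)            # alpha^i is a root of Lambda
    return roots

# Lambda(x) = (1 + alpha^3 x)(1 + alpha^7 x): roots at alpha^{-3} = alpha^12
# and alpha^{-7} = alpha^8, so the search reports exponents 8 and 12.
lam = [1, exp[3] ^ exp[7], gf_mul(exp[3], exp[7])]
print(chien_search(lam))               # -> [8, 12]
```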
This paper presents a new method for inverse geometric reconstruction of conics in 3D space using a ray–surface intersection. The perspective views of the conic in both the image planes are used as the input of the reconstruction algorithm. Least-squares curve fitting is used in one of the 2D image planes to obtain the algebraic equation of the projected conic. The ray–surface intersection is performed using a second-order method, where a new criterion is given to provide the unique intersection. A plane is fitted through the evolved intersection points. The constructed plane cuts the conical surface to the desired conic. The proposed method does not require establishing correspondence between the two perspective views. Moreover, it requires only three intersection points. Various experiments are presented to support the validity of the proposed algorithm. Simulation studies are also performed to observe the effect of noise on errors of reconstruction. The effect of quantization errors is also considered in the final reconstruction.
Stereo Vision-Based Conic Reconstruction Using a Ray-Quadric Intersection
['Deepika Saini', 'Sanjeev Kumar']
International Journal of Image and Graphics
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on International Journal of Image and Graphics with title 'Stereo Vision-Based Conic Reconstruction Using a Ray-Quadric Intersection', write a 133-words section 'Literature Review': This paper presents a new method for inverse geometric reconstruction of conics in 3D space using a ray–surface intersection. The perspective views of the conic in both the image planes are used as the input of the reconstruction algorithm. Least-squares curve fitting is used in one of the 2D image planes to obtain the algebraic equation of the projected conic. The ray–surface intersection is performed using a second-order method, where a new criterion is given to provide the unique intersection. A plane is fitted through the evolved intersection points. The constructed plane cuts the conical surface to the desired conic. The proposed method does not require establishing correspondence between the two perspective views. Moreover, it requires only three intersection points. Various experiments are presented to support the validity of the proposed algorithm. Simulation studies are also performed to observe the effect of noise on errors of reconstruction. The effect of quantization errors is also considered in the final reconstruction.
gen_section
0
Social media has enabled a new breed of soft sensors that enriches the IoT paradigm with new forms of data. The present work introduces a novel approach for personal mobility mining that combines these new data-sources with built-in sensors of a smart-phone in order to timely extract personal mobility pattens by means of the Complex Event Processing (CEP) approach. Unlike previous solutions, the present work profits from both the textual and location data of social-network sites by also dealing with the actual scarcity of geo-tagged documents in those sites. Finally, a preliminary study of the feasibility of our proposal is stated.
Towards human mobility extraction based on social media with Complex Event Processing
['Fernando Terroso-Saenz', 'Mercedes Valdes-Vela', 'Antonio F. Skarmeta-Gomez']
the internet of things
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 101-words of the given paper text with the title 'Towards human mobility extraction based on social media with Complex Event Processing': Social media has enabled a new breed of soft sensors that enriches the IoT paradigm with new forms of data. The present work introduces a novel approach for personal mobility mining that combines these new data-sources with built-in sensors of a smart-phone in order to timely extract personal mobility pattens by means of the Complex Event Processing (CEP) approach. Unlike previous solutions, the present work profits from both the textual and location data of social-network sites by also dealing with the actual scarcity of geo-tagged documents in those sites. Finally, a preliminary study of the feasibility of our proposal is stated.
continue
1
A model-based probabilistic method of human action recognition is presented in this paper. We employ supervised neighborhood preserving embedding (NPE) to preserve the underlying structure of articulated action space during dimensionality reduction. Generative recognition structures like Hidden Markov Models often have to make unrealistic assumptions on the conditional independence and can not accommodate long term contextual dependencies. Moreover, generative models usually require a considerable number of observations for certain gesture classes and may not uncover the distinctive configuration that sets one gesture class uniquely against others. In this work, we adopt hidden conditional random fields (HCRF) to model and classify actions in a discriminative formulation. Experiments on a recent database have demonstrated that our approach can recognize human actions accurately with temporal, intra- and inter-person variations.
Human Action Recognition Using Manifold Learning and Hidden Conditional Random Fields
['Fawang Liu', 'Yunde Jia']
international conference for young computer scientists
2008
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more technical in tone: A model-based probabilistic method of human action recognition is presented in this paper. We employ supervised neighborhood preserving embedding (NPE) to preserve the underlying structure of articulated action space during dimensionality reduction. Generative recognition structures like Hidden Markov Models often have to make unrealistic assumptions on the conditional independence and can not accommodate long term contextual dependencies. Moreover, generative models usually require a considerable number of observations for certain gesture classes and may not uncover the distinctive configuration that sets one gesture class uniquely against others. In this work, we adopt hidden conditional random fields (HCRF) to model and classify actions in a discriminative formulation. Experiments on a recent database have demonstrated that our approach can recognize human actions accurately with temporal, intra- and inter-person variations.
enhance
0
In this letter, a wireless local multipoint distribution system (LMDS) at millimeter waves for the last-mile broad-band distribution to users of interactive services is investigated. The system analyzed employs a coded orthogonal frequency-division multiplexing transmission scheme with frequency-division multiplexing (FDM) and/or time-division multiplexing techniques and adaptive carrier allocation to counteract the effects of the wireless communication channel. The idea of deploying the reverse channel (exploited by the user for interactivity purposes) to provide channel information to the broadcasting transmitter is introduced. System performance is evaluated for an urban microcellular radio system in an actual propagation environment and the choices of the optimum multiplexing technique and carrier allocation algorithm are discussed in the case of ideal feedback. It is found that a pure FDM technique combined with an adaptive carrier allocation algorithm giving priority to users having the largest path loss leads to the best performance.
Adaptive time and frequency resource assignment with COFDM for LMDS systems
['Velio Tralli', 'A. Vaccari', 'Roberto Verdone', 'Oreste Andrisano']
IEEE Transactions on Communications
2001
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 116-words of the given paper text with the title 'Adaptive time and frequency resource assignment with COFDM for LMDS systems': In this letter, a wireless local multipoint distribution system (LMDS) at millimeter waves for the last-mile broad-band distribution to users of interactive services is investigated. The system analyzed employs a coded orthogonal frequency-division multiplexing transmission scheme with frequency-division multiplexing (FDM) and/or time-division multiplexing techniques and adaptive carrier allocation to counteract the effects of the wireless communication channel. The idea of deploying the reverse channel (exploited by the user for interactivity purposes) to provide channel information to the broadcasting transmitter is introduced. System performance is evaluated for an urban microcellular radio system in an actual propagation environment and the choices of the optimum multiplexing technique and carrier allocation algorithm are discussed in the case of ideal feedback.
continue
1
A token-based distributed mutual exclusion algorithm is presented. The algorithm assumes a fully connected, reliable physical network and a directed acyclic graph (DAG) structured logical network. The number of messages required to provide mutual exclusion is dependent upon the logical topology imposed on the nodes. Using the best topology, the algorithm attains comparable performance to a centralized mutual exclusion algorithm; i. e. , three messages per critical section entry. The algorithm achieves minimal heavy-load synchronization delay and imposes very little storage overhead.
A DAG-based algorithm for distributed mutual exclusion
['Mitchell L. Neilsen', 'Masaaki Mizuno']
international conference on distributed computing systems
1991
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 82-words sample abstract on the following topic based on following title 'A DAG-based algorithm for distributed mutual exclusion'; international conference on distributed computing systems
gen_full_metadata
abstract
The predictive hierarchical level of detail optimization algorithm of Mason and Blake is experimentally evaluated in the form of a practical application to hierarchical radiosity. In a novel approach the recursively subdivided patch hierarchy generated by a perceptually refined hierarchical radiosity algorithm is treated as a hierarchical level of detail scene description. In this way we use the Mason-Blake algorithm to successfully maintain constant frame rates during the interactive rendering of the radiosity-generated scene. We establish that the algorithm is capable of maintaining uniform frame rendering times, but that the execution time of the optimization algorithm itself is significant and is strongly dependent on frame-to-frame coherence and the granularity of the level of detail description. To compensate we develop techniques which effectively reduce and limit the algorithm execution time: We restrict the execution times of the algorithm to guard against pathological situations and propose simplification transforms that increase the granularity of the scene description, at minimal cost to visual quality. We demonstrate that using these techniques the algorithm is capable of maintaining interactive frame rates for scenes of arbitrary complexity. Furthermore we provide guidelines for the appropriate use of predictive level of detail optimization
Hierarchical Level of Detail Optimization for Constant Frame Rate Rendering of Radiosity Scenes
['Shaun Nirenstein', 'Edwin H. Blake', 'Simon Winberg', 'Ashton E. W. Mason']
South African Computer Journal
2002
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on South African Computer Journal with title 'Hierarchical Level of Detail Optimization for Constant Frame Rate Rendering of Radiosity Scenes', write a 160-words section 'Conclusion': The predictive hierarchical level of detail optimization algorithm of Mason and Blake is experimentally evaluated in the form of a practical application to hierarchical radiosity. In a novel approach the recursively subdivided patch hierarchy generated by a perceptually refined hierarchical radiosity algorithm is treated as a hierarchical level of detail scene description. In this way we use the Mason-Blake algorithm to successfully maintain constant frame rates during the interactive rendering of the radiosity-generated scene. We establish that the algorithm is capable of maintaining uniform frame rendering times, but that the execution time of the optimization algorithm itself is significant and is strongly dependent on frame-to-frame coherence and the granularity of the level of detail description. To compensate we develop techniques which effectively reduce and limit the algorithm execution time: We restrict the execution times of the algorithm to guard against pathological situations and propose simplification transforms that increase the granularity of the scene description, at minimal cost to visual quality. We demonstrate that using these techniques the algorithm is capable of maintaining interactive frame rates for scenes of arbitrary complexity. Furthermore we provide guidelines for the appropriate use of predictive level of detail optimization
gen_section
0
This paper presents the design and implementation of a parallel two-dimensional/three-dimensional (2-D/3-D) image registration method for computer-assisted surgery. Our method exploits data and speculative parallelism, aiming at making computation time short enough to carry out registration tasks during surgery. Our experiments show that exploiting both parallelisms reduces computation time on a cluster of 64 PCs from a few tens of minutes to less than a few tens of seconds
A Parallel Implementation of 2-D/3-D Image Registration for Computer-Assisted Surgery
['Fumihiko Ino', 'Y. Kawasaki', 'Takahito Tashiro', 'Yoshikazu Nakajima', 'Yoshinobu Sato', 'Shinichi Tamura', 'Kenichi Hagihara']
international conference on parallel and distributed systems
2005
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 69-words of the given paper text with the title 'A Parallel Implementation of 2-D/3-D Image Registration for Computer-Assisted Surgery': This paper presents the design and implementation of a parallel two-dimensional/three-dimensional (2-D/3-D) image registration method for computer-assisted surgery. Our method exploits data and speculative parallelism, aiming at making computation time short enough to carry out registration tasks during surgery. Our experiments show that exploiting both parallelisms reduces computation time on a cluster of 64 PCs from a few tens of minutes to less than a few tens of seconds
continue
1
Designing interfaces that suit human pointing precision in freehand space can improve the smoothness and naturalness of gestural interaction. However, only a few studies focus on the proper pointing precision for a comfortable target acquisition to provide suggestions for user-centered interface design for such kind of techniques. This paper presents work on studying and estimating human pointing precision in three separate dimensions in different motion ranges when performing precision movement in freehand space. Human pointing precision was estimated to be about 1.67–3.0 cm within a motion range of about 40 cm. Participants' performances for small target acquisition were close in horizontal and vertical dimensions, but worst in the depth dimension. The effects of task amplitude on the pointing precision became prominent in the depth dimension, and minimal in the vertical dimension. The work also indicated that precise freehand movement induced side effects, making the hand stiff and gestures like "putting forward" physically exhausting, especially when reaching for small targets. This work provides a deeper insight into freehand interaction and contributes to the user-centered design of freehand-like interfaces.
The smallest target size for a comfortable pointing in freehand space: human pointing precision of freehand interaction
['Zhuorui Liang', 'Xiangmin Xu', 'Shaolin Zhou']
Universal Access in The Information Society
2017
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 156-words of the given paper text with the title 'The smallest target size for a comfortable pointing in freehand space: human pointing precision of freehand interaction': Designing interfaces that suit human pointing precision in freehand space can improve the smoothness and naturalness of gestural interaction. However, only a few studies focus on the proper pointing precision for a comfortable target acquisition to provide suggestions for user-centered interface design for such kind of techniques. This paper presents work on studying and estimating human pointing precision in three separate dimensions in different motion ranges when performing precision movement in freehand space. Human pointing precision was estimated to be about 1.67–3.0 cm within a motion range of about 40 cm. Participants' performances for small target acquisition were close in horizontal and vertical dimensions, but worst in the depth dimension. The effects of task amplitude on the pointing precision became prominent in the depth dimension, and minimal in the vertical dimension. The work also indicated that precise freehand movement induced side effects, making the hand stiff and gestures like "putting forward" physically exhausting, especially when reaching for small targets.
continue
1
This paper presents two problem formulations for scheduling the maintenance of a fighter aircraft fleet under conflict operating conditions. In the first formulation, the average availability of aircraft is maximized by choosing when to start the maintenance of each aircraft. In the second formulation, the availability of aircraft is preserved above a specific target level by choosing to either perform or not perform each maintenance activity. Both formulations are cast as semi-Markov decision problems (SMDPs) that are solved using reinforcement learning (RL) techniques. As the solution, maintenance policies dependent on the states of the aircraft are obtained. Numerical experiments imply that RL is a viable approach for considering conflict time maintenance policies. The obtained solutions provide knowledge of efficient maintenance decisions and the level of readiness that can be maintained by the fleet.
Scheduling fighter aircraft maintenance with reinforcement learning
['Ville Mattila', 'Kai Virtanen']
winter simulation conference
2011
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on winter simulation conference with title 'Scheduling fighter aircraft maintenance with reinforcement learning', write a 50-words section 'Conclusion': This paper presents two problem formulations for scheduling the maintenance of a fighter aircraft fleet under conflict operating conditions. In the first formulation, the average availability of aircraft is maximized by choosing when to start the maintenance of each aircraft. In the second formulation, the availability of aircraft is preserved above a specific target level by choosing to either perform or not perform each maintenance activity. Both formulations are cast as semi-Markov decision problems (SMDPs) that are solved using reinforcement learning (RL) techniques. As the solution, maintenance policies dependent on the states of the aircraft are obtained. Numerical experiments imply that RL is a viable approach for considering conflict time maintenance policies. The obtained solutions provide knowledge of efficient maintenance decisions and the level of readiness that can be maintained by the fleet.
gen_section
0
We describe and evaluate an information-theoretic algorithm for data-driven induction of classification models based on a minimal subset of available features. The relationship between input (predictive) features and the target (classification) attribute is modeled by a tree-like structure termed an information network (IN). Unlike other decision-tree models, the information network uses the same input attribute across the nodes of a given layer (level). The input attributes are selected incrementally by the algorithm to maximize a global decrease in the conditional entropy of the target attribute. We are using the prepruning approach: when no attribute causes a statistically significant decrease in the entropy, the network construction is stopped. The algorithm is shown empirically to produce much more compact models than other methods of decision-tree learning while preserving nearly the same level of classification accuracy.
A compact and accurate model for classification
['Oded Maimon']
IEEE Transactions on Knowledge and Data Engineering
2004
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more technical in tone: We describe and evaluate an information-theoretic algorithm for data-driven induction of classification models based on a minimal subset of available features. The relationship between input (predictive) features and the target (classification) attribute is modeled by a tree-like structure termed an information network (IN). Unlike other decision-tree models, the information network uses the same input attribute across the nodes of a given layer (level). The input attributes are selected incrementally by the algorithm to maximize a global decrease in the conditional entropy of the target attribute. We are using the prepruning approach: when no attribute causes a statistically significant decrease in the entropy, the network construction is stopped. The algorithm is shown empirically to produce much more compact models than other methods of decision-tree learning while preserving nearly the same level of classification accuracy.
enhance
0
The velocity control of kinematically redundant manipulators has been addressed through a variety of approaches. Though they differ widely in their purpose and method of implementation, most are optimizations that can be characterized by Liegeois's (1977) method. This characterization is used in this article to develop a single framework for implementing different methods by simply selecting a scalar, a function of configuration, and a joint-rate weighting matrix. These quantities are used to form a fully constrained linear system by row augmenting the manipulator Jacobian with a weighted basis of its nullspace and augmenting the desired hand motion with a vector function of the nullspace basis. The framework is shown to be flexible, computationally efficient, and accurate.
On the implementation of velocity control for kinematically redundant manipulators
['James D. English', 'Anthony A. Maciejewski']
systems man and cybernetics
2000
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 105-words of the given paper text with the title 'On the implementation of velocity control for kinematically redundant manipulators': The velocity control of kinematically redundant manipulators has been addressed through a variety of approaches. Though they differ widely in their purpose and method of implementation, most are optimizations that can be characterized by Liegeois's (1977) method. This characterization is used in this article to develop a single framework for implementing different methods by simply selecting a scalar, a function of configuration, and a joint-rate weighting matrix. These quantities are used to form a fully constrained linear system by row augmenting the manipulator Jacobian with a weighted basis of its nullspace and augmenting the desired hand motion with a vector function of the nullspace basis.
continue
1
The authors study the problem of on-line non-preemptive scheduling of multiple segment real-time tasks. Task segments alternate between using CPU and I/O resources. A task model is proposed which encompasses a wider class of tasks than models proposed earlier. Instead of developing new scheduling algorithms, the authors develop a class of slack distribution policies which use varying degrees of information about task structure and device utilization to budget task slack. Slack distribution policies are shown to improve the performance of all scheduling algorithms studied. Two key observations are: slack distribution is helpful beyond a certain threshold of task arrival rate, and algorithms which normally perform poorly are helped to a greater degree by slack distribution. A study of various scheduling algorithms for a constant value function reveals that all of them favor tasks with a large number of small segments to tasks with a small number of large segments. It is shown that the Moore ordering algorithm is not optimal for multiple segment tasks.
Real-time scheduling of multiple segment tasks
['Kamhing Ho', 'James H. Rice', 'Jaideep Srivastava']
computer software and applications conference
1990
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 115-words of the given paper text with the title 'Real-time scheduling of multiple segment tasks': The authors study the problem of on-line non-preemptive scheduling of multiple segment real-time tasks. Task segments alternate between using CPU and I/O resources. A task model is proposed which encompasses a wider class of tasks than models proposed earlier. Instead of developing new scheduling algorithms, the authors develop a class of slack distribution policies which use varying degrees of information about task structure and device utilization to budget task slack. Slack distribution policies are shown to improve the performance of all scheduling algorithms studied. Two key observations are: slack distribution is helpful beyond a certain threshold of task arrival rate, and algorithms which normally perform poorly are helped to a greater degree by slack distribution.
continue
1
Substantial technology advances have been made in areas of autonomous and connected vehicles, which opens a wide landscape for future transportation systems. We propose a new type of transportation system, the Public Vehicle (PV) system, to provide effective, comfortable, and convenient service. The PV system aims to improve the efficiency of current transportation systems, e.g., the taxi system. Meanwhile, the design of such a system targets a significant reduction in energy consumption and traffic congestion, and provides solutions with affordable cost. The key issue in implementing an effective PV system is to design efficient scheduling algorithms. We formulate it as the PV Path (PVP) problem, and prove it is NP-Complete. Then we introduce a real time approach, which is based on solutions of the Traveling Salesman Problem (TSP) and it can serve people efficiently with lower costs. Our results show that to achieve the same performance (e.g., the total time: waiting and travel time), the number of vehicles can be reduced by 47-69, compared with taxis. The number of vehicles on roads is reduced, thus traffic congestion is relieved.
A Public Vehicle System with Multiple Origin-Destination Pairs on Traffic Networks
['Ming Zhu', 'Linghe Kong', 'Xiao-Yang Liu', 'Ruimin Shen', 'Wei Shu', 'Min-You Wu']
global communications conference
2014
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on global communications conference with title 'A Public Vehicle System with Multiple Origin-Destination Pairs on Traffic Networks', write a 178-words section 'Literature Review': Substantial technology advances have been made in areas of autonomous and connected vehicles, which opens a wide landscape for future transportation systems. We propose a new type of transportation system, Public Vehicle (PV) system, to provide effective, comfortable, and convenient service. The PV system is to improve the efficiency of current transportation systems, eg, taxi system. Meanwhile, the design of such a system targets on significant reduction in energy consumption, traffic congestion, and provides solutions with affordable cost. The key issue of implementing an effective PV system is to design efficient scheduling algorithms. We formulate it as the PV Path (PVP) problem, and prove it is NP-Complete. Then we introduce a real time approach, which is based on solutions of the Traveling Salesman Problem (TSP) and it can serve people efficiently with lower costs. Our results show that to achieve the same performance (e. g. , the total time: waiting and travel time), the number of vehicles can be reduced by 47-69, compared with taxis. The number of vehicles on roads is reduced, thus traffic congestion is relieved.
gen_section
0
In this paper, we propose two energy models based on a statistical analysis of a server's operational behavior in order to minimize the energy consumption in data centers at cloud computing providers. Based on these models, the Energy Savings Engine (ESE) in the cloud provider decides either to migrate the virtual machines (VMs) from a lightly-loaded server and then turn it off or put it in a sleep mode, or to keep the current server running and ready to receive any new load requests. The main difference between the two models is the energy and time required to put the server in operational mode from a sleep mode or from an off state. Therefore, the decision is a tradeoff between the energy savings and the required performance according to the SLA between the client and the cloud provider. We show results based on actual power measurements taken at the server's AC input, to determine the energy consumed in the idle state, the sleep state, the off state and in the case of switching between any two of these states. In addition, we measured the power consumed by the source and the destination servers during the migration of a VM.
CloudESE: Energy efficiency model for cloud computing environments
['Imad Sarji', 'Cesar Ghali', 'Ali Chehab', 'Ayman I. Kayssi']
nan
2011
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on nan with title 'CloudESE: Energy efficiency model for cloud computing environments', write a 199-words section 'Introduction': In this paper, we propose two energy models based on a statistical analysis of a server's operational behavior in order to minimize the energy consumption in data centers at cloud computing providers. Based on these models, the Energy Savings Engine (ESE) in the cloud provider decides either to migrate the virtual machines (VMs) from a lightly-loaded server and then turn it off or put it in a sleep mode, or to keep the current server running and ready to receive any new load requests. The main difference between the two models is the energy and time required to put the server in operational mode from a sleep mode or from an off state. Therefore, the decision is a tradeoff between the energy savings and the required performance according to the SLA between the client and the cloud provider. We show results based on actual power measurements taken at the server's AC input, to determine the energy consumed in the idle state, the sleep state, the off state and in the case of switching between any two of these states. In addition, we measured the power consumed by the source and the destination servers during the migration of a VM.
gen_section
0
Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.
Information-geometric Markov Chain Monte Carlo methods using Diffusions
['Samuel Livingstone', 'Mark A. Girolami']
Entropy
2014
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 108-words of the given paper text with the title 'Information-geometric Markov Chain Monte Carlo methods using Diffusions': Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.
continue
1
Collocations are of great importance for second language learners, and a learner's knowledge of them plays a key role in producing language fluently (Nation, 2001: 323). In this article we describe and evaluate an innovative system that uses a Web-derived corpus and digital library software to produce a vast concordance and present it in a way that helps students use collocations more effectively in their writing. Instead of live search we use an off-line corpus of short sequences of words, along with their frequencies. They are preprocessed, filtered, and organized into a searchable digital library collection containing 380 million five-word sequences drawn from a vocabulary of 145,000 words. Although the phrases are short, learners can browse more extended contexts because the system automatically locates sample sentences that contain them, either on the Web or in the British National Corpus. Two evaluations were conducted: an expert user tested the system to see if it could generate suitable alternatives for given text fragments, and students used it for a particular exercise. Both suggest that, even within the constraints of a limited study, the system could and did help students improve their writing.
Utilizing lexical data from a web-derived corpus to expand productive collocation knowledge
['Shaoqun Wu', 'Ian H. Witten', 'Margaret Franken']
ReCALL
2010
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 192-words of the given paper text with the title 'Utilizing lexical data from a web-derived corpus to expand productive collocation knowledge': Collocations are of great importance for second language learners, and a learner's knowledge of them plays a key role in producing language fluently (Nation, 2001: 323). In this article we describe and evaluate an innovative system that uses a Web-derived corpus and digital library software to produce a vast concordance and present it in a way that helps students use collocations more effectively in their writing. Instead of live search we use an off-line corpus of short sequences of words, along with their frequencies. They are preprocessed, filtered, and organized into a searchable digital library collection containing 380 million five-word sequences drawn from a vocabulary of 145,000 words. Although the phrases are short, learners can browse more extended contexts because the system automatically locates sample sentences that contain them, either on the Web or in the British National Corpus. Two evaluations were conducted: an expert user tested the system to see if it could generate suitable alternatives for given text fragments, and students used it for a particular exercise. Both suggest that, even within the constraints of a limited study, the system could and did help students improve their writing.
continue
1
Moving objects is an important task in 3D user interfaces. We describe two new techniques for 3D positioning, designed for a mouse, but usable with other input devices. The techniques enable rapid, yet easy-to-use positioning of objects in 3D scenes. With sliding, the object follows the cursor and moves on the surfaces of the scene. Our techniques enable precise positioning of constrained objects. Sliding assumes that by default objects stay in contact with the scene's front surfaces, are always at least partially visible, and do not interpenetrate other objects. With our new Shift-Sliding method the user can override these default assumptions and lift objects into the air or make them collide with other objects. Shift-Sliding uses the local coordinate system of the surface that the object was last in contact with, which is a new form of context-dependent manipulation. We also present Depth-Pop, which maps mouse wheel actions to all object positions along the mouse ray, where the object meets the default assumptions for sliding. For efficiency, both methods use frame buffer techniques. Two user studies show that the new techniques significantly speed up common 3D positioning tasks.
SHIFT-Sliding and DEPTH-POP for 3D Positioning
['Junwei Sun', 'Wolfgang Stuerzlinger', 'Dmitri Shuralyov']
nan
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 114-words sample abstract on the following topic based on following title 'SHIFT-Sliding and DEPTH-POP for 3D Positioning'; nan
gen_full_metadata
abstract
It is difficult to learn good classifiers when training data is missing attribute values. Conventional techniques for dealing with such omissions, such as mean imputation, generally do not significantly improve the performance of the resulting classifier. We proposed imputation-helped classifiers, which use accurate imputation techniques, such as Bayesian multiple imputation (BMI), predictive mean matching (PMM), and Expectation Maximization (EM), as preprocessors for conventional machine learning algorithms. Our empirical results show that EM-helped and BMI-helped classifiers work effectively when the data is "missing completely at random", generally improving predictive performance over most of the original machine learned classifiers we investigated.
Using Imputation Techniques to Help Learn Accurate Classifiers
['Xiaoyuan Su', 'Taghi M. Khoshgoftaar', 'Russell Greiner']
international conference on tools with artificial intelligence
2008
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 99-words of the given paper text with the title 'Using Imputation Techniques to Help Learn Accurate Classifiers': It is difficult to learn good classifiers when training data is missing attribute values. Conventional techniques for dealing with such omissions, such as mean imputation, generally do not significantly improve the performance of the resulting classifier. We proposed imputation-helped classifiers, which use accurate imputation techniques, such as Bayesian multiple imputation (BMI), predictive mean matching (PMM), and Expectation Maximization (EM), as preprocessors for conventional machine learning algorithms. Our empirical results show that EM-helped and BMI-helped classifiers work effectively when the data is "missing completely at random", generally improving predictive performance over most of the original machine learned classifiers we investigated.
continue
1
The Functional Mock-up Interface (FMI) is the result of a research program initiated by Daimler AG, other industrial companies, and several institutes, with the aim of enabling the exchange of dynamic simulation models between different simulation tools as well as their co-simulation. The upcoming release of the second version of this standard in 2014 is the motivation for the current investigation into possible uses in the process industry. The standard is therefore presented and discussed in general, but with focus on the needs of the process industry. This article also illustrates how a Functional Mock-up Unit (FMU) can be coupled to widely-used simulation tools and presents a concept for its coupling and implementation. The concept uses the open and well-defined Shared-Memory-Gateway provided by the SIMIT Simulation Framework.
Re-use of existing simulation models for DCS engineering via the Functional Mock-up Interface
['Lukas Exel', 'Georg Frey', 'Gerrit Wolf', 'Mathias Oppelt']
emerging technologies and factory automation
2014
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 127-words sample abstract on the following topic based on following title 'Re-use of existing simulation models for DCS engineering via the Functional Mock-up Interface'; emerging technologies and factory automation
gen_full_metadata
abstract
A simple polygon P is said to be weakly externally visible from a line segment L if L lies outside P and for every point p on the boundary of P there is a point q on L such that no point in the interior of the segment pq lies inside P. In this paper, a linear time algorithm is proposed for computing a shortest line segment from which P is weakly externally visible. This is done by a suitable generalization of a linear time algorithm which solves the same problem for a convex polygon.
COMPUTING A SHORTEST WEAKLY EXTERNALLY VISIBLE LINE SEGMENT FOR A SIMPLE POLYGON
['Binay K. Bhattacharya', 'Asish Mukhopadhyay', 'Godfried T. Toussaint']
International Journal of Computational Geometry and Applications
1999
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 92-words sample abstract on the following topic based on following title 'COMPUTING A SHORTEST WEAKLY EXTERNALLY VISIBLE LINE SEGMENT FOR A SIMPLE POLYGON'; International Journal of Computational Geometry and Applications
gen_full_metadata
abstract
The effects of project failure on IT professionals have not received much attention in IT research. A failed project evokes negative emotions and therefore could trigger turnover, which has negative influences from the perspective of IT human resource management. However, the failure of IT projects could also have positive influences as professionals might learn from the failed project. This paper focuses on analyzing this dual-sided effect of project failure on IT professionals. We develop hypotheses that will be tested with a large data set from an IT service provider in future research. We expect to contribute to theory by analyzing whether project failure triggers turnover and by analyzing whether IT professionals learn from failed projects and perform better in the future.
The Dual-sided Effect of Project Failure on IT Professionals
['Christoph Pflügler', 'Manuel Wiesche', 'Helmut Krcmar']
nan
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 121-words sample abstract on the following topic based on following title 'The Dual-sided Effect of Project Failure on IT Professionals'; nan
gen_full_metadata
abstract
This paper presents a practical approach to accurately measure the in-circuit power loss of an integrated power electronics module. Based on several reasonable assumptions, the heat flux out of the module is related to the temperature drop across the thermal resistance material in the heat dissipation path. By performing a calibration experiment, the total dissipated heat can be identified. The experimental results prove the high accuracy of this measurement method. This measurement method can be extended to other situations when the assumptions that are applicable to this experiment are satisfied.
In-Circuit Loss Measurement of a High-Frequency Integrated Power Electronics Module
['Wenduo Liu', 'J.D. van Wyk', 'Bing Lu']
IEEE Transactions on Instrumentation and Measurement
2008
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on IEEE Transactions on Instrumentation and Measurement with title 'In-Circuit Loss Measurement of a High-Frequency Integrated Power Electronics Module', write a 90-words section 'Conclusion': This paper presents a practical approach to accurately measure the in-circuit power loss of an integrated power electronics module. Based on several reasonable assumptions, the heat flux out of the module is related to the temperature drop across the thermal resistance material in the heat dissipation path. By performing a calibration experiment, the total dissipated heat can be identified. The experimental results prove the high accuracy of this measurement method. This measurement method can be extended to other situations when the assumptions that are applicable to this experiment are satisfied.
gen_section
0
Tracking of left ventricles in 3D echocardiography is a challenging topic because of the poor quality of ultrasound images and the speed consideration. In this paper, a fast and accurate learning based 3D tracking algorithm is presented. A novel one-step forward prediction is proposed to generate the motion prior using motion manifold learning. Collaborative trackers are introduced to achieve both temporal consistence and tracking robustness. The algorithm is completely automatic and computationally efficient. The mean point-to-mesh error of our algorithm is 1.28 mm. It requires less than 1.5 seconds to process a 3D volume (160 x 148 x 208 voxels).
A fast and accurate tracking algorithm of left ventricles in 3D echocardiography
['Lin Yang', 'B. Georgescu', 'Yefeng Zheng', 'David J. Foran', 'Dorin Comaniciu']
international symposium on biomedical imaging
2008
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 20-words of the given paper text with the title 'A fast and accurate tracking algorithm of left ventricles in 3D echocardiography': 28 mm. It requires less than 1. 5 seconds to process a 3D volume (160 x 148 x 208 voxels).
continue
2
We propose a vehicle make and model recognition method for smart phones, and have implemented it on them. Our method identifies the make and model from a variety of viewpoints, while the conventional methods for VMMR work only for frontal or rear view images. This method enables the users to take pictures from a certain range of angles. Our method uses SIFT, which has scale and rotation invariance, to solve the alignment issue. It creates pseudo frontal view images by a homography matrix and extracts the keypoints. The homography matrix is calculated with the position of the license plate. Our experimental result shows our method enables recognition at up to a 60-degree angle.
Vehicle make and model recognition by keypoint matching of pseudo frontal view
['Yukiko Shinozuka', 'Ruiko Miyano', 'Takuya Minagawa', 'Hideo Saito']
international conference on multimedia and expo
2013
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on international conference on multimedia and expo with title 'Vehicle make and model recognition by keypoint matching of pseudo frontal view', write a 38-words section 'Methodology': We propose a vehicle make and model recognition method for the smart phone, and implemented it onto them. Our method identifies the make and model from the variety of viewpoints while the conventional methods for VMMR work only for the frontal or rear view images. This method enables the users to take the pictures from a certain range of angles. Our method uses SIFT, that has scale and rotation invariance to solve the alignment issue. It creates the pseudo frontal view images by homography matrix and extracts the keypoints. Homography matrix is calculated with the position of the license plate. Our experimental result shows our method enables to recognize up to 60-degree angle.
gen_section
0
We consider the issue of how to read out the information from nonstationary spike train ensembles. Based on the theory of censored data in statistics, we propose a "censored" maximum-likelihood estimator (CMLE) for decoding the input in an unbiased way when the spike activity is observed over time windows of finite length. Compared with a rate-based, moment estimator, the CMLE is proved consistently more efficient, particularly with nonstationary inputs. Using our approach, we show that a dynamical input to a group of neurons can be inferred accurately and with high temporal resolution (50 ms) using as few as about one spike per neuron within each decoding window. By applying our theoretical results to a population coding setting, we then demonstrate that a spiking neural network can encode spatial information in such a way to allow fast and precise tracking of a moving target.
Decoding spike train ensembles: tracking a moving stimulus
['Enrico Rossoni', 'Jianfeng Feng']
Biological Cybernetics
2007
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 143-words sample abstract on the following topic based on following title 'Decoding spike train ensembles: tracking a moving stimulus'; Biological Cybernetics
gen_full_metadata
abstract
Local dominance and equivalence relationships for a single fault type have been exploited to reduce test set size and test generation time. However, these relationships have not been explored for multiple fault types. Using fault tuples, we describe how local dominance and equivalence relationships across various fault types can be derived. We also describe how the derived relationships can be used to order the faults efficiently for test generation in order to reduce test set size. Initial results using our ordered fault lists for ISCAS85 and ITC99 benchmark circuits reveal that test set size can be reduced by as much as 19.
Exploiting dominance and equivalence using fault tuples
['Kumar N. Dwarakanath', 'Ronald D. Blanton']
vlsi test symposium
2002
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 102-words sample abstract on the following topic based on following title 'Exploiting dominance and equivalence using fault tuples'; vlsi test symposium
gen_full_metadata
abstract
We study the multiproduct price optimization problem under the multilevel nested logit model, which includes the multinomial logit and the two-level nested logit models as special cases. When the price sensitivities are identical within each primary nest, that is, within each nest at level 1, we prove that the profit function is concave with respect to the market share variables. We proceed to show that the markup, defined as price minus cost, is constant across products within each primary nest, and that the adjusted markup, defined as price minus cost minus the reciprocal of the product between the scale parameter of the root nest and the price-sensitivity parameter of the primary nest, is constant across primary nests at optimality. This allows us to reduce this multidimensional pricing problem to an equivalent single-variable maximization problem involving a unimodal function. Based on these findings, we investigate the oligopolistic game and characterize the Nash equilibrium. We also develop a dimension reduction technique which can simplify price optimization problems with flexible price-sensitivity structures.
Multiproduct Price Optimization Under the Multilevel Nested Logit Model
['Hai Jiang', 'Rui Chen', 'He Sun']
Annals of Operations Research
2017
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 169-words of the given paper text with the title 'Multiproduct Price Optimization Under the Multilevel Nested Logit Model': We study the multiproduct price optimization problem under the multilevel nested logit model, which includes the multinomial logit and the two-level nested logit models as special cases. When the price sensitivities are identical within each primary nest, that is, within each nest at level 1, we prove that the profit function is concave with respect to the market share variables. We proceed to show that the markup, defined as price minus cost, is constant across products within each primary nest, and that the adjusted markup, defined as price minus cost minus the reciprocal of the product between the scale parameter of the root nest and the price-sensitivity parameter of the primary nest, is constant across primary nests at optimality. This allows us to reduce this multidimensional pricing problem to an equivalent single-variable maximization problem involving a unimodal function. Based on these findings, we investigate the oligopolistic game and characterize the Nash equilibrium. We also develop a dimension reduction technique which can simplify price optimization problems with flexible price-sensitivity structures.
continue
1
Location-based channel access protocols have been proposed as a means to broadcast safety related messages through inter-vehicle communications. The protocols divide the road into fixed-size cells and assign a channel to each cell. To broadcast, a vehicle would use the channel assigned to the cell it is currently traveling within. To improve bandwidth utilization, a vehicle may acquire channels dynamically from adjacent cells that are not occupied by other vehicles. In a TDMA setting where each channel is a time slot, message delay occurs as the vehicle must wait for the arrival of the next time slot it owns. This message delay time depends heavily on the adopted cell-to-channel mapping function. We examine an existing naive channel allocation scheme and propose three new ones. An analysis shows that the proposed schemes may reduce the delay by 50 to 90.
Minimizing Broadcast Delay in Location-Based Channel Access Protocols
['Shan-Hung Wu', 'Chung-Min Chen']
international conference on computer communications and networks
2011
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more professional in tone: Location-based channel access protocols have been proposed as a means to broadcast safety related messages through inter-vehicle communications. The protocols divide the road into fixed-size cells and assign a channel to each cell. To broadcast, a vehicle would use the channel assigned to the cell it is currently traveling within. To improve bandwidth utilization, a vehicle may acquire channels dynamically from adjacent cells that are not occupied by other vehicles. In a TDMA setting where each channel is a time slot, message delay occurs as the vehicle must wait for the arrival of the next time slot it owns. This message delay time depends heavily on the adopted cell-to-channel mapping function. We examine an existed naive channel allocation scheme and proposed three new ones.
enhance
0
Transaction processing is of growing importance for mobile computing. Booking tickets, flight reservation, banking, ePayment, and booking holiday arrangements are just a few examples for mobile transactions. Due to temporarily disconnected situations the synchronisation and consistent transaction processing are key issues. Serializability is a too strong criteria for correctness when the semantics of a transaction is known. We introduce a transaction model that allows higher concurrency for a certain class of transactions defined by its semantic. The transaction results are "escrow serializable" and the synchronisation mechanism is non-blocking. Experimental implementation showed higher concurrency, transaction throughput, and less resources used than common locking or optimistic protocols.
Transaction Processing in Mobile Computing Using Semantic Properties
['Fritz Laux', 'Tim Lessner']
databases knowledge and data applications
2009
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 104-words of the given paper text with the title 'Transaction Processing in Mobile Computing Using Semantic Properties': Transaction processing is of growing importance for mobile computing. Booking tickets, flight reservation, banking, ePayment, and booking holiday arrangements are just a few examples for mobile transactions. Due to temporarily disconnected situations the synchronisation and consistent transaction processing are key issues. Serializability is a too strong criteria for correctness when the semantics of a transaction is known. We introduce a transaction model that allows higher concurrency for a certain class of transactions defined by its semantic. The transaction results are "escrow serializable" and the synchronisation mechanism is non-blocking. Experimental implementation showed higher concurrency, transaction throughput, and less resources used than common locking or optimistic protocols.
continue
1
In recent years, there has been rapid growth in the volume of research output on the topic of e-government. To understand this research better, we used content analysis of eighty-four papers in e-government-specific research outlets (two journals and one conference series). Our analytical focus took in five main aspects: perspectives on the impacts of e-government, research philosophy, use of theory, methodology and method, and practical recommendations. Normative evaluation identified some positive features, such as recognition of contextual factors beyond technology, and a diversity of referent domains and ideas. Alongside this, though, research draws mainly from a weak or confused positivism and is dominated by over-optimistic, a-theoretical work that has done little to accumulate either knowledge or practical guidance for e-government. Worse, there is a lack of clarity and lack of rigor about research methods alongside poor treatment of generalization. We suggest ways of strengthening e-government research but also draw out some deeper issues, such as the role of research philosophy and theory, and the institutional factors, particularly pressures of competition and time, that may constrain development of e-government as a research field.
Analyzing e-government research: Perspectives, philosophies, theories, methods, and practice
['Richard Heeks', 'Savita Bailur']
Government Information Quarterly
2007
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more technical in tone: Alongside this, though, research draws mainly from a weak or confused positivism and is dominated by over-optimistic, a-theoretical work that has done little to accumulate either knowledge or practical guidance for e-government. Worse, there is a lack of clarity and lack of rigor about research methods alongside poor treatment of generalization. We suggest ways of strengthening e-government research but also draw out some deeper issues, such as the role of research philosophy and theory, and the institutional factors, particularly pressures of competition and time, that may constrain development of e-government as a research field.
enhance
1
Consider a mobile robot exploring an initially unknown school building and assume that it has already discovered some classrooms, offices, and bathrooms. What can the robot infer about the presence and the locations of other classrooms and offices in the school building? This paper makes a step toward providing an answer to the above question by proposing a system based on a generative model that is able to represent the topological structures and the semantic labeling schemas of buildings and to predict the structure and the schema for unexplored portions of these environments. We represent the buildings as undirected graphs, whose nodes are rooms and edges are physical connections between them. Given an initial knowledge base of graphs, our approach, relying on a spectral analysis of these graphs, segments each graph for finding significant subgraphs and clusters them according to their similarity. A graph representing a new building or an unvisited part of a building is eventually generated by sampling subgraphs from clusters and connecting them.
A generative spectral model for semantic mapping of buildings
['Matteo Luperto', "Leone D'Emilio", 'Francesco Amigoni']
intelligent robots and systems
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 111-words of the given paper text with the title 'A generative spectral model for semantic mapping of buildings': Consider a mobile robot exploring an initially unknown school building and assume that it has already discovered some classrooms, offices, and bathrooms. What can the robot infer about the presence and the locations of other classrooms and offices in the school building? This paper makes a step toward providing an answer to the above question by proposing a system based on a generative model that is able to represent the topological structures and the semantic labeling schemas of buildings and to predict the structure and the schema for unexplored portions of these environments. We represent the buildings as undirected graphs, whose nodes are rooms and edges are physical connections between them.
continue
1
Recently many topic models such as Latent Dirichlet Allocation (LDA) have made important progress towards generating high-level knowledge from a large corpus. They assume that a text consists of a mixture of topics, which is usually the case for regular articles but may not hold for a short text that usually contains only one topic. In practice, a corpus may include both short texts and long texts, in this case neither methods developed for only long texts nor methods for only short texts can generate satisfying results. In this paper, we present an innovative method to discover latent topics from a heterogeneous corpus including both long and short texts. A new topic model based on collapsed Gibbs sampling algorithm is developed for modeling such heterogeneous texts. The experiments on real-world datasets validate the effectiveness of the proposed model in comparison with other state-of-the-art models.
Topic Discovery from Heterogeneous Texts
['Jipeng Qiang', 'Ping Chen', 'Wei Ding', 'Tong Wang', 'Fei Xie', 'Xindong Wu']
international conference on tools with artificial intelligence
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 57-words of the given paper text with the title 'Topic Discovery from Heterogeneous Texts': In this paper, we present an innovative method to discover latent topics from a heterogeneous corpus including both long and short texts. A new topic model based on collapsed Gibbs sampling algorithm is developed for modeling such heterogeneous texts. The experiments on real-world datasets validate the effectiveness of the proposed model in comparison with other state-of-the-art models.
continue
2
We address the problem of protecting some sensitive knowledge in transactional databases. The challenge is on protecting actionable knowledge for strategic decisions, but at the same time not losing the great benefit of association rule mining. To accomplish that, we introduce a new, efficient one-scan algorithm that meets privacy protection and accuracy in association rule mining, without putting at risk the effectiveness of the data mining per se.
Protecting sensitive knowledge by data sanitization
['Stanley R. M. Oliveira', 'Osmar R. Zaïane']
international conference on data mining
2003
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 68-words of the given paper text with the title 'Protecting sensitive knowledge by data sanitization': We address the problem of protecting some sensitive knowledge in transactional databases. The challenge is on protecting actionable knowledge for strategic decisions, but at the same time not losing the great benefit of association rule mining. To accomplish that, we introduce a new, efficient one-scan algorithm that meets privacy protection and accuracy in association rule mining, without putting at risk the effectiveness of the data mining per se.
continue
1
In this work we introduce a low-level system that could be employed by a social robot like a robotic wheelchair or a humanoid, for approaching a group of interacting humans, in order to become a part of the interaction. Taking into account an interaction space that is created when at least two humans interact, a meeting point can be calculated where the robot should reach in order to equitably share space among the interacting group. We propose a sensor-based control task which uses the position and orientation of the humans with respect to the sensor as inputs, to reach the said meeting point while respecting spatial social constraints. Trials in simulation demonstrate the convergence of the control task and its capability as a low-level system for human-aware navigation.
On equitably approaching and joining a group of interacting humans
['Vishnu Karakkat Narayanan', 'Anne Spalanzani', 'François Pasteau', 'Marie Babel']
intelligent robots and systems
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more academic in tone: In this work we introduce a low-level system that could be employed by a social robot like a robotic wheelchair or a humanoid, for approaching a group of interacting humans, in order to become a part of the interaction. Taking into account an interaction space that is created when at least two humans interact, a meeting point can be calculated where the robot should reach in order to equitably share space among the interacting group. We propose a sensor-based control task which uses the position and orientation of the humans with respect to the sensor as inputs, to reach the said meeting point while respecting spatial social constraints.
enhance
0
This paper presents a mixed recovery scheme for robust distributed speech recognition (DSR) implemented over a packet channel which suffers packet losses. The scheme combines media-specific forward error correction (FEC) and error concealment (EC). Media-specific FEC is applied at the client side, where FEC bits representing strongly quantized versions of the speech vectors are introduced. At the server side, the information provided by those FEC bits is used by the EC algorithm to improve the recognition performance. We investigate the adaptation of two different EC techniques, namely minimum mean square error (MMSE) estimation, which operates at the decoding stage, and weighted Viterbi recognition (WVR), where EC is applied at the recognition stage, in order to be used along with FEC. The experimental results show that a significant increase in recognition accuracy can be obtained with very little bandwidth increase, which may be null in practice, and a limited increase in latency, which in any case is not so critical for an application such as DSR
Combining Media-Specific FEC and Error Concealment for Robust Distributed Speech Recognition Over Loss-Prone Packet Channels
['Angel M. Gomez', 'Antonio M. Peinado', 'Victoria E. Sánchez', 'Antonio J. Rubio']
IEEE Transactions on Multimedia
2006
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 165-words of the given paper text with the title 'Combining Media-Specific FEC and Error Concealment for Robust Distributed Speech Recognition Over Loss-Prone Packet Channels': This paper presents a mixed recovery scheme for robust distributed speech recognition (DSR) implemented over a packet channel which suffers packet losses. The scheme combines media-specific forward error correction (FEC) and error concealment (EC). Media-specific FEC is applied at the client side, where FEC bits representing strongly quantized versions of the speech vectors are introduced. At the server side, the information provided by those FEC bits is used by the EC algorithm to improve the recognition performance. We investigate the adaptation of two different EC techniques, namely minimum mean square error (MMSE) estimation, which operates at the decoding stage, and weighted Viterbi recognition (WVR), where EC is applied at the recognition stage, in order to be used along with FEC. The experimental results show that a significant increase in recognition accuracy can be obtained with very little bandwidth increase, which may be null in practice, and a limited increase in latency, which in any case is not so critical for an application such as DSR
continue
1
To achieve software fault tolerance at runtime, based on runtime verification techniques, this paper proposes a runtime model of the running program, which is used to define the actions and constraints for runtime software fault management. This model contains the descriptions of event, path, scope and adjustment. A runtime fault management system prototype, which mainly includes the rule description, event acquisition, fault diagnosis and handling, is implemented to verify the model. Two test cases are used to estimate the effect of the prototype, and the results show that this method can handle faults successfully at runtime.
Modelling software fault management with runtime verification
['Xingjun Zhang', 'Yan Yang', 'Endong Wang', 'Ilsun You', 'Xiaoshe Dong']
ubiquitous computing
2015
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on ubiquitous computing with title 'Modelling software fault management with runtime verification', write a 96-words section 'Literature Review': To achieve the software fault tolerance at runtime, based on runtime verification techniques, this paper proposes a runtime model of running program, which is used to define the actions and constrains for runtime software fault management. This model contains the descriptions of event, path, scope and adjustment. A runtime fault management system prototype, which mainly includes the rule description, event acquisition, fault diagnosis and handling, is implemented to verify the model. Two test cases are used to estimate the effect of the prototype, and the results show that this method can handle faults successfully at runtime.
gen_section
0
In water resources optimization problems, the objective function usually presumes to first run a simulation model and then evaluate its outputs. However, long simulation times may pose significant barriers to the procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than required. A promising strategy to address these shortcomings is the use of surrogate modeling techniques. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SEEAS) algorithm that couples the strengths of surrogate modeling with the effectiveness and efficiency of the evolutionary annealing-simplex method. SEEAS combines three different optimization approaches (evolutionary search, simulated annealing, downhill simplex). Its performance is benchmarked against other surrogate-assisted algorithms in several test functions and two water resources applications (model calibration, reservoir management). Results reveal the significant potential of using SEEAS in challenging optimization problems on a budget. The novel Surrogate-Enhanced Evolutionary Annealing Simplex algorithm (SEEAS) is proposed. Surrogate model is used as global search routine and for identifying promising transitions within simplex-based operators. SEEAS outperforms alternative methods in 6 test functions, in 15 and 30 dimensions and for 500 and 1000 function evaluations. SEEAS handles typical peculiarities of water optimization in hydrological calibration and multi-reservoir management.
Surrogate-enhanced evolutionary annealing simplex algorithm for effective and efficient optimization of water resources problems on a budget
['Ioannis Tsoukalas', 'Panagiotis Kossieris', 'Andreas Efstratiadis', 'Christos Makropoulos']
Environmental Modelling and Software
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Write a 74-words sample abstract on the following topic based on following title 'Surrogate-enhanced evolutionary annealing simplex algorithm for effective and efficient optimization of water resources problems on a budget'; Environmental Modelling and Software
gen_full_metadata
abstract
To study the MultiSearch problem and complete the Ad Hoc Task of the 2006 TREC Terabyte Track, the Gov2 collection was divided according to web domain and for each topic, the results from each domain were merged into a single ranked list. The mean average precision scores of the results from two different merge algorithms applied to the domain-divided Gov2 collection and a randomized domain-divided collection are compared with a 2-way analysis of variance.
Partitioning the Gov2 Corpus by Internet Domain Name: A Result-set Merging Experiment
['Christopher T. Fallen', 'Gregory B. Newby']
text retrieval conference
2006
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more professional in tone: To study the MultiSearch problem and complete the Ad Hoc Task of the 2006 TREC Terabyte Track, the Gov2 collection was divided according to web domain and for each topic, the results from each domain were merged into single ranked list. The mean average precision scores of the results from two different merge algorithms applied to the domain-divided Gov2 collection and a randomized domain-divided collection are compared with a 2-way analysis of variance.
enhance
0
The proliferation of mobile and pervasive computing devices has brought energy constraints into the limelight, together with performance considerations. Energy-conscious design is important at all levels of the system architecture, and the software has a key role to play in conserving the battery energy on these devices. With the increasing popularity of spatial database applications, and their anticipated deployment on mobile devices (such as road atlases and GPS based applications), it is critical to examine the energy implications of spatial data storage and access methods for memory resident datasets. While there has been extensive prior research on spatial access methods on resource-rich environments, this is, perhaps, the first study to examine their suitability for resource-constrained environments. Using a detailed cycle-accurate energy estimation framework and four different datasets, this paper examines the pros and cons of three previously proposed spatial indexing alternatives from both the energy and performance angles. The results from this study can be beneficial to the design and implementation of embedded spatial databases, accelerating their deployment on numerous mobile devices.
Analyzing energy behavior of spatial access methods for memory-resident data
['Ning An', 'Anand Sivasubramaniam', 'Narayanan Vijaykrishnan', 'Mahmut T. Kandemir', 'Mary Jane Irwin', 'Sudhanva Gurumurthi']
very large data bases
2001
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on very large data bases with title 'Analyzing energy behavior of spatial access methods for memory-resident data', write a 83-words section 'Methodology': The proliferation of mobile and pervasive computing devices has brought energy constraints into the limelight, together with performance considerations. Energy-conscious design is important at all levels of the system architecture, and the software has a key role to play in conserving the battery energy on these devices. With the increasing popularity of spatial database applications, and their anticipated deployment on mobile devices (such as road atlases and GPS based applications), it is critical to examine the energy implications of spatial data storage and access methods for memory resident datasets. While there has been extensive prior research on spatial access methods on resource-rich environments, this is, perhaps, the first study to examine their suitability for resource-constrained environments. Using a detailed cycle-accurate energy estimation framework and four different datasets, this paper examines the pros and cons of three previously proposed spatial indexing alternatives from both the energy and performance angles. The results from this study can be beneficial to the design and implementation of embedded spatial databases, accelerating their deployment on numerous mobile devices.
gen_section
0
In this paper, we present a new record linkage approach that uses entity behavior to decide if potentially different entities are in fact the same. An entity's behavior is extracted from a transaction log that records the actions of this entity with respect to a given data source. The core of our approach is a technique that merges the behavior of two possible matched entities and computes the gain in recognizing behavior patterns as their matching score. The idea is that if we obtain a well recognized behavior after merge, then most likely, the original two behaviors belong to the same entity as the behavior becomes more complete after the merge. We present the necessary algorithms to model entities' behavior and compute a matching score for them. To improve the computational efficiency of our approach, we precede the actual matching phase with a fast candidate generation that uses a "quick and dirty" matching method. Extensive experiments on real data show that our approach can significantly enhance record linkage quality while being practical for large transaction logs.
Behavior based record linkage
['Mohamed Yakout', 'Ahmed K. Elmagarmid', 'Hazem Elmeleegy', 'Mourad Ouzzani', 'Alan Qi']
very large data bases
2010
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 127-words of the given paper text with the title 'Behavior based record linkage': In this paper, we present a new record linkage approach that uses entity behavior to decide if potentially different entities are in fact the same. An entity's behavior is extracted from a transaction log that records the actions of this entity with respect to a given data source. The core of our approach is a technique that merges the behavior of two possible matched entities and computes the gain in recognizing behavior patterns as their matching score. The idea is that if we obtain a well recognized behavior after merge, then most likely, the original two behaviors belong to the same entity as the behavior becomes more complete after the merge. We present the necessary algorithms to model entities' behavior and compute a matching score for them.
continue
1
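To give a loose, runnable feel for the "merge two behaviors and score the gain in pattern recognition" idea in this record, the sketch below uses compressed length as a crude stand-in for how well-recognized a behavior sequence is. This proxy is an assumption made purely for illustration, not the paper's scoring model, and the transaction-log fragments are hypothetical.

```python
import zlib

# Illustrative proxy only: treat a behavior as a sequence of action tokens
# and use compressibility as a stand-in for "recognizable patterns". The
# intuition mirrors the paper's: merging two logs of the *same* entity
# should yield a more regular (better recognized) combined sequence.

def compressed_len(actions):
    """Length of the zlib-compressed action sequence."""
    return len(zlib.compress(" ".join(actions).encode()))

def pattern_gain(behavior_a, behavior_b):
    """Score the merge of two behaviors given as lists of (timestamp, action).

    A positive gain means the time-interleaved sequence compresses better
    than the two halves separately, hinting at one common underlying pattern.
    """
    merged = [act for _, act in sorted(behavior_a + behavior_b)]
    separate = (compressed_len([act for _, act in sorted(behavior_a)]) +
                compressed_len([act for _, act in sorted(behavior_b)]))
    return separate - compressed_len(merged)

# Hypothetical log fragments for two candidate records:
log_x = [(1, "search:shoes"), (3, "view:item42"), (5, "buy:item42")]
log_y = [(2, "search:shoes"), (4, "view:item42"), (6, "buy:item42")]
print(pattern_gain(log_x, log_y))
```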
The constant evolution of technologies introduces a large number of problems during and after Flash memory manufacturing. In this context, manufacturers must develop methods and design solutions to improve reliability, especially for automotive applications. For this purpose, ECC and BISR are probably the most efficient concepts for enhancing memory reliability. However, such techniques are limited to correcting isolated errors within a word, whereas stress on the peripheral circuitry of a memory can lead to an entirely faulty bit line or word line. This phenomenon is referred to as the Clustering Effect. This work proposes an on-line testing structure for clustering effects along the word line. This test structure achieves an acceptable test time and is shown to be low cost in terms of area overhead (3 HV transistors, 1 XOR, 1 MUX and 1 DFF). By combining our solution with recent ECC and BISR techniques, space or automotive applications can easily be targeted.
An on-line testing scheme for repairing purposes in Flash memories
['Olivier Ginez', 'Jean Michel Portal', 'Hassen Aziza']
design and diagnostics of electronic circuits and systems
2009
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Enhance the following text to be more professional in tone: The constant evolution of technologies involves a large amount of problems during and after Flash memory manufacturing. In this context, manufacturers must develop methods and design solutions to improve reliability especially for automotive applications. For this purpose, ECC and BISR are probably the most efficient concepts to enhance memory reliability. However, such techniques are limited to correct errors occurring punctually within a word whereas in memories the stress of peripheral circuit can lead to an entire faulty bit or word line. This phenomenon is referred as Clustering Effect. This work proposes an on-line testing structure for clustering effects according to the word line plan. This test structure allows achieving a test time acceptable and is shown as low cost in term of surface overhead (3 HV transistors, 1 XOR, 1 MUX and 1 DFF).
enhance
0
In this work we present a novel application of Neural Networks as a black-box model, which allows the nonlinear behavior of a vast number of RF electronic devices to be represented with the Volterra series approximation. We propose a simple approach for generating the Volterra model of a device, even in the case of a nonlinearity that depends on more than one variable, which yields a general model that is independent of the physical circuit. In particular, we show the results obtained for the analysis of a transistor and the generation of its analytical Volterra series model, built using a standard Neural Network model and its parameters.
Novel Neural Network application to nonlinear electronic devices: building a Volterra series model
['Georgina Stegmayer', 'Omar Chiotti']
Inteligencia Artificial,revista Iberoamericana De Inteligencia Artificial
2006
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 109-words of the given paper text with the title 'Novel Neural Network application to nonlinear electronic devices: building a Volterra series model': In this work we want to present a novel application of Neural Networks as a Black-Box model, which allows representing the nonlinear behavior of a vast number of RF electronic devices with the Volterra series approximation. We propose a simple approach for the generation of the Volterra model for a device, even in the case of a nonlinearity that depends on more than one variable, which allows obtaining a general model, independent of the physical circuit. In particular, we will show the results obtained for the analysis of a transistor and the generation of its analytical Volterra series model, built using a standard Neural Network model and its parameters.
continue
1
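For a rough sense of the black-box idea in the simplest possible setting — a memoryless, single-input nonlinearity — the sketch below trains a tiny tanh network on synthetic measurements and then reads off low-order, Volterra-like polynomial coefficients around the bias point. The data, network size, and extraction-by-polynomial-fit step are simplifying assumptions, not the paper's procedure.

```python
import numpy as np

# Sketch: fit a one-hidden-layer tanh network to synthetic device data, then
# recover low-order (memoryless) Volterra-like coefficients by fitting a
# polynomial to the trained network's response around the operating point.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 1.5 * x + 0.4 * x**2 - 0.8 * x**3           # synthetic nonlinearity

H = 8                                            # hidden units
W1, b1 = rng.normal(scale=0.5, size=H), np.zeros(H)
W2, b2 = rng.normal(scale=0.5, size=H), 0.0
lr = 0.05
for _ in range(5000):                            # plain gradient descent
    z = np.tanh(np.outer(x, W1) + b1)            # shape (200, H)
    pred = z @ W2 + b2
    err = pred - y
    gW2 = z.T @ err / len(x)
    gb2 = err.mean()
    dz = np.outer(err, W2) * (1 - z**2)
    gW1 = (dz * x[:, None]).sum(axis=0) / len(x)
    gb1 = dz.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

def net(u):
    return np.tanh(np.outer(u, W1) + b1) @ W2 + b2

# Volterra-like coefficients h0..h3 around u = 0, read off the trained net.
u = np.linspace(-0.2, 0.2, 101)
coeffs = np.polyfit(u, net(u), deg=3)[::-1]
print(np.round(coeffs, 3))
```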
Using a combination of the minimum description length (MDL) criterion and the overdetermined instrumental variable method, a computationally efficient method for AR order determination of an ARMA process is proposed. It is shown that the well-known singular value decomposition (SVD) method for AR order determination is a special case of the proposed method. Numerical simulations are presented to show the effectiveness of this method.
A new method for AR order determination of an ARMA process
['Chaung-Bai Xiao', 'Xian-Da Zhang', 'Yanda Li']
IEEE Transactions on Signal Processing
1996
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Continue the next paragraph in about 63-words of the given paper text with the title 'A new method for AR order determination of an ARMA process': Using a combination of the minimum description length (MDL) criterion and overdetermined instrumental variable method, a computationally efficient method for AR order determination of an ARMA process is proposed. It is shown that the well-known singular value decomposition (SVD) method for AR order determination is a special case of the proposed method. Numerical simulations are presented to show the effectiveness of this method.
continue
1
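To make the order-selection idea concrete, here is a simplified sketch that picks an AR order with the MDL criterion over plain least-squares AR fits on synthetic data. The overdetermined instrumental-variable estimator that the paper combines with MDL is deliberately omitted, so this is a stand-in for the general criterion rather than the proposed method itself.

```python
import numpy as np

# Simplified illustration: MDL-based AR order selection with a plain
# least-squares AR fit. (The paper's overdetermined instrumental-variable
# step for the AR part of an ARMA process is omitted here.)

def ar_residual_variance(x, p):
    """Least-squares AR(p) fit; returns the residual variance."""
    N = len(x)
    X = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ a) ** 2)

def mdl_order(x, max_order=12):
    """Return the order minimizing MDL(p) = N ln(sigma_p^2) + p ln(N)."""
    N = len(x)
    scores = [(N * np.log(ar_residual_variance(x, p)) + p * np.log(N), p)
              for p in range(1, max_order + 1)]
    return min(scores)[1]

# Synthetic AR(2) data as a quick sanity check of the selector.
rng = np.random.default_rng(1)
e = rng.normal(size=2000)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2] + e[n]
print(mdl_order(x))                              # expected to select order 2
```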
One issue with the generative approach to visual tracking relates to computation time. This is because the generative approach uses a particle filter to model the motion and to predict the state in the current frame. The system becomes more accurate, but computation becomes slower, as more particles are used. Recently, the combination of a particle filter and a sparse model has been proposed to handle appearance variations and occlusion in visual tracking. Unfortunately, the computation-time issue still remains. This paper presents a fast method for the sparse generative approach in visual tracking. In this method, l1 minimization is used to calculate a sparse coefficient vector for each candidate sample. Then, the candidate with the maximum weight is selected to represent the result. Based on simulations, our proposed method demonstrates good results on the area-under-curve metric and runs four times faster than other methods while using only fifty particles.
Fast Generative Approach Based on Sparse Representation for Visual Tracking
['Suryo Adhi Wibowo', 'Hansoo Lee', 'Eun Kyeong Kim', 'Sungshin Kim']
soft computing
2016
Peer-Reviewed Research
https://www.kaggle.com/datasets/nechbamohammed/research-papers-dataset
Given following abstract on soft computing with title 'Fast Generative Approach Based on Sparse Representation for Visual Tracking', write a 145-words section 'Methodology': One of issue in generative approach for visual tracking is relates to computation time. It is because generative approach uses particle filter for modeling the motion and as a method to predict the state in the current frame. The system will be more accurate but slower computation if many particles are used. Recently, the combination between particle filter and sparse model is proposed to handle appearance variations and occlusion in visual tracking. Unfortunately, the issue about computation time still remains. This paper presents fast method for sparse generative approach in visual tracking. In this method, l1 minimization is used to calculate sparse coefficient vector for each candidate sample. Then, the maximum weighted is selected to represent the result. Based on simulations, our proposed method demonstrate good result in area under curve parameter and achieve four times faster than other methods with only use fifty particles.
gen_section
0
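To isolate the l1-minimization step mentioned in this record, the sketch below solves a Lasso-style problem for each candidate with a bare-bones ISTA loop and keeps the candidate with the largest reconstruction weight. The dictionary, feature dimension, and candidate patches are hypothetical placeholders; the particle-filter motion model and feature extraction of the actual tracker are omitted, and the 50 candidates merely echo the fifty particles mentioned in the abstract.

```python
import numpy as np

# Bare-bones sketch of the l1 step only: for each candidate y, solve
#   min_c 0.5 * ||y - D c||_2^2 + lam * ||c||_1
# with ISTA, then keep the candidate whose reconstruction weight is largest.

def ista(D, y, lam=0.05, iters=200):
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ c - y)
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return c

def best_candidate(D, candidates, sigma=0.1):
    """Index of the candidate with the largest observation weight."""
    weights = []
    for y in candidates:
        c = ista(D, y)
        err = np.linalg.norm(y - D @ c) ** 2
        weights.append(np.exp(-err / (2 * sigma ** 2)))
    return int(np.argmax(weights))

# Hypothetical 64-dim patch features: 10 templates, 50 candidate particles.
rng = np.random.default_rng(2)
D = rng.normal(size=(64, 10))
D /= np.linalg.norm(D, axis=0)
candidates = [D @ rng.normal(size=10) * 0.3 + rng.normal(scale=0.05, size=64)
              for _ in range(50)]
print(best_candidate(D, candidates))
```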