Columns: FileName (string, length 17) | Abstract (string, length 163–6.01k) | Title (string, length 12–421)
S0262885616300026
We present a deeply integrated method of exploiting low-cost gyroscopes to improve general purpose feature tracking. Most previous methods use gyroscopes to initialize and bound the search for features. In contrast, we use them to regularize the tracking energy function so that they can directly assist in the tracking of ambiguous and poor-quality features. We demonstrate that our simple technique offers significant improvements in performance over conventional template-based tracking methods, and is in fact competitive with more complex and computationally expensive state-of-the-art trackers, but at a fraction of the computational cost. Additionally, we show that the practice of initializing template-based feature trackers like KLT (Kanade–Lucas–Tomasi) using gyro-predicted optical flow offers no advantage over using a careful optical-only initialization method, suggesting that some deeper level of integration, like the method we propose, is needed in order to realize a genuine improvement in tracking performance from these inertial sensors.
Enhancing feature tracking with gyro regularization
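To make the idea of regularizing the tracking energy concrete, here is a minimal numpy sketch of a single Lucas–Kanade-style translation update with a quadratic pull toward a gyro-predicted displacement. The function name, the synthetic patch, and the simple Tikhonov form of the regularizer are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def regularized_lk_step(grad_x, grad_y, err, d_gyro, lam=1.0):
    """One Lucas-Kanade-style translation update with a gyro prior.

    Minimizes ||G d - e||^2 + lam * ||d - d_gyro||^2, where G stacks the
    image gradients over the patch and e is the template-minus-image error.
    """
    G = np.column_stack([grad_x.ravel(), grad_y.ravel()])   # (N, 2)
    e = err.ravel()                                          # (N,)
    A = G.T @ G + lam * np.eye(2)
    b = G.T @ e + lam * np.asarray(d_gyro, dtype=float)
    return np.linalg.solve(A, b)                             # estimated (dx, dy)

# Tiny synthetic example: a smooth patch shifted by one pixel in x.
yy, xx = np.mgrid[0:21, 0:21]
template = np.sin(0.3 * xx) * np.cos(0.2 * yy)
frame = np.sin(0.3 * (xx - 1.0)) * np.cos(0.2 * yy)   # true displacement ~ (+1, 0)
gy, gx = np.gradient(frame)                            # d/drow, d/dcol
err = template - frame
print(regularized_lk_step(gx, gy, err, d_gyro=[1.0, 0.0], lam=0.5))
```

With a poorly textured patch the data term G^T G becomes nearly singular, and the lam-weighted prior keeps the update close to the gyro prediction, which is the intuition behind the regularization.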
S0262885616300038
An analysis of the relative motion and point feature model configurations leading to solution degeneracy is presented, for the case of a Simultaneous Localization and Mapping system using multicamera clusters with non-overlapping fields-of-view. The SLAM optimization system seeks to minimize image space reprojection error and is formulated for a cluster containing any number of component cameras, observing any number of point features over two keyframes. The measurement Jacobian is transformed to expose a reduced-dimension representation such that the degeneracy of the system can be determined by the rank of a dense submatrix. A set of relative motions sufficient for degeneracy are identified for certain cluster configurations, independent of target model geometry. Furthermore, it is shown that increasing the number of cameras within the cluster and observing features across different cameras over the two keyframes reduces the size of the degenerate motion sets significantly.
Degenerate motions in multicamera cluster SLAM with non-overlapping fields of view
S0262885616300051
This work deals with the challenging task of activity recognition in unconstrained videos. Standard methods are based on video encoding of low-level features using Fisher Vectors or Bag of Features. However, these approaches model every sequence as a single vector of fixed dimensionality that lacks any long-term temporal information, which may be important for recognition, especially of complex activities. This work proposes a novel framework with two main technical novelties: first, a video encoding method that maintains the temporal structure of sequences; and second, a Time Flexible Kernel that allows the comparison of sequences of different lengths and arbitrary alignment. Results on challenging benchmarks and comparison to previous work demonstrate the applicability and value of our framework.
A Time Flexible Kernel framework for video-based activity recognition
S0262885616300063
Nowadays, with so many surveillance cameras installed, the market demand for intelligent violence detection is continuously growing, yet it remains a challenging research topic. We therefore attempt to improve existing violence detectors. The primary contributions of this paper are two-fold. First, a novel feature extraction method named Oriented VIolent Flows (OViF), which takes full advantage of the motion magnitude change information in statistical motion orientations, is proposed for practical violence detection in videos. A comparison of OViF with baseline approaches on two public databases demonstrates the effectiveness of the proposed method. Second, feature combination and multi-classifier combination strategies are adopted and yield excellent results. Experimental results show that using combined features with AdaBoost+Linear-SVM achieves improved performance over the state of the art on the Violent-Flows benchmark.
Violence detection using Oriented VIolent Flows
S0262885616300075
In many wide-area surveillance applications, objects are tracked using a network of cameras. A common approach to multi-object tracking in a camera network comprises two main steps. First, the movement trajectory of each object within the field of view of a camera is extracted; this is called an object tracklet. Then, the set of tracklets is used to determine the persistent trace of each object. In this paper, we assume that the tracklets are extracted by a conventional tracking algorithm. Occlusions between objects within the viewed scene lead to various types of errors in the extracted tracklets. If these erroneous tracklets are fed into a multi-object tracking algorithm without a correction phase, the errors propagate and degrade the tracking results. The true tracklets therefore have to be estimated from the erroneous ones. In this paper, we propose a variational model for estimating the true tracklets. The variational principle is established by first introducing a variational energy function; the erroneous tracklets are then used to estimate the true tracklets by optimizing this energy function. The proposed method is evaluated on two well-known datasets and on a synthetic dataset developed specifically to demonstrate the performance of our algorithm under challenging scenarios. Ten common metrics used in other multi-object tracking applications are used for quantitative evaluation. Our experimental results show that the proposed model estimates the true tracklets and thereby improves the overall association performance.
A variational based model for estimating true tracklets in wide area surveillance
S0262885616300099
This paper investigates the problem of cross-domain action recognition. Specifically, we present a cross-domain action recognition framework that utilizes labeled data from other data sets as the auxiliary source domain. This is a challenging task because data from different domains may have different feature distributions. To map data from different domains into the same abstract space and boost action recognition performance, we propose a method named collective matrix factorization with graph Laplacian regularization (CMFGLR). Our approach is built upon collective matrix factorization, which simultaneously learns a common latent space, linear projection matrices for obtaining semantic representations, and an optimal linear classifier. Moreover, we exploit the label consistency across domains and the local geometric consistency in each domain to obtain a graph Laplacian regularization term that enhances the discrimination of the learned features. Experimental results verify that CMFGLR significantly outperforms several state-of-the-art methods.
Cross-domain action recognition via collective matrix factorization with graph Laplacian regularization
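As a small illustration of one component of such an objective, the sketch below builds a k-nearest-neighbor affinity graph and evaluates the generic Laplacian smoothness penalty tr(F^T L F). This is only the standard regularizer under assumed settings (heat-kernel weights, k = 5), not the full CMFGLR model.

```python
import numpy as np

def knn_graph_laplacian(X, k=5, sigma=1.0):
    """Symmetric k-NN heat-kernel affinity graph and its Laplacian L = D - W."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                  # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                 # symmetrize
    return np.diag(W.sum(1)) - W

def laplacian_penalty(F, L):
    """Smoothness term tr(F^T L F) = 0.5 * sum_ij W_ij ||f_i - f_j||^2."""
    return np.trace(F.T @ L @ F)

X = np.random.default_rng(1).random((30, 10))   # 30 samples, 10-dim features
F = np.random.default_rng(2).random((30, 4))    # 4-dim latent representations
print(laplacian_penalty(F, knn_graph_laplacian(X, k=5)))
```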
S0262885616300105
We propose a real-time multi-view landmark detector based on Deformable Part Models (DPM). The detector is composed of a mixture of tree-based DPMs, each component describing landmark configurations in a specific range of viewing angles. The use of view-specific DPMs makes it possible to capture a large range of poses and to deal with self-occlusions. Parameters of the detector are learned from annotated examples by the Structured Output Support Vector Machines algorithm, with a learning objective directly related to the performance measure used for detector evaluation. The tree-based DPM allows a globally optimal landmark configuration to be found by dynamic programming. We propose a coarse-to-fine search strategy that enables real-time dynamic-programming processing even on high-resolution images. Empirical evaluation on “in the wild” images shows that the proposed detector is competitive with state-of-the-art methods in terms of speed and accuracy, yet it retains the guarantee of finding a globally optimal estimate, in contrast to other methods.
Multi-view facial landmark detector learned by the Structured Output SVM
S0262885616300117
Learning-based hashing methods are becoming the mainstream for scalable approximate multimedia retrieval. They consist of two main components: hash code learning for training data and hash function learning for new data points. Tremendous effort has been devoted to designing novel methods for these two components, i.e., supervised and unsupervised methods for learning hash codes, and different models for inferring hash functions. However, little work integrates supervised and unsupervised hash code learning into a single framework. Moreover, the hash function learning component is usually based on hand-crafted visual features extracted from the training images. The performance of a content-based image retrieval system crucially depends on the feature representation, and such hand-crafted visual features may degrade the accuracy of the hash functions. In this paper, we propose a semi-supervised deep learning hashing (DLH) method for fast multimedia retrieval. In the first component, we utilize both visual and label information to learn an optimal similarity graph that more precisely encodes the relationships among training data, and then generate the hash codes based on the graph. In the second component, we apply a deep convolutional network to simultaneously learn a good multimedia representation and a set of hash functions. Extensive experiments on five popular datasets demonstrate the superiority of DLH over both supervised and unsupervised hashing methods.
Deep and fast: Deep learning hashing with semi-supervised graph construction
S0262885616300282
Human action recognition from still images has recently drawn increasing attention in human behavior analysis and poses great challenges due to large inter-class ambiguity and intra-class variability. The vector of locally aggregated descriptors (VLAD) has achieved state-of-the-art performance in many image classification tasks based on local features, largely owing to its high descriptive ability and computational efficiency. In this paper, towards an optimal VLAD representation for human action recognition from still images, we improve VLAD by tackling three important issues: empty cavities, ambiguity and pooling strategies. The empty-cavity issue limits the performance of VLAD and has long been overlooked; we investigate it and provide an effective solution that improves VLAD's performance. We enhance the codewords with mid-level assignments, which are more reliable and provide more useful information for realistic activities. We also propose replacing sum pooling in VLAD with generalized max pooling, which yields a more reliable final representation. We have conducted extensive experiments on four widely used benchmarks to validate the proposed method for human action recognition from still images. Our method produces performance competitive with state-of-the-art algorithms.
Towards optimal VLAD for human action recognition from still images
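For readers unfamiliar with the baseline being improved, here is a minimal numpy sketch of plain VLAD encoding (hard assignment, residual accumulation, power and L2 normalization). The paper's refinements — empty-cavity handling, soft mid-level assignments, generalized max pooling — are deliberately not shown.

```python
import numpy as np

def vlad_encode(descriptors, codebook, power=0.5):
    """Basic VLAD: accumulate residuals to the nearest codeword, then
    power- and L2-normalize the concatenated vector."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    K, D = codebook.shape
    v = np.zeros((K, D))
    for k in range(K):
        assigned = descriptors[nearest == k]
        if len(assigned):                        # empty cells ("cavities") stay zero
            v[k] = (assigned - codebook[k]).sum(axis=0)
    v = v.ravel()
    v = np.sign(v) * np.abs(v) ** power          # signed power normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
desc = rng.random((200, 64))                     # 200 local descriptors
codebook = rng.random((16, 64))                  # 16 codewords (e.g., from k-means)
print(vlad_encode(desc, codebook).shape)         # (1024,)
```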
S0262885616300294
Automatic lip-reading (ALR) is a challenging task because the visual speech signal is known to be missing some important information, such as voicing. We propose an approach to ALR that acknowledges that this information is missing but assumes that it is substituted or deleted in a systematic way that can be modelled. We describe a system that learns such a model and then incorporates it into decoding, which is realised as a cascade of weighted finite-state transducers. Our results show a small but statistically significant improvement in recognition accuracy. We also investigate the issue of suitable visual units for ALR, and show that visemes are sub-optimal, not because they introduce lexical ambiguity, but because the reduction in modelling units entailed by their use reduces accuracy.
Visual units and confusion modelling for automatic lip-reading
S0262885616300312
Image classification assigns a category to an image, while image annotation describes individual components of an image using annotation terms. These two learning tasks are strongly related. The main contribution of this paper is a new discriminative and sparse topic model (DSTM) for image classification and annotation that combines visual, annotation and label information from a set of training images. The essential features distinguishing DSTM from existing approaches are that (i) the label information is enforced in the generation of both visual words and annotation terms, so that each generative latent topic corresponds to a category; and (ii) a zero-mean Laplace distribution is employed to give a sparse representation of images in visual words and annotation terms, so that relevant words and terms are associated with latent topics. Experimental results demonstrate that the proposed method provides good discrimination ability in classification and annotation, and that its performance is better than that of the other tested methods (sLDA-ann, abc-corr-LDA, SupDocNADE, SAGE and MedSTC) on the LabelMe, UIUC, NUS-WIDE and PascalVOC07 image collections.
A discriminative and sparse topic model for image classification and annotation
S0262885616300324
Efficient feature description and classification of dynamic texture (DT) is an important problem in computer vision and pattern recognition. Recently, the local binary pattern (LBP) based dynamic texture descriptor has been proposed to classify DTs by extending the LBP operator used in static texture analysis to the temporal domain. However, the extended LBP operator cannot characterize the intrinsic motion of dynamic texture well. In this paper, we propose a novel video set based collaborative representation dynamic texture classification method. First, we divide the dynamic texture sequence into subsequences along the temporal axis to form the video set. For each DT, we extract the video set based LBP histogram to describe it. We then propose a regularized collaborative representation model to code the LBP histograms of the query video sets over the LBP histograms of the training video sets. Finally, with the coding coefficients, the distance between the query video set and the training video sets can be calculated for classification. Experimental results on the benchmark dynamic texture datasets demonstrate that the proposed method can yield good performance in terms of both classification accuracy and efficiency.
Dynamic texture recognition with video set based collaborative representation
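The classification step described above can be illustrated with a simplified sketch: an l2-regularized (collaborative) coding of a query LBP histogram over the training histograms, followed by class-wise residual comparison. The paper codes whole video sets of LBP histograms; this per-histogram version, with made-up data and a hypothetical function name, only shows the underlying mechanism.

```python
import numpy as np

def crc_classify(train_hists, train_labels, query_hist, lam=0.01):
    """Collaborative representation classification: code the query over all
    training histograms with ridge regression, then pick the class with the
    smallest class-wise reconstruction residual."""
    A = np.asarray(train_hists, dtype=float).T          # (D, N) dictionary
    y = np.asarray(query_hist, dtype=float)
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    labels = np.asarray(train_labels)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - A[:, mask] @ x[mask])  # residual using class-c atoms only
        if res < best_res:
            best, best_res = c, res
    return best

rng = np.random.default_rng(0)
train = rng.random((40, 59))            # 40 training LBP histograms (59 bins)
labels = np.repeat(np.arange(4), 10)    # 4 dynamic-texture classes
query = train[7] + 0.01 * rng.random(59)
print(crc_classify(train, labels, query))   # expected: class 0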
S0262885616300336
In this paper, we present a new algorithm for the computation of the focus of expansion in a video sequence. Although several algorithms have been proposed in the literature for its computation, almost all of them are based on the optical flow vectors between a pair of consecutive frames and are therefore very sensitive to noise, optical flow errors and camera vibrations. Our algorithm is based on the computation of the vanishing point of point trajectories, thus integrating information over more than two consecutive frames. It can improve performance in the presence of erroneous correspondences and occlusions in the field of view of the camera. The algorithm has been tested with virtual sequences generated with Blender, as well as real sequences from both the public KITTI benchmark and a number of challenging video sequences also introduced in this paper. For comparison purposes, several algorithms from the literature have also been implemented. The results show that the algorithm is very robust, outperforming the compared algorithms, especially in outdoor scenes, where the lack of texture can make optical flow algorithms yield inaccurate results. Timing evaluation shows that the proposed algorithm can reach up to 15 fps, demonstrating its suitability for real-time applications.
Estimating the focus of expansion in a video sequence using the trajectories of interest points
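To illustrate the core geometric idea, the sketch below estimates the focus of expansion as the least-squares intersection of lines fitted to point trajectories. The line-fitting and the synthetic forward-motion data are assumptions for illustration; the paper's full pipeline (outlier handling, trajectory selection) is not reproduced.

```python
import numpy as np

def focus_of_expansion(trajectories):
    """Least-squares intersection of the lines fitted to point trajectories.

    Each trajectory is an (M, 2) array of image points over M >= 2 frames.
    A line (point p, unit direction d) is fitted to each trajectory, and the
    FOE minimizes the summed squared perpendicular distance to all lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        p = traj.mean(axis=0)
        _, _, vt = np.linalg.svd(traj - p)     # principal direction via PCA
        d = vt[0]
        P = np.eye(2) - np.outer(d, d)         # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Synthetic forward motion: points expand radially away from FOE = (320, 240).
rng = np.random.default_rng(0)
foe = np.array([320.0, 240.0])
trajs = []
for _ in range(20):
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    start = foe + rng.uniform(20, 100) * direction
    steps = np.outer(np.arange(5), 3.0 * direction)    # 5 frames, moving outward
    trajs.append(start + steps)
print(focus_of_expansion(trajs))   # ~ [320. 240.]
```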
S0262885616300464
This paper presents an orthonormal dictionary learning method for low-rank representation. The orthonormality encourages the dictionary atoms to be as dissimilar as possible, which is beneficial for reducing both the ambiguity of representations and the computational cost. To make the dictionary more discriminative, we enhance the ability of each class-specific dictionary to represent samples from the associated class well while suppressing its ability to represent samples from other classes, and we also enforce representations with small within-class scatter and large between-class scatter. The learned orthonormal dictionary is used to obtain low-rank representations with fast computation. Face recognition results demonstrate the effectiveness and efficiency of the method.
Orthonormal dictionary learning and its application to face recognition
S0262885616300506
At the onset of the second decade of research in eye movement biometrics, the results demonstrated so far strongly support the field's promising prospects. This paper presents a description of the research conducted in eye movement biometrics based on an extended analysis of the characteristics and results of the “BioEye 2015: Competition on Biometrics via Eye Movements.” This extended presentation can contribute to the understanding of the current level of research in eye movement biometrics, covering areas such as previous work in the field, the procedures for creating a database of eye movement recordings, and the different approaches that can be used for the analysis of eye movements. The presented results from the BioEye 2015 competition also demonstrate the identification accuracy that can be achieved under easier and more difficult scenarios. Based on this presentation, we discuss topics related to the current status of eye movement biometrics and suggest possible directions for future research in the field.
Current research in eye movement biometrics: An analysis based on BioEye 2015 competition
S0262885616300579
Feature pooling is a key component in modern visual classification systems. However, the two prevailing pooling techniques, namely average and max pooling, are not theoretically optimal, due to the unrecoverable loss of spatial information during the statistical summarization and the underlying over-simplified assumption about the feature distribution. Addressing these issues, this paper generalizes previous pooling methods toward a weighted ℓp-norm spatial pooling function tailored to class-specific feature spatial distributions. Optimizing such a pooling function toward discriminative class separability, subject to a spatial smoothness constraint, yields the so-called geometric ℓp-norm pooling (GLP) method. Furthermore, to handle variations in object scale and position, which affect not only the learning of discriminative pooling weights but also the applicability of the learned weights, we propose a simple yet effective self-alignment step during both learning and testing to adaptively adjust the pooling weights for individual images. Image segmentation and a visual saliency map are utilized to construct a directed pixel adjacency graph. The discriminative pooling weights are diffused using a random walk on the constructed graph and are thereby propagated onto the salient foreground region. This leads to a robust version of GLP (RGLP) that can cope with misalignment of object position and scale in images. Comprehensive experiments validate the effectiveness of the proposed GLP feature pooling framework. The proposed random-walk-based self-alignment step can effectively alleviate the image misalignment issue and further boost classification accuracy. State-of-the-art image classification and action recognition performance is attained on several benchmarks.
Robust geometric ℓp-norm feature pooling for image classification and action recognition
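A weighted ℓp-norm pooling function interpolates between average and max pooling; the minimal sketch below, with uniform weights chosen only for illustration, shows that behavior (the class-specific learned weights and the smoothness constraint of GLP are not modeled here).

```python
import numpy as np

def weighted_lp_pool(responses, weights, p=3.0):
    """Weighted lp-norm spatial pooling of per-location feature responses.

    responses: (H, W) non-negative activations of one codeword over the grid.
    weights:   (H, W) spatial pooling weights (class-specific and learned in GLP).
    p near 1 approaches weighted average pooling; large p approaches max pooling."""
    r = np.abs(np.asarray(responses, dtype=float))
    w = np.asarray(weights, dtype=float)
    return (np.sum(w * r ** p)) ** (1.0 / p)

rng = np.random.default_rng(0)
resp = rng.random((8, 8))
uniform = np.full((8, 8), 1.0 / 64)
print(weighted_lp_pool(resp, uniform, p=1.0))    # ~ average pooling
print(weighted_lp_pool(resp, uniform, p=50.0))   # ~ max pooling (up to weight scaling)
print(resp.mean(), resp.max())                   # reference values
```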
S0262885616300580
Large variances in visual features often present significant challenges in human action recognition. To address this common problem, this paper proposes a statistical adaptive metric learning (SAML) method that explores various selections and combinations of multiple statistics in a unified metric learning framework. Most statistics have certain advantages in specific controlled environments, and systematic selection and combination can adapt them to more realistic “in the wild” scenarios. In the proposed method, multiple statistics, including means, covariance matrices and Gaussian distributions, are explicitly mapped to or generated in Riemannian manifolds. Typically, d-dimensional mean vectors in R^d are mapped into the space of d×d symmetric positive definite (SPD) matrices. Subsequently, by embedding the heterogeneous manifolds in their tangent Hilbert space, the subspace combination with minimal deviation is selected from the multiple statistics. Mahalanobis metrics are then introduced to map them back into the Euclidean space, and unified optimization is finally performed based on Euclidean distances. Because subspaces with smaller deviations are selected before metric learning, exploring different metric combinations makes the final learning more representative and effective than exhaustively learning from all the hybrid metrics. Experimental evaluations are conducted on human action recognition in both static and dynamic scenarios. Promising results demonstrate that the proposed method performs effectively for human action recognition in the wild.
Statistical adaptive metric learning in visual action feature set recognition
S0304397515005277
In this article we propose a novel formalism to model and analyse gene regulatory networks using a well-established formal verification technique. We model the possible behaviours of networks by logical formulae in linear temporal logic (LTL). By checking the satisfiability of LTL, it is possible to check whether some or all behaviours satisfy a given biological property, which is difficult in quantitative analyses such as the ordinary differential equation approach. Owing to the complexity of LTL satisfiability checking, analysis of large networks is generally intractable in this method. To mitigate this computational difficulty, we developed two methods. One is a modular checking method where we divide a network into subnetworks, check them individually, and then integrate them. The other is an approximate analysis method in which we specify behaviours in simpler formulae which compress or expand the possible behaviours of networks. In the approximate method, we focused on network motifs and presented approximate specifications for them. We confirmed by experiments that both methods improved the analysis of large networks.
Qualitative analysis of gene regulatory networks by temporal logic
S0304397516000888
In this paper, we consider the two-stage scheduling problem in which n jobs are first processed on m identical machines at a manufacturing facility and then delivered to their customers by one vehicle, which can deliver one job per shipment. In the problem, a set of n delivery times is given in advance, and in a schedule the n delivery times should be assigned to the n jobs, respectively. The objective is to minimize the maximum delivery completion time, i.e., the time when all jobs have been delivered to their respective customers and the vehicle has returned to the facility. For this problem, we present a 3/2-approximation algorithm and a polynomial-time approximation scheme.
Two-stage scheduling on identical machines with assignable delivery times to minimize the maximum delivery completion time
S0305054816300041
In accordance with the Basel Capital Accords, the Capital Requirement (CR) for banks' market risk exposure is a nonlinear function of Value-at-Risk (VaR). Importantly, the CR is calculated based on a bank's actual portfolio, i.e. the portfolio represented by its current holdings. To tackle mean-VaR portfolio optimization within the actual portfolio framework (APF), we propose a novel mean-VaR optimization method in which VaR is estimated using a univariate Generalized AutoRegressive Conditional Heteroscedasticity (GARCH) volatility model. The optimization is performed with a Nondominated Sorting Genetic Algorithm (NSGA-II). On a sample of 40 large US stocks, our procedure provided superior mean-VaR trade-offs compared to those obtained from the more customary mean-multivariate GARCH and historical VaR models. The results hold in both low- and high-volatility samples.
Mean-univariate GARCH VaR portfolio optimization: Actual portfolio approach
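The univariate GARCH VaR estimate that feeds the optimization can be sketched as follows: a GARCH(1,1) volatility recursion followed by a normal quantile. The fixed parameters, normal innovation assumption, and simulated returns are illustrative only; in practice the parameters would be fitted by maximum likelihood, and this block does not cover the portfolio-level NSGA-II search.

```python
import numpy as np
from scipy.stats import norm

def garch11_var_forecast(returns, omega, alpha, beta, confidence=0.99):
    """One-step-ahead VaR from a GARCH(1,1) volatility filter.

    sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t, and VaR is the
    loss quantile of a zero-mean normal with that forecast variance."""
    r = np.asarray(returns, dtype=float)
    sigma2 = r.var()                       # initialize at the sample variance
    for rt in r:
        sigma2 = omega + alpha * rt ** 2 + beta * sigma2
    return -norm.ppf(1 - confidence) * np.sqrt(sigma2)

rng = np.random.default_rng(0)
daily_returns = 0.01 * rng.standard_normal(500)          # simulated ~1% daily vol
print(garch11_var_forecast(daily_returns, omega=1e-6, alpha=0.08, beta=0.90))
```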
S0305054816300867
As a result of the growing demand for health services, China's large city hospitals have become markedly overstretched, resulting in delicate and complex operating room scheduling problems. While the operating rooms are struggling to meet demand, they face idle times because of (human) resources being pulled away for other urgent demands, and cancellations for economic and health reasons. In this research we analyze the resulting stochastic operating room scheduling problems, and the improvements attainable by scheduled cancellations to accommodate the large demand while avoiding the negative consequences of excessive overtime work. We present a three-stage recourse model which formalizes the scheduled cancellations and is anticipative to further uncertainty. We develop a solution method for this three-stage model which relies on the sample average approximation and the L-shaped method. The method exploits the structure of optimal solutions to speed up the optimization. Scheduled cancellations can significantly and substantially improve the operating room schedule when the costs of cancellations are close to the costs of overtime work. Moreover, the proposed methods illustrate how the adverse impact of cancellations (by patients) for economic and health reasons can be largely controlled. The (human) resource unavailability however is shown to cause a more than proportional loss of solution value for the surgery scheduling problems occurring in China's large city hospitals, even when applying the proposed solution techniques, and requires different management measures.
Stochastic programming analysis and solutions to schedule overcrowded operating rooms in China
S0306437913000768
Modeling collaboration processes is a challenging task. Existing modeling approaches are not capable of expressing the unpredictable, non-routine nature of human collaboration, which is influenced by the social context of the collaborators involved. We propose a modeling approach that considers collaboration processes as the evolution of a network of collaborative documents along with a social network of collaborators. Our modeling approach, accompanied by a graphical notation and formalization, makes it possible to capture the influence of complex social structures formed by collaborators, and therefore facilitates activities such as the discovery of socially coherent teams, social hubs, or unbiased experts. We demonstrate the applicability and expressiveness of our approach and notation, and discuss their strengths and weaknesses.
On modeling context-aware social collaboration processes
S0306437914001550
Enabling process changes constitutes a major challenge for any process-aware information system. This not only holds for processes running within a single enterprise, but also for collaborative scenarios involving distributed and autonomous partners. In particular, if one partner adapts its private process, the change might affect the processes of the other partners as well. Accordingly, it might have to be propagated to concerned partners in a transitive way. A fundamental challenge in this context is to find ways of propagating the changes in a decentralized manner. Existing approaches are limited with respect to the change operations considered as well as their dependency on a particular process specification language. This paper presents a generic change propagation approach that is based on the Refined Process Structure Tree, i.e., the approach is independent of a specific process specification language. Further, it considers a comprehensive set of change patterns. For all these change patterns, it is shown that the provided change propagation algorithms preserve consistency and compatibility of the process choreography. Finally, a proof-of-concept prototype of a change propagation framework for process choreographies is presented. Overall, comprehensive change support in process choreographies will foster the implementation and operational support of agile collaborative process scenarios.
Dealing with change in process choreographies: Design and implementation of propagation algorithms
S0306437915000459
In recent years, monitoring the compliance of business processes with relevant regulations, constraints, and rules during runtime has evolved into a major concern in literature and practice. Monitoring not only refers to continuously observing possible compliance violations, but also includes the ability to provide fine-grained feedback and to predict possible compliance violations in the future. The body of literature on business process compliance is large, and approaches specifically addressing process monitoring are hard to identify. Moreover, proper means for the systematic comparison of these approaches are missing. Hence, it is unclear which approaches are suitable for particular scenarios. The goal of this paper is to define a framework of Compliance Monitoring Functionalities (CMFs) that enables the systematic comparison of existing and new approaches for monitoring compliance rules over business processes during runtime. To define the scope of the framework, related areas are first identified and discussed. The CMFs are harvested based on a systematic literature review and five selected case studies. The appropriateness of the selection of CMFs is demonstrated in two ways: (a) a systematic comparison with pattern-based compliance approaches and (b) a classification of existing compliance monitoring approaches using the CMFs. Moreover, the application of the CMFs is showcased using three existing tools applied to two realistic data sets. Overall, the CMF framework provides powerful means to position existing and future compliance monitoring approaches.
Compliance monitoring in business processes: Functionalities, application, and tool-support
S0306457313000320
Privacy-preserving collaborative filtering is an emerging web-adaptation tool for coping with the information overload problem without jeopardizing individuals’ privacy. However, collaborative filtering schemes with privacy commonly suffer from scalability and sparseness problems as the content in the domain proliferates. Moreover, applying privacy measures distorts the collected data, which in turn degrades the accuracy of such systems. In this work, we propose a novel privacy-preserving collaborative filtering scheme based on bisecting k-means clustering in which we apply two preprocessing methods. The first preprocessing scheme addresses the scalability problem by constructing a binary decision tree through bisecting k-means clustering, while the second produces clones of users by inserting pseudo-self-predictions into the original user profiles to boost the accuracy of the scalability-enhanced structure. The sparse nature of the collections is handled by transforming ratings into item-feature-based profiles. After analyzing our scheme with respect to privacy and supplementary costs, we perform experiments on benchmark data sets to evaluate it in terms of accuracy and online performance. Our empirical outcomes verify that the combined effects of the proposed preprocessing schemes relieve scalability and augment accuracy significantly.
A scalable privacy-preserving recommendation scheme via bisecting k-means clustering
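The bisecting k-means step at the heart of the tree construction can be sketched in a few lines: repeatedly split the largest cluster with plain 2-means. The synthetic user profiles and the stopping rule (a target cluster count rather than a full binary tree) are simplifying assumptions for illustration.

```python
import numpy as np

def two_means(X, iters=20, seed=0):
    """Plain 2-means split used by the bisecting step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(0)
    return labels

def bisecting_kmeans(X, n_clusters):
    """Repeatedly split the largest cluster with 2-means until n_clusters remain.
    (The paper arranges these splits into a binary decision tree over user profiles.)"""
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        clusters.sort(key=len)
        idx = clusters.pop()                      # take the largest cluster
        labels = two_means(X[idx])
        clusters += [idx[labels == 0], idx[labels == 1]]
    return clusters

rng = np.random.default_rng(1)
profiles = np.vstack([rng.normal(m, 0.3, size=(50, 8)) for m in (0.0, 2.0, 4.0, 6.0)])
print([len(c) for c in bisecting_kmeans(profiles, 4)])   # roughly four groups of 50
```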
S0306457313000356
Relevance-Based Language Models, commonly known as Relevance Models, are successful approaches to explicitly introduce the concept of relevance in the statistical Language Modelling framework of Information Retrieval. These models achieve state-of-the-art retrieval performance in the pseudo relevance feedback task. On the other hand, the field of recommender systems is a fertile research area where users are provided with personalised recommendations in several applications. In this paper, we propose an adaptation of the Relevance Modelling framework to effectively suggest recommendations to a user. We also propose a probabilistic clustering technique to perform the neighbour selection process as a way to achieve a better approximation of the set of relevant items in the pseudo relevance feedback process. These techniques, although well known in the Information Retrieval field, have not been applied yet to recommender systems, and, as the empirical evaluation results show, both proposals outperform individually several baseline methods. Furthermore, by combining both approaches even larger effectiveness improvements are achieved.
Relevance-based language modelling for recommender systems
S0306457313000368
Named entity recognition (NER) is mostly formalized as a sequence labeling problem in which segments of named entities are represented by label sequences. Although a considerable effort has been made to investigate sophisticated features that encode textual characteristics of named entities (e.g. PEOPLE, LOCATION, etc.), little attention has been paid to segment representations (SRs) for multi-token named entities (e.g. the IOB2 notation). In this paper, we investigate the effects of different SRs on NER tasks, and propose a feature generation method using multiple SRs. The proposed method allows a model to exploit not only highly discriminative features of complex SRs but also robust features of simple SRs against the data sparseness problem. Since it incorporates different SRs as feature functions of Conditional Random Fields (CRFs), we can use the well-established procedure for training. In addition, the tagging speed of a model integrating multiple SRs can be accelerated equivalent to that of a model using only the most complex SR of the integrated model. Experimental results demonstrate that incorporating multiple SRs into a single model improves the performance and the stability of NER. We also provide the detailed analysis of the results.
Named entity recognition with multiple segment representations
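As a concrete example of what a segment representation is, the snippet below mechanically converts an IOB2 label sequence into the richer IOBES representation; a model integrating multiple SRs would expose several such relabelings as separate feature functions. The example sentence is made up.

```python
def iob2_to_iobes(labels):
    """Convert an IOB2 label sequence to the IOBES segment representation."""
    out = []
    for i, lab in enumerate(labels):
        if lab == "O":
            out.append(lab)
            continue
        prefix, etype = lab.split("-", 1)
        nxt = labels[i + 1] if i + 1 < len(labels) else "O"
        continues = nxt == "I-" + etype          # does the same entity continue?
        if prefix == "B":
            out.append(("B-" if continues else "S-") + etype)
        else:  # prefix == "I"
            out.append(("I-" if continues else "E-") + etype)
    return out

tokens = ["Barack", "Obama", "visited", "Japan", "."]
iob2 = ["B-PER", "I-PER", "O", "B-LOC", "O"]
print(list(zip(tokens, iob2_to_iobes(iob2))))
# [('Barack', 'B-PER'), ('Obama', 'E-PER'), ('visited', 'O'), ('Japan', 'S-LOC'), ('.', 'O')]
```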
S0306457313000381
Most existing search engines focus on document retrieval. However, information needs are certainly not limited to finding relevant documents. Instead, a user may want to find relevant entities such as persons and organizations. In this paper, we study the problem of related entity finding. Our goal is to rank entities based on their relevance to a structured query, which specifies an input entity, the type of related entities and the relation between the input and related entities. We first discuss a general probabilistic framework, derive six possible retrieval models to rank the related entities, and then compare these models both analytically and empirically. To further improve performance, we study the problem of feedback in the context of related entity finding. Specifically, we propose a mixture model based feedback method that can utilize the pseudo feedback entities to estimate an enriched model for the relation between the input and related entities. Experimental results over two standard TREC collections show that the derived relation generation model combined with a relation feedback method performs better than other models.
An exploration of ranking models and feedback method for related entity finding
S0306457313000459
In our paper we present an experimental study which investigated the possibility of projecting the need for information specialists serving knowledge workers in knowledge industries on the basis of an average university library serving their counterparts at a university. Information management functions, i.e. functions and processes related to information evaluation, acquisition, metadata creation, etc., performed in an average university library are the starting point of this investigation. The fundamental assumption is that these functions occur not only in libraries but also in other contexts, for instance in knowledge industries. As a consequence, we estimate the need for information professionals in knowledge industries by means of quantitative methods from library and information science (the Library Planning Model) and economics (input-output analysis, occupational analysis). Our study confirms the validity of our assumption: the number of information specialists projected on the basis of university libraries is consistent with their actual number reported in national statistics. However, in order to attain a close fit, we had to revise the original research model by dismissing the split of information specialists into reader services and technical services staff.
University libraries as a model for the determination of the need for information specialists in knowledge industries? An exploratory analysis of the information sector in Austria
S0306457313000496
The authors of this paper investigate consumers’ diabetes-related terms based on a log from the Yahoo!Answers social question and answer (Q&A) forum, ascertain characteristics of and relationships among diabetes-related terms from the consumers’ perspective, and reveal users’ diabetes information-seeking patterns. In this study, log analysis, data coding, and multidimensional scaling visualization methods were used. The visual analyses were conducted at two levels: term analysis within a category and category analysis among the categories of the schema. The findings show that the average number of words per question was 128.63, the average number of sentences per question was 8.23, the average number of words per response was 254.83, and the average number of sentences per response was 16.01. Twelve categories (Cause & Pathophysiology, Sign & Symptom, Diagnosis & Test, Organ & Body Part, Complication & Related Disease, Medication, Treatment, Education & Info Resource, Affect, Social & Culture, Lifestyle, and Nutrient) emerged in the diabetes-related schema from the data coding analysis. The analyses at the two levels show that terms and categories were clustered and that patterns were revealed. Future research directions are also included.
A user term visualization analysis based on a social question and answer log
S0306457313000502
Archives are an extremely valuable part of our cultural heritage since they represent the trace of the activities of a physical or juridical person in the course of their business. Despite their importance, the models and technologies that have been developed over the past two decades in the Digital Library (DL) field have not been specifically tailored to archives. This is especially true when it comes to formal and foundational frameworks, as the Streams, Structures, Spaces, Scenarios, Societies (5S) model is. Therefore, we propose an innovative formal model, called NEsted SeTs for Object hieRarchies (NESTOR), for archives, explicitly built around the concepts of context and hierarchy which play a central role in the archival realm. NESTOR is composed of two set-based data models: the Nested Sets Model (NS-M) and the Inverse Nested Sets Model (INS-M) that express the hierarchical relationships between objects through the inclusion property between sets. We formally study the properties of these models and prove their equivalence with the notion of hierarchy entailed by archives. We then use NESTOR to extend the 5S model in order to take into account the specific features of archives and to tailor the notion of digital library accordingly. This offers the possibility of opening up the full wealth of DL methods and technologies to archives. We demonstrate the impact of NESTOR on this problem through three example use cases.
NESTOR: A formal model for digital archives
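The Nested Sets Model can be illustrated with a tiny sketch in which each node of an archival hierarchy is mapped to the set of itself plus all of its descendants, so that ancestry becomes strict set inclusion. The toy fonds/series/file hierarchy is invented for illustration, and the inverse model (INS-M, sets of ancestors) is not shown.

```python
def nested_sets(tree, node):
    """Nested Sets Model (NS-M) sketch: map each node to the set containing the
    node and all its descendants, so 'A is an ancestor of B' becomes set(B) < set(A)."""
    s = {node}
    for child in tree.get(node, []):
        s |= nested_sets(tree, child)
    return s

# A toy archival hierarchy: fonds -> series -> files.
tree = {
    "fonds": ["series-1", "series-2"],
    "series-1": ["file-1a", "file-1b"],
    "series-2": ["file-2a"],
}
nodes = set(tree) | {c for children in tree.values() for c in children}
ns = {n: nested_sets(tree, n) for n in nodes}
print(ns["series-1"] < ns["fonds"])      # True: series-1 is contained in the fonds
print(ns["file-2a"] < ns["series-1"])    # False: different branch of the hierarchy
```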
S0306457313000514
Transfer learning utilizes labeled data available from a related domain (the source domain) to achieve effective knowledge transfer to the target domain. However, most state-of-the-art cross-domain classification methods treat documents as plain text and ignore the hyperlink (or citation) relationships among them. In this paper, we propose a novel cross-domain document classification approach called the Link-Bridged Topic model (LBT). LBT consists of two key steps. First, LBT utilizes an auxiliary link network to discover the direct or indirect co-citation relationships among documents by embedding this background knowledge into a graph kernel. The mined co-citation relationships are leveraged to bridge the gap between domains. Second, LBT combines the content information and link structures into a unified latent topic model, based on the assumption that the documents of the source and target domains share some common topics from the point of view of both content and link structure. By mapping the data of both domains into the latent topic spaces, LBT encodes the knowledge about domain commonality and difference as shared topics with associated differential probabilities. The learned latent topics must be consistent with the source and target data, as well as with the content and link statistics. The shared topics then act as a bridge to facilitate knowledge transfer from the source to the target domain. Experiments on different types of datasets show that our algorithm significantly improves the generalization performance of cross-domain document classification.
A link-bridged topic model for cross-domain document classification
S0306457313000526
In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences matured in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata both during the creation and the aggregation phase are discussed together with the approaches, technologies and tools developed to mitigate them. This survey on digital library developments is expected to contribute to the ongoing discussion on data and metadata quality occurring in the emerging yet more general framework of data infrastructures.
Dealing with metadata quality: The legacy of digital library efforts
S0306457313000538
Multimedia objects can be retrieved using their context, for instance the text surrounding them in documents. This text may be either near to or far from the searched objects. Our goal in this paper is to study the impact, in terms of effectiveness, of the position of text relative to the searched objects. The multimedia objects we consider are described in structured documents such as XML documents, and the document structure is exploited to provide this text position. Although structural information has been shown to be an effective source of evidence in textual information retrieval, only a few works have investigated its interest for multimedia retrieval. More precisely, the task addressed in this paper is to retrieve multimedia fragments (i.e. XML elements having at least one multimedia object). Our general approach consists of two steps: we first retrieve XML elements containing multimedia objects, and we then explore the surrounding information to retrieve relevant multimedia fragments. In both steps, we study the impact of the surrounding information using the document structure. Our work is carried out on images, but it can be extended to any other medium, since the physical content of the multimedia objects is not used. We conducted several experiments in the context of the Multimedia track of the INEX evaluation campaign. Results show that structural evidence is of high interest for tuning the importance of the textual context in multimedia retrieval. Moreover, the proposed approach outperforms state-of-the-art approaches.
Investigating the document structure as a source of evidence for multimedia fragment retrieval
S0306457313000708
Uncertainty is an important idea in information-retrieval (IR) research, but the concept has yet to be fully elaborated and explored. Common assumptions about uncertainty are (a) that it is a negative (anxiety-producing) state and (b) that it will be reduced through information search and retrieval. Research in the domain of uncertainty in illness, however, has demonstrated that uncertainty is a complex phenomenon that shares a complicated relationship with information. Past research on people living with HIV and individuals who have tested positive for genetic risk for different illnesses has revealed that information and the reduction of uncertainty can, in fact, produce anxiety, and that maintaining uncertainty can be associated with optimism and hope. We review the theory of communication and uncertainty management and offer nine principles based on that theoretical work that can be used to influence IR system design. The principles reflect a view of uncertainty as a multi-faceted and dynamic experience, one subject to ongoing appraisal and management efforts that include interaction with and use of information in a variety of forms.
The appraisal and management of uncertainty: Implications for information-retrieval systems
S0306457313000721
In this paper we propose improved variants of the sentence retrieval method TF–ISF (a TF–IDF, or Term Frequency–Inverse Document Frequency, variant for sentence retrieval). The improvement is achieved by using context consisting of neighboring sentences while at the same time promoting the retrieval of longer sentences. We thoroughly compare the new modified TF–ISF methods to the TF–ISF baseline, to an earlier attempt to include context in TF–ISF named tfmix, and to a language-modeling-based method named 3MMPDS that uses context and promotes the retrieval of long sentences. Experimental results show that the TF–ISF method can be improved using local context, and that it can also be improved by promoting the retrieval of longer sentences. Finally, we show that the best results are achieved when both modifications are combined. All new methods (TF–ISF variants) also show statistically significantly better results than the other tested methods.
Improved sentence retrieval using local context and sentence length
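A minimal sketch of the general idea follows: a TF–ISF-style sentence score in which the term frequencies of neighboring sentences are mixed in with a context weight and longer sentences receive a mild log-length boost. The particular weighting, smoothing constants, and toy sentences are assumptions; the paper's exact variants differ.

```python
import math
from collections import Counter

def tf_isf_scores(query, sentences, context_w=0.3, length_boost=True):
    """Score sentences with a TF-ISF-style formula using local context
    (neighboring sentences) and promotion of longer sentences."""
    n = len(sentences)
    tfs = [Counter(s) for s in sentences]
    sf = Counter(t for tf in tfs for t in tf)            # sentence frequency per term
    scores = []
    for i, tf in enumerate(tfs):
        score = 0.0
        for t in query:
            ctx = sum(tfs[j][t] for j in (i - 1, i + 1) if 0 <= j < n)
            mixed_tf = tf[t] + context_w * ctx            # mix in neighboring sentences
            isf = math.log(1.0 + n / (1.0 + sf[t]))       # inverse sentence frequency
            score += math.log(1.0 + mixed_tf) * isf
        if length_boost:
            score *= math.log(1.0 + len(sentences[i]))    # promote longer sentences
        scores.append(score)
    return scores

sents = [
    "the new tracker uses a gyro prior".split(),
    "the prior regularizes the energy function".split(),
    "unrelated sentence about something else entirely".split(),
]
print(tf_isf_scores(["gyro", "prior"], sents))
```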
S0306457313000733
Knowledge organization (KO) and bibliometrics have traditionally been seen as separate subfields of library and information science, but bibliometric techniques make it possible to identify candidate terms for thesauri and to organize knowledge by relating scientific papers and authors to each other, thereby indicating kinds of relatedness and semantic distance. It is therefore important to view bibliometric techniques as a family of approaches to KO in order to illustrate their relative strengths and weaknesses. The subfield of bibliometrics concerned with citation analysis forms a distinct approach to KO which is characterized by its social, historical and dynamic nature, its close dependence on scholarly literature and its explicit kind of literary warrant. The two main methods, co-citation analysis and bibliographic coupling, represent different things, and thus neither can be considered superior for all purposes. The main difference between traditional knowledge organization systems (KOSs) and maps based on citation analysis is that the first group represents intellectual KOSs, whereas the second represents social KOSs. For this reason bibliometric maps cannot be expected ever to be fully equivalent to scholarly taxonomies, but they are – along with other forms of KOSs – valuable tools for assisting users to orient themselves in the information ecology. Like other KOSs, citation-based maps cannot be neutral but will always be based on researchers’ decisions, which tend to favor certain interests and views at the expense of others.
Citation analysis: A social and dynamic approach to knowledge organization
S0306457313000745
We present PubSearch, a hybrid heuristic scheme for re-ranking academic papers retrieved from standard digital libraries such as the ACM Portal. The scheme is based on the hierarchical combination of a custom implementation of the term frequency heuristic, a time-depreciated citation score and a graph-theoretically computed score that relates the paper’s index terms with each other. We designed and developed a meta-search engine that submits user queries to standard digital repositories of academic publications and re-ranks the repository results using the hierarchical heuristic scheme. We evaluate our proposed re-ranking scheme via user feedback against the results of ACM Portal on a total of 58 different user queries specified by 15 different users. The results show that our proposed scheme significantly outperforms ACM Portal in terms of retrieval precision as measured by the most common metrics in Information Retrieval, including Normalized Discounted Cumulative Gain (NDCG), Expected Reciprocal Rank (ERR), as well as a newly introduced lexicographic rule (LEX) for ranking search results. In particular, PubSearch outperforms ACM Portal by more than 77% in terms of ERR, by more than 11% in terms of NDCG, and by more than 907.5% in terms of LEX. We also re-rank the top-10 results of a subset of the original 58 user queries produced by Google Scholar, Microsoft Academic Search, and ArnetMiner; the results show that PubSearch compares very well against these search engines as well. The proposed scheme can be easily plugged into any existing search engine for the retrieval of academic publications.
A heuristic hierarchical scheme for academic search and retrieval
S0306457313000757
Cross-Lingual Link Discovery (CLLD) is a new problem in Information Retrieval. The aim is to automatically identify meaningful and relevant hypertext links between documents in different languages. This is particularly helpful in knowledge discovery if a multi-lingual knowledge base is sparse in one language or another, or the topical coverage in each language is different; such is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR where the CrossLink task has been running since the 2011 NTCIR-9. This paper presents the evaluation framework for benchmarking algorithms for cross-lingual link discovery evaluated in the context of NTCIR-9. This framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment, and evaluation. The assessments are further divided into two separate sets: manual assessments performed by human assessors; and automatic assessments based on links extracted from Wikipedia itself. Using this framework we show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
An evaluation framework for cross-lingual link discovery
S0306457313000769
Several studies of Web server workloads have hypothesized that these workloads are self-similar. The explanation commonly advanced for this phenomenon is that the distribution of Web server requests may be heavy-tailed. However, there is another possible explanation: self-similarity can also arise from deterministic, chaotic processes. To our knowledge, this possibility has not previously been investigated, and so existing studies on Web workloads lack an adequate comparison against this alternative. We conduct an empirical study of workloads from two different Web sites: one public university, and one private company, using the largest datasets that have been described in the literature. Our study employs methods from nonlinear time series analysis to search for chaotic behavior in the web logs of these two sites. While we do find that the deterministic components (i.e. the well-known “weekend effect”) are significant components in these time series, we do not find evidence of chaotic behavior. Predictive modeling experiments contrasting heavy-tailed with deterministic models showed that both approaches were equally effective in modeling our datasets.
An empirical investigation of Web session workloads: Can self-similarity be explained by deterministic chaos?
S0306457313000770
A user study of aNobii was conducted with the aim of exploring possible criteria for evaluating social navigational tools. A set of measures designed to capture various aspects of the benefits provided by the tools was proposed. To test the applicability of these measures, a within-subject experimental design was adopted in which fifty regular aNobii users searched alternately with three book-finding tools: browsing “friends’ bookshelves”, “similar bookshelves”, and “books by known authors”. In addition to the self-reported user experience and search result measures, the “choice set” model was used as a novel framework for navigational effectiveness. Further analyses explored whether three aspects of reader preference, “preference insight”, “preference diversity”, and “reading involvement”, might influence the performance of the tools. Some major findings are as follows. While the author browsing function was shown to be most efficient, browsing friends’ bookshelves generated more interesting and informative browsing experiences. Three evaluative dimensions were derived from our study: search experience, search efficiency, and result quality. The disagreement among these measures shows the need for a multi-faceted evaluative framework for these exploration-based navigational tools. Furthermore, interaction effects on performance were found between users’ preference characteristics and tools. While users with high preference insight relied more heavily on author browsing to obtain more accurate results, highly involved readers tended, percentage-wise, to examine and select more titles when browsing friends’ bookshelves.
Evaluating books finding tools on social media: A case study of aNobii
S0306457313000782
The study explores the relationship between value attribution and information source use of 17 Chinese business managers during their knowledge management (KM) strategic decision-making. During semi-structured interviews, the Chinese business managers, half in the telecommunications sector and half in the manufacturing sector, were asked to rate 16 information sources on five-point Likert Scales. The 16 information sources were grouped into internal–external and personal–impersonal types. The participants rated the information sources according to five value criteria: relevancy, comprehensiveness, reliability, time/effort, and accessibility. Open-ended questions were also asked to get at how and why value attribution affected the participants’ use of one information source over another during decision-making. Findings show that the participants preferred internal–personal type of information sources over external–impersonal information sources. The differences in value ratings per information source were striking: Telecommunications managers rated customers, newspapers/magazines, and conferences/trips much lower than the manufacturing managers but they rated corporate library/intranet and databases much higher than manufacturing managers. The type of industrial sector therefore highly influenced information source use for decision-making by the study’s Chinese business managers. Based on this conclusion, we added organizational and environmental categories to revise the De Alwis, Majid, and Chaudhry’s (2006) typology of factors affecting Chinese managers’ information source preferences during decision-making.
The relationship between perceived value and information source use during KM strategic decision-making: A study of 17 Chinese business managers
S0306457313000794
This paper aims at identifying the factors influencing the implementation of Web accessibility (WA) by European banks. We studied a database made up of 49 European banks whose shares are included in the Dow Jones EURO STOXX® TMI Banks [8300] Index. Regarding the factors for the implementation, we considered three feasible reasons. Firstly, WA adoption can be motivated by operational factors, as WA can aid in increasing operational efficiency. Secondly, we expect large banks to have higher WA levels, as small firms face competitive disadvantages with regard to technology adoption. Lastly, WA can also be understood as a part of the Corporate Social Responsibility (CSR) strategy, so, the more committed a bank is to CSR, the more prone it will be to implement WA. Our results indicate that neither the operational factors nor the firm size seem to have exerted a significant influence on WA adoption. Regarding CSR commitment, results indicate a significant influence on WA adoption. However, the effect of the influence is contrary to that hypothesized, since more CSR-committed banks have less accessible Web sites. A possible reason for this result is that banks not included in the CSR indexes try to overcome this drawback by engaging in alternative CSR activities such as WA.
Determinants of the Web accessibility of European banks
S0306457313000800
Textual entailment is a task for which the application of supervised learning mechanisms has received considerable attention as driven by successive Recognizing Textual Entailment (RTE) challenges. We developed a linguistic analysis framework in which a number of similarity/dissimilarity features are extracted for each entailment pair in a data set and various classifier methods are evaluated based on the instance data derived from the extracted features. The focus of the paper is to compare and contrast the performance of single and ensemble based learning algorithms for a number of data sets. We showed that there is some benefit to the use of ensemble approaches but, based on the extracted features, Naïve Bayes proved to be the strongest learning mechanism. Only one ensemble approach demonstrated a slight improvement over Naïve Bayes.
An investigation into the application of ensemble learning for entailment classification
S0306457313000939
Extensible Markup Language (XML) documents are associated with time in two ways: (1) XML documents evolve over time and (2) XML documents contain temporal information. The efficient management of the temporal and multi-versioned XML documents requires optimized use of storage and efficient processing of complex historical queries. This paper provides a comparative analysis of the various schemes available to efficiently store and query the temporal and multi-versioned XML documents based on temporal, change management, versioning, and querying support. Firstly, the paper studies the multi-versioning control schemes to detect, manage, and query change in dynamic XML documents. Secondly, it describes the storage structures used to efficiently store and retrieve XML documents. Thirdly, it provides a comparative analysis of the various commercial tools based on change management, versioning, collaborative editing, and validation support. Finally, the paper presents some future research and development directions for the multi-versioned XML documents.
Temporal and multi-versioned XML documents: A survey
S0306457313000940
The estimation of query model is an important task in language modeling (LM) approaches to information retrieval (IR). The ideal estimation is expected to be not only effective in terms of high mean retrieval performance over all queries, but also stable in terms of low variance of retrieval performance across different queries. In practice, however, improving effectiveness can sacrifice stability, and vice versa. In this paper, we propose to study this tradeoff from a new perspective, i.e., the bias–variance tradeoff, which is a fundamental theory in statistics. We formulate the notion of bias–variance regarding retrieval performance and estimation quality of query models. We then investigate several estimated query models, by analyzing when and why the bias–variance tradeoff will occur, and how the bias and variance can be reduced simultaneously. A series of experiments on four TREC collections have been conducted to systematically evaluate our bias–variance analysis. Our approach and results will potentially form an analysis framework and a novel evaluation strategy for query language modeling.
Bias–variance analysis in estimating true query model for information retrieval
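For reference, the textbook decomposition that the bias–variance notion above adapts to retrieval reads as follows for an estimator \hat{\theta} of a fixed quantity \theta:

```latex
\mathbb{E}\big[(\hat{\theta}-\theta)^2\big]
  \;=\; \underbrace{\big(\mathbb{E}[\hat{\theta}]-\theta\big)^2}_{\text{bias}^2}
  \;+\; \underbrace{\mathbb{E}\big[(\hat{\theta}-\mathbb{E}[\hat{\theta}])^2\big]}_{\text{variance}}
```

The paper formulates analogous notions over retrieval performance and estimation quality of query models; only this standard starting point is shown here.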
S0306457313000952
This paper reports on an approach to the analysis of form (layout and formatting) during genre recognition recorded using eye tracking. The researchers focused on eight different types of e-mail, such as calls for papers, newsletters and spam, which were chosen to represent different genres. The study involved the collection of oculographic behavior data based on scanpath-duration and scanpath-length metrics, to highlight the ways in which people view the features of genres. We found that genre analysis based on purpose and form (layout features, etc.) was an effective means of identifying the characteristics of these e-mails. The research, carried out on a group of 24 participants, highlighted their interaction with and interpretation of the e-mail texts and the visual cues or features perceived. In addition, the ocular strategies of scanning and skimming that participants employed to process the texts by block, genre and representation were evaluated.
You have e-mail, what happens next? Tracking the eyes for genre
S0306457313000964
Preprocessing is one of the key components in a typical text classification framework. This paper aims to extensively examine the impact of preprocessing on text classification in terms of various aspects such as classification accuracy, text domain, text language, and dimension reduction. For this purpose, all possible combinations of widely used preprocessing tasks are comparatively evaluated on two different domains, namely e-mail and news, and in two different languages, namely Turkish and English. In this way, the contribution of the preprocessing tasks to classification success at various feature dimensions, possible interactions among these tasks, and also the dependency of these tasks on the respective languages and domains are comprehensively assessed. Experimental analysis on benchmark datasets reveals that choosing appropriate combinations of preprocessing tasks, rather than enabling or disabling them all, may provide significant improvement in classification accuracy depending on the domain and language studied.
The impact of preprocessing on text classification
S0306457313000976
On the Semantic Web, the types of resources and the semantic relationships between resources are defined in an ontology. By using that information, the accuracy of information retrieval can be improved. In this paper, we present effective ranking and search techniques considering the semantic relationships in an ontology. Our technique retrieves the top-k resources which are the most relevant to the query keywords through the semantic relationships. To do this, we propose a weighting measure for the semantic relationship. Based on this measure, we propose a novel ranking method which considers the number of meaningful semantic relationships between a resource and keywords as well as the coverage and discriminating power of keywords. In order to improve the efficiency of the search, we prune the unnecessary search space using the length and weight thresholds of the semantic relationship path. In addition, we exploit the Threshold Algorithm over an extended inverted index to answer top-k queries efficiently. The experimental results using real data sets demonstrate that our retrieval method using the semantic information generates accurate results efficiently compared to traditional methods.
Effective ranking and search techniques for Web resources considering semantic relationships
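The abstract above mentions answering top-k queries with the Threshold Algorithm over an extended inverted index. The sketch below is a minimal, generic Fagin-style Threshold Algorithm over score-sorted posting lists with sum aggregation; the paper's semantic-relationship weighting, path pruning and extended index layout are not reproduced, and the data layout here is an assumption.

```python
import heapq

def threshold_algorithm_topk(posting_lists, k):
    """Fagin-style Threshold Algorithm (TA) over score-sorted posting lists.

    posting_lists: dict keyword -> list of (resource, score), sorted by
                   score in descending order.  The aggregate score of a
                   resource is the sum of its per-keyword scores (a missing
                   keyword contributes 0).
    Returns the top-k (resource, aggregate_score) pairs, best first.
    """
    # Random-access structures: keyword -> {resource: score}.
    lookup = {kw: dict(plist) for kw, plist in posting_lists.items()}
    seen = set()
    top = []  # min-heap of (aggregate_score, resource)
    depth = 0
    max_len = max(len(plist) for plist in posting_lists.values())

    while depth < max_len:
        last_seen = []
        for kw, plist in posting_lists.items():
            if depth >= len(plist):
                last_seen.append(0.0)
                continue
            resource, score = plist[depth]
            last_seen.append(score)
            if resource not in seen:
                seen.add(resource)
                agg = sum(lookup[other].get(resource, 0.0) for other in lookup)
                if len(top) < k:
                    heapq.heappush(top, (agg, resource))
                elif agg > top[0][0]:
                    heapq.heapreplace(top, (agg, resource))
        depth += 1
        threshold = sum(last_seen)  # best score any still-unseen resource could reach
        if len(top) == k and top[0][0] >= threshold:
            break  # early termination: no unseen resource can enter the top-k

    return sorted(((r, s) for s, r in top), key=lambda x: -x[1])
```

For example, with posting_lists = {"museum": [("r1", 0.9), ("r2", 0.4)], "paris": [("r2", 0.8), ("r1", 0.1)]} and k=1, the function stops after the second round and returns [("r2", 1.2)].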
S0306457313000988
Arabic is a widely spoken language but few mining tools have been developed to process Arabic text. This paper examines the crime domain in the Arabic language (unstructured text) using text mining techniques. The development and application of a Crime Profiling System (CPS) is presented. The system is able to extract meaningful information, in this case the type of crime, location and nationality, from Arabic language crime news reports. The system has two unique attributes; firstly, information extraction that depends on local grammar, and secondly, dictionaries that can be automatically generated. It is shown that the CPS improves the quality of the data through reduction where only meaningful information is retained. Moreover, the Self Organising Map (SOM) approach is adopted in order to perform the clustering of the crime reports, based on crime type. This clustering technique is improved because only refined data containing meaningful keywords extracted through the information extraction process are inputted into it, i.e. the data are cleansed by removing noise. The proposed system is validated through experiments using a corpus collated from different sources; it was not used during system development. Precision, recall and F-measure are used to evaluate the performance of the proposed information extraction approach. Also, comparisons are conducted with other systems. In order to evaluate the clustering performance, three parameters are used: data size, loading time and quantization error.
Crime profiling for the Arabic language using computational linguistic techniques
S0306457313001003
Although most of the queries submitted to search engines are composed of a few keywords and have a length that ranges from three to six words, more than 15% of the total volume of queries are verbose, introduce ambiguity and cause topic drift. We consider verbosity a property of queries distinct from length, since a long query might be succinct while a short query might be verbose. This paper proposes a methodology to automatically detect verbose queries and conditionally modify them. The methodology proposed in this paper exploits state-of-the-art classification algorithms, combines concepts from a large linguistic database and uses a topic gisting algorithm we designed for verbose query modification purposes. Our experimental results have been obtained using the TREC Robust track collection, thirty topics classified by difficulty degree, four queries per topic classified by verbosity and length, and human assessment of query verbosity. Our results suggest that the methodology for query modification conditioned on query verbosity detection and topic gisting is significantly effective and that query modification should be refined when topic difficulty and query verbosity are considered, since these two properties interact and query verbosity is not straightforwardly related to query length.
Detecting verbose queries and improving information retrieval
S0306457313001015
In this paper, we propose an optimization framework to retrieve an optimal group of experts to perform a multi-aspect task. While a diverse set of skills is needed to perform a multi-aspect task, the group of assigned experts should be able to collectively cover all these required skills. We consider three types of multi-aspect expert group formation problems and propose a unified framework to solve these problems accurately and efficiently. The first problem is concerned with finding the top k experts for a given task, while the required skills of the task are implicitly described. In the second problem, the required skills of the tasks are explicitly described using some keywords, but each expert has a limited capacity to perform these tasks and therefore should be assigned to a limited number of them. Finally, the third problem is the combination of the first and the second problems. Our proposed optimization framework is based on Facility Location Analysis, a well-known branch of Operations Research. In our experiments, we compare the accuracy and efficiency of the proposed framework with the state-of-the-art approaches for the group formation problems. The experimental results show the effectiveness of our proposed methods in comparison with state-of-the-art approaches.
Expert group formation using facility location analysis
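The exact facility-location objective and constraints are not given in the abstract above, so the following is only a rough greedy skill-coverage heuristic in the spirit of such team-formation problems (a standard baseline, not the authors' method); the expert skill sets and capacities are hypothetical inputs.

```python
def greedy_team(required_skills, experts, capacity=None):
    """Pick experts until the required skills are covered.

    required_skills: set of skill keywords describing the task.
    experts:         dict expert -> set of skills.
    capacity:        optional dict expert -> remaining number of tasks the
                     expert can still take on (experts at 0 are skipped).
    Greedily adds the expert covering the most still-uncovered skills.
    """
    uncovered = set(required_skills)
    team = []
    while uncovered:
        best, best_gain = None, 0
        for expert, skills in experts.items():
            if expert in team:
                continue
            if capacity is not None and capacity.get(expert, 0) <= 0:
                continue
            gain = len(skills & uncovered)
            if gain > best_gain:
                best, best_gain = expert, gain
        if best is None:
            break  # remaining skills cannot be covered by anyone available
        team.append(best)
        uncovered -= experts[best]
        if capacity is not None:
            capacity[best] -= 1
    return team, uncovered
```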
S0306457313001027
Knowledge acquisition and bilingual terminology extraction from multilingual corpora are challenging tasks for cross-language information retrieval. In this study, we propose a novel method for mining high quality translation knowledge from our constructed Persian–English comparable corpus, University of Tehran Persian–English Comparable Corpus (UTPECC). We extract translation knowledge based on Term Association Network (TAN) constructed from term co-occurrences in same language as well as term associations in different languages. We further propose a post-processing step to do term translation validity check by detecting the mistranslated terms as outliers. Evaluation results on two different data sets show that translating queries using UTPECC and using the proposed methods significantly outperform simple dictionary-based methods. Moreover, the experimental results show that our methods are especially effective in translating Out-Of-Vocabulary terms and also expanding query words based on their associated terms.
Mining a Persian–English comparable corpus for cross-language information retrieval
S0306457313001106
With ever-increasing information available to end users, search engines have become the most powerful tools for obtaining useful information scattered across the Web. However, even the most renowned search engines commonly return result sets containing pages of little use to the user. Research on semantic search aims to improve traditional information search and retrieval methods, where the basic relevance criteria rely primarily on the presence of query keywords within the returned pages. This work is an attempt to explore different relevancy ranking approaches based on semantics which are considered appropriate for the retrieval of relevant information. In this paper, various pilot projects and their corresponding outcomes have been investigated based on the methodologies adopted and their most distinctive characteristics towards ranking. An overview of selected approaches and their comparison by means of the classification criteria has been presented. With the help of this comparison, some common concepts and outstanding features have been identified.
A review of ranking approaches for semantic search on Web
S0306457313001118
Disaster Management (DM) is a diffuse area of knowledge. It has many complex features interconnecting the physical and the social views of the world. Many international and national bodies create knowledge models to allow knowledge sharing and effective DM activities. But these are often narrow in focus and deal with specific disaster types. We analyze thirty such models and find that many DM activities are common even when the events vary. We then create a unified view of DM in the form of a metamodel. We apply a metamodelling process to ensure that this metamodel is complete and consistent. We validate it and present a representational layer to unify and share knowledge as well as combine and match different DM activities according to different disaster situations.
Development and validation of a Disaster Management Metamodel (DMM)
S0306457313001131
The volume of entity-centric structured data grows rapidly on the Web. The description of an entity, composed of property-value pairs (a.k.a. features), has become very large in many applications. To avoid information overload, efforts have been made to automatically select a limited number of features to be shown to the user based on certain criteria, which is called automatic entity summarization. However, to the best of our knowledge, there is a lack of extensive studies on how humans rank and select features in practice, which can provide empirical support and inspire future research. In this article, we present a large-scale statistical analysis of the descriptions of entities provided by DBpedia and the abstracts of their corresponding Wikipedia articles, to empirically study, along several different dimensions, which kinds of features are preferable when humans summarize. Implications for automatic entity summarization are drawn from the findings.
Preferences in Wikipedia abstracts: Empirical findings and implications for automatic entity summarization
S0306457313001143
Despite the fact that both the Efficient Market Hypothesis and Random Walk Theory postulate that it is impossible to predict future stock prices based on currently available information, recent advances in empirical research have been proving the opposite by achieving what seems to be better than random prediction performance. We discuss some of the (dis)advantages of the most widely used performance metrics and conclude that it is difficult to assess the external validity of performance using some of these measures. Moreover, there remain many questions as to the real-world applicability of these empirical models. In the first part of this study we design novel stock price prediction models, based on state-of-the-art text-mining techniques, to assess whether we can predict the movement of stock prices more accurately by including indicators of irrationality. Along with this, we discuss which metrics are most appropriate for which scenarios in order to evaluate the models. Finally, we discuss how to gain insight into text-mining-based stock price prediction models in order to evaluate, validate and refine the models.
Evaluating and understanding text-based stock price prediction models
S0306457313001155
Multi-document discourse parsing aims to automatically identify the relations among textual spans from different texts on the same topic. Recently, with the growing amount of information and the emergence of new technologies that deal with many sources of information, more precise and efficient parsing techniques are required. The most relevant theory for multi-document relationships, Cross-document Structure Theory (CST), has been used for parsing purposes before, though the results had not been satisfactory. CST has received much criticism because of its subjectivity, which may lead to low annotation agreement and, consequently, to poor parsing performance. In this work, we propose a refinement of the original CST, which consists in (i) formalizing the relationship definitions, (ii) pruning and combining some relations based on their meaning, and (iii) organizing the relations in a hierarchical structure. The hypothesis for this refinement is that it will lead to better agreement in the annotation and consequently to better parsing results. To this end, an annotated corpus was built according to this refinement, and an improvement in annotation agreement was observed. Based on this corpus, a parser was developed using machine learning techniques and hand-crafted rules. Specifically, hierarchical techniques were used to capture the hierarchical organization of the relations according to the proposed refinement of CST. These two approaches were used to identify the relations among text spans and to generate the multi-document annotation structure. The resulting parser outperformed other CST parsers, showing the adequacy of the proposed refinement of the theory.
Revisiting Cross-document Structure Theory for multi-document discourse parsing
S0306457314000119
Automatic text summarization has been an active field of research for many years. Several approaches have been proposed, ranging from simple position and word-frequency methods, to learning and graph based algorithms. The advent of human-generated knowledge bases like Wikipedia offer a further possibility in text summarization – they can be used to understand the input text in terms of salient concepts from the knowledge base. In this paper, we study a novel approach that leverages Wikipedia in conjunction with graph-based ranking. Our approach is to first construct a bipartite sentence–concept graph, and then rank the input sentences using iterative updates on this graph. We consider several models for the bipartite graph, and derive convergence properties under each model. Then, we take up personalized and query-focused summarization, where the sentence ranks additionally depend on user interests and queries, respectively. Finally, we present a Wikipedia-based multi-document summarization algorithm. An important feature of the proposed algorithms is that they enable real-time incremental summarization – users can first view an initial summary, and then request additional content if interested. We evaluate the performance of our proposed summarizer using the ROUGE metric, and the results show that leveraging Wikipedia can significantly improve summary quality. We also present results from a user study, which suggests that using incremental summarization can help in better understanding news articles.
Text summarization using Wikipedia
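As a toy illustration of ranking by iterative updates on a bipartite sentence–concept graph, here is a minimal mutual-reinforcement sketch under an assumed non-negative sentence–concept incidence matrix; the paper's specific graph models, convergence analysis, personalization and query-focusing are not reproduced.

```python
import numpy as np

def rank_sentences(incidence, iterations=50, tol=1e-9):
    """Iteratively rank sentences on a bipartite sentence-concept graph.

    incidence: (num_sentences x num_concepts) non-negative matrix, where
               incidence[i, j] > 0 if sentence i mentions concept j
               (e.g. a Wikipedia concept spotted in the sentence).
    Sentence and concept scores reinforce each other and are L1-normalized
    at every step, so the iteration settles on a fixed point.
    """
    W = np.asarray(incidence, dtype=float)
    n_sent, n_conc = W.shape
    s = np.ones(n_sent) / n_sent       # sentence scores
    c = np.ones(n_conc) / n_conc       # concept scores
    for _ in range(iterations):
        new_c = W.T @ s
        new_c /= new_c.sum() or 1.0
        new_s = W @ new_c
        new_s /= new_s.sum() or 1.0
        converged = (np.abs(new_s - s).sum() < tol and
                     np.abs(new_c - c).sum() < tol)
        s, c = new_s, new_c
        if converged:
            break
    return s, c

# Usage: scores, _ = rank_sentences(W); a summary takes the top-scoring sentences.
```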
S0306457314000120
Social networking sites (SNSs) enable users to personalize their contents and functions. This feature has been assumed to have positive effects on the use of online information services through enhancing user satisfaction. However, unlike other online information services (non-participatory information services), because of the results of personalization in a certain situation, SNS users may keep using an SNS even though they feel dissatisfied with it. SNSs differ from other information services in that they create and sustain their own value based on the number of participating members. In SNSs, personalization, reflected by updates and maintenance of profile pages, results in such participation. This study hypothesizes that personalization influences the continued use of SNSs through two factors: switching cost (an extrinsic factor) and satisfaction (an intrinsic factor). A Web-based survey was conducted with a sample of 677 SNS users from six universities in the US. In-person interviews were conducted with 25 university students to elicit their thoughts on the SNSs. The quantitative analysis tested the proposed model with five hypotheses through a structural equation modeling (SEM) technique. The transcribed interview data were analyzed following the constant comparative technique. The main findings indicate that, as expected, personalization increases switching cost as well as satisfaction, which results in further use of SNSs. These findings suggest that it is necessary to consider both extrinsic and intrinsic factors of user perceptions when adding personalization features to SNSs.
The effects of personalization on user continuance in social networking sites
S0306457314000132
The norm of practice in estimating graph properties is to use uniform random node (RN) samples whenever possible. Many graphs are large and scale-free, inducing large degree variance and estimator variance. This paper shows that random edge (RE) sampling and the corresponding harmonic mean estimator for average degree can reduce the estimation variance significantly. First, we demonstrate that the degree variance, and consequently the variance of the RN estimator, can grow almost linearly with data size for typical scale-free graphs. Then we prove that the RE estimator has a variance bounded from above. Therefore, the variance ratio between RN and RE sampling can be very large for big data. The analytical result is supported by both simulation studies and 18 real networks. We observe that the variance reduction ratio can be more than a hundred for some real networks such as Twitter. Furthermore, we show that random walk (RW) sampling is always worse than RE sampling, and it can reduce the variance of the RN method only when its performance is close to that of RE sampling.
Variance reduction in large graph sampling
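A small simulation sketch of the two estimators contrasted above: the RN estimator averages degrees of uniformly sampled nodes, while the RE estimator takes the harmonic mean of the degrees of endpoints of uniformly sampled edges (an endpoint of a random edge is degree-biased, so averaging reciprocals corrects the bias). The toy graph below is a hypothetical example, not one of the paper's 18 networks.

```python
import random
import statistics

def rn_estimate(degrees, sample_size, rng):
    """Random-node estimator: average degree of uniformly sampled nodes."""
    nodes = list(degrees)
    sample = [degrees[rng.choice(nodes)] for _ in range(sample_size)]
    return statistics.mean(sample)

def re_estimate(edges, degrees, sample_size, rng):
    """Random-edge (harmonic mean) estimator of the average degree."""
    recip = []
    for _ in range(sample_size):
        u, v = rng.choice(edges)
        w = u if rng.random() < 0.5 else v   # pick one degree-biased endpoint
        recip.append(1.0 / degrees[w])
    return 1.0 / statistics.mean(recip)

if __name__ == "__main__":
    # Toy graph: a star plus a path, to mimic a skewed degree distribution.
    rng = random.Random(7)
    edges = [(0, i) for i in range(1, 100)] + [(i, i + 1) for i in range(100, 120)]
    degrees = {}
    for u, v in edges:
        degrees[u] = degrees.get(u, 0) + 1
        degrees[v] = degrees.get(v, 0) + 1
    true_avg = 2 * len(edges) / len(degrees)
    print(true_avg, rn_estimate(degrees, 30, rng), re_estimate(edges, degrees, 30, rng))
```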
S0306457314000144
In sponsored search, many advertisers have not achieved their expected performance, while the search engine also has considerable room to improve its revenue. Specifically, due to improper keyword bidding, many advertisers cannot survive the competitive ad auctions to get their desired ad impressions; meanwhile, a significant portion of search queries have no ads displayed in their search result pages, even though many of them have commercial value. We propose recommending a group of relevant yet less-competitive keywords to an advertiser. Hence, the advertiser can get the chance to win some (originally empty) ad slots and accumulate a number of impressions. At the same time, the revenue of the search engine can also be boosted since many empty ad slots are filled. Mathematically, we model the problem as a mixed integer programming problem, which maximizes the advertiser revenue and the relevance of the recommended keywords, while minimizing the keyword competitiveness, subject to the bid and budget constraints. By solving the problem, we can offer an optimal group of keywords and their optimal bid prices to an advertiser. Simulation results have shown the proposed method is highly effective in increasing ad impressions, expected clicks, advertiser revenue, and search engine revenue.
Bid keyword suggestion in sponsored search based on competitiveness and relevance
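The abstract above describes the recommendation as a mixed integer program trading off revenue and relevance against competitiveness under bid and budget constraints. The paper's exact formulation is not reproduced; an illustrative objective of that general shape, with hypothetical symbols, is:

```latex
\max_{x_i \in \{0,1\},\; b_i \ge 0}\;
  \sum_i x_i \big(\alpha\, r_i + \beta\, \mathrm{rel}_i - \gamma\, c_i\big)
\quad \text{s.t.}\quad
  \sum_i x_i\, b_i\, \mathrm{clk}_i \le B,\qquad
  b_i^{\min} \le b_i \le b_i^{\max}
```

Here x_i selects candidate keyword i, b_i is its bid, r_i, rel_i and c_i stand for expected revenue, relevance and competitiveness, clk_i for expected clicks, B for the advertiser's budget, and alpha, beta, gamma weight the three terms.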
S0306457314000156
Downloading software via the Web is a major solution for publishers to deliver their software products. In this context, user interfaces for software downloading play a key role. They have to allow usable interactions as well as support users in taking conscious and coherent decisions about whether to accept to download a software product or not. This paper presents different design alternatives for software download interfaces, i.e. the interface that prompts the user to confirm whether he or she wishes to actually complete the download, and evaluates their ability to improve the quality of user interactions while reducing errors in user decisions. More precisely, we compare Authenticode, the leading software download interface for Internet Explorer, to Question-&-Answer, a software download interface previously proposed by the authors (Dini, Foglia, Prete, & Zanda, 2007). Furthermore, we evaluate the effect of extending both interfaces by means of a reputation system similar to the eBay Feedback Forum. The results of the usability studies show that (i) the pure Question-&-Answer interface is the most effective in minimizing users’ incoherent behaviors, and (ii) the differences in reputation rankings significantly influence users. Overall results suggest guidelines to design the best interface depending on the context (brand reputation and product features).
Social and Q&A interfaces for app download
S0306457314000168
We digitized three years of Dutch election manifestos annotated by the Dutch political scientist Isaac Lipschits. We used these data to train a classifier that can automatically label new, unseen election manifestos with themes. Having the manifestos in a uniform XML format with all paragraphs annotated with their themes has advantages for both electronic publishing of the data and diachronic comparative data analysis. The data that we created will be disclosed to the public through a search interface. This means that it will be possible to query the data and filter them on themes and parties. We optimized the Lipschits classifier on the task of classifying election manifestos using models trained on earlier years. We built a classifier that is suited for classifying election manifestos from 2002 onwards using the data from the 1980s and 1990s. We evaluated the results by having a domain expert manually assess a sample of the classified data. We found that our automatic classifier obtains the same precision as a human classifier on unseen data. Its recall could be improved by extending the set of themes with newly emerged themes. Thus when using old political texts to classify new texts, work is needed to link and expand the set of themes to newer topics.
Automatic thematic classification of election manifestos
S0306457314000181
Several Web 2.0 applications allow users to assign keywords (or tags) to provide better organization and description of the shared content. Tag recommendation methods may assist users in this task, improving the quality of the available information and, thus, the effectiveness of various tag-based information retrieval services, such as searching, content recommendation and classification. This work addresses the tag recommendation problem from two perspectives. The first perspective, centered at the object, aims at suggesting relevant tags to a target object, jointly exploiting the following three dimensions: (i) tag co-occurrences, (ii) terms extracted from multiple textual features (e.g., title, description), and (iii) various metrics to estimate tag relevance. The second perspective, centered at both object and user, aims at performing personalized tag recommendation to a target object-user pair, exploiting, in addition to the three aforementioned dimensions, a metric that captures user interests. In particular, we propose new heuristic methods that extend state-of-the-art strategies by including new metrics that estimate how accurately a candidate tag describes the target object. We also exploit three learning-to-rank (L2R) based techniques, namely, RankSVM, Genetic Programming (GP) and Random Forest (RF), for generating ranking functions that exploit multiple metrics as attributes to estimate the relevance of a tag to a given object or object-user pair. We evaluate the proposed methods using data from four popular Web 2.0 applications, namely, Bibsonomy, LastFM, YouTube and YahooVideo. Our new heuristics for object-centered tag recommendation provide improvements in precision over the best state-of-the-art alternative of 12% on average (up to 20% in any single dataset), while our new heuristics for personalized tag recommendation produce average gains in precision of 121% over the baseline. Similar performance gains are also achieved in terms of other metrics, notably recall, Normalized Discounted Cumulative Gain (NDCG) and Mean-Reciprocal Rank (MRR). Further improvements, for both object-centered (up to 23% in precision) and personalized tag recommendation (up to 13% in precision), can also be achieved with our new L2R-based strategies, which are flexible and can be easily extended to exploit other aspects of the tag recommendation problem. Finally, we also quantify the benefits of personalized tag recommendation to provide better descriptions of the target object when compared to object-centered recommendation by focusing only on the relevance of the suggested tags to the object. We find that our best personalized method outperforms the best object-centered strategy, with average gains in precision of 10%.
Personalized and object-centered tag recommendation methods for Web 2.0 applications
S0306457314000193
Both general and domain-specific search engines have adopted query suggestion techniques to help users formulate effective queries. In the specific domain of literature search (e.g., finding academic papers), the initial queries are usually based on a draft paper or abstract, rather than short lists of keywords. In this paper, we investigate phrasal-concept query suggestions for literature search. These suggestions explicitly specify important phrasal concepts related to an initial detailed query. The merits of phrasal-concept query suggestions for this domain are their readability and retrieval effectiveness: (1) phrasal concepts are natural for academic authors because of their frequent use of terminology and subject-specific phrases and (2) academic papers describe their key ideas via these subject-specific phrases, and thus phrasal concepts can be used effectively to find those papers. We propose a novel phrasal-concept query suggestion technique that generates queries by identifying key phrasal-concepts from pseudo-labeled documents and combines them with related phrases. Our proposed technique is evaluated in terms of both user preference and retrieval effectiveness. We conduct user experiments to verify a preference for our approach, in comparison to baseline query suggestion methods, and demonstrate the effectiveness of the technique with retrieval experiments.
Automatic suggestion of phrasal-concept queries for literature search
S0306457314000272
Recently, sentiment classification has received considerable attention within the natural language processing research community. However, since most recent works regarding sentiment classification have been done in the English language, there are accordingly not enough sentiment resources in other languages. Manual construction of reliable sentiment resources is a very difficult and time-consuming task. Cross-lingual sentiment classification aims to utilize annotated sentiment resources in one language (typically English) for sentiment classification of text documents in another language. Most existing research works rely on automatic machine translation services to directly project information from one language to another. However, different term distribution between original and translated text documents and translation errors are two main problems faced in the case of using only machine translation. To overcome these problems, we propose a novel learning model based on active learning and semi-supervised co-training to incorporate unlabelled data from the target language into the learning process in a bi-view framework. This model attempts to enrich training data by adding the most confident automatically-labelled examples, as well as a few of the most informative manually-labelled examples from unlabelled data in an iterative process. Further, in this model, we consider the density of unlabelled data so as to select more representative unlabelled examples in order to avoid outlier selection in active learning. The proposed model was applied to book review datasets in three different languages. Experiments showed that our model can effectively improve the cross-lingual sentiment classification performance and reduce labelling efforts in comparison with some baseline methods.
Bi-view semi-supervised active learning for cross-lingual sentiment classification
S0306457314000284
This study aims to compare representations of Japanese personal and corporate name authority data in Japan, South Korea, China (including Hong Kong and Taiwan), and the Library of Congress (LC) in order to identify differences and to bring to light issues affecting name authority data sharing projects, such as the Virtual International Authority File (VIAF). For this purpose, actual data, manuals, formats, and case reports of organizations as research objects were collected. Supplemental e-mail and face-to-face interviews were also conducted. Subsequently, five check points considered to be important in creating Japanese name authority data were set, and the data of each organization were compared from these five perspectives. Before the comparison, an overview of authority control in Chinese, Japanese, Korean-speaking (CJK) countries was also provided. The findings of the study are as follows: (1) the databases of China and South Korea have mixed headings in Kanji and other Chinese characters; (2) few organizations display the correspondence between Kanji and their yomi; (3) romanization is not mandatory in some organizations and is different among organizations; (4) some organizations adopt representations in their local language; and (5) some names in hiragana are not linked with their local forms and might elude a search.
Differences in representations of Japanese name authority data among CJK countries and the Library of Congress
S0306457314000296
Sentiment analysis from data streams is aimed at detecting authors’ attitude, emotions and opinions from texts in real-time. To reduce the labeling effort needed in the data collection phase, active learning is often applied in streaming scenarios, where a learning algorithm is allowed to select new examples to be manually labeled in order to improve the learner’s performance. Even though there are many on-line platforms which perform sentiment analysis, there is no publicly available interactive on-line platform for dynamic adaptive sentiment analysis, which would be able to handle changes in data streams and adapt its behavior over time. This paper describes ClowdFlows, a cloud-based scientific workflow platform, and its extensions enabling the analysis of data streams and active learning. Moreover, by utilizing the data and workflow sharing in ClowdFlows, the labeling of examples can be distributed through crowdsourcing. The advanced features of ClowdFlows are demonstrated on a sentiment analysis use case, using active learning with a linear Support Vector Machine for learning sentiment classification models to be applied to microblogging data streams.
Active learning for sentiment analysis on data streams: Methodology and workflow implementation in the ClowdFlows platform
S0306457314000302
Collaborative information retrieval involves retrieval settings in which a group of users collaborates to satisfy the same underlying need. One core issue of collaborative IR models involves either supporting collaboration with adapted tools or developing IR models for a multiple-user context and providing a ranked list of documents adapted for each collaborator. In this paper, we introduce the first document-ranking model supporting collaboration between two users characterized by roles relying on different domain expertise levels. Specifically, we propose a two-step ranking model: we first compute a document-relevance score, taking into consideration domain expertise-based roles. We introduce specificity and novelty factors into language-model smoothing, and then we assign, via an Expectation–Maximization algorithm, documents to the best-suited collaborator. Our experiments employ a simulation-based framework of collaborative information retrieval and show the significant effectiveness of our model at different search levels.
On domain expertise-based roles in collaborative information retrieval
S0306457314000314
Expertise seeking is the activity of selecting people as sources for consultation about an information need. This review of 72 expertise-seeking papers shows that across a range of tasks and contexts people, in particular work-group colleagues and other strong ties, are among the most frequently used sources. Studies repeatedly show the influence of the social network – of friendships and personal dislikes – on the expertise-seeking network of organisations. In addition, people are no less prominent than documentary sources, in work contexts as well as daily-life contexts. The relative influence of source quality and source accessibility on source selection varies across studies. Overall, expertise seekers appear to aim for sufficient quality, composed of reliability and relevance, while also attending to accessibility, composed of access to the source and access to the source information. Earlier claims that seekers disregard quality to minimise effort receive little support. Source selection is also affected by task-related, seeker-related, and contextual factors. For example, task complexity has been found to increase the use of information sources whereas task importance has been found to amplify the influence of quality on source selection. Finally, the reviewed studies identify a number of barriers to expertise seeking.
Expertise seeking: A review
S0306457314000326
A trie is one of the data structures for keyword matching. It is used in natural language processing, IP address routing, and so on. It can be represented in the matrix form, the list form, the double array, and LOUDS. The double-array representation combines the retrieval speed of the matrix form with the compactness of the list form. LOUDS is a succinct data structure using a bit string. Retrieval with LOUDS is not faster than with the double array, but its space usage is smaller. This paper proposes a compressed version of the double array by dividing the trie into multiple levels and removing the BASE array from the double array. Moreover, a retrieval algorithm and a construction algorithm are proposed. According to the presented experimental results for pseudo and real data sets, the retrieval speed of the presented method is almost the same as that of the double array, and its space usage is compressed to 66% compared with LOUDS for a large set of keywords with fixed length.
Compression of double array structures for fixed length keywords
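For readers unfamiliar with the double array, the naive sketch below shows only the core idea (a transition from state s on character c succeeds when CHECK[BASE[s] + code(c)] == s), built from an ordinary pointer trie. It illustrates the classic, uncompressed double array, not the level-divided, BASE-free compression proposed in the paper.

```python
class DoubleArrayTrie:
    """Naive double-array trie: CHECK[BASE[s] + code(c)] == s encodes a
    transition from state s on character c."""

    END = "\0"  # terminal marker appended to every key

    def __init__(self, keys):
        # Build an ordinary pointer trie first, then lay it out into BASE/CHECK.
        root = {}
        for key in keys:
            node = root
            for ch in key + self.END:
                node = node.setdefault(ch, {})
        self.base = [0, 0]   # index 0 unused; state 1 is the root
        self.check = [0, 0]
        self._place(1, root)

    @staticmethod
    def _code(ch):
        return ord(ch) + 1   # keep character codes strictly positive

    def _ensure(self, index):
        while len(self.base) <= index:
            self.base.append(0)
            self.check.append(0)

    def _place(self, state, node):
        codes = [self._code(ch) for ch in node]
        b = 1
        while True:          # find a BASE value with all child slots still free
            self._ensure(b + max(codes, default=0))
            if all(self.check[b + c] == 0 for c in codes):
                break
            b += 1
        self.base[state] = b
        children = []
        for ch, child in node.items():
            t = b + self._code(ch)
            self.check[t] = state      # reserve every child slot before recursing
            children.append((t, child))
        for t, child in children:
            self._place(t, child)

    def contains(self, key):
        s = 1
        for ch in key + self.END:
            t = self.base[s] + self._code(ch)
            if t >= len(self.check) or self.check[t] != s:
                return False
            s = t
        return True

# Usage:
#   trie = DoubleArrayTrie(["data", "date", "day"])
#   trie.contains("date") -> True;  trie.contains("dat") -> False
```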
S0306457314000338
Much of the valuable information in supporting decision making processes originates in text-based documents. Although these documents can be effectively searched and ranked by modern search engines, actionable knowledge needs to be extracted and transformed into a structured form before being used in a decision process. In this paper we describe how the discovery of semantic information embedded in natural language documents can be viewed as an optimization problem aimed at assigning a sequence of labels (hidden states) to a set of interdependent variables (textual tokens). Dependencies among variables are efficiently modeled through Conditional Random Fields, an undirected graphical model able to represent the distribution of labels given a set of observations. The Markov property of these models prevents them from taking into account long-range dependencies among variables, which are indeed relevant in Natural Language Processing. In order to overcome this limitation we propose an inference method based on an Integer Programming formulation of the problem, where long-distance dependencies are included through non-deterministic soft constraints.
Soft-constrained inference for Named Entity Recognition
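For context, the linear-chain CRF that such systems build on defines the label distribution in the standard form below; the paper's contribution, Integer Programming inference with long-distance soft constraints, sits on top of this model and is not shown here.

```latex
p(\mathbf{y}\mid\mathbf{x})
  = \frac{1}{Z(\mathbf{x})}
    \exp\!\Big(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k(y_{t-1},y_t,\mathbf{x},t)\Big),
\qquad
Z(\mathbf{x}) = \sum_{\mathbf{y}'}\exp\!\Big(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k(y'_{t-1},y'_t,\mathbf{x},t)\Big)
```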
S0306457314000351
This paper describes the use of Wikipedia as a rich knowledge source for a question answering (QA) system. We suggest multiple answer matching modules based on different types of semi-structured knowledge sources of Wikipedia, including article content, infoboxes, article structure, category structure, and definitions. These semi-structured knowledge sources each have their unique strengths in finding answers for specific question types, such as infoboxes for factoid questions, category structure for list questions, and definitions for descriptive questions. The answers extracted from multiple modules are merged using an answer merging strategy that reflects the specialized nature of the answer matching modules. Through an experiment, our system showed promising results, with a precision of 87.1%, a recall of 52.7%, and an F-measure of 65.6%, all of which are much higher than the results of a simple text analysis based system.
Open domain question answering using Wikipedia-based knowledge model
S0306457314000363
Practical classification problems often involve some kind of trade-off between the decisions a classifier may take. Indeed, it may be the case that decisions are not equally good or costly; therefore, it is important for the classifier to be able to predict the risk associated with each classification decision. Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification. The objective is to quantify the trade-off between various classification decisions using probability and the costs that accompany such decisions. Within this framework, a loss function measures the rates of the costs and the risk in taking one decision over another. In this paper, we give a formal justification for a decision function under the Bayesian decision framework that comprises (i) the minimisation of Bayesian risk and (ii) an empirical decision function found by Domingos and Pazzani (1997). This new decision function has a very intuitive geometrical interpretation that can be explored on a Cartesian plane. We use this graphical interpretation to analyse different approaches to find the best decision on four different Naïve Bayes (NB) classifiers: Gaussian, Bernoulli, Multinomial, and Poisson, on different standard collections. We show that the graphical interpretation significantly improves the understanding of the models and opens new perspectives for new research studies.
A new decision to take for cost-sensitive Naïve Bayes classifiers
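The conditional risk underlying the decision functions discussed above has the textbook form: with loss \lambda(\alpha_i \mid \omega_j) incurred by taking decision \alpha_i when the true class is \omega_j,

```latex
R(\alpha_i \mid x) \;=\; \sum_{j} \lambda(\alpha_i \mid \omega_j)\, P(\omega_j \mid x),
\qquad
\alpha^{*}(x) \;=\; \arg\min_{\alpha_i} R(\alpha_i \mid x)
```

The paper's specific decision function and its geometric interpretation on the Cartesian plane are developed from this starting point and are not reproduced here.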
S0306457314000375
Research in natural language processing has increasingly focused on normalizing Twitter messages. Currently, while different well-defined approaches have been proposed for the English language, the problem remains far from being solved for other languages, such as Malay. Thus, in this paper, we propose an approach to normalize Malay Twitter messages based on corpus-driven analysis. An architecture for Malay Tweet normalization is presented, which comprises seven main modules: (1) enhanced tokenization, (2) In-Vocabulary (IV) detection, (3) specialized dictionary query, (4) repeated letter elimination, (5) abbreviation adjusting, (6) English word translation, and (7) de-tokenization. A parallel Tweet dataset, consisting of 9000 Malay Tweets, is used in the development and testing stages. To measure the performance of the system, an evaluation is carried out. The results are promising: our system scores 0.83 BLEU against a baseline of 0.46. To compare the accuracy of the architecture with other statistical approaches, an SMT-like normalization system is implemented, trained, and evaluated with an identical parallel dataset. The experimental results demonstrate that the normalization system, which is designed based on the features of Malay Tweets, achieves higher accuracy than the SMT-like system.
An architecture for Malay Tweet normalization
S0306457314000387
Nowadays, using increasingly granular data, from real-time location information and detailed demographics to consumer-generated content on social networking sites (SNSs), businesses are starting to offer precise location-based product recommendation services through mobile devices. Based on the technology acceptance model (TAM), this paper develops a theoretical model to examine the adoption intention of active SNS users toward location-based recommendation agents (LBRAs). The research model was tested by using the Partial Least Squares (PLS) technique. The results show that perceived usefulness, perceived control, and perceived institutional assurance are important in developing adoption intention. Perceived effort saving, special treatment, and social benefit influence adoption intention through the mediating effect of perceived usefulness. Perceived accuracy has a direct influence on adoption intention.
Understanding the adoption of location-based recommendation agents among active users of social networking sites
S0306457314000399
This article describes in-depth research on machine learning methods for sentiment analysis of Czech social media. Whereas in English, Chinese, or Spanish this field has a long history and evaluation datasets for various domains are widely available, in the case of the Czech language no systematic research has yet been conducted. We tackle this issue and establish a common ground for further research by providing a large human-annotated Czech social media corpus. Furthermore, we evaluate state-of-the-art supervised machine learning methods for sentiment analysis. We explore different pre-processing techniques and employ various features and classifiers. We also experiment with five different feature selection algorithms and investigate the influence of named entity recognition and preprocessing on sentiment classification performance. Moreover, in addition to our newly created social media dataset, we also report results for other popular domains, such as movie and product reviews. We believe that this article will not only extend the current sentiment analysis research to another family of languages, but will also encourage competition, potentially leading to the production of high-end commercial solutions.
Supervised sentiment analysis in Czech social media
S0306457314000491
Influence theories constitute formal models that identify those individuals that are able to affect and guide their peers through their activity. There is a large body of work on developing such theories, as they have important applications in viral marketing, recommendations, as well as information retrieval. Influence theories are typically evaluated through a manual process that cannot scale to data voluminous enough to draw safe, representative conclusions. To overcome this issue, we introduce in this paper a formalized framework for large-scale, automatic evaluation of topic-specific influence theories that are specialized in Twitter. Basically, it consists of five conjunctive conditions that are indicative of real influence exertion: the first three determine which influence theories are compatible with our framework, while the other two estimate their relative effectiveness. At the core of these two conditions lies a novel metric that assesses the aggregate sentiment of a group of users and allows for estimating how close the behavior of influencers is to that of the entire community. We put our framework into practice using a large-scale test-bed with real data from 75 Twitter communities. In order to select the theories that can be employed in our analysis, we introduce a generic, two-dimensional taxonomy that elucidates their functionality. With its help, we ended up with five established topic-specific theories that are applicable to our settings. The outcomes of our analysis reveal significant differences in their performance. To explain them, we introduce a novel methodology for delving into the internal dynamics of the groups of influencers they define. We use it to analyze the implications of the selected theories and, based on the resulting evidence, we propose a novel partition of influence theories in three major categories with divergent performance.
Large-scale evaluation framework for local influence theories in Twitter
S0306457314000508
Multi-document summarization techniques aim to reduce documents into a small set of words or paragraphs that convey the main meaning of the original document. Many approaches to multi-document summarization have used probability-based methods and machine learning techniques to simultaneously summarize multiple documents sharing a common topic. However, these techniques fail to semantically analyze proper nouns and newly-coined words because most depend on an out-of-date dictionary or thesaurus. To overcome these drawbacks, we propose a novel multi-document summarization system called FoDoSu, or Folksonomy-based Multi-Document Summarization, that employs the tag clusters used by Flickr, a Folksonomy system, for detecting key sentences from multiple documents. We first create a word frequency table for analyzing the semantics and contributions of words using the HITS algorithm. Then, by exploiting tag clusters, we analyze the semantic relationships between words in the word frequency table. Finally, we create a summary of multiple documents by analyzing the importance of each word and its semantic relatedness to others. Experimental results from the TAC 2008 and 2009 data sets demonstrate the improvement of our proposed framework over existing summarization systems.
FoDoSu: Multi-document summarization exploiting semantic analysis based on social Folksonomy
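FoDoSu's word scoring relies on the HITS algorithm; the generic power-iteration form of HITS is sketched below on an arbitrary directed word graph. How the system builds that graph from Flickr tag clusters and turns the scores into a sentence-level summary is not reproduced.

```python
import numpy as np

def hits(adjacency, iterations=100, tol=1e-10):
    """Standard HITS: hub and authority scores on a directed graph.

    adjacency: (n x n) matrix with adjacency[i, j] = 1 (or a weight) if
               node i links to node j.  Returns (hubs, authorities),
               each L2-normalized.
    """
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iterations):
        new_auths = A.T @ hubs                 # authorities: pointed to by good hubs
        new_auths /= np.linalg.norm(new_auths) or 1.0
        new_hubs = A @ new_auths               # hubs: point to good authorities
        new_hubs /= np.linalg.norm(new_hubs) or 1.0
        converged = (np.abs(new_hubs - hubs).sum() < tol and
                     np.abs(new_auths - auths).sum() < tol)
        hubs, auths = new_hubs, new_auths
        if converged:
            break
    return hubs, auths
```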
S0306457314000521
The widespread availability of the Internet and the variety of Internet-based applications have resulted in a significant increase in the number of web pages. Determining the behaviors of search engine users has become a critical step in enhancing search engine performance. Search engine user behaviors can be determined by content-based or content-ignorant algorithms. Although many content-ignorant studies have been performed to automatically identify new topics, previous results have demonstrated that spelling errors can cause significant errors in topic shift estimates. In this study, we focused on minimizing the number of wrong estimates that were based on spelling errors. We developed a new hybrid algorithm combining character n-gram and neural network methodologies, and compared the experimental results with results from previous studies. For the FAST and Excite datasets, the proposed algorithm improved topic shift estimates by 6.987% and 2.639%, respectively. Moreover, we analyzed the performance of the character n-gram method from different aspects, including a comparison with the Levenshtein edit-distance method. The experimental results demonstrated that the character n-gram method outperformed the Levenshtein edit-distance method in terms of topic identification.
Character n-gram application for automatic new topic identification
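To make the two similarity measures discussed above concrete, here is a minimal sketch of a character n-gram (Jaccard) similarity next to the classic Levenshtein edit distance; the paper's hybrid n-gram/neural-network topic-shift detector itself is not reproduced, and the example queries in the final comment are hypothetical.

```python
def char_ngrams(text, n=3):
    """Set of character n-grams of a query string (padded with spaces)."""
    padded = f" {text.lower()} "
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def ngram_similarity(a, b, n=3):
    """Jaccard similarity between the character n-gram sets of two queries."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A misspelled reformulation of the same topic stays similar under both measures:
#   ngram_similarity("britney spears", "britny spears") is high, and
#   levenshtein("britney spears", "britny spears") == 1.
```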
S0306457314000533
In this paper, a new homomorphic image watermarking method implementing the Singular Value Decomposition (SVD) algorithm is presented. The idea of the proposed method is based on embedding the watermark with the SVD algorithm in the reflectance component after applying the homomorphic transform. The reflectance component contains most of the image features but with low energy, and hence watermarks embedded in this component will be invisible. A block-by-block implementation of the proposed method is also introduced. The watermark embedding on a block-by-block basis makes the watermark more robust to attacks. A comparison study between the proposed method and the traditional SVD watermarking method is presented in the presence of attacks. The proposed method is more robust to various attacks. The embedding of chaotic encrypted watermarks is also investigated in this paper to increase the level of security.
Homomorphic image watermarking with a singular value decomposition algorithm
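A minimal sketch of SVD-based embedding on a single square block is shown below (the classic singular-value perturbation scheme). The homomorphic transform, the block partitioning of the reflectance component, the chaotic encryption of the watermark and the attack experiments are not reproduced, and extraction as written assumes the watermarked block has not been attacked.

```python
import numpy as np

def embed_svd_watermark(block, watermark, alpha=0.05):
    """Additively embed a watermark into the singular values of a block.

    block:     square 2-D array (e.g. one block of the reflectance component).
    watermark: square 2-D array of the same shape as block.
    alpha:     embedding strength; larger is more robust but more visible.
    Returns (watermarked_block, side_info); side_info is needed for extraction.
    """
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    S = np.diag(s) + alpha * watermark          # perturb the singular-value matrix
    Uw, sw, Vwt = np.linalg.svd(S, full_matrices=False)
    watermarked = U @ np.diag(sw) @ Vt
    return watermarked, (Uw, Vwt, s)

def extract_svd_watermark(watermarked, side_info, alpha=0.05):
    """Recover the watermark from an (unattacked) watermarked block."""
    Uw, Vwt, s_orig = side_info
    _, sw, _ = np.linalg.svd(watermarked, full_matrices=False)
    S_rec = Uw @ np.diag(sw) @ Vwt
    return (S_rec - np.diag(s_orig)) / alpha
```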
S0306457314000636
Applying text mining techniques to legal issues has been an emerging research topic in recent years. Although a few previous studies focused on assisting professionals in the retrieval of related legal documents, to our knowledge, no previous studies could provide relevant statutes to the general public using problem statements. In this work, we design a text mining based method, the three-phase prediction (TPP) algorithm, which allows the general public to use everyday vocabulary to describe their problems and find pertinent statutes for their cases. The experimental results indicate that our approach can help the general public, who are not familiar with professional legal terms, to acquire relevant statutes more accurately and effectively.
Predicting associated statutes for legal problems
S0306457314000648
This paper proposes a novel query expansion method to improve accuracy of text retrieval systems. Our method makes use of a minimal relevance feedback to expand the initial query with a structured representation composed of weighted pairs of words. Such a structure is obtained from the relevance feedback through a method for pairs of words selection based on the Probabilistic Topic Model. We compared our method with other baseline query expansion schemes and methods. Evaluations performed on TREC-8 demonstrated the effectiveness of the proposed method with respect to the baseline.
Weighted Word Pairs for query expansion
S0306457314000661
In microtexts such as messages on Twitter.com, sarcasm is often explicitly marked with a hashtag such as ‘#sarcasm’ to avoid a sarcastic message being understood in its unintended literal meaning. We collected a training corpus of about 406 thousand Dutch tweets with hashtag synonyms denoting sarcasm. Assuming that the human labeling is correct (annotation of a sample indicates that about 90% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a sample of a day’s stream of 2.25 million Dutch tweets. Of the 353 explicitly marked tweets on this day, we detect 309 (87%) with the hashtag removed. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 35% of the top-250 ranked tweets are indeed sarcastic. Analysis indicates that the use of hashtags reduces the further use of linguistic markers for signaling sarcasm, such as exclamations and intensifiers. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of non-verbal expressions that people employ in live interaction when conveying sarcasm. Checking the consistency of our finding in a language from another language family, we observe that in French the hashtag ‘#sarcasme’ has a similar polarity switching function, albeit to a lesser extent.
Signaling sarcasm: From hyperbole to hashtag
S0306457314000673
We live in the Information Age, where most of the personal, business, and administrative data are collected and managed electronically. However, poor data quality may affect the effectiveness of knowledge discovery processes, thus making the development of data improvement steps a significant concern. In this paper we propose the Multidimensional Robust Data Quality Analysis, a domain-independent technique aimed at improving data quality by evaluating the effectiveness of a black-box cleansing function. Here, the proposed approach has been realized through model checking techniques and then applied to a weakly structured dataset describing the working careers of millions of people. Our experimental outcomes show the effectiveness of our model-based approach for data quality as they provide a fine-grained analysis of both the source dataset and the cleansing procedures, enabling domain experts to identify the most relevant quality issues as well as the action points for improving the cleansing activities. Finally, an anonymized version of the dataset and the analysis results have been made publicly available to the community.
A model-based evaluation of data quality activities in KDD
S0306457314000685
Background: We describe a human activity recognition framework based on feature extraction and feature selection techniques, in which a set of time-domain, statistical, and frequency-domain features is extracted from 3-dimensional accelerometer sensors. The framework specifically targets activity recognition using on-body accelerometer sensors. We present a novel interactive knowledge discovery tool for accelerometry in human activity recognition and study its sensitivity to the feature extraction parametrization. Results: The implemented framework achieved encouraging results in human activity recognition. We implemented a new set of features extracted from wearable sensors that are ambitious from a computational point of view and able to ensure high classification results comparable with state-of-the-art wearable systems (Mannini et al., 2013). A feature selection framework was developed to improve clustering accuracy and reduce computational complexity. The software OpenSignals (Gomes, Nunes, Sousa, & Gamboa, 2012) was used for signal acquisition, and the signal processing algorithms were developed in the Python programming language (Rossum & de Boer, 1991) and Orange (Curk et al., 2005). Several clustering methods, namely K-Means, Affinity Propagation, Mean Shift and Spectral Clustering, were applied. The K-Means method yielded promising accuracy for the person-dependent and person-independent cases, with 99.29% and 88.57%, respectively. Conclusions: The study performs two different tests, in intra- and inter-subject contexts, and implements a set of 180 features from which features are easily selected to classify different activities. The implemented algorithm does not stipulate, a priori, any value for the time window or its overlap percentage, but instead searches for the parameters that best fit the specific data. A clustering metric based on the construction of the data confusion matrix is also proposed. The main contribution of this work is the design of a novel gesture recognition system based solely on data from a single 3-dimensional accelerometer.
Human activity data discovery from triaxial accelerometer sensor: Non-supervised learning sensitivity to feature extraction parametrization
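A minimal sketch of the windowed feature extraction plus K-Means clustering pipeline, using synthetic tri-axial signals and a small illustrative feature set; the window size, overlap, and features are placeholders rather than the 180 features of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def window_features(window):
    """Time- and frequency-domain features for one (n_samples, 3) accelerometer window."""
    feats = []
    for axis in range(window.shape[1]):
        signal = window[:, axis]
        spectrum = np.abs(np.fft.rfft(signal))
        feats += [signal.mean(), signal.std(), signal.min(), signal.max(),
                  np.median(signal), spectrum.argmax(), spectrum.sum()]
    return feats

def segment(signal, window_size=128, overlap=0.5):
    """Split a (n_samples, 3) signal into overlapping fixed-size windows."""
    step = max(1, int(window_size * (1 - overlap)))
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

# Synthetic tri-axial data standing in for real recordings of two activities.
rng = np.random.default_rng(0)
walking = rng.normal(0, 1.0, size=(1024, 3))
resting = rng.normal(0, 0.1, size=(1024, 3))
X = np.array([window_features(w) for sig in (walking, resting) for w in segment(sig)])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)
```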
S0306457314000697
Community question answering (CQA) services that enable users to ask and answer questions have become popular on the internet. However, many new questions are not resolved effectively because they fail to reach appropriate answerers. To address this question routing task, we treat it as a ranking problem and rank potential answerers by the probability that they are able to solve the given new question. We utilize a tensor model and a topic model simultaneously to extract latent semantic relations among asker, question and answerer. We then propose a learning procedure based on these models to obtain an optimal ranking of answerers for new questions by optimizing the multi-class AUC (Area Under the ROC Curve). Experimental results on two real-world CQA datasets show that the proposed method is able to predict appropriate answerers for new questions and outperforms other state-of-the-art approaches.
Optimal answerer ranking for new questions in community question answering
S0306457314000703
We present IntoNews, a system to match online news articles with spoken news from television newscasts represented by closed captions. We formalize the news matching problem as two independent tasks: closed caption segmentation and news retrieval. The system segments closed captions using a windowing scheme: sliding or tumbling windows. Next, it uses each segment to build a query by extracting representative terms. The query is used to retrieve previously indexed news articles from a search engine. To detect when a new article should be surfaced, the system compares the set of retrieved articles with the previously retrieved set. The intuition is that if the difference between these sets is large enough, the topic of the newscast currently on air has likely changed and a new article should be displayed to the user. To evaluate IntoNews, we build a test collection using data from a second-screen application and a major online news aggregator. The dataset is manually segmented and annotated by expert assessors, used as our ground truth, and made freely available for download through the Webscope program (http://webscope.sandbox.yahoo.com). Our evaluation is based on a set of novel time-relevance metrics that take into account three different aspects of the problem at hand: precision, timeliness and coverage. We compare our algorithms against the best method previously proposed in the literature for this problem. Experiments show the trade-offs among precision, timeliness and coverage of the airing news. Our best method is four times more accurate than the baseline.
IntoNews: Online news retrieval using closed captions
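A minimal sketch of the caption-segmentation and article-surfacing logic, assuming tumbling windows, term-frequency query building, and a Jaccard-distance test between consecutive retrieved sets; the retrieval call is a placeholder for a real search engine.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "on", "for"}

def build_query(caption_window, num_terms=5):
    """Pick the most frequent non-stopword terms of a caption segment as the query."""
    terms = [w for w in " ".join(caption_window).lower().split() if w not in STOPWORDS]
    return [t for t, _ in Counter(terms).most_common(num_terms)]

def should_surface(current_results, previous_results, threshold=0.5):
    """Surface a new article when the retrieved sets differ enough (Jaccard distance)."""
    cur, prev = set(current_results), set(previous_results)
    if not cur and not prev:
        return False
    jaccard = len(cur & prev) / len(cur | prev)
    return (1 - jaccard) >= threshold

def tumbling_windows(captions, size=4):
    """Non-overlapping caption segments of a fixed size."""
    return [captions[i:i + size] for i in range(0, len(captions), size)]

# Toy captions; a real system would query a news-article search engine per segment.
captions = ["storm hits the coast tonight", "heavy rain expected in the north",
            "sports now", "local team wins the cup final"]
previous = []
for window in tumbling_windows(captions, size=2):
    query = build_query(window)
    results = [f"article-for-{t}" for t in query]  # placeholder for a real search call
    if should_surface(results, previous):
        print("new article surfaced for query:", query)
    previous = results
```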
S0306457314000715
Automatic document summarization using citations is based on summarizing what others explicitly say about the document, by extracting a summary from text around the citations (citances). While this technique works quite well for summarizing the impact of scientific articles, other genres of documents as well as other types of summaries require different approaches. In this paper, we introduce a new family of methods that we developed for legal documents summarization to generate catchphrases for legal cases (where catchphrases are a form of legal summary). Our methods use both incoming and outgoing citations, and we show how citances can be combined with other elements of cited and citing documents, including the full text of the target document, and catchphrases of cited and citing cases. On a legal summarization corpus, our methods outperform competitive baselines. The combination of full text sentences and catchphrases from cited and citing cases is particularly successful. We also apply and evaluate the methods on scientific paper summarization, where they perform at the level of state-of-the-art techniques. Our family of citation-based summarization methods is powerful and flexible enough to target successfully a range of different domains and summarization tasks.
Summarization based on bi-directional citation analysis
S0306457314000727
In this paper, we study effective and efficient processing of keyword-based queries over graph databases. To produce more relevant answers to a query than previous approaches, we propose a new answer tree structure that places no constraint on the number of keyword nodes chosen for each keyword in the query. For efficient search of answer trees on large graph databases, we design an inverted list index that pre-computes and stores the connectivity and relevance information of nodes with respect to keyword terms in the graph. We propose a query processing algorithm that aggregates the best keyword nodes and root nodes from the pre-constructed inverted lists to find the top-k answer trees most relevant to the given query. We further enhance the method by extending the structure of the inverted list and adopting a relevance lookup table, which enables more accurate estimation of the relevance scores of candidate root nodes and efficient search of the top-k answer trees. Performance evaluation through experiments with real graph datasets shows that the proposed method finds more effective top-k answers than previous approaches and provides acceptable and scalable execution performance for various types of keyword queries on large graph databases.
Efficient processing of keyword queries over graph databases for finding effective answers
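A minimal sketch of the inverted-list idea using shortest-path distance as a stand-in for the paper's connectivity and relevance scores, built on the networkx library; ranking candidate roots by total distance to the query keywords is a simplification of the top-k answer-tree search.

```python
from collections import defaultdict

import networkx as nx

def build_inverted_lists(graph, max_depth=2):
    """For each keyword, store node -> distance entries for nodes within max_depth hops
    of a node containing that keyword: a simplified connectivity/relevance index."""
    lists = defaultdict(dict)
    for node, data in graph.nodes(data=True):
        for keyword in data.get("keywords", []):
            dists = nx.single_source_shortest_path_length(graph, node, cutoff=max_depth)
            for reachable, dist in dists.items():
                lists[keyword][reachable] = min(dist, lists[keyword].get(reachable, dist))
    return lists

def top_k_roots(lists, query_keywords, k=3):
    """Rank candidate answer-tree roots by total distance to all query keywords."""
    candidates = set.intersection(*(set(lists[kw]) for kw in query_keywords))
    scored = sorted(candidates, key=lambda n: sum(lists[kw][n] for kw in query_keywords))
    return scored[:k]

# Tiny toy graph: two keyword nodes connected through a common neighbour.
g = nx.Graph()
g.add_node("p1", keywords=["graph"])
g.add_node("p2", keywords=["keyword"])
g.add_node("a1", keywords=[])
g.add_edges_from([("a1", "p1"), ("a1", "p2")])
index = build_inverted_lists(g)
print(top_k_roots(index, ["graph", "keyword"]))
```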
S0306457314000739
Probabilistic topic models are unsupervised generative models which model document content as a two-step generation process, that is, documents are observed as mixtures of latent concepts or topics, while topics are probability distributions over vocabulary words. Recently, a significant research effort has been invested into transferring the probabilistic topic modeling concept from monolingual to multilingual settings. Novel topic models have been designed to work with parallel and comparable texts. We define multilingual probabilistic topic modeling (MuPTM) and present the first full overview of the current research, methodology, advantages and limitations in MuPTM. As a representative example, we choose a natural extension of the omnipresent LDA model to multilingual settings called bilingual LDA (BiLDA). We provide a thorough overview of this representative multilingual model from its high-level modeling assumptions down to its mathematical foundations. We demonstrate how to use the data representation by means of output sets of (i) per-topic word distributions and (ii) per-document topic distributions coming from a multilingual probabilistic topic model in various real-life cross-lingual tasks involving different languages, without any external language pair dependent translation resource: (1) cross-lingual event-centered news clustering, (2) cross-lingual document classification, (3) cross-lingual semantic similarity, and (4) cross-lingual information retrieval. We also briefly review several other applications present in the relevant literature, and introduce and illustrate two related modeling concepts: topic smoothing and topic pruning. In summary, this article encompasses the current research in multilingual probabilistic topic modeling. By presenting a series of potential applications, we reveal the importance of the language-independent and language pair independent data representations by means of MuPTM. We provide clear directions for future research in the field by providing a systematic overview of how to link and transfer aspect knowledge across corpora written in different languages via the shared space of latent cross-lingual topics, that is, how to effectively employ learned per-topic word distributions and per-document topic distributions of any multilingual probabilistic topic model in various cross-lingual applications.
Probabilistic topic modeling in multilingual settings: An overview of its methodology and applications
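A minimal sketch of how per-document topic distributions in a shared cross-lingual topic space (e.g., inferred by a model such as BiLDA) can drive cross-lingual retrieval, here via cosine similarity; the toy distributions are illustrative, and similarity measures other than cosine are equally possible.

```python
import numpy as np

def topic_similarity(theta_source, theta_target):
    """Cosine similarity between per-document topic distributions in the shared
    cross-lingual topic space, usable for cross-lingual retrieval or classification."""
    a, b = np.asarray(theta_source), np.asarray(theta_target)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy per-document topic distributions for an English query document and two
# candidate documents in another language (no translation resource involved).
theta_en = [0.70, 0.20, 0.10]
theta_nl_1 = [0.65, 0.25, 0.10]   # topically close
theta_nl_2 = [0.05, 0.15, 0.80]   # topically distant

ranking = sorted(["doc1", "doc2"],
                 key=lambda d: -topic_similarity(theta_en, {"doc1": theta_nl_1,
                                                            "doc2": theta_nl_2}[d]))
print(ranking)  # the topically closer document should rank first
```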
S0306457314000740
Aspect-level sentiment analysis is important for numerous opinion mining and market analysis applications. In this paper, we study the problem of identifying and rating review aspects, which is the fundamental task in aspect-level sentiment analysis. Previous review aspect analysis methods seldom consider the entity or the rating, relying only on 2-tuples, i.e., head–modifier pairs; for example, in the phrase “nice room”, “room” is the head and “nice” is the modifier. To address this limitation, we propose Quad-tuple Probabilistic Latent Semantic Analysis (QPLSA), which incorporates the entity and its rating together with the 2-tuples into the PLSA model. Specifically, QPLSA not only generates fine-granularity aspects but also captures the correlations between words and ratings. We also develop two novel prediction approaches: Quad-tuple Prediction (from the global perspective) and Expectation Prediction (from the local perspective). Systematic experiments show that Quad-tuple PLSA significantly outperforms 2-tuple PLSA on both aspect identification and aspect rating prediction on public datasets. Moreover, for aspect rating prediction, QPLSA shows significant superiority over state-of-the-art baseline methods, and the Quad-tuple Prediction and the Expectation Prediction also show strong ability in aspect rating on different datasets.
QPLSA: Utilizing quad-tuples for aspect identification and rating
S0306457314000752
Online review mining has been used to help manufacturers and service providers improve their products and services, and to provide valuable support for consumer decision making. Product aspect extraction is fundamental to online review mining. This research aims to improve the performance of aspect extraction from online consumer reviews. To this end, we augment a frequency-based extraction method with PMI-IR, which uses web search to measure the semantic similarity between aspect candidates and target entities. In addition, we extend RCut, an algorithm originally developed for text classification, to learn the threshold for selecting candidate aspects. Experimental results with Chinese online reviews show that our proposed method not only outperforms the state-of-the-art frequency-based method for aspect extraction but also generalizes across different product domains and various data sizes.
Improving aspect extraction by augmenting a frequency-based method with web-based similarity measures
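A minimal sketch of the two ingredients described above: frequency-based candidate generation and a PMI-IR score computed from hit counts. The hit counts here are hypothetical placeholders for real search-engine queries, and the noun filtering and RCut thresholding steps are omitted.

```python
import math
from collections import Counter

def pmi_ir(hits_pair, hits_candidate, hits_target, total_pages):
    """PMI-IR score from (placeholder) web hit counts:
    log2 of p(candidate, target) / (p(candidate) * p(target))."""
    p_pair = hits_pair / total_pages
    p_cand = hits_candidate / total_pages
    p_targ = hits_target / total_pages
    return math.log(p_pair / (p_cand * p_targ), 2)

def frequent_candidates(reviews, min_freq=2):
    """Frequency-based aspect candidates (nouns approximated here by raw tokens)."""
    counts = Counter(w for r in reviews for w in r.lower().split())
    return [w for w, c in counts.items() if c >= min_freq]

reviews = ["the battery life is great", "battery drains fast",
           "great screen and battery", "the screen is sharp"]
candidates = frequent_candidates(reviews)

# Hypothetical hit counts standing in for real search-engine queries.
hits = {"battery": 9e6, "screen": 2e7, "the": 9e8, "phone": 6e7,
        ("battery", "phone"): 4e6, ("screen", "phone"): 7e6, ("the", "phone"): 5e7}
total = 1e9
scored = {c: pmi_ir(hits[(c, "phone")], hits[c], hits["phone"], total)
          for c in candidates if (c, "phone") in hits}
# Genuine aspects (battery, screen) score high; generic words like "the" score low.
print(sorted(scored.items(), key=lambda kv: -kv[1]))
```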
S0306457314000843
Research into unsupervised stemming has, in the past few years, produced methods that are reliable and perform well. Our approach further shifts the boundaries of the state of the art by providing more accurate stemming results. The approach builds a stemmer in two stages. In the first stage, a stemming algorithm based on clustering, which exploits the lexical and semantic information of words, is used to prepare large-scale training data for the second-stage algorithm. The second-stage algorithm uses a maximum entropy classifier, whose stemming-specific features help it decide when and how to stem a particular word. In our research, we have pursued the goal of creating a multi-purpose stemming tool. Its design opens up possibilities for solving non-traditional tasks such as approximating lemmas or improving language modeling, while we still aim at very good results in the traditional task of information retrieval. The conducted tests reveal exceptional performance in all of the above-mentioned tasks. Our stemming method is compared with three state-of-the-art statistical algorithms and one rule-based algorithm on corpora in the Czech, Slovak, Polish, Hungarian, Spanish and English languages. In the tests, our algorithm excels at stemming previously unseen words (words not present in the training set). Moreover, our approach demands very little text data for training compared with competing unsupervised algorithms.
HPS: High precision stemmer
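A minimal sketch of the second-stage classifier, assuming the first stage has already produced (word, stem) training pairs; a multinomial logistic regression (a maximum entropy classifier) predicts how many trailing characters to strip, with simple suffix features standing in for the paper's stemming-specific features.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def word_features(word):
    """Simple suffix features standing in for the paper's stemming-specific features."""
    return {f"suffix{k}": word[-k:] for k in range(1, 4) if len(word) >= k}

# First-stage output would be (word, stem) pairs from clustering; here a tiny toy set.
pairs = [("walking", "walk"), ("walked", "walk"), ("talking", "talk"),
         ("talked", "talk"), ("jumps", "jump"), ("plays", "play")]
X = [word_features(w) for w, _ in pairs]
y = [len(w) - len(s) for w, s in pairs]   # label = number of trailing characters to strip

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

def stem(word):
    strip = int(model.predict([word_features(word)])[0])
    return word[:-strip] if strip else word

print(stem("jumping"))  # expected to strip the '-ing' suffix
```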
S0306457314000855
Complex applications such as big data analytics involve different forms of coupling relationships that reflect interactions between factors related to technical, business (domain-specific) and environmental (including socio-cultural and economic) aspects. Diverse forms of couplings are embedded in poorly structured and ill-structured data. Such couplings are ubiquitous, implicit and/or explicit, objective and/or subjective, heterogeneous and/or homogeneous, and they present complexities to existing learning systems in statistics, mathematics and computer science, which typically handle dependency, association and correlation relationships. Modeling and learning such couplings is therefore fundamental but challenging. This paper discusses the concept of coupling learning, focusing on the involvement of coupling relationships in learning systems. Coupling learning has great potential for building a deep understanding of the essence of business problems and for handling challenges that have not been addressed well by existing learning theories and tools. This argument is verified by several case studies on coupling learning, including handling couplings in recommender systems, incorporating couplings into coupled clustering, coupling document clustering, coupled recommender algorithms, and coupled behavior analysis for groups.
Coupling learning of complex interactions
S0306457314000867
Media sharing applications such as Flickr and Panoramio contain a large number of pictures related to real-life events. The development of effective methods to retrieve these pictures is therefore important, but still a challenging task. Recognizing this importance, and to improve the retrieval effectiveness of tag-based event retrieval systems, we propose a new method to extract a set of geographical tag features from the raw geo-spatial profiles of user tags. The main idea is to use these features to select the best expansion terms in a machine-learning-based query expansion approach. Specifically, we apply rigorous statistical exploratory analysis of spatial point patterns to extract the geo-spatial features. We use the features both to summarize the spatial characteristics of the spatial distribution of a single term, and to determine the similarity between the spatial profiles of two terms, i.e., term-to-term spatial similarity. To further improve our approach, we investigate the effect of combining our geo-spatial features with temporal features when choosing the expansion terms. To evaluate our method, we perform several experiments, including well-known feature analyses, which show how much our proposed geo-spatial features contribute to improving the overall retrieval performance. The results demonstrate the effectiveness and viability of our method.
Geo-temporal distribution of tag terms for event-related image retrieval
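A minimal sketch of two kinds of geo-spatial tag features hinted at above: a per-term spatial spread summary and a term-to-term spatial similarity. Both the grid-based similarity and the toy coordinates are assumptions, not the paper's exact point-pattern statistics.

```python
import math

def standard_distance(points):
    """Spatial spread of a tag's usage: RMS distance of (lat, lon) points to their centroid.
    One of many possible summaries of a tag's geo-spatial profile."""
    lat_c = sum(p[0] for p in points) / len(points)
    lon_c = sum(p[1] for p in points) / len(points)
    return math.sqrt(sum((p[0] - lat_c) ** 2 + (p[1] - lon_c) ** 2
                         for p in points) / len(points))

def spatial_similarity(points_a, points_b, cell=0.1):
    """Term-to-term spatial similarity: Jaccard overlap of occupied grid cells."""
    grid = lambda pts: {(round(p[0] / cell), round(p[1] / cell)) for p in pts}
    ga, gb = grid(points_a), grid(points_b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Hypothetical geotags for two tags: one tied to a single venue, one spread city-wide.
oktoberfest = [(48.131, 11.549), (48.132, 11.550), (48.130, 11.548)]
munich = [(48.14, 11.58), (48.10, 11.52), (48.18, 11.60), (48.13, 11.55)]
print(standard_distance(oktoberfest), standard_distance(munich))
print(spatial_similarity(oktoberfest, munich))
```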
S0306457314000879
Social media websites, such as YouTube and Flickr, are currently gaining in popularity. A large volume of information is generated by online users, and how to appropriately provide personalized content is becoming more challenging. Traditional recommendation models are overly dependent on preference ratings and often suffer from the problem of “data sparsity”. Recent research has attempted to integrate the sentiment analysis results of online affective texts into recommendation models; however, these studies are still limited. The one-class collaborative filtering (OCCF) method is more applicable in the social media scenario, yet on its own it is insufficient for item recommendation. In this study, we develop a novel sentiment-aware social media recommendation framework, referred to as SA_OCCF, to tackle the above challenges. We leverage inferred sentiment feedback and OCCF models to improve recommendation performance. We conduct comprehensive experiments on a real social media website to verify the effectiveness of the proposed framework and methods. The results show that the proposed methods are effective in improving the performance of the baseline OCCF methods.
Mining affective text to improve social media item recommendation
S0306457314000880
Social media is playing a growing role in elections world-wide. Automatically analyzing electoral tweets therefore has applications in understanding how public sentiment is shaped, tracking public sentiment and polarization with respect to candidates and issues, understanding the impact of tweets from various entities, and more. Here, for the first time, we annotate a set of 2012 US presidential election tweets for a number of attributes pertaining to sentiment, emotion, purpose, and style through crowdsourcing. Overall, more than 100,000 crowdsourced responses were obtained for 13 questions on emotions, style, and purpose. Through an analysis of these annotations, we show that purpose, even though correlated with emotions, is significantly different. Finally, we describe how we developed automatic classifiers, using features from state-of-the-art sentiment analysis systems, to predict emotion and purpose labels in new, unseen tweets. These experiments establish baseline results for automatic systems on this new data.
Sentiment, emotion, purpose, and style in electoral tweets
S0306457314000892
With the rise of Web 2.0 platforms, personal opinions such as reviews, ratings, recommendations, and other forms of user-generated content have fueled interest in sentiment classification in both academia and industry. To enhance the performance of sentiment classification, ensemble methods have been investigated by previous research and proven effective both theoretically and empirically. We advance this line of research by proposing an enhanced Random Subspace method, POS-RS, for sentiment classification based on part-of-speech analysis. Unlike existing Random Subspace methods, which use a single subspace rate to control the diversity of base learners, POS-RS employs two parameters, the content lexicon subspace rate and the function lexicon subspace rate, to control the balance between the accuracy and diversity of base learners. Ten publicly available sentiment datasets were used to verify the effectiveness of the proposed method. Empirical results reveal that POS-RS achieves the best performance by reducing bias and variance simultaneously compared to the base learner, i.e., the Support Vector Machine. These results illustrate that POS-RS can be used as a viable method for sentiment classification and has the potential of being successfully applied to other text classification problems.
POS-RS: A Random Subspace method for sentiment classification based on part-of-speech analysis
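A minimal sketch of the POS-RS idea, assuming a stopword list as a crude stand-in for part-of-speech tagging: the content and function sub-lexicons are sampled with separate rates, and each base SVM is trained on its own subspace before majority voting. The function-word list, rates, and ensemble size are illustrative, not the paper's settings.

```python
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

FUNCTION_WORDS = {"the", "a", "an", "is", "was", "and", "but", "not", "very", "it", "this"}

def pos_rs(train_texts, train_labels, test_texts,
           n_learners=11, content_rate=0.7, function_rate=0.5, seed=0):
    """Random Subspace ensemble with separate sampling rates for the content and
    function sub-lexicons (a stopword list stands in for POS tagging)."""
    rng = random.Random(seed)
    vec = TfidfVectorizer()
    X_train = vec.fit_transform(train_texts)
    X_test = vec.transform(test_texts)
    vocab = vec.get_feature_names_out()
    func_idx = [i for i, w in enumerate(vocab) if w in FUNCTION_WORDS]
    cont_idx = [i for i, w in enumerate(vocab) if w not in FUNCTION_WORDS]

    votes = [[0, 0] for _ in test_texts]
    for _ in range(n_learners):
        k_c = max(1, int(len(cont_idx) * content_rate))
        k_f = max(1, int(len(func_idx) * function_rate)) if func_idx else 0
        subspace = rng.sample(cont_idx, k_c) + rng.sample(func_idx, k_f)
        clf = LinearSVC().fit(X_train[:, subspace], train_labels)
        for i, pred in enumerate(clf.predict(X_test[:, subspace])):
            votes[i][pred] += 1          # tally each base learner's vote
    return [max((0, 1), key=lambda c: votes[i][c]) for i in range(len(test_texts))]

# Toy binary sentiment data (1 = positive, 0 = negative).
train = ["this movie was very good", "a truly great film", "it was not good at all",
         "this is a terrible and boring movie"]
labels = [1, 1, 0, 0]
print(pos_rs(train, labels, ["a very good and great film", "boring terrible movie"]))
```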