Columns: FileName (string, 17 characters), Abstract (string, 163 to 6.01k characters), Title (string, 12 to 421 characters)
S0306457314000909
Search task difficulty has been attracting much research attention in recent years, mostly regarding its relationship with searchers’ behaviors and the prediction of task difficulty from search behaviors. However, it remains unknown what makes searchers perceive a task as difficult. A study involving 48 undergraduate students was conducted to explore this question. Each participant was given 4 search tasks that were carefully designed following a task classification scheme. Questionnaires were used to elicit participants’ ratings of task difficulty and why they gave those ratings. Based on the collected difficulty reasons, a coding scheme was developed, which covered various aspects of task, user, and user–task interaction. Difficulty reasons were then categorized following this scheme. Results showed that searchers reported some common reasons leading to task difficulty in different tasks, but most of the difficulty reasons varied across tasks. In addition, task difficulty had some common reasons between searchers with low and high levels of topic knowledge, although there were also differences in top task difficulty reasons between high- and low-knowledge users. These findings further our understanding of search task difficulty, the relationship between task difficulty and task type, and that between task difficulty and knowledge level. The findings can also be helpful in designing tasks for information search experiments, and have implications for search system design both in general and for personalization based on task type and searchers’ knowledge.
Exploring search task difficulty reasons in different task types and user knowledge groups
S0306457314000910
One of the problems of opinion mining is the domain adaptation of sentiment classifiers. There are several approaches to tackling this problem; one of these is the integration of a list of opinion-bearing words for the specific domain. This paper presents the generation of several resources for domain adaptation in polarity detection. In addition, the lack of resources in languages other than English has oriented our work towards developing sentiment lexicons for polarity classifiers in Spanish. The results show the validity of the new sentiment lexicons, which can be used as part of a polarity classifier.
A Spanish semantic orientation approach to domain adaptation for polarity classification
S0306457314000922
We propose TripBuilder, an unsupervised framework for planning personalized sightseeing tours in cities. We collect categorized Points of Interest (PoIs) from Wikipedia and albums of geo-referenced photos from Flickr. By considering the photos as traces revealing the behaviors of tourists during their sightseeing tours, we extract from photo albums spatio-temporal information about the itineraries made by tourists, and we match these itineraries to the PoIs of the city. The task of recommending a personalized sightseeing tour is modeled as an instance of the Generalized Maximum Coverage (GMC) problem, where a measure of personal interest for the user, given her preferences and visiting time budget, is maximized. The set of actual trajectories resulting from the GMC solution is scheduled on the tourist’s agenda by exploiting a particular instance of the Traveling Salesman Problem (TSP). Experimental results on three different cities show that our approach is effective, efficient and outperforms competitive baselines.
On planning sightseeing tours with TripBuilder
S0306457314000934
Nowadays opinion mining systems play a strategic role in different areas such as Marketing, Decision Support Systems or Policy Support. Since the arrival of the Web 2.0, more and more textual documents containing information that express opinions or comments in different languages are available. Given the proven importance of such documents, the use of effective multilingual opinion mining systems has become of high importance to different fields. This paper presents the experiments carried out with the objective to develop a multilingual sentiment analysis system. We present initial evaluations of methods and resources performed in two international evaluation campaigns for English and for Spanish. After our participation in both competitions, additional experiments were carried out with the aim of improving the performance of both Spanish and English systems by using multilingual machine-translated data. Based on our evaluations, we show that the use of hybrid features and multilingual, machine-translated data (even from other languages) can help to better distinguish relevant features for sentiment classification and thus increase the precision of sentiment analysis systems.
Sentiment analysis system adaptation for multilingual processing: The case of tweets
S0306457314001022
Websites can learn what their users do on their pages to provide better content and services to those users. A website can easily find out where a user has been, but in order to find out what content is consumed and how it was consumed at a sub-page level, prior work has proposed client-side tracking to record cursor activity, which is useful for computing the relevance of search results or determining user attention on a page. While recording cursor interactions can be done without disturbing the user, the overhead of recording the cursor trail and transmitting this data over the network can be substantial. In our work, we investigate methods to compress cursor data, taking advantage of the fact that not every cursor coordinate has equal value to the website developer. We evaluate 5 lossless and 5 lossy compression algorithms over two datasets, reporting results about client-side performance, space savings, and how well a lossy algorithm can replicate the original cursor trail. The results show that different compression techniques may be suitable for different goals: LZW offers reasonable lossless compression, but lossy algorithms such as piecewise linear interpolation and distance-thresholding offer better client-side performance and bandwidth reduction.
Building a better mousetrap: Compressing mouse cursor activity for web analytics
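As an illustration of the distance-thresholding idea mentioned in the abstract above, here is a minimal sketch in Python; the trail format, threshold value and sample data are our own illustrative assumptions, not the paper's configuration.

```python
import math

def distance_threshold_compress(trail, min_dist=8.0):
    """Lossy compression of a cursor trail: keep an (x, y, t) sample only if it
    has moved at least `min_dist` pixels away from the last kept sample."""
    if not trail:
        return []
    kept = [trail[0]]
    for x, y, t in trail[1:]:
        last_x, last_y, _ = kept[-1]
        if math.hypot(x - last_x, y - last_y) >= min_dist:
            kept.append((x, y, t))
    return kept

# Example: a noisy trail with many near-duplicate coordinates (made-up data).
trail = [(10, 10, 0), (11, 10, 16), (12, 11, 32), (40, 35, 48), (41, 36, 64), (90, 80, 80)]
print(distance_threshold_compress(trail))  # far fewer points to transmit
```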
S0306457314001034
Applying natural language processing for mining and intelligent information access to tweets (a form of microblog) is a challenging, emerging research area. Unlike carefully authored news text and other longer content, tweets pose a number of new challenges, due to their short, noisy, context-dependent, and dynamic nature. Information extraction from tweets is typically performed in a pipeline, comprising consecutive stages of language identification, tokenisation, part-of-speech tagging, named entity recognition and entity disambiguation (e.g. with respect to DBpedia). In this work, we describe a new Twitter entity disambiguation dataset, and conduct an empirical analysis of named entity recognition and disambiguation, investigating how robust a number of state-of-the-art systems are on such noisy texts, what the main sources of error are, and which problems should be further investigated to improve the state of the art.
Analysis of named entity recognition and linking for tweets
S0306457314001046
Word sense ambiguity has been identified as a cause of poor precision in information retrieval (IR) systems. Word sense disambiguation and discrimination methods have been defined to help systems choose which documents should be retrieved in relation to an ambiguous query. However, the only approaches that show a genuine benefit for word sense discrimination or disambiguation in IR are generally supervised ones. In this paper we propose a new unsupervised method that uses word sense discrimination in IR. The method we develop is based on spectral clustering and reorders an initially retrieved document list by boosting documents that are semantically similar to the target query. For several TREC ad hoc collections we show that our method is useful in the case of queries which contain ambiguous terms. We are interested in improving precision at 5, 10 and 30 retrieved documents (P@5, P@10 and P@30). We show that precision can be improved by 8% above current state-of-the-art baselines. We also focus on poorly performing queries.
Word sense discrimination in information retrieval: A spectral clustering-based approach
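A minimal sketch of the re-ranking idea described above: cluster the initially retrieved documents and boost those in the cluster closest to the query. The use of TF-IDF cosine similarity as the affinity, the number of clusters and the boost constant are illustrative assumptions, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def rerank_by_sense_cluster(query, docs, scores, n_clusters=2, boost=0.5):
    """Cluster the initially retrieved documents with spectral clustering and
    boost the retrieval scores of those in the cluster closest to the query."""
    vec = TfidfVectorizer().fit(docs + [query])
    D = vec.transform(docs)
    q = vec.transform([query])
    affinity = cosine_similarity(D)               # similarity graph over documents
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    q_sim = cosine_similarity(q, D).ravel()
    # pick the cluster whose members are, on average, most similar to the query
    best = max(range(n_clusters),
               key=lambda c: q_sim[labels == c].mean() if (labels == c).any() else -1.0)
    new_scores = [s + boost if l == best else s for s, l in zip(scores, labels)]
    return sorted(zip(docs, new_scores), key=lambda p: -p[1])
```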
S0306457314001058
We consider the problem of searching posts in microblog environments. We frame this microblog post search problem as a late data fusion problem. Previous work on data fusion has mainly focused on aggregating document lists based on retrieval status values or ranks of documents without fully utilizing temporal features of the set of documents being fused. Additionally, previous work on data fusion has often worked on the assumption that only documents that are highly ranked in many of the lists are likely to be of relevance. We propose BurstFuseX, a fusion model that not only utilizes a microblog post’s ranking information but also exploits its publication time. BurstFuseX builds on an existing fusion method and rewards posts that are published in or near a burst of posts that are highly ranked in many of the lists being aggregated. We experimentally verify the effectiveness of the proposed late data fusion algorithm, and demonstrate that in terms of mean average precision it significantly outperforms the standard, state-of-the-art fusion approaches as well as burst or time-sensitive retrieval methods.
Burst-aware data fusion for microblog search
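The abstract above describes rewarding posts published in or near bursts of highly ranked posts. The sketch below is only a toy illustration of that idea, using CombSUM as the base fusion and an invented time-bin burst bonus; BurstFuseX itself builds on an existing fusion method with its own formulation.

```python
from collections import defaultdict

def burst_aware_fuse(ranked_lists, pub_time, bin_size=3600, gamma=0.3):
    """Late fusion of ranked lists of microblog posts.
    ranked_lists: list of dicts {post_id: score}; pub_time: {post_id: unix time}.
    CombSUM base score plus a reward for posts falling into "bursty" time bins,
    i.e. bins holding many posts that appear in the input lists."""
    base = defaultdict(float)
    for run in ranked_lists:
        for post, score in run.items():
            base[post] += score
    # burst detection: count fused posts per time bin
    bin_count = defaultdict(int)
    for post in base:
        bin_count[pub_time[post] // bin_size] += 1
    max_count = max(bin_count.values())
    fused = {p: s + gamma * bin_count[pub_time[p] // bin_size] / max_count
             for p, s in base.items()}
    return sorted(fused.items(), key=lambda kv: -kv[1])
```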
S0306457314001071
Teams are ranked to show their authority over each other. Existing methods rank cricket teams using an ad hoc points system based entirely on wins and losses, ignoring the number of runs or wickets by which a team wins. In this paper, adaptations of the h-index and PageRank are proposed for ranking teams to overcome the weaknesses of existing methods. Each team is represented by a node in a graph; two teams create a weighted directed edge between each other by playing a match, with the losing team pointing to the winning team. The intuition is that a team earns more points for beating a stronger team than for beating a weaker one, while also taking into account the number of runs or wickets in addition to the win or loss itself. The results show that the proposed ranking methods provide quite promising insights into one-day and Test team rankings. The effect of the damping factor d on the performance of the PageRank-based methods is also studied for both ODI and Test team rankings, and interesting trends are found.
Ranking cricket teams
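A minimal sketch of the PageRank-based ranking described above, using networkx: each match adds a margin-weighted edge from the losing team to the winning team, and the damping factor d corresponds to PageRank's alpha. The match data and weighting are invented for illustration, not the paper's exact scheme.

```python
import networkx as nx

# (loser, winner, margin) where margin is the runs or wickets by which the winner won;
# the match results here are made-up illustrations.
matches = [
    ("TeamB", "TeamA", 45),   # TeamA beat TeamB by 45 runs
    ("TeamC", "TeamA", 7),    # TeamA beat TeamC by 7 wickets
    ("TeamC", "TeamB", 20),
    ("TeamA", "TeamC", 3),
]

G = nx.DiGraph()
for loser, winner, margin in matches:
    if G.has_edge(loser, winner):
        G[loser][winner]["weight"] += margin   # accumulate margins over repeated fixtures
    else:
        G.add_edge(loser, winner, weight=margin)

# damping factor d (alpha); the paper studies its effect on the ranking
scores = nx.pagerank(G, alpha=0.85, weight="weight")
for team, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(team, round(s, 3))
```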
S0306457314001083
In this work we develop new journal classification methods based on the h-index. The introduction of the h-index for research evaluation has attracted much attention in bibliometric studies and research quality evaluation. The main purpose of using an h-index is to compare the index across different research units (e.g. researchers, journals, etc.) to differentiate their research performance. However, because the h-index is defined by comparing only the citation counts of one’s own publications, it is doubtful that the h-index alone should be used for reliable comparisons among different research units, such as researchers or journals. In this paper we propose a new global h-index (Gh-index), where the publications in the core are selected in comparison with all the publications of the units to be evaluated. Furthermore, we introduce some variants of the Gh-index to address the issue of discrimination power. We show that together with the original h-index, they can be used to evaluate and classify academic journals with some distinct advantages, in particular that they can produce an automatic classification into a number of categories without arbitrary cut-off points. We then carry out an empirical study of the classification of operations research and management science (OR/MS) journals using this index, and compare it with other well-known journal ranking results such as the Association of Business Schools (ABS) Journal Quality Guide and the Committee of Professors in OR (COPIOR) ranking lists.
New journal classification methods based on the global h-index
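For illustration, here is the classic h-index together with one possible reading of a "global" core selected over the pooled publications of all units; this interpretation and the sample citation counts are our own assumptions, not the paper's exact Gh-index formula.

```python
def h_index(citations):
    """Classic h-index: the largest h such that h publications have >= h citations each."""
    cits = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cits, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def global_h_index(unit_citations):
    """One possible reading of a 'global' h-index: rank all publications of all
    units together, keep the global core (the top g papers with >= g citations),
    and credit each unit with the number of its own papers inside that core."""
    pooled = sorted((c for cits in unit_citations.values() for c in cits), reverse=True)
    g = h_index(pooled)                                   # size of the global core
    threshold = pooled[g - 1] if g else float("inf")      # minimum citations in the core
    return {unit: sum(1 for c in cits if c >= threshold)
            for unit, cits in unit_citations.items()}

journals = {"J1": [30, 12, 9, 4, 2], "J2": [50, 45, 3, 1], "J3": [8, 7, 6, 5, 5]}
print({j: h_index(c) for j, c in journals.items()})
print(global_h_index(journals))
```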
S0306457314001095
Nowadays a large number of opinion reviews are posted on the Web. Such reviews are a very important source of information for customers and companies. The former rely more than ever on online reviews to make their purchase decisions, and the latter to respond promptly to their clients’ expectations. Unfortunately, due to the business interests behind them, there is an increasing number of deceptive opinions, that is, fictitious opinions that have been deliberately written to sound authentic in order to deceive consumers, either promoting a low-quality product (positive deceptive opinions) or criticizing a potentially good-quality one (negative deceptive opinions). In this paper we focus on the detection of both types of deceptive opinions, positive and negative. Due to the scarcity of examples of deceptive opinions, we propose to approach their detection using PU-learning. PU-learning is a semi-supervised technique for building a binary classifier on the basis of positive (i.e., deceptive opinion) and unlabeled examples only. Concretely, we propose a novel method that, compared with the original version, is much more conservative when selecting negative examples (i.e., not deceptive opinions) from the unlabeled ones. The obtained results show that the proposed PU-learning method consistently outperformed the original PU-learning approach. In particular, results show an average improvement of 8.2% and 1.6% over the original approach in the detection of positive and negative deceptive opinions, respectively.
Detecting positive and negative deceptive opinions using PU-learning
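A minimal sketch of one reliable-negative selection step in PU-learning, the general technique named above; the classifier, features and quantile cutoff are illustrative choices and do not reproduce the authors' conservative variant exactly.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

def pu_select_reliable_negatives(pos_texts, unlabeled_texts, quantile=0.1):
    """One PU-learning step: train a classifier treating the unlabeled set as
    negative, then keep as 'reliable negatives' only the unlabeled examples the
    classifier scores as least likely to be positive. Shrinking `quantile`
    makes the selection more conservative."""
    vec = CountVectorizer()
    X = vec.fit_transform(pos_texts + unlabeled_texts)
    y = np.array([1] * len(pos_texts) + [0] * len(unlabeled_texts))
    clf = MultinomialNB().fit(X, y)
    p_pos = clf.predict_proba(vec.transform(unlabeled_texts))[:, 1]
    cutoff = np.quantile(p_pos, quantile)
    return [t for t, p in zip(unlabeled_texts, p_pos) if p <= cutoff]
```

In a full PU-learning loop, the reliable negatives would then be used together with the positive examples to train the next classifier, repeating until the negative set stabilizes.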
S0306457314001113
Video summarization aims at producing a compact version of a full-length video while preserving the significant content of the original video. Movie summarization condenses a full-length movie into a summary that still retains the most significant and interesting content of the original movie. In the past, several movie summarization systems have been proposed to generate a movie summary based on low-level video features such as color, motion, texture, etc. However, a generic summary, which is common to everyone and is produced based only on low-level video features, will not satisfy every user. As users’ preferences for the summary of the same movie differ vastly, there is now a need for personalized movie summarization systems. To address this demand, this paper proposes a novel system to generate semantically meaningful video summaries for the same movie, which are tailored to the preferences and interests of a user. For a given movie, shots and scenes are automatically detected and their high-level features are semi-automatically annotated. Preferences over high-level movie features are explicitly collected from the user using a query interface, and the user preferences are generated by means of a stored query. Movie summaries are generated at shot level and scene level, where shots or scenes are selected for the summary skim based on the similarity measured between shots and scenes and the user’s preferences. The proposed movie summarization system is evaluated subjectively using a sample of 20 subjects with eight movies in the English language. The quality of the generated summaries is assessed by informativeness, enjoyability, relevance, and acceptance metrics and Quality of Perception measures. Further, the usability of the proposed summarization system is subjectively evaluated by conducting a questionnaire survey. The experimental results on the performance of the proposed movie summarization approach show the potential of the proposed system.
What do you wish to see? A summarization system for movies based on user preferences
S0306457314001125
Researchers have shown that a weighted linear combination in data fusion can produce better results than an unweighted combination. Many techniques have been used to determine the linear combination weights. In this work, we have used the Genetic Algorithm (GA) for the same purpose. The GA is not new and has been used earlier in several other applications, but, to the best of our knowledge, it has not been used for the fusion of runs in information retrieval. First, we use the GA to learn the optimum fusion weights using the entire set of relevance assessments. Next, we learn the weights from the relevance assessments of the top retrieved documents only. Finally, we also learn the weights by a twofold training and testing on the queries. We test our method on the runs submitted in TREC. We see that our weight learning scheme, using both full and partial sets of relevance assessments, produces significant improvements over the best candidate run, CombSUM, CombMNZ, Z-Score, the linear combination method with performance level, the performance level square weighting scheme, the multiple linear regression-based weight learning scheme, the mixture model result merging scheme, LambdaMerge, ClustFuseCombSUM and ClustFuseCombMNZ. Furthermore, we study how the correlation among the scores in the runs can be used to eliminate redundant runs in a set of runs to be fused. We observe that similar runs have similar contributions in fusion, so eliminating the redundant runs in a group of similar runs does not hurt fusion performance in any significant way.
Learning combination weights in data fusion using Genetic Algorithms
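A toy sketch of learning linear fusion weights with a genetic algorithm, as described above. The population size, the crossover and mutation operators, and the black-box fitness callable (e.g. MAP computed from relevance assessments) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_learn_weights(runs, fitness, n_pop=30, n_gen=50, mut_sigma=0.05):
    """Toy genetic algorithm for learning linear fusion weights.
    runs: number of runs to fuse (>= 2); fitness: callable mapping a weight
    vector to an effectiveness score (e.g. MAP of the fused run)."""
    pop = rng.random((n_pop, runs))
    for _ in range(n_gen):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(-scores)
        parents = pop[order[: n_pop // 2]]            # selection: keep the best half
        children = []
        while len(children) < n_pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, runs)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, mut_sigma, runs)   # mutation
            children.append(np.clip(child, 0, None))
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best / best.sum()                          # normalized fusion weights
```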
S0306457314001137
This study proposes a new 4D (i.e., spatial, temporal, breadth, and depth) framework for citation distribution analysis. The importance of, and the differences between, the breadth and depth of citation distribution are analyzed. Easily computable indices, X, Y, and XY, are proposed, which provide estimates of the breadth and depth of citation distribution. A knowledge unit can be an article, author, institution, journal, or a set of any of these. Index X, which represents the breadth of citation distribution, is the number of different knowledge units that cite a specific knowledge unit. Index Y, which represents the depth of citation distribution, is the maximum number of citations among the knowledge units that cite a specific knowledge unit. Index XY, which synthesizes Indices X and Y to capture both the breadth and the focus of a knowledge unit’s impact, is Index X divided by Index Y. We empirically analyze the citation and reference distributions of 84 journals from the “Information science and library science” category of the Journal Citation Reports (2012) at the journal-to-journal level. Indices X, Y, and XY reflect the actual breadth and depth of citation distribution. Differences exist among Indices X, Y, and XY. Differences also exist between these indices and other bibliometric indicators. These indices cannot be replaced by existing bibliometric indicators. Specifically, the absolute values of Indices X and Y are good supplements to existing bibliometric indicators, while Index XY and the relative values of Indices X and Y represent new aspects of bibliometric indicators.
Breadth and depth of citation distribution
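The indices defined in the abstract above are easy to compute once citations are grouped by citing unit; here is a minimal sketch with invented counts.

```python
def breadth_depth_indices(citations_by_source):
    """citations_by_source: {citing_unit: number_of_citations_to_the_target_unit}.
    Index X = number of distinct citing units (breadth),
    Index Y = maximum citations from any single citing unit (depth),
    Index XY = X / Y."""
    x = len(citations_by_source)
    y = max(citations_by_source.values()) if citations_by_source else 0
    return x, y, (x / y if y else 0.0)

# e.g. a journal cited by four other journals (counts are invented)
print(breadth_depth_indices({"J. A": 12, "J. B": 3, "J. C": 1, "J. D": 1}))  # (4, 12, 0.33...)
```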
S0306457315000023
Semantic similarity assessment between concepts is an important task in many language-related applications. In the past, several approaches have been proposed that assess similarity by evaluating the knowledge modeled in one or multiple ontologies. However, existing measures have limitations, such as relying on predefined ontologies and fitting only non-dynamic domains. Wikipedia provides a very large, domain-independent encyclopedic repository and semantic network for computing the semantic similarity of concepts with more coverage than usual ontologies. In this paper, we propose some novel feature-based similarity assessment methods that are fully dependent on Wikipedia and can avoid most of the limitations and drawbacks introduced above. To implement feature-based similarity assessment using Wikipedia, we first present a formal representation of Wikipedia concepts. We then give a framework for feature-based similarity based on this formal representation. Lastly, we investigate several feature-based approaches to semantic similarity measures resulting from instantiations of the framework. The evaluation, based on several widely used benchmarks and a benchmark we developed ourselves, supports the intuitions with respect to human judgements. Overall, several methods proposed in this paper correlate well with human judgements and constitute effective ways of determining similarity between Wikipedia concepts.
Feature-based approaches to semantic similarity assessment of concepts using Wikipedia
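A minimal sketch of a feature-based similarity measure over Wikipedia-derived concept features (e.g. out-links and categories); the Tversky-style weighting and the toy feature sets are illustrative assumptions, not the specific instantiations studied in the paper.

```python
def feature_similarity(features_a, features_b, alpha=0.5, beta=0.5):
    """Tversky-style feature similarity between two concepts, each represented
    by a set of features (e.g. out-links, categories, gloss terms).
    With alpha = beta = 0.5 this reduces to the Dice coefficient."""
    common = len(features_a & features_b)
    only_a = len(features_a - features_b)
    only_b = len(features_b - features_a)
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 0.0

# toy feature sets standing in for features extracted from Wikipedia pages
car  = {"Category:Vehicles", "Wheel", "Engine", "Road", "Transport"}
bike = {"Category:Vehicles", "Wheel", "Road", "Transport", "Pedal"}
print(feature_similarity(car, bike))
```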
S0306457315000035
We use extractive multi-document summarization techniques to perform complex question answering and formulate it as a reinforcement learning problem. Given a set of complex questions, a list of relevant documents per question, and the corresponding human generated summaries (i.e. answers to the questions) as training data, the reinforcement learning module iteratively learns a number of feature weights in order to facilitate the automatic generation of summaries i.e. answers to previously unseen complex questions. A reward function is used to measure the similarities between the candidate (machine generated) summary sentences and the abstract summaries. In the training stage, the learner iteratively selects the important document sentences to be included in the candidate summary, analyzes the reward function and updates the related feature weights accordingly. The final weights are used to generate summaries as answers to unseen complex questions in the testing stage. Evaluation results show the effectiveness of our system. We also incorporate user interaction into the reinforcement learner to guide the candidate summary sentence selection process. Experiments reveal the positive impact of the user interaction component on the reinforcement learning framework.
A reinforcement learning formulation to the complex question answering problem
S0306457315000047
Daily deals have emerged in the last three years as a successful form of online advertising. The downside of this success is that users are increasingly overloaded by the many thousands of deals offered each day by dozens of deal providers and aggregators. The challenge is thus offering the right deals to the right users, i.e., the relevance ranking of deals. This is the problem we address in our paper. Exploiting the characteristics of deals data, we propose a combination of a term-based and a concept-based retrieval model that closes the semantic gap between queries and documents by expanding both of them with category information. The method consistently outperforms state-of-the-art methods based on term matching alone and existing approaches for ad classification and ranking.
Ranking of daily deals with concept expansion
S0306457315000230
Vendors of Business Intelligence (BI) software have recently started extending their systems by features from social software. The generated reports may include profiles of report authors and later be supplemented by information about users who accessed the report, user evaluations of the report, or other social cues. With these features, users can support each other in discovering and filtering valuable information in the context of BI. Users who consider reusing an existing report that was not designed by or for them can now not only peruse the report content but also take the social cues into consideration. We analyze which report features influence their perception of report usefulness. Our analysis is based on the elaboration likelihood model (ELM) which assumes that information recipients are either influenced by the quality of information or peripheral cues. We conduct an experiment with knowledge workers from different companies. The results confirm most hypotheses derived from ELM in the context of BI reports but we also find a deviation from the basic ELM expectations. We find that even people who are able and motivated to scrutinize the report content use community cues to decide on report usefulness in addition to report quality considerations.
Influence of social software features on the reuse of Business Intelligence reports
S0306457315000242
Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Different from typical lexicon-based approaches, which assign fixed and static prior sentiment polarities to words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both entity level and tweet level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detection. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4–5% in accuracy on two datasets, but falls marginally behind by 1% in F-measure on the third dataset.
Contextual semantics for sentiment analysis of Twitter
S0306457315000254
Summary writing is a process for creating a short version of a source text. It can be used as a measure of understanding. As grading students’ summaries is a very time-consuming task, computer-assisted assessment can help teachers perform the grading more effectively. Several techniques, such as BLEU, ROUGE, N-gram co-occurrence, Latent Semantic Analysis (LSA), LSA_Ngram and LSA_ERB, have been proposed to support the automatic assessment of students’ summaries. Since these techniques are more suitable for long texts, their performance is not satisfactory for the evaluation of short summaries. This paper proposes a specialized method that works well in assessing short summaries. Our proposed method integrates the semantic relations between words and their syntactic composition. As a result, the proposed method is able to obtain high accuracy and improve the performance compared with the current techniques. Experiments show that it is preferable to the existing techniques. A summary evaluation system based on the proposed method has also been developed.
Automatic summarization assessment through a combination of semantic and syntactic information for intelligent educational systems
S0306457315000266
This paper examines the research patterns and trends of Recommendation System (RecSys) in China during the period of 2004–2013. Data (keywords in articles) was collected from the China Academic Journal Network Publishing Database (CAJD) and the China Science Periodical Database (CSPD). A co-word analysis was conducted to measure correlation among the extracted keywords. The cluster analysis and social network analysis revealed 12 theme-clusters, network characteristics (centrality and density) of the clusters, the strategic diagram, and the correlation network. The study results show that there are several important themes with a high correlation in Chinese RecSys research, which is considered to be relatively focused, mature, and well-developed overall. Some research themes have developed on a considerable scale, while others remain isolated and undeveloped. This study also identified a few emerging themes with great potential for development. It was also determined that studies overall on the applications of RecSys are increasing.
Research patterns and trends of Recommendation System in China using co-word analysis
S0306457315000278
This paper presents a Web intelligence portal that captures and aggregates news and social media coverage about “Game of Thrones”, an American drama television series created for the HBO television network based on George R.R. Martin’s series of fantasy novels. The system collects content from the Web sites of Anglo-American news media as well as from four social media platforms: Twitter, Facebook, Google+ and YouTube. An interactive dashboard with trend charts and synchronized visual analytics components not only shows how often Game of Thrones events and characters are being mentioned by journalists and viewers, but also provides a real-time account of concepts that are being associated with the unfolding storyline and each new episode. Positive or negative sentiment is computed automatically, which sheds light on the perception of actors and new plot elements.
Analyzing the public discourse on works of fiction – Detection and visualization of emotion in online coverage about HBO’s Game of Thrones
S0306457315000291
The uncertainty children experience when searching for information influences their information seeking behavior by stimulating curiosity or hindering their search efforts. This study explored the interactions and the usability of various search interfaces, and the enjoyment or uncertainty experienced by children when using them. Structural Equation Modeling was used to determine whether children feel uncertainty or a sense of control when using virtual game-like interfaces to search for information associated with entertainment or as a means to satisfy an assigned learning task. We then analyzed the weight relationships among three latent variables (information needs, interface media, and affective state) using statistical (path) analysis. Our results indicate that children prefer using a retrieval interface with situated affordance to satisfy entertainment-related information needs, as opposed to searching for information to solve specific problems. Furthermore, their perceptions of text and graphic icons determined the degree to which they experienced a sense of uncertainty or control. When searching for entertainment-related information, they were better able to deal with uncertainty and sought greater control in their search interface, compared to when they were searching for information related to assigned tasks. According to their information needs, children may regard a game-like interface as a toy or a tool for learning. The results of this study can serve as reference for the future development of information search interfaces aimed at arousing the interest of children. The use of virtual game-like interfaces to guide the IS behavior of children warrants further study.
Affective surfing in the visualized interface of a digital library for children
S0306457315000394
In recent years, there has been rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems, mainly because personalized search and recommendations can be facilitated by measuring the relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about the resources through tags. Therefore, it is necessary to take sentiment relevance into account in such measurements. In this paper, we present SenticRank, a novel generic framework that incorporates various kinds of sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information to address the problem of personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in the experiments, the results of which verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find that SenticNet is the most prominent knowledge base for boosting the performance of personalized search in folksonomy.
Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy
S0306457315000400
In order to successfully apply opinion mining (OM) to the large amounts of user-generated content produced every day, we need robust models that can handle the noisy input well yet can easily be adapted to a new domain or language. We here focus on opinion mining for YouTube by (i) modeling classifiers that predict the type of a comment and its polarity, while distinguishing whether the polarity is directed towards the product or video; (ii) proposing a robust shallow syntactic structure (STRUCT) that adapts well when tested across domains; and (iii) evaluating the effectiveness of the proposed structure on two languages, English and Italian. We rely on tree kernels to automatically extract and learn features with better generalization power than traditionally used bag-of-word models. Our extensive empirical evaluation shows that (i) STRUCT outperforms the bag-of-words model both within the same domain (up to 2.6% and 3% of absolute improvement for Italian and English, respectively); (ii) it is particularly useful when tested across domains (up to more than 4% absolute improvement for both languages), especially when little training data is available (up to 10% absolute improvement); and (iii) the proposed structure is also effective in a lower-resource language scenario, where only less accurate linguistic processing tools are available.
Multi-lingual opinion mining on YouTube
S0306457315000412
Questionnaires are commonly used to measure attitudes toward systems and perceptions of search experiences. Whilst the face validity of such measures has been established through repeated use in information retrieval research, their reliability and wider validity are not typically examined; this threatens internal validity. The evaluation of self-report questionnaires is important not only for the internal validity of studies and, by extension, increased confidence in the results, but also for examining constructs of interest over time and across different domains and systems. In this paper, we look at a specific questionnaire, the User Engagement Scale (UES), for its robustness as a measure. We describe three empirical studies conducted in the online news domain and investigate the reliability and validity of the UES. Our results demonstrate good reliability of the UES sub-scales; however, we argue that a four-factor structure may be more appropriate than the original six-factor structure proposed in earlier work. In addition, we found evidence to suggest that the UES can differentiate between systems (in this case, online news sources) and experimental conditions (i.e., the type of media used to present online content).
An empirical evaluation of the User Engagement Scale (UES) in online news environments
S0306457315000424
One of the major reasons why people find music so enjoyable is its emotional impact. Creating emotion-based playlists is a natural way of organizing music. The usability of online music streaming services could be greatly improved by developing emotion-based access methods, and automatic music emotion recognition (MER) is the quickest and most feasible way of achieving this. When resorting to music for emotional regulation purposes, users are interested in MER methods that predict their induced, or felt, emotion. The progress of MER in this area is impeded by the absence of publicly accessible ground-truth data on musically induced emotion. Also, there is no consensus on which emotional model best fits the demands of the users and can provide an unambiguous linguistic framework to describe musical emotions. In this paper we address these problems by creating a sizeable publicly available dataset of 400 musical excerpts from four genres annotated with induced emotion. We collected the data using an online “game with a purpose”, Emotify, which attracted a large and varied sample of participants. We employed a nine-item domain-specific emotional model, GEMS (Geneva Emotional Music Scale). In this paper we analyze the collected data and report the agreement of participants on different categories of GEMS. We also analyze the influence of extra-musical factors (gender, mood, music preferences) on induced emotion. We suggest that modifications to the GEMS model are necessary.
Studying emotion induced by music through a crowdsourcing game
S0306457315000436
Document filtering is a popular task in information retrieval. A stream of documents arriving over time is filtered for documents relevant to a set of topics. The distinguishing feature of document filtering is the temporal aspect introduced by the stream of documents. Document filtering systems, up to now, have been evaluated in terms of traditional metrics like (micro- or macro-averaged) precision, recall, MAP, nDCG, F1 and utility. We argue that these metrics do not capture all relevant aspects of the systems being evaluated. In particular, they lack support for the temporal dimension of the task. We propose a time-sensitive way of measuring performance of document filtering systems over time by employing trend estimation. In short, the performance is calculated for batches, a trend line is fitted to the results, and the estimated performance of systems at the end of the evaluation period is used to compare systems. We detail the application of our proposed trend estimation framework and examine the assumptions that need to hold for valid significance testing. Additionally, we analyze the requirements a document filtering metric has to meet and show that traditional macro-averaged true-positive-based metrics, like precision, recall and utility fail to capture essential information when applied in a batch setting. In particular, false positives returned in a batch for topics that are absent from the ground truth in that batch go unnoticed. This is a serious flaw as over-generation of a system might be overlooked this way. We propose a new metric, aptness, that does capture false positives. We incorporate this metric in an overall score and show that this new score does meet all requirements. To demonstrate the results of our proposed evaluation methodology, we analyze the runs submitted to the two most recent editions of a document filtering evaluation campaign. We re-evaluate the runs submitted to the Cumulative Citation Recommendation task of the 2012 and 2013 editions of the TREC Knowledge Base Acceleration track, and show that important new insights emerge.
Evaluating document filtering systems over time
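A minimal illustration of the trend-estimation idea described above: fit a trend line to per-batch scores and compare systems by their estimated performance at the end of the evaluation period. The per-batch F1 values are synthetic, and a real comparison would also need the significance testing discussed in the paper.

```python
import numpy as np

def end_of_period_estimate(batch_scores):
    """Fit a linear trend to per-batch scores and return the estimated
    performance at the end of the evaluation period."""
    x = np.arange(len(batch_scores))
    slope, intercept = np.polyfit(x, batch_scores, deg=1)
    return slope * x[-1] + intercept

# two systems with invented per-batch F1 scores over the same stream
sys_a = [0.30, 0.32, 0.35, 0.37, 0.41, 0.44]   # improving over time
sys_b = [0.42, 0.41, 0.40, 0.38, 0.37, 0.35]   # degrading over time
print(end_of_period_estimate(sys_a), end_of_period_estimate(sys_b))
```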
S0306457315000448
Temporal aspects have been receiving a great deal of interest in Information Retrieval and related fields. Although previous studies have proposed, designed and implemented temporal-aware systems and solutions, understanding of people’s temporal information searching behaviour is still limited. This paper reports the findings of a user study that explored temporal information searching behaviour and strategies in a laboratory setting. Information needs were grouped into three temporal classes (Past, Recency, and Future) to systematically study their characteristics. The main findings of our experiment are as follows. (1) It is intuitive for people to augment topical keywords with temporal expressions such as history, recent, or future as a tactic of temporal search. (2) However, such queries produce mixed results and the success of query reformulations appears to depend on topics to a large extent. (3) Search engine interfaces should detect temporal information needs to trigger the display of temporal search options. (4) Finding a relevant Wikipedia page or similar summary page is a popular starting point of past information needs. (5) Current search engines do a good job for information needs related to recent events, but more work is needed for past and future tasks. (6) Participants found it most difficult to find future information. Searching for domain experts was a key tactic in Future search, and file types of relevant documents are different from other temporal classes. Overall, the comparison of search across temporal classes indicated that Future search was the most difficult and the least successful followed by the search for the Past and then for Recency information. This paper discusses the implications of these findings on the design of future temporal IR systems.
Temporal information searching behaviour and strategies
S0306457315000461
Topic time reflects the temporal feature of topics in Web news pages, which can be used to establish and analyze topic models for many time-sensitive text mining tasks. However, there are two critical challenges in discovering topic time from Web news pages. The first issue is how to normalize different kinds of temporal expressions within a Web news page, e.g., explicit and implicit temporal expressions, into a unified representation framework. The second issue is how to determine the right topic time for topics in Web news. Aiming at solving these two problems, we propose a systematic framework for discovering topic time from Web news. In particular, for the first issue, we propose a new approach that can effectively determine the appropriate referential time for implicit temporal expressions and further present an effective defuzzification algorithm to find the right explanation for a fuzzy temporal expression. For the second issue, we propose a relation model to describe the relationship between news topics and topic time. Based on this model, we design a new algorithm to extract topic time from Web news. We build a prototype system called Topic Time Parser (TTP) and conduct extensive experiments to measure the effectiveness of our proposal. The results suggest that our proposal is effective in both temporal expression normalization and topic time extraction.
Discovering topic time from web news
S0306457315000473
In this paper, we propose a linguistically-motivated query expansion framework that recognizes and encodes significant query constituents characterizing query intent in order to improve retrieval performance. Concepts-of-Interest are recognized as the core concepts that represent the gist of the search goal whilst the remaining query constituents which serve to specify the search goal and complete the query structure are classified as descriptive, relational or structural. Acknowledging the need to form semantically-associated base pairs for the purpose of extracting related potential expansion concepts, an algorithm which capitalizes on syntactical dependencies to capture relationships between adjacent and non-adjacent query concepts is proposed. Lastly, a robust weighting scheme that duly emphasizes the importance of query constituents based on their linguistic role within the expanded query is presented. We demonstrate improvements in retrieval effectiveness in terms of increased mean average precision garnered by the proposed linguistic-based query expansion framework through experimentation on the TREC ad hoc test collections.
A linguistically driven framework for query expansion via grammatical constituent highlighting and role-based concept weighting
S0306457315000497
Social media represents an emerging, challenging sector where the natural language expressions of people can be easily reported through blogs and short text messages. This is rapidly creating unique content of massive dimensions that needs to be efficiently and effectively analyzed to create actionable knowledge for decision-making processes. A key piece of information that can be grasped from social environments is the polarity of text messages. To better capture the sentiment orientation of the messages, several valuable expressive forms could be taken into account. In this paper, three expressive signals – typically used in microblogs – have been explored: (1) adjectives, (2) emoticon, emphatic and onomatopoeic expressions and (3) expressive lengthening. Once a text message has been normalized to better conform social media posts to a canonical language, the considered expressive signals have been used to enrich the feature space and train several baseline and ensemble classifiers aimed at polarity classification. The experimental results show that adjectives are more discriminative and impactful than the other considered expressive signals.
Expressive signals in social media languages to improve polarity detection
S0306457315000503
We propose two novel language models to improve the performance of sentence retrieval in Question Answering (QA): a class-based language model and a trained trigger language model. As the search in sentence retrieval is conducted over smaller segments of text than in document retrieval, the problems of data sparsity and exact matching become more critical. Different techniques, such as the translation model, have been proposed to overcome the word mismatch problem. Our class-based and trained trigger language models, however, use different approaches to this aim and are shown to outperform the existing models. The class model uses a word clustering algorithm to capture term relationships. In this model, we assume a relation between the terms that belong to the same clusters; as a result, they can be substituted when searching for relevant sentences. The trigger model captures pairs of trigger and target words while training on a large corpus. The model considers a relation between a question and a sentence if a trigger word appears in the question and the sentence contains the corresponding target word. For both proposed models, we introduce different notions of co-occurrence to find word relations. In addition, we study the impact of corpus size and domain on the models. Our experiments on the TREC QA collection verify that the proposed models significantly improve sentence retrieval performance compared to the state-of-the-art translation model. While the translation model based on mutual information (Karimzadehgan and Zhai, 2010) has 0.3927 Mean Average Precision (MAP), the class model achieves 0.4174 MAP and the trigger model enhances the performance to 0.4381.
Bridging the vocabulary gap between questions and answer sentences
S0306457315000515
The Question Answering (QA) task aims to provide precise and quick answers to user questions from a collection of documents or a database. This kind of IR system is sorely needed with the dramatic growth of digital information. In this paper, we address the problem of QA in the medical domain where several specific conditions are met. We propose a semantic approach to QA based on (i) Natural Language Processing techniques, which allow a deep analysis of medical questions and documents and (ii) semantic Web technologies at both representation and interrogation levels. We present our Semantic Question-Answering System, called MEANS and our proposed method for “Answer Search” based on semantic search and query relaxation. We evaluate the overall system performance on real questions and answers extracted from MEDLINE articles. Our experiments show promising results and suggest that a query-relaxation strategy can further improve the overall performance.
MEANS: A medical question-answering system combining NLP techniques and semantic Web technologies
S0306457315000527
Time is an important aspect of text documents. While some documents are atemporal, many have strong temporal characteristics and contain content related to time. Such documents can be mapped to their corresponding time periods. In this paper, we propose estimating the focus time of documents, which is defined as the time period to which a document’s content refers and which is considered a dimension complementary to the document’s creation time. We propose several estimators of focus time by utilizing statistical knowledge from external resources such as news article collections. The advantage of our approach is that document focus time can be estimated even for documents that do not contain any temporal expressions or contain only a few of them. We evaluate the effectiveness of our methods on diverse datasets of documents about historical events related to 5 countries. Our approach achieves an average error of less than 21 years on collections of Wikipedia pages, extracts from history-related books and web pages, while using a total time frame of 113 years. We also demonstrate an example classification method to distinguish temporal from atemporal documents.
Generic method for detecting focus time of documents
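A simplified illustration of estimating focus time from statistical knowledge in an external dated collection, so that even documents without explicit temporal expressions can be mapped to a period; the term-year voting scheme and toy corpus below are our own assumptions, not the paper's estimators.

```python
from collections import Counter, defaultdict

def build_term_year_stats(dated_corpus):
    """dated_corpus: iterable of (year, text) pairs from an external resource
    such as a news archive. Returns, per term, a distribution over years."""
    stats = defaultdict(Counter)
    for year, text in dated_corpus:
        for term in set(text.lower().split()):
            stats[term][year] += 1
    return stats

def estimate_focus_year(text, stats):
    """Weighted vote over the years associated with the document's terms."""
    votes = Counter()
    for term in text.lower().split():
        for year, count in stats.get(term, {}).items():
            votes[year] += count
    if not votes:
        return None
    return sum(y * c for y, c in votes.items()) / sum(votes.values())

news = [(1969, "apollo moon landing armstrong"), (1989, "berlin wall falls"),
        (1969, "apollo eleven lunar module")]
stats = build_term_year_stats(news)
print(estimate_focus_year("the apollo program and the moon", stats))  # -> 1969.0
```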
S0306457315000540
Urban legends are a genre of modern folklore, consisting of stories about rare and exceptional events, just plausible enough to be believed, which tend to propagate inexorably across communities. In our view, while urban legends represent a form of “sticky” deceptive text, they are marked by a tension between the credible and incredible. They should be credible like a news article and incredible like a fairy tale to go viral. In particular we will focus on the idea that urban legends should mimic the details of news (who, where, when) to be credible, while they should be emotional and readable like a fairy tale to be catchy and memorable. Using NLP tools we will provide a quantitative analysis of these prototypical characteristics. We also lay out some machine learning experiments showing that it is possible to recognize an urban legend using just these simple features.
Why do urban legends go viral?
S0306457315000552
This study addresses the impact of domain expertise (i.e. of prior knowledge of the domain) on the performance and query strategies used by users while searching for information. Twenty-four experts (psychology students) and 24 non-experts (students from other disciplines) had to search for psychology information on the Universalis website in order to solve six information problems of varying complexity: two simple problems (the keywords required to complete the task were provided in the problem statement), two more difficult problems (the keywords required had to be inferred) and two impossible problems (no answer was provided by the website). The results showed that participants with prior knowledge of the domain (experts in psychology) performed better (i.e. reached more correct answers after shorter search times) than non-experts. This difference was stronger as the complexity of the problems increased. This study also showed that experts and non-experts displayed different query strategies. Experts reformulated the impossible problems more often than non-experts, because they produced new queries with psychology-related keywords. The participants rarely used the thematic category tool, and when they did so, it did not enhance their performance.
Query strategies during information searching: Effects of prior domain knowledge and complexity of the information problems to be solved
S0306457315000655
The emerging research area of opinion mining deals with computational methods to find, extract and systematically analyze people’s opinions, attitudes and emotions towards certain topics. While providing interesting market research information, the user-generated content existing on the Web 2.0 presents numerous challenges regarding systematic analysis, the differences and unique characteristics of the various social media channels being one of them. This article reports on the determination of such particularities, and deduces their impact on text preprocessing and opinion mining algorithms. The effectiveness of different algorithms is evaluated in order to determine their applicability to the various social media channels. Our research shows that text preprocessing algorithms are mandatory for mining opinions on the Web 2.0 and that some of these algorithms are sensitive to errors and mistakes contained in the user-generated content.
Reprint of: Computational approaches for mining user’s opinions on the Web 2.0
S0306457315000667
This article describes in-depth research on machine learning methods for sentiment analysis of Czech social media. Whereas in English, Chinese, or Spanish this field has a long history and evaluation datasets for various domains are widely available, in the case of the Czech language no systematic research has yet been conducted. We tackle this issue and establish a common ground for further research by providing a large human-annotated Czech social media corpus. Furthermore, we evaluate state-of-the-art supervised machine learning methods for sentiment analysis. We explore different pre-processing techniques and employ various features and classifiers. We also experiment with five different feature selection algorithms and investigate the influence of named entity recognition and preprocessing on sentiment classification performance. Moreover, in addition to our newly created social media dataset, we also report results for other popular domains, such as movie and product reviews. We believe that this article will not only extend the current sentiment analysis research to another family of languages, but will also encourage competition, potentially leading to the production of high-end commercial solutions.
Reprint of “Supervised sentiment analysis in Czech social media”
S0306457315000679
In this paper, we present an efficient spectral clustering method for large-scale data sets, given a set of pairwise constraints. Our contribution is threefold: (a) clustering accuracy is increased by injecting prior knowledge of the data points’ constraints into a small affinity submatrix; (b) connected components are identified automatically based on the data points’ pairwise constraints, thus generating isolated “islands” of points; furthermore, local neighborhoods of points of the same connected component are adapted dynamically, and constraint propagation is performed so as to further increase the clustering accuracy; finally (c) the complexity is kept low by following a sparse coding strategy of a landmark spectral clustering. In our experiments with three benchmark shape, face and handwritten digit image data sets, we show that the proposed method outperforms competitive spectral clustering methods that follow either semi-supervised or scalable strategies.
Large-scale spectral clustering based on pairwise constraints
S0306457315000771
In the area of Information Retrieval, the task of automatic text summarization usually assumes a static underlying collection of documents, disregarding the temporal dimension of each document. However, in real world settings, collections and individual documents rarely stay unchanged over time. The World Wide Web is a prime example of a collection where information changes both frequently and significantly over time, with documents being added, modified or just deleted at different times. In this context, previous work addressing the summarization of web documents has simply discarded the dynamic nature of the web, considering only the latest published version of each individual document. This paper proposes and addresses a new challenge: the automatic summarization of changes in dynamic text collections. In standard text summarization, retrieval techniques present a summary to the user by capturing the major points expressed in the most recent version of an entire document in a condensed form. In this new task, the goal is to obtain a summary that describes the most significant changes made to a document during a given period. In other words, the idea is to have a summary of the revisions made to a document over a specific period of time. This paper proposes different approaches to generate summaries using extractive summarization techniques. First, individual terms are scored and then this information is used to rank and select sentences to produce the final summary. A system based on the Latent Dirichlet Allocation (LDA) model is used to find the hidden topic structures of changes. The purpose of using the LDA model is to identify separate topics where the changed terms from each topic are likely to carry at least one significant change. The different approaches are then compared with the previous work in this area. A collection of articles from Wikipedia, including their revision history, is used to evaluate the proposed system. For each article, a temporal interval and a reference summary from the article’s content are selected manually. The articles and intervals in which a significant event occurred are carefully selected. The summaries produced by each of the approaches are compared against the manual summaries using ROUGE metrics. It is observed that the approach using the LDA model outperforms all the other approaches. Statistical tests reveal that the differences in ROUGE scores for the LDA-based approach are statistically significant at the 99% level over the baseline.
Summarization of changes in dynamic text collections using Latent Dirichlet Allocation model
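A minimal sketch of the extractive idea described in this abstract, assuming scikit-learn's LatentDirichletAllocation as the topic model (the paper does not specify an implementation); the scoring rule, parameter values, and function names below are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch: rank changed sentences by LDA topic weight (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def summarize_changes(changed_sentences, n_topics=5, summary_len=3):
    """Score sentences that were added/modified between two revisions
    and return the highest-scoring ones as an extractive summary."""
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(changed_sentences)

    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)   # sentence-topic distribution

    # Score each sentence by its probability mass on its dominant topic,
    # so that sentences carrying a clear topical change rank first.
    scores = doc_topics.max(axis=1)
    ranked = sorted(zip(scores, changed_sentences), reverse=True)
    return [s for _, s in ranked[:summary_len]]
```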
S0306457315000783
In this paper, we investigate the impact of emotions on author profiling, concretely identifying age and gender. Firstly, we propose the EmoGraph method for modelling the way people use the language to express themselves on the basis of an emotion-labelled graph. We apply this representation model for identifying gender and age in the Spanish partition of the PAN-AP-13 corpus, obtaining comparable results to the best performing systems of the PAN Lab of CLEF.
On the impact of emotions on author profiling
S0306457315000795
Analyzing and modeling users’ online search behaviors when conducting exploratory search tasks could be instrumental in discovering search behavior patterns that can then be leveraged to assist users in reaching their search task goals. We propose a framework for evaluating exploratory search based on implicit features and user search action sequences extracted from the transactional log data to model different aspects of exploratory search namely uncertainty, creativity, exploration, and knowledge discovery. We show the effectiveness of the proposed framework by demonstrating how it can be used to understand and evaluate user search performance and thereby make meaningful recommendations to improve the overall search performance of users. We used data collected from a user study consisting of 18 users conducting an exploratory search task for two sessions with two different topics in the experimental analysis. With this analysis we show that we can effectively model their behavior using implicit features to predict the user’s future performance level with above 70% accuracy in most cases. Further, using simulations we demonstrate that our search process based recommendations improve the search performance of low performing users over time and validate these findings using both qualitative and quantitative approaches.
Implicit search feature based approach to assist users in exploratory search tasks
S0306457315000801
Retrieval systems with non-deterministic output are widely used in information retrieval. Common examples include sampling, approximation algorithms, or interactive user input. The effectiveness of such systems differs not just for different topics, but also for different instances of the system. The inherent variance presents a dilemma – What is the best way to measure the effectiveness of a non-deterministic IR system? Existing approaches to IR evaluation do not consider this problem, or the potential impact on statistical significance. In this paper, we explore how such variance can affect system comparisons, and propose an evaluation framework and methodologies capable of doing this comparison. Using the context of distributed information retrieval as a case study for our investigation, we show that the approaches provide a consistent and reliable methodology to compare the effectiveness of a non-deterministic system with a deterministic or another non-deterministic system. In addition, we present a statistical best-practice that can be used to safely show how a non-deterministic IR system has equivalent effectiveness to another IR system, and how to avoid the common pitfall of misusing a lack of significance as a proof that two systems have equivalent effectiveness.
Statistical comparisons of non-deterministic IR systems using two dimensional variance
S0306457315000813
The rapid growth of information in the digital world, especially on the web, calls for automated methods of organizing the digital information for convenient access and efficient information retrieval. Topic modeling is a branch of machine learning and probabilistic graphical modeling that helps in arranging the web pages according to their topical structure. The topic distribution over a set of documents (web pages) and the affinity of a document toward a specific topic can be revealed using topic modeling. Topic modeling algorithms are typically computationally expensive due to their iterative nature. Recent research efforts have attempted to parallelize specific topic models and have been successful in their attempts. These parallel algorithms, however, have tightly coupled parallel processes that require frequent synchronization, and they are also tightly coupled with the underlying topic model used for inferring the topic hierarchy. In this paper, we propose a parallel algorithm to infer topic hierarchies from a large-scale document corpus. A key feature of the proposed algorithm is that it exploits coarse-grained parallelism and the components running in parallel need not synchronize after every iteration; thus the algorithm lends itself to be implemented on a geographically dispersed set of processing elements interconnected through a network. The parallel algorithm realizes a speedup of 53.5 on a 32-node cluster of dual-core workstations while at the same time achieving approximately the same likelihood or predictive accuracy as the sequential algorithm with respect to the performance of Information Retrieval tasks.
Design and evaluation of a parallel algorithm for inferring topic hierarchies
S0306457315000825
A series of events generates multiple types of time series data, such as numeric and text data over time, and the variations of the data types capture the events from different angles. This paper aims to integrate the analyses of such numerical and text time-series data influenced by common events into a single model to better understand the events. Specifically, we present a topic model, called an associative topic model (ATM), which finds soft clusters of time-series text data guided by time-series numerical values. The identified clusters are represented as word distributions per cluster, and these word distributions indicate what the corresponding events were. We applied ATM to financial indexes and presidential approval rates. First, ATM identifies topics associated with the characteristics of time-series data from the multiple types of data. Second, ATM predicts numerical time-series data with a higher level of accuracy than the iterative model does, as evidenced by lower mean squared errors.
Associative topic models with numerical time series
S0306457315000837
Recommender systems are filters which suggest items or information that might be interesting to users. These systems analyze the past behavior of a user, build her profile that stores information about her interests, and exploit that profile to find potentially interesting items. The main limitation of this approach is that it may provide accurate but likely obvious suggestions, since recommended items are similar to those the user already knows. In this paper we investigate this issue, known as overspecialization or serendipity problem, by proposing a strategy that fosters the suggestion of surprisingly interesting items the user might not have otherwise discovered. The proposed strategy enriches a graph-based recommendation algorithm with background knowledge that allows the system to deeply understand the items it deals with. The hypothesis is that the infused knowledge could help to discover hidden correlations among items that go beyond simple feature similarity and therefore promote non-obvious suggestions. Two evaluations are performed to validate this hypothesis: an in vitro experiment on a subset of the hetrec2011-movielens-2k dataset, and a preliminary user study. Those evaluations show that the proposed strategy actually promotes non-obvious suggestions, by narrowing the accuracy loss.
An investigation on the serendipity problem in recommender systems
S0306457315000849
We analyze the transitions from external search, searching on web search engines, to internal search, searching on websites. We categorize 295,571 search episodes composed of a query submitted to web search engines and the subsequent queries submitted to a single website search by the same users. There are a total of 1,136,390 queries from all searches, of which 295,571 are external search queries and 840,819 are internal search queries. We algorithmically classify queries into states and then use n-grams to categorize search patterns. We cluster the searching episodes into major patterns and identify the most commonly occurring, which are: (1) Explorers (43% of all patterns) with a broad external search query and then broad internal search queries, (2) Navigators (15%) with an external search query containing a URL component and then specific internal search queries, and (3) Shifters (15%) with a different, seemingly unrelated, query types when transitioning from external to internal search. The implications of this research are that external search and internal search sessions are part of a single search episode and that online businesses can leverage these search episodes to more effectively target potential customers.
External to internal search: Associating searching on search engines with searching on sites
S0306457315000850
The intention gap between users and queries results in ambiguous and broad queries. To solve these problems, subtopic mining has been studied, which returns a ranked list of possible subtopics according to their relevance, popularity, and diversity. This paper proposes a novel method to mine subtopics using simple patterns and a hierarchical structure of subtopic candidates. First, relevant and various phrases are extracted as subtopic candidates using simple patterns based on noun phrases and alternative partial-queries. Second, a hierarchical structure of the subtopic candidates is constructed using sets of relevant documents from a web document collection. Finally, the subtopic candidates are ranked considering a balance between popularity and diversity using this structure. In experiments, our proposed methods outperformed the baselines and even an external resource based method at high-ranked subtopics, which shows that our methods can be effective and useful in various search scenarios like result diversification.
Subtopic mining using simple patterns and hierarchical structure of subtopic candidates from web documents
S0306457315000862
Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered by the absence of a standard set of evaluation benchmark collections. In this paper we propose a way to compare learning to rank methods based on a sparse set of evaluation results on a set of benchmark datasets. Our comparison methodology consists of two components: (1) Normalized Winning Number, which gives insight in the ranking accuracy of the learning to rank method, and (2) Ideal Winning Number, which gives insight in the degree of certainty concerning its ranking accuracy. Evaluation results of 87 learning to rank methods on 20 well-known benchmark datasets are collected through a structured literature search. ListNet, SmoothRank, FenchelRank, FSMRank, LRUF and LARF are Pareto optimal learning to rank methods in the Normalized Winning Number and Ideal Winning Number dimensions, listed in increasing order of Normalized Winning Number and decreasing order of Ideal Winning Number.
A cross-benchmark comparison of 87 learning to rank methods
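The abstract does not spell out the formulas, but the two winning-number measures can be sketched roughly as follows, assuming a sparse table of per-dataset scores; the exact definitions used in the paper may differ, and the example scores are invented.

```python
# Sketch of winning-number computation over a sparse results table
# (method -> {dataset: score}); definitions paraphrased, not taken verbatim from the paper.
def winning_numbers(results):
    methods = list(results)
    stats = {}
    for m in methods:
        wins, comparisons = 0, 0
        for other in methods:
            if other == m:
                continue
            shared = set(results[m]) & set(results[other])  # datasets scored for both methods
            for d in shared:
                comparisons += 1
                if results[m][d] > results[other][d]:
                    wins += 1
        # Ideal Winning Number: comparisons available; Normalized: fraction actually won.
        stats[m] = {"IWN": comparisons,
                    "NWN": wins / comparisons if comparisons else 0.0}
    return stats

# Toy usage with made-up NDCG@10 scores:
example = {"ListNet": {"MQ2007": 0.44, "OHSUMED": 0.45},
           "RankSVM": {"MQ2007": 0.42},
           "LambdaMART": {"MQ2007": 0.46, "OHSUMED": 0.44}}
print(winning_numbers(example))
```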
S0306457315000874
In reputation management, knowing what impact a tweet has on the reputation of a brand or company is crucial. The reputation polarity of a tweet is a measure of how the tweet influences the reputation of a brand or company. We consider the task of automatically determining the reputation polarity of a tweet. For this classification task, we propose a feature-based model based on three dimensions: the source of the tweet, the contents of the tweet and the reception of the tweet, i.e., how the tweet is being perceived. For evaluation purposes, we make use of the RepLab 2012 and 2013 datasets. We study and contrast three training scenarios. The first is independent of the entity whose reputation is being managed, the second depends on the entity at stake, but has over 90% fewer training samples per model, on average. The third is dependent on the domain of the entities. We find that reputation polarity is different from sentiment and that having less but entity-dependent training data is significantly more effective for predicting the reputation polarity of a tweet than an entity-independent training scenario. Features related to the reception of a tweet perform significantly better than most other features.
Estimating Reputation Polarity on Microblog Posts
S0306457315000990
Transductive classification is a useful way to classify texts when labeled training examples are insufficient. Several algorithms to perform transductive classification considering text collections represented in a vector space model have been proposed. However, the use of these algorithms is unfeasible in practical applications due to the independence assumption among instances or terms and the drawbacks of these algorithms. Network-based algorithms come up to avoid the drawbacks of the algorithms based on vector space model and to improve transductive classification. Networks are mostly used for label propagation, in which some labeled objects propagate their labels to other objects through the network connections. Bipartite networks are useful to represent text collections as networks and perform label propagation. The generation of this type of network avoids requirements such as collections with hyperlinks or citations, computation of similarities among all texts in the collection, as well as the setup of a number of parameters. In a bipartite heterogeneous network, objects correspond to documents and terms, and the connections are given by the occurrences of terms in documents. The label propagation is performed from documents to terms and then from terms to documents iteratively. Nevertheless, instead of using terms just as means of label propagation, in this article we propose the use of the bipartite network structure to define the relevance scores of terms for classes through an optimization process and then propagate these relevance scores to define labels for unlabeled documents. The new document labels are used to redefine the relevance scores of terms which consequently redefine the labels of unlabeled documents in an iterative process. We demonstrated that the proposed approach surpasses the algorithms for transductive classification based on vector space model or networks. Moreover, we demonstrated that the proposed algorithm effectively makes use of unlabeled documents to improve classification and it is faster than other transductive algorithms.
Optimization and label propagation in bipartite heterogeneous networks to improve transductive classification of texts
S0306457315001004
The International Classification of Diseases (ICD) is a type of meta-data found in many Electronic Patient Records. Research to explore the utility of these codes in medical Information Retrieval (IR) applications is new, and many areas of investigation remain, including the question of how reliable the assignment of the codes has been. This paper proposes two uses of the ICD codes in two different contexts of search: Pseudo-Relevance Judgments (PRJ) and Pseudo-Relevance Feedback (PRF). We find that our approach to evaluate the TREC challenge runs using simulated relevance judgments has a positive correlation with the TREC official results, and our proposed technique for performing PRF based on the ICD codes significantly outperforms a traditional PRF approach. The results are found to be consistent over the two years of queries from the TREC medical test collection.
Improving patient record search: A meta-data based approach
S0306457315001016
In the web environment, most of the queries issued by users are implicit by nature. Inferring the different temporal intents of this type of query enhances the overall temporal part of the web search results. Previous works tackling this problem usually focused on news queries, where the retrieval of the most recent results related to the query are usually sufficient to meet the user's information needs. However, few works have studied the importance of time in queries such as “Philip Seymour Hoffman” where the results may require no recency at all. In this work, we focus on this type of queries named “time-sensitive queries” where the results are preferably from a diversified time span, not necessarily the most recent one. Unlike related work, we follow a content-based approach to identify the most important time periods of the query and integrate time into a re-ranking model to boost the retrieval of documents whose contents match the query time period. For that purpose, we define a linear combination of topical and temporal scores, which reflects the relevance of any web document both in the topical and temporal dimensions, thus contributing to improve the effectiveness of the ranked results across different types of queries. Our approach relies on a novel temporal similarity measure that is capable of determining the most important dates for a query, while filtering out the non-relevant ones. Through extensive experimental evaluation over web corpora, we show that our model offers promising results compared to baseline approaches. As a result of our investigation, we publicly provide a set of web services and a web search interface so that the system can be graphically explored by the research community.
GTE-Rank: A time-aware search engine to answer time-sensitive queries
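A rough sketch of the linear topical/temporal combination mentioned in the abstract above; the temporal score used here (overlap between a document's dates and the query's important dates) and the weight alpha are illustrative assumptions, not GTE-Rank's actual temporal similarity measure.

```python
# Sketch of the kind of linear re-ranking the abstract describes:
# final score = alpha * topical relevance + (1 - alpha) * temporal match.
def rerank(docs, query_dates, alpha=0.7):
    rescored = []
    for doc in docs:  # each doc: dict with 'id', 'topical' score, and a set of 'dates'
        overlap = len(set(doc["dates"]) & set(query_dates))
        temporal = overlap / max(len(query_dates), 1)
        score = alpha * doc["topical"] + (1 - alpha) * temporal
        rescored.append((score, doc["id"]))
    return sorted(rescored, reverse=True)

docs = [{"id": "d1", "topical": 0.82, "dates": {2014}},
        {"id": "d2", "topical": 0.75, "dates": {2005, 2014}}]
print(rerank(docs, query_dates={2005, 2014}))
```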
S0306457315001028
A main challenge in Cross-Language Information Retrieval (CLIR) is to estimate a proper translation model from available translation resources, since translation quality directly affects the retrieval performance. Among different translation resources, we focus on obtaining translation models from comparable corpora, because they provide appropriate translations for both languages and domains with limited linguistic resources. In this paper, we employ a two-step approach to build an effective translation model from comparable corpora, without requiring any additional linguistic resources, for the CLIR task. In the first step, translations are extracted by deriving correlations between source–target word pairs. These correlations are used to estimate word translation probabilities in the second step. We propose a language modeling approach for the first step, where modeling based on probability distribution provides two key advantages. First, our approach can be tuned easier in comparison with heuristically adjusted previous work. Second, it provides a principled basis for integrating additional lexical and translational relations to improve the accuracy of translations from comparable corpora. As an indication, we integrate monolingual relations of word co-occurrences into the process of translation extraction, which helps to extract more reliable translations for low-frequency words in a comparable corpus. Experimental results on an English–Persian comparable corpus show that our method outperforms the previous approaches in terms of both translation quality and the performance of CLIR. Indeed, the proposed method is naturally applicable to any comparable corpus, regardless of its languages. In addition, we demonstrate the significant impact of word translation probabilities, estimated in the second step of our approach, on the performance of CLIR.
Extracting translations from comparable corpora for Cross-Language Information Retrieval using the language modeling framework
S0306457315001053
The absence of diacritics in text documents or search queries is a serious problem for Turkish information retrieval because it creates homographic ambiguity. Thus, the inappropriate handling of diacritics reduces the retrieval performance in search engines. A straightforward solution to this problem is to normalize tokens by replacing diacritic characters with their American Standard Code for Information Interchange (ASCII) counterparts. However, this so-called ASCIIfication produces either synthetic words that are not legitimate Turkish words or legitimate words with meanings that are completely different from those of the original words. These non-valid synthetic words cannot be processed by morphological analysis components (such as stemmers or lemmatizers), which expect the input to be valid Turkish words. By contrast, synthetic words are not a problem when no stemmer or a simple first-n-characters-stemmer is used in the text analysis pipeline. This difference emphasizes the notion of the diacritic sensitivity of stemmers. In this study, we propose and evaluate an alternative solution based on the application of deASCIIfication, which restores accented letters in query terms or text documents. Our risk-sensitive evaluation results showed that the diacritics restoration approach yielded more effective and robust results compared with normalizing tokens to remove diacritics.
DeASCIIfication approach to handle diacritics in Turkish information retrieval
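For context, the ASCIIfication baseline that the abstract argues against is a simple character substitution; a sketch of it is shown below. The reverse direction (diacritics restoration, or deASCIIfication) is the ambiguous, context-dependent problem the paper addresses and would require a lexicon or language model, which is not implemented here.

```python
# Sketch of the ASCIIfication baseline: Turkish diacritic letters mapped
# to their ASCII counterparts. Restoring diacritics is the harder direction
# and is intentionally not shown.
ASCIIFY = str.maketrans("çğıöşüÇĞİÖŞÜ", "cgiosuCGIOSU")

def asciify(text):
    return text.translate(ASCIIFY)

print(asciify("kışın ağaçlar üşür"))   # -> "kisin agaclar usur"
```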
S0306457315001193
Question Answering (QA) systems are developed to answer human questions. In this paper, we have proposed a framework for answering definitional and factoid questions, enriched by machine learning and evolutionary methods and integrated in a web-based QA system. Our main purpose is to build new features by combining state-of-the-art features with arithmetic operators. To accomplish this goal, we have presented a Genetic Programming (GP)-based approach. The exact GP duty is to find the most promising formulas, made by a set of features and operators, which can accurately rank paragraphs, sentences, and words. We have also developed a QA system in order to test the new features. The input of our system is texts of documents retrieved by a search engine. To answer definitional questions, our system performs paragraph ranking and returns the most related paragraph. Moreover, in order to answer factoid questions, the system evaluates sentences of the filtered paragraphs ranked by the previous module of our framework. After this phase, the system extracts one or more words from the ranked sentences based on a set of hand-made patterns and ranks them to find the final answer. We have used Text Retrieval Conference (TREC) QA track questions, web data, and AQUAINT and AQUAINT-2 datasets for training and testing our system. Results show that the learned features can perform a better ranking in comparison with other evaluation formulas.
Genetic programming-based feature learning for question answering
S0306457315001211
The demand for group recommendations is steadily increasing. In this paper we address the problem of recommendation performance for groups of users (group recommendation). We focus on the performance of very Top-N recommendations, which are important when recommending long-lasting items (only a few such items are consumed per session, e.g. a movie). To improve existing group recommenders we propose a mixed hybrid recommender for groups that combines content-based and collaborative strategies. The principle of the proposed group recommender is to generate content-based and collaborative recommendations for each user, apply an aggregation strategy to resolve the group's conflicting preferences for the content-based and collaborative sets separately, and finally reorder the collaborative candidates based on the content-based ones. It is based on the idea that candidates recommended by both recommendation strategies at the same time are presumably more appropriate for the group than candidates recommended by the individual strategies. The evaluation is performed through several experiments in the multimedia domain (a typical representative of group recommendation). Both online and offline experiments were performed in order to compare real users’ satisfaction with that for standard group recommenders, and to compare the performance of the proposed approach with state-of-the-art recommenders on the MovieLens dataset. Finally, we experimented with the proposed hybrid recommender to generate recommendations for a group of size one (i.e. single-user recommendation). The obtained results support our hypothesis that the proposed mixed hybrid approach improves the precision of very Top-N recommended items for groups of users and for single-user recommendation, respectively.
Personalized hybrid recommendation for group of users: Top-N multimedia recommender
S0306457315001223
The paper reports on some of the results of a research project into how changes in digital behaviour and services impact the concepts of trust and authority held by researchers in the sciences and social sciences in the UK and the USA. Interviews were used in conjunction with focus groups to establish the form and topic of questions put to a larger international sample in an online questionnaire. The results of these 87 interviews were analysed to determine whether or not attitudes have indeed changed in terms of the sources of information used, citation behaviour in choosing references, and dissemination practices. It was found that there was marked continuity in attitudes, though with an increased emphasis on personal judgement over established and new metrics. Journals (or books in some disciplines) were more highly respected than other sources and remain the vehicle for formal scholarly communication. The interviews confirmed that though an open access model did not in most cases lead to mistrust of a journal, a substantial number of researchers were worried about approaches from what are called predatory OA journals. Established researchers did not on the whole use social media in their professional lives, but a question about outreach revealed that it was recognised as effective in reaching a wider audience. There was a remarkable similarity in practice across research attitudes in all the disciplines covered and in both the countries where interviews were held.
Changes in the digital scholarly environment and issues of trust: An exploratory, qualitative analysis
S0306457315001235
In this paper, we focus on applying sentiment analysis to resources from online art collections, by exploiting, as information source, tags intended as textual traces that visitors leave to comment artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from Semantic and Social Web to Natural Language Processing, provide us the building blocks for creating a semantic social space to organize artworks according to an ontology of emotions. The ontology is inspired by the Plutchik’s circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space, through a graphical interactive interface. The development of such semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded into W3C ontology languages. This gives us the twofold advantage to enable tractable reasoning on detected emotions and related artworks, and to foster the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-word case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
Ontology-based affective models to organize artworks in the social semantic web
S0306457315001247
The explosion of online user-generated content (UGC) and the development of big data analysis provide a new opportunity and challenge to understand and respond to public opinions in the G2C e-government context. To better understand semantic searching of public comments on an online platform for citizens’ opinions about urban affairs issues, this paper proposed an approach based on the latent Dirichlet allocation (LDA), a probabilistic topic modeling method, and designed a practical system to provide users—municipal administrators of B-city—with satisfying searching results and the longitudinal changing curves of related topics. The system is developed to respond to actual demand from B-city's local government, and the user evaluation experiment results show that a system based on the LDA method could provide information that is more helpful to relevant staff members. Municipal administrators could better understand citizens’ online comments based on the proposed semantic search approach and could improve their decision-making process by considering public opinions.
Semantic search for public opinions on urban affairs: A probabilistic topic modeling-based approach
S0306457315001259
XML is a pervasive technology for representing and accessing semi-structured data. XPath is the standard language for navigational queries on XML documents and there is a growing demand for its efficient processing. In order to increase the efficiency in executing four navigational XML query primitives, namely descendants, ancestors, children and parent, we introduce a new paradigm where traditional approaches based on the efficient traversing of nodes and edges to reconstruct the requested subtrees are replaced by a brand new one based on basic set operations, which allows us to directly return the desired subtree, avoiding its construction by passing through nodes and edges. Our solution stems from the NEsted SeTs for Object hieRarchies (NESTOR) formal model, which makes use of set-inclusion relations for representing and providing access to hierarchical data. We define efficient in-memory data structures to implement NESTOR, we develop algorithms to perform the descendants, ancestors, children and parent query primitives, and we study their computational complexity. We conduct an extensive experimental evaluation using several datasets: digital archives (EAD collections), the INEX 2009 Wikipedia collection, and two widely-used synthetic datasets (XMark and XGen). We show that NESTOR-based data structures and query primitives consistently outperform state-of-the-art solutions for XPath processing at execution time and are competitive in terms of both memory occupation and pre-processing time.
Descendants, ancestors, children and parent: A set-based approach to efficiently address XPath primitives
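A toy sketch of the set-inclusion idea behind NESTOR: nodes are stored as depth-first (left, right) intervals so that the four XPath axes reduce to containment tests. The data layout and helper names are assumptions for illustration only, not the paper's actual in-memory structures or algorithms.

```python
# Each node maps to a (left, right) interval from a depth-first traversal;
# the four axes become simple interval-containment tests.
def descendants(nodes, n):
    l, r = nodes[n]
    return [m for m, (ml, mr) in nodes.items() if l < ml and mr < r]

def ancestors(nodes, n):
    l, r = nodes[n]
    return [m for m, (ml, mr) in nodes.items() if ml < l and r < mr]

def parent(nodes, n):
    anc = ancestors(nodes, n)
    return max(anc, key=lambda m: nodes[m][0]) if anc else None

def children(nodes, n):
    return [m for m in descendants(nodes, n) if parent(nodes, m) == n]

# Tiny document: <a><b><c/></b><d/></a>, numbered by entry/exit order.
nodes = {"a": (1, 8), "b": (2, 5), "c": (3, 4), "d": (6, 7)}
print(descendants(nodes, "a"))   # ['b', 'c', 'd']
print(parent(nodes, "c"))        # 'b'
print(children(nodes, "a"))      # ['b', 'd']
```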
S0306457315001351
Many problems in data mining involve datasets with multiple views, where the feature space consists of multiple feature groups. Previous studies have employed view weighting methods to find a shared cluster structure underlying the different views. However, most of these studies applied gradient optimization to optimize the cluster centroids and feature weights iteratively, which makes the final partition only locally optimal. In this work, we propose a novel bi-level weighted multi-view clustering method that emphasizes fuzzy weighting at both the view and feature levels. Furthermore, an efficient global search strategy that combines particle swarm optimization and gradient optimization is proposed to solve the induced non-convex loss function. In the experimental analysis, the performance of the proposed method is compared with five state-of-the-art weighted clustering algorithms on three real-world high-dimensional multi-view datasets.
Bi-level weighted multi-view clustering via hybrid particle swarm optimization
S0306457315001363
With the constant growth in the size of analyzable data, the ranking of academic entities is attracting increasing attention. For the ranking of authors, this study considers the author's own contribution, the impact of the mutual influence of the co-authors, and the exclusivity of their received citations. The ranking of researchers is influenced by the ranking of their co-authors, more so if the co-authors are senior researchers. Tracking the citations received by an author is also an important factor in measuring an author's standing. This study proposes the Mutual Influence and Citation Exclusivity Author Rank (MuICE) algorithm. We performed a sequence of experiments to calculate the MuICE Rank. First, we calculated Mutual Influence (MuInf) considering three different factors: the number of papers, the number of citations and the author's appearance as first author. Secondly, we computed MuICE incorporating all three factors of MuInf along with the exclusivity of citations received by an author. Empirically, it is shown that the proposed methods generate substantial results.
MuICE: Mutual Influence and Citation Exclusivity Author Rank
S0306457315001375
The purpose of this article is to validate, through two empirical studies, a new method for automatic evaluation of written texts, called Inbuilt Rubric, based on the Latent Semantic Analysis (LSA) technique, which constitutes an innovative and distinct turn with respect to LSA application so far. In the first empirical study, evidence of the validity of the method to identify and evaluate the conceptual axes of a text in a sample of 78 summaries by secondary school students is sought. Results show that the proposed method has a significantly higher degree of reliability than classic LSA methods of text evaluation, and displays very high sensitivity to identify which conceptual axes are included or not in each summary. A second study evaluates the method's capacity to interact and provide feedback about quality in a real online system on a sample of 924 discursive texts written by university students. Results show that students improved the quality of their written texts using this system, and also rated the experience very highly. The final conclusion is that this new method opens a very interesting way regarding the role of automatic assessors in the identification of presence/absence and quality of elaboration of relevant conceptual information in texts written by students with lower time costs than the usual LSA-based methods.
Transforming LSA space dimensions into a rubric for an automatic assessment and feedback system
S0306457315001387
This investigation deals with the problem of language identification of noisy texts, which can constitute the primary step of many natural language processing or information retrieval tasks. Language identification is the task of automatically identifying the language of a given text. Although several methods exist in the literature, their performance is not so convincing in practice. In this contribution, we propose two statistical approaches: the high-frequency approach and the nearest-prototype approach. In the first one, 5 algorithms for language identification are proposed and implemented, namely: character based identification (CBA), word based identification (WBA), special characters based identification (SCA), sequential hybrid algorithm (HA1) and parallel hybrid algorithm (HA2). In the second one, we use 11 similarity measures combined with several types of character N-grams. For the evaluation task, the proposed methods are tested on forum datasets containing 32 different languages. Furthermore, an experimental comparison is made between the proposed approaches and some reference language identification tools such as LIGA, NTC, Google Translate and Microsoft Word. Results show that the proposed approaches are promising and outperform the baseline methods of language identification on forum texts.
Effective language identification of forum texts based on statistical approaches
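A minimal sketch of the nearest-prototype approach described above, assuming trigram frequency profiles and cosine similarity (only one of the many measures the paper evaluates); the prototype texts are toy placeholders, not trained language models.

```python
# Each language is represented by a character n-gram frequency profile; a text
# is assigned to the language whose profile is most similar under cosine similarity.
from collections import Counter
from math import sqrt

def ngrams(text, n=3):
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def identify(text, prototypes):
    profile = ngrams(text)
    return max(prototypes, key=lambda lang: cosine(profile, prototypes[lang]))

prototypes = {"en": ngrams("the quick brown fox jumps over the lazy dog"),
              "fr": ngrams("le renard brun rapide saute par dessus le chien")}
print(identify("the dog sleeps", prototypes))   # -> 'en'
```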
S0306457315001399
Information filtering has been a major task of study in the field of information retrieval (IR) for a long time, focusing on filtering well-formed documents such as news articles. Recently, more interest was directed towards applying filtering tasks to user-generated content such as microblogs. Several earlier studies investigated microblog filtering for focused topics. Another vital filtering scenario in microblogs targets the detection of posts that are relevant to long-standing broad and dynamic topics, i.e., topics spanning several subtopics that change over time. This type of filtering in microblogs is essential for many applications such as social studies on large events and news tracking of temporal topics. In this paper, we introduce an adaptive microblog filtering task that focuses on tracking topics of broad and dynamic nature. We propose an entirely-unsupervised approach that adapts to new aspects of the topic to retrieve relevant microblogs. We evaluated our filtering approach using 6 broad topics, each tested on 4 different time periods over 4 months. Experimental results showed that, on average, our approach achieved 84% increase in recall relative to the baseline approach, while maintaining an acceptable precision that showed a drop of about 8%. Our filtering method is currently implemented on TweetMogaz, a news portal generated from tweets. The website compiles the stream of Arabic tweets and detects the relevant tweets to different regions in the Middle East to be presented in the form of comprehensive reports that include top stories and news in each region.
Unsupervised adaptive microblog filtering for broad dynamic topics
S0306457315001405
The task of finding groups or teams has recently received increased attention, as a natural and challenging extension of search tasks aimed at retrieving individual entities. We introduce a new group finding task: given a query topic, we try to find knowledgeable groups that have expertise on that topic. We present five general strategies for this group finding task, given a heterogeneous document repository. The models are formalized using generative language models. Two of the models aggregate the expertise scores of the experts in the same group for the task; one locates documents associated with experts in the group and then determines how closely the documents are associated with the topic; whilst the remaining two models directly estimate the degree to which a group is a knowledgeable group for a given topic. For evaluation purposes we construct a test collection based on the TREC 2005 and 2006 Enterprise collections, and define three types of ground truth for our task. Experimental results show that our five knowledgeable group finding models achieve high absolute scores. We also find significant differences between different ways of estimating the association between a topic and a group.
Formal language models for finding groups of experts
S0306457315001417
Cross-language plagiarism detection aims to detect plagiarised fragments of text among documents in different languages. In this paper, we perform a systematic examination of Cross-language Knowledge Graph Analysis; an approach that represents text fragments using knowledge graphs as a language independent content model. We analyse the contributions to cross-language plagiarism detection of the different aspects covered by knowledge graphs: word sense disambiguation, vocabulary expansion, and representation by similarities with a collection of concepts. In addition, we study both the relevance of concepts and their relations when detecting plagiarism. Finally, as a key component of the knowledge graph construction, we present a new weighting scheme of relations between concepts based on distributed representations of concepts. Experimental results in Spanish–English and German–English plagiarism detection show state-of-the-art performance and provide interesting insights on the use of knowledge graphs.
A systematic study of knowledge graph analysis for cross-language plagiarism detection
S0306457315001429
This work presents a content-based semantics and image retrieval system for semantically categorized hierarchical image databases. Each module is designed with the aim of developing a system that works closer to human perception. Images are mapped to a multidimensional feature space, where images belonging to a semantic category are clustered and indexed to acquire an efficient representation. This helps in handling the existing variability or heterogeneity within that semantic category. Adaptive combinations of the obtained depictions are utilized by the branch selection and pruning algorithms to identify some closely related semantic categories and select only a part of the large hierarchical search space for the actual search. The search space so obtained is finally used to retrieve the desired semantics and the similar images corresponding to them. The system is evaluated in terms of the accuracy of the retrieved semantics and precision-recall curves. Experiments show promising semantics and image retrieval results on hierarchical image databases. The results reported on non-hierarchical but categorized image databases further prove the efficacy of the proposed system.
A semantics and image retrieval system for hierarchical image databases
S0306457315001430
This practical study aims to enrich the current literature by providing new practical evidence of the positive and negative factors of the Internet's influence on Generations (Gens) Y and Z in Australia and Portugal. The Internet has become a powerful force among these Gens with respect to communication, cooperation, collaboration and connection, but numerous problems also arise from the perspective of cognitive, social and physical development as Internet usage grows. A quantitative approach was used to collect new, practical evidence from 180 Australian and 85 Portuguese respondents, with a total of 265 respondents completing an online survey. This study identifies new positive factors of Internet usage, such as problem-solving skills, proactive study, information gathering, and global and local awareness; communication and collaboration with peers and family were also improved and enhanced. Conversely, this study identifies new negative factors: physical contact and physical activity were reduced; thinking, concentration and memory skills declined; respondents felt depressed and isolated; and laziness increased. Nevertheless, the Internet encourages Gens Y and Z to play physical and virtual games (e.g. Wii). Finally, this study concludes that the Internet is becoming an essential part of the everyday routines and practices of Gens Y and Z.
Internet factors influencing generations Y and Z in Australia and Portugal: A practical study
S0306457315001442
Cluster analysis using multiple representations of data is known as multi-view clustering and has attracted much attention in recent years. The major drawback of existing multi-view algorithms is that their clustering performance depends heavily on hyperparameters which are difficult to set. In this paper, we propose the Multi-View Normalized Cuts (MVNC) approach, a two-step algorithm for multi-view clustering. In the first step, an initial partitioning is performed using a spectral technique. In the second step, a local search procedure is used to refine the initial clustering. MVNC has been evaluated and compared to state-of-the-art multi-view clustering approaches using three real-world datasets. Experimental results have shown that MVNC significantly outperforms existing algorithms in terms of clustering quality and computational efficiency. In addition to its superior performance, MVNC is parameter-free which makes it easy to use.
Multi-view clustering via spectral partitioning and local refinement
S0306457315001454
Query auto completion (QAC) models recommend possible queries to web search users when they start typing a query prefix. Most of today’s QAC models rank candidate queries by popularity (i.e., frequency), and in doing so they tend to follow a strict query matching policy when counting the queries. That is, they ignore the contributions from so-called homologous queries, queries with the same terms but ordered differently or queries that expand the original query. Importantly, homologous queries often express a remarkably similar search intent. Moreover, today’s QAC approaches often ignore semantically related terms. We argue that users are prone to combine semantically related terms when generating queries. We propose a learning to rank-based QAC approach, where, for the first time, features derived from homologous queries and semantically related terms are introduced. In particular, we consider: (i) the observed and predicted popularity of homologous queries for a query candidate; and (ii) the semantic relatedness of pairs of terms inside a query and pairs of queries inside a session. We quantify the improvement of the proposed new features using two large-scale real-world query logs and show that the mean reciprocal rank and the success rate can be improved by up to 9% over state-of-the-art QAC models.
Learning from homologous queries and semantically related terms for query auto completion
S0306457315001478
In contrast with their monolingual counterparts, little attention has been paid to the effects that misspelled queries have on the performance of Cross-Language Information Retrieval (CLIR) systems. The present work makes a first attempt to fill this gap by extending our previous work on monolingual retrieval in order to study the impact that the progressive addition of misspellings to input queries has, this time, on the output of CLIR systems. Two approaches for dealing with this problem are analyzed in this paper. Firstly, the use of automatic spelling correction techniques for which, in turn, we consider two algorithms: the first one for the correction of isolated words and the second one for a correction based on the linguistic context of the misspelled word. The second approach to be studied is the use of character n-grams both as index terms and translation units, seeking to take advantage of their inherent robustness and language-independence. All these approaches have been tested on a from-Spanish-to-English CLIR system, that is, Spanish queries on English documents. Real, user-generated spelling errors have been used under a methodology that allows us to study the effectiveness of the different approaches to be tested and their behavior when confronted with different error rates. The results obtained show the great sensitiveness of classic word-based approaches to misspelled queries, although spelling correction techniques can mitigate such negative effects. On the other hand, the use of character n-grams provides great robustness against misspellings.
Studying the effect and treatment of misspelled queries in Cross-Language Information Retrieval
S0306457315001491
Random walks on general graphs have been successfully applied to multi-document summarization, but processing documents in this way has some limitations. In this paper, we propose a novel hypergraph-based vertex-reinforced random walk framework for multi-document summarization. The framework first exploits the Hierarchical Dirichlet Process (HDP) topic model to learn a word-topic probability distribution in sentences. Then the hypergraph is used to capture both cluster relationships based on the word-topic probability distribution and pairwise similarity among sentences. Finally, a time-variant random walk algorithm for hypergraphs is developed to rank sentences, which ensures sentence diversity in summaries through vertex reinforcement. Experimental results on a publicly available dataset demonstrate the effectiveness of our framework.
Query-focused multi-document summarization using hypergraph-based ranking
S0306457315001508
Object matching is an important task for finding the correspondence between objects in different domains, such as documents in different languages and users in different databases. In this paper, we propose probabilistic latent variable models that offer many-to-many matching without correspondence information or similarity measures between different domains. The proposed model assumes that there is an infinite number of latent vectors that are shared by all domains, and that each object is generated from one of the latent vectors and a domain-specific projection. By inferring the latent vector used for generating each object, objects in different domains are clustered according to the vectors that they share. Thus, we can realize matching between groups of objects in different domains in an unsupervised manner. We give learning procedures of the proposed model based on a stochastic EM algorithm. We also derive learning procedures in a semi-supervised setting, where correspondence information for some objects are given. The effectiveness of the proposed models is demonstrated by experiments on synthetic and real data sets.
Probabilistic latent variable models for unsupervised many-to-many object matching
S0306457316000029
Electronic word-of-mouth communication (eWOM) is an important force in building a digital marketplace. The study of eWOM has implications for how to build an online community through social media design, web communication and knowledge exchange. Innovative use of eWOM has significant benefits, especially for start-up firms. We focus on how users on the web communicate value related to online products. It is the premise of this paper that generating emotional value (E-value) in social media and networking sites (SMNS) is critical for the survival of new e-service ventures. Hence, by introducing a formal value theory as a coding scheme, we report a study on E-value in SMNS by analyzing how a Swedish start-up industrial design company attempted to build a global presence by creating followers on the web. The aim of the study was to investigate how the company's website design and communication can affect eWOM over time. This was done by capturing a series of “emoticon and value expressions” generated by community members from three different e-communication campaigns (2011–2012) with changing website content, hence giving different stimuli to viewers. Those members who expressed emotional value, often incorporating emoticons, displayed both shorter verbal expressions and reaction time. These value expressions, we suggest, are important aspects of eWOM and need to be actively taken into account. The study has implications for information management strategies through using eWOM.
Exploring emotional expressions in e-word-of-mouth from online communities
S0306457316300073
Query suggestion is generally an integrated part of web search engines. In this study, we first redefine and reduce the query suggestion problem as “comparison of queries”. We then propose a general modular framework for query suggestion algorithm development. We also develop new query suggestion algorithms which are used in our proposed framework, exploiting query, session and user features. As a case study, we use query logs of a real educational search engine that targets K-12 students in Turkey. We also exploit educational features (course, grade) in our query suggestion algorithms. We test our framework and algorithms over a set of queries by an experiment and demonstrate a 66–90% statistically significant increase in relevance of query suggestions compared to a baseline method.
New query suggestion framework and algorithms: A case study for an educational search engine
S0306457316300085
Bibliographic collections in traditional libraries often compile records from distributed sources where variable criteria have been applied to the normalization of the data. Furthermore, the source records often follow classical standards, such as MARC21, where a strict normalization of author names is not enforced. The identification of equivalent records in large catalogues is therefore required, for example, when migrating the data to new repositories which apply modern specifications for cataloguing, such as the FRBR and RDA standards. An open-source tool has been implemented to assist authority control in bibliographic catalogues when external features (such as the citations found in scientific articles) are not available for the disambiguation of creator names. This tool is based on similarity measures between the variants of author names combined with a parser which interprets the dates and periods associated with the creator. An efficient data structure (the unigram frequency vector trie) has been used to accelerate the identification of variants. The algorithms employed and the attribute grammar are described in detail and their implementation is distributed as an open-source resource to allow for an easier uptake.
A parser for authority control of author names in bibliographic records
S0306457316300371
In this paper we propose and evaluate the Block Max WAND with Candidate Selection and Preserving Top-K Results algorithm, or BMW-CSP. It is an extension of BMW-CS, a method previously proposed by us. Although very efficient, BMW-CS does not guarantee preserving the top-k results for a given query. Algorithms that do not preserve the top results may reduce the quality of ranking results in search systems. BMW-CSP extends BMW-CS to ensure that the top-k results will have their rankings preserved. In the experiments we performed for computing the top-10 results, the final average time required for processing queries with BMW-CSP was lower than the times required by the adopted baselines. For instance, when computing top-10 results, the average time achieved by MBMW, the best multi-tier baseline we found in the literature, was 36.29 ms per query, while the average time achieved by BMW-CSP was 19.64 ms per query. The price paid by BMW-CSP is the extra memory required to store partial scores of documents. As we show in the experiments, this price is not prohibitive and, in cases where it is acceptable, BMW-CSP may constitute an excellent alternative query processing method.
Fast top-k preserving query processing using two-tier indexes
S0306457316300450
In social networks, identifying influential nodes is essential for controlling the network, and it has been one of the most intensively studied problems in the analysis of network structure. There is a multitude of indicators for evaluating node importance in social networks, such as degree, betweenness and cumulative nomination, but most of these indicators reveal only one characteristic of a node. In fact, in social networks, node importance is not determined by a single factor but is affected by a number of factors. Therefore, this paper puts forward a relatively comprehensive and effective method for evaluating node importance in social networks using a multi-objective decision method. First, we select several different representative indicators, each given a certain weight. We regard each node as a solution and the different indicators of each node as the solution's attributes. Then, by calculating the closeness degree of each node to the ideal solution, we obtain an evaluation indicator of node importance in social networks. Finally, we verify the effectiveness of the proposed method experimentally on several real social networks.
Efficient identification of node importance in social networks
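The procedure described above closely resembles TOPSIS-style multi-criteria ranking; a small sketch under that assumption follows, with invented indicator values and weights (the paper's exact weighting and normalization may differ).

```python
# TOPSIS-like closeness to the ideal solution: normalize each indicator column,
# apply its weight, then rank nodes by closeness to the per-indicator maxima.
from math import sqrt

def closeness_to_ideal(scores, weights):
    """scores: {node: [indicator values]}, weights: one weight per indicator."""
    cols = list(zip(*scores.values()))
    norms = [sqrt(sum(v * v for v in col)) or 1.0 for col in cols]
    weighted = {n: [w * v / z for v, w, z in zip(vals, weights, norms)]
                for n, vals in scores.items()}
    ideal = [max(col) for col in zip(*weighted.values())]
    worst = [min(col) for col in zip(*weighted.values())]
    result = {}
    for n, vals in weighted.items():
        d_plus = sqrt(sum((v - i) ** 2 for v, i in zip(vals, ideal)))
        d_minus = sqrt(sum((v - w) ** 2 for v, w in zip(vals, worst)))
        result[n] = d_minus / (d_plus + d_minus) if d_plus + d_minus else 0.0
    return result

# Toy network: degree, betweenness, cumulative nomination per node.
scores = {"A": [5, 0.40, 0.9], "B": [3, 0.10, 0.5], "C": [4, 0.35, 0.7]}
print(closeness_to_ideal(scores, weights=[0.4, 0.4, 0.2]))
```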
S0308596113000724
This paper investigates the contributions of digital infrastructure policies of provincial governments in Canada to the development of broadband networks. Using measurements of broadband network speeds between 2007 and 2011, the paper analyzes potential causes for observed differences in network performance growth across the provinces, including geography, Internet use intensity, platform competition, and provincial broadband policies. The analysis suggests provincial policies that employed public sector procurement power to open access to essential facilities and channeled public investments in Internet backbone infrastructure were associated with the emergence of relatively high quality broadband networks. However, a weak essential facilities regime and regulatory barriers to entry at the national level limit the scope for decentralized policy solutions.
Multilevel governance and broadband infrastructure development: Evidence from Canada
S0308596113001900
As demand for mobile broadband services continues to explode, mobile wireless networks must greatly expand their capacities. This paper describes and quantifies the economic and technical challenges associated with deepening wireless networks to meet this growing demand. Methods of capacity expansion divide into three general categories: the deployment of more radio spectrum; more intensive geographic reuse of spectrum; and increasing the throughput capacity of each MHz of spectrum within a given geographic area. The paper describes these basic methods for deepening mobile wireless capacity. It goes on to measure the contribution of each of these methods to historical capacity growth within U.S. networks. The paper then describes the capabilities of 4G LTE wireless technology, and further innovations built on it, to further improve network capacity. These capacity expansion capabilities of LTE-Advanced, along with traditional spectrum reuse, are quantified and compared with forecasts of future demand to evaluate the ability of U.S. networks to match future demand. Unless current spectrum allocations are significantly increased, by 560 MHz over the 2014–2022 period, the presented model suggests that U.S. wireless capacity expansion will be inadequate to accommodate expected demand growth. This conclusion is in contrast to claims that the U.S. faces no spectrum shortage.
Expanding mobile wireless capacity: The challenges presented by technology and economics
S0308596115001238
Cloud computing combines established computing technologies and outsourcing advantages into a new ICT paradigm that is generally expected to foster productivity and economic growth. However, despite a series of studies on the drivers of cloud adoption, evidence of its economic effects is lacking, possibly because many of the datasets on cloud computing are of insufficient size and often lack a time dimension as well as precise definitions of cloud computing, thus making them unsuitable for rigorous quantitative analysis. To overcome these limitations, we propose a proxy variable for cloud computing usage—cloud adaptiveness—based on survey panel data from European firms. Observations based on a descriptive analysis suggest three important aspects for further research. First, cloud studies should be conducted at the industry level as cloud computing adaptiveness differs widely across industry sectors. Second, it is important to know what firms do with cloud computing to understand the economic mechanisms and effects triggered by this innovation. And third, cloud adaptiveness is potentially correlated to a firm’s position in the supply chain and thus the type of output it produces as well as the market in which it operates. Our indicator can be employed to further analyze the effects of cloud computing in the context of firm heterogeneity.
Cloud adaptiveness within industry sectors – Measurement and observations
S0377221713001902
Statistical process control and maintenance planning have long been treated as two separate problems. The interdependence between these two activities has not been adequately addressed in the literature, despite their apparent connections. Information obtained in the course of statistical process control signals the need for possible maintenance actions, and thus affects the preventive maintenance schedules. Preventive maintenance actions can prevent a production process from further deterioration and improve product quality in conjunction with statistical process control. This paper presents an integrated model for the joint optimization of statistical process control and preventive maintenance. The proposed model is developed for a production process that deteriorates according to a discrete-time Markov chain. It is assumed that preventive maintenance is imperfect, and that both preventive and corrective maintenance are instantaneous. Modeling the deterioration process with maintenance interventions as a Markov chain provides a breakthrough in designing an efficient solution algorithm and obtaining analytical results. A numerical example is used to illustrate the proposed integrated statistical process control and preventive maintenance policies. Sensitivity analysis is conducted to analyze the impact of model parameters on the optimal policies and further indicates the interrelationship between statistical process control and maintenance actions. Numerical results indicate that potential cost savings can be achieved with the proposed integrated policies.
Joint optimization of X̄ control chart and preventive maintenance policies: A discrete-time Markov chain approach
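To make the setting concrete, the following is a minimal simulation sketch of a process that deteriorates as a discrete-time Markov chain and receives instantaneous, imperfect preventive maintenance when a control chart signals. The transition matrix, detection probabilities and cost figures are illustrative assumptions, not the paper's calibrated model, and the joint optimization of chart and maintenance parameters is not shown.

```python
# Sketch: simulate a production process deteriorating as a discrete-time Markov
# chain, with imperfect, instantaneous preventive maintenance (PM) triggered by
# an out-of-control signal. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([            # deterioration transition matrix over states 0..2
    [0.90, 0.08, 0.02],   # 0 = in control
    [0.00, 0.85, 0.15],   # 1 = partially deteriorated
    [0.00, 0.00, 1.00],   # 2 = fully deteriorated (absorbing until repair)
])
DETECT = [0.05, 0.50, 0.95]   # chance the control chart signals in each state
PM_SUCCESS = 0.8              # imperfect PM: probability of restoring state 0
COST = {"sample": 1.0, "pm": 20.0, "cm": 100.0, "bad_quality": [0.0, 5.0, 15.0]}

def simulate(horizon=10_000):
    state, total = 0, 0.0
    for _ in range(horizon):
        total += COST["sample"] + COST["bad_quality"][state]
        if rng.random() < DETECT[state]:          # chart signals: maintain now
            if state == 2:
                total += COST["cm"]; state = 0    # corrective maintenance
            else:
                total += COST["pm"]
                if rng.random() < PM_SUCCESS:     # imperfect PM
                    state = 0
        state = rng.choice(3, p=P[state])         # natural deterioration
    return total / horizon

print("average cost per period:", round(simulate(), 2))
```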
S0377221713001914
The majority of parallel machine scheduling studies consider machines as the only resource. However, in most real-life manufacturing environments, jobs may require additional resources, such as automated guided vehicles, machine operators, tools, pallets, dies, and industrial robots, for their handling and processing. This paper presents a review and discussion of studies on parallel machine scheduling problems with additional resources. Papers are surveyed in five main categories: machine environment, additional resource, objective functions, complexity results and solution methods, and other important issues. The strengths and weaknesses of the literature together with open areas for future studies are also emphasized. Finally, extensions of integer programming models for two main classes of related problems are given and conclusions are drawn based on computational studies. Nomenclature: number of jobs; set of jobs; number of machines; set of machines; indices of jobs, i, h = 1, …, n; the set of possible (resource) modes belonging to job i; index of resource modes (k ∊ Ki); index of machines, j = 1, …, m; the set of jobs already assigned to machine j; number of time periods in the scheduling horizon; index of time periods, t = 1, …, T; processing time of job i on machine j; processing time of job i (independent of machine j); processing time of job i when processed in mode k ∊ Ki; processing time of job i on machine j when processed in mode k ∊ Ki; the size of the single additional resource type; the earliest time at which job i can start its processing, i.e., release time; due date of job i; weight of job i, denoting the importance of job i relative to other jobs; completion time of job i; maximum completion time of all jobs in the system, i.e., makespan; tardiness of job i; equals 1 if job i is tardy, 0 otherwise; maximum lateness.
Parallel machine scheduling with additional resources: Notation, classification, models and solution methods
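As a concrete point of reference for the problem class surveyed above, the sketch below implements one of the simplest baselines for identical parallel machines with a single renewable additional resource: longest-processing-time list scheduling that also respects the resource capacity. The data and the heuristic itself are illustrative assumptions; the paper's integer programming models are not reproduced here.

```python
# Sketch: LPT list scheduling on m identical parallel machines where each job
# also needs r units of a single renewable resource of total size R.
# A baseline heuristic for illustration only, not the paper's IP models.
def greedy_schedule(jobs, m, R):
    """jobs: list of (processing_time, resource_requirement), each r <= R."""
    t, start = 0, {}
    running = []                                    # (finish_time, job, r)
    pending = sorted(range(len(jobs)), key=lambda i: -jobs[i][0])  # LPT order
    while pending or running:
        used = sum(r for _, _, r in running)
        free = m - len(running)
        started = []
        for i in pending:                           # start whatever fits now
            p, r = jobs[i]
            if free > 0 and used + r <= R:
                running.append((t + p, i, r))
                start[i] = t
                free -= 1
                used += r
                started.append(i)
        pending = [i for i in pending if i not in started]
        t = min(f for f, _, _ in running)           # advance to next completion
        running = [(f, i, r) for f, i, r in running if f > t]
    return start, t                                 # t is the makespan

starts, makespan = greedy_schedule([(4, 2), (3, 1), (2, 2), (2, 1)], m=2, R=3)
print(starts, makespan)
```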
S0377221713001938
The worldwide spread of mobile phones and the rapid development of location technologies have provided the opportunity to monitor freeway traffic conditions without requiring extra infrastructure investment. Over the past decade, a number of research studies and operational tests have investigated methods to estimate traffic measures from mobile phone information. However, most of these works ignored the fact that each vehicle may carry more than one phone, given how widespread mobile phones have become. This paper considers this multi-phone circumstance and proposes a relatively simple clustering technique to identify whether phones are traveling in the same vehicle. By using this technique, mobile phone data can be used to determine not only speed, but also vehicle counts by type, and therefore density. A detailed simulation covering different traffic conditions and mobile phone location accuracies has been developed to evaluate the proposed approach. Simulation results indicate that the location accuracy of mobile phones is a crucial factor for estimating accurate traffic measures for a given location frequency and number of consecutive location records. In addition, traffic demand and the clustering method also affect the accuracy of the traffic measures.
Estimating freeway traffic measures from mobile phone location data
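A simplistic way to illustrate the core step above, deciding whether several phones travel in the same vehicle, is to group phones whose synchronized location traces remain within a small average distance of each other. The distance threshold, the pairwise-average distance measure and the greedy grouping below are assumptions for illustration, not the clustering technique evaluated in the paper.

```python
# Sketch: group phones whose location traces stay close together, treating each
# group as one vehicle. Threshold and distance measure are illustrative only.
import numpy as np

def cluster_phones(traces, threshold=15.0):
    """traces: array of shape (n_phones, n_samples, 2) with synchronized x/y
    positions in meters; returns a list of phone-index groups ("vehicles")."""
    n = len(traces)
    # Mean pointwise distance between every pair of traces.
    close = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            d = np.linalg.norm(traces[a] - traces[b], axis=1).mean()
            close[a, b] = close[b, a] = d <= threshold
    # Simple greedy grouping: a phone joins the first group it is close to.
    groups = []
    for p in range(n):
        for g in groups:
            if all(close[p, q] for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Two phones moving together, one separate phone (positions in meters).
t = np.arange(10)[:, None]
traces = np.stack([
    np.hstack([t * 20.0, np.zeros((10, 1))]),          # phone 0
    np.hstack([t * 20.0 + 3.0, np.zeros((10, 1))]),    # phone 1, same vehicle
    np.hstack([t * 20.0, np.full((10, 1), 500.0)]),    # phone 2, elsewhere
])
print(cluster_phones(traces))   # expected: [[0, 1], [2]]
```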
S0377221713001963
To reduce labor-intensive and costly order picking activities, many distribution centers are subdivided into a forward area and a reserve (or bulk) area. The former is a small area where the most popular stock keeping units (SKUs) can conveniently be picked, and the latter is used for replenishing the forward area and for storing SKUs that are not assigned to the forward area at all. Clearly, reducing the number of SKUs stored in the forward area enables a more compact forward area (with reduced picking effort) but requires more frequent replenishment. To tackle this basic trade-off, different versions of the forward–reserve problem determine the SKUs to be stored in the forward area, the space allocated to each SKU, and the overall size of the forward area. As previous research mainly focuses on simplified problem versions (denoted as fluid models), where the forward area can be continuously subdivided, we investigate discrete forward–reserve problems. Important subproblems are defined and their computational complexity is investigated. Furthermore, we experimentally analyze the model gaps between the different fluid models and their discrete counterparts.
The discrete forward–reserve problem – Allocating space, selecting products, and area sizing in forward order picking
S0377221713001975
This study proposes an efficient exact algorithm for the precedence-constrained single-machine scheduling problem to minimize total job completion cost where machine idle time is forbidden. The proposed algorithm is based on the SSDP (Successive Sublimation Dynamic Programming) method and is an extension of the authors’ previous algorithms for the problem without precedence constraints. In this method, a lower bound is computed by solving a Lagrangian relaxation of the original problem via dynamic programming and then it is improved successively by adding constraints to the relaxation until the gap between the lower and upper bounds vanishes. Numerical experiments show that the algorithm can solve all instances with up to 50 jobs of the precedence-constrained total weighted tardiness and total weighted earliness–tardiness problems, and most instances with 100 jobs of the former problem.
An exact algorithm for the precedence-constrained single-machine scheduling problem
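For readers who want to see the underlying combinatorial structure (no machine idle time, precedence-feasible extensions), the following exponential dynamic program over job subsets solves tiny instances of the precedence-constrained total weighted tardiness problem exactly. It is emphatically not the SSDP method of the paper, which scales to instances with 50 and 100 jobs; the helper below is only usable for very small n.

```python
# Sketch: exact dynamic program over job subsets for minimizing total weighted
# tardiness on one machine with precedence constraints and no idle time.
# Exponential in n; shown only to illustrate the problem structure, it is not
# the SSDP algorithm described in the paper.
from functools import lru_cache

def min_weighted_tardiness(p, d, w, preds):
    """p, d, w: processing times, due dates, weights (lists of length n);
    preds[j]: set of jobs that must complete before job j starts."""
    n = len(p)
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def best(done):
        if done == full:
            return 0.0
        t = sum(p[j] for j in range(n) if done >> j & 1)   # no idle time
        value = float("inf")
        for j in range(n):
            if done >> j & 1:
                continue
            if any(not (done >> i & 1) for i in preds[j]):
                continue                                    # predecessors missing
            tardiness = max(0.0, t + p[j] - d[j])
            value = min(value, w[j] * tardiness + best(done | 1 << j))
        return value

    return best(0)

# Tiny example: job 0 must precede job 2.
print(min_weighted_tardiness(p=[3, 2, 4], d=[3, 4, 9], w=[1, 2, 1],
                             preds=[set(), set(), {0}]))
```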
S0377221713001987
The Team Orienteering Problem (TOP) is a particular vehicle routing problem in which the aim is to maximize the profit gained from visiting customers without exceeding a travel cost/time limit. This paper proposes a new and fast evaluation process for TOP based on an interval graph model and a Particle Swarm Optimization inspired Algorithm (PSOiA) to solve the problem. Experiments conducted on the standard benchmark of TOP clearly show that our algorithm outperforms the existing solving methods. PSOiA reached a relative error of 0.0005% whereas the best known relative error in the literature is 0.0394%. Our algorithm detects all but one of the best known solutions. Moreover, a strict improvement was found for one instance of the benchmark and a new set of larger instances was introduced.
An effective PSO-inspired algorithm for the team orienteering problem
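As a baseline for comparison, a TOP solution can be built greedily by repeatedly inserting the unvisited customer with the best profit-per-extra-travel-time ratio while respecting the route time limit. The instance data, the insertion rule and the single-pass construction below are illustrative assumptions and bear no relation to PSOiA or its interval-graph evaluation.

```python
# Sketch: baseline greedy construction for the Team Orienteering Problem.
# Each vehicle repeatedly inserts the unvisited customer with the best
# profit / extra-travel-time ratio while respecting the time limit t_max.
# A simple baseline for illustration, not the PSO-inspired algorithm.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route, pts):
    return sum(dist(pts[route[i]], pts[route[i + 1]]) for i in range(len(route) - 1))

def greedy_top(pts, profit, n_vehicles, t_max, depot=0):
    unvisited = set(range(len(pts))) - {depot}
    routes = []
    for _ in range(n_vehicles):
        route = [depot, depot]                        # start and end at the depot
        while True:
            best = None                               # (ratio, customer, position)
            for c in unvisited:
                for pos in range(1, len(route)):
                    extra = (dist(pts[route[pos - 1]], pts[c])
                             + dist(pts[c], pts[route[pos]])
                             - dist(pts[route[pos - 1]], pts[route[pos]]))
                    if route_length(route) + extra <= t_max:
                        ratio = profit[c] / (extra + 1e-9)
                        if best is None or ratio > best[0]:
                            best = (ratio, c, pos)
            if best is None:
                break
            _, c, pos = best
            route.insert(pos, c)
            unvisited.discard(c)
        routes.append(route)
    total = sum(profit[c] for r in routes for c in r if c != depot)
    return routes, total

pts = [(0, 0), (1, 1), (2, 0), (5, 5), (1, -1)]       # depot + 4 customers
profit = [0, 4, 6, 20, 3]
print(greedy_top(pts, profit, n_vehicles=2, t_max=6.0))
```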
S0377221713001999
Robust optimization problems, which have uncertain data, are considered. We prove surrogate duality theorems for robust quasiconvex optimization problems and surrogate min–max duality theorems for robust convex optimization problems. We give necessary and sufficient constraint qualifications for surrogate duality and surrogate min–max duality, and show some examples at which such duality results are used effectively. Moreover, we obtain a surrogate duality theorem and a surrogate min–max duality theorem for semi-definite optimization problems in the face of data uncertainty.
Surrogate duality for robust optimization
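For orientation, the classical finite-dimensional surrogate dual (without data uncertainty) can be stated as below; the paper's robust versions replace the constraints by their uncertain counterparts and study when the inequality holds with equality under suitable constraint qualifications.

```latex
% Classical (deterministic) surrogate dual of
%   min f(x)  subject to  g_i(x) <= 0, i = 1, ..., m,  x in X.
% The multipliers \lambda aggregate the constraints into a single surrogate
% constraint instead of moving them into the objective as Lagrangian duality does.
\[
  \sup_{\lambda \ge 0}\;
  \inf\Bigl\{ f(x) \,:\, x \in X,\ \sum_{i=1}^{m} \lambda_i\, g_i(x) \le 0 \Bigr\}
  \;\le\;
  \inf\bigl\{ f(x) \,:\, x \in X,\ g_i(x) \le 0,\ i = 1, \dots, m \bigr\}.
\]
```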
S0377221713002002
Production ramp-up is an important phase in the lifecycle of a manufacturing system which still has significant potential for improvement and thereby reducing the time-to-market of new and updated products. Production systems today are mostly one-of-a-kind complex, engineered-to-order systems. Their ramp-up is a complex order of physical and logical adjustments which are characterised by try and error decision making resulting in frequent reiterations and unnecessary repetitions. Studies have shown that clear goal setting and feedback can significantly improve the effectiveness of decision-making in predominantly human decision processes such as ramp-up. However, few measurement-driven decision aides have been reported which focus on ramp-up improvement and no systematic approach for ramp-up time reduction has yet been defined. In this paper, a framework for measuring the performance during ramp-up is proposed in order to support decision making by providing clear metrics based on the measurable and observable status of the technical system. This work proposes a systematic framework for data preparation, ramp-up formalisation, and performance measurement. A model for defining the ramp-up state of a system has been developed in order to formalise and capture its condition. Functionality, quality and performance based metrics have been identified to formalise a clear ramp-up index as a measurement to guide and support the human decision making. For the validation of the proposed framework, two ramp-up processes of an assembly station were emulated and their comparison was used to evaluate this work.
A framework for performance measurement during production ramp-up of assembly stations
S0377221713002014
This paper considers the uncapacitated lot sizing problem with batch delivery, focusing on the general case of time-dependent batch sizes. We study the complexity of the problem, depending on the other cost parameters, namely the setup cost, the fixed cost per batch, the unit procurement cost and the unit holding cost. We establish that if any one of the cost parameters is allowed to be time-dependent, the problem is NP-hard. On the contrary, if all the cost parameters are stationary, and assuming no unit holding cost, we show that the problem is polynomially solvable in time O(T^3), where T denotes the number of periods of the horizon. We also show that, in the case of divisible batch sizes, the problem with time-varying setup costs, a stationary fixed cost per batch and no unit procurement nor holding cost can be solved in time O(T^3 log T).
The single item uncapacitated lot-sizing problem with time-dependent batch sizes: NP-hard and polynomial cases
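The dynamic program below is only a rough illustration of the cost structure involved (setup cost, per-batch cost, unit procurement and holding costs, time-dependent batch sizes). It restricts attention to zero-inventory-ordering plans, which need not be optimal once batch costs are present, so it yields an upper bound rather than the polynomial-time algorithms established in the paper; all data are assumed for illustration.

```python
# Sketch: heuristic DP restricted to zero-inventory-ordering (ZIO) plans for an
# uncapacitated lot-sizing problem with per-batch costs. ZIO plans need not be
# optimal with batch costs, so this is an upper bound for illustration only.
import math

def zio_lot_sizing(d, K, f, c, h, B):
    """d, K, f, c, h, B: per-period demand, setup cost, cost per batch,
    unit procurement cost, unit holding cost (on end-of-period stock),
    and batch size. Returns the best ZIO-plan cost."""
    T = len(d)
    dp = [math.inf] * (T + 1)
    dp[0] = 0.0
    for t in range(1, T + 1):                 # demand of periods 0..t-1 covered
        for s in range(1, t + 1):             # last production in period s-1
            q = sum(d[s - 1:t])
            batches = math.ceil(q / B[s - 1]) if q else 0
            produce = (K[s - 1] if q else 0.0) + f[s - 1] * batches + c[s - 1] * q
            hold = sum(h[u] * sum(d[u + 1:t]) for u in range(s - 1, t - 1))
            dp[t] = min(dp[t], dp[s - 1] + produce + hold)
    return dp[T]

print(zio_lot_sizing(d=[3, 0, 5, 2], K=[10] * 4, f=[4] * 4,
                     c=[1] * 4, h=[0.5] * 4, B=[4] * 4))
```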
S0377221713002026
The attributes of vehicle routing problems are additional characteristics or constraints that aim to better take into account the specificities of real applications. The variants thus formed are supported by a well-developed literature, including a large variety of heuristics. This article first reviews the main classes of attributes, providing a survey of heuristics and meta-heuristics for Multi-Attribute Vehicle Routing Problems (MAVRP). It then takes a closer look at the concepts of 64 remarkable meta-heuristics, selected objectively for their outstanding performance on 15 classic MAVRP with different attributes. This cross-analysis leads to the identification of “winning strategies” in designing effective heuristics for MAVRP. This is an important step in the development of general and efficient solution methods for dealing with the large range of vehicle routing variants.
Heuristics for multi-attribute vehicle routing problems: A survey and synthesis
S0377221713002038
We consider single-item (r, q) and (s, T) inventory systems with integer-valued demand processes. While most of the inventory literature studies continuous approximations of these models and establishes joint convexity properties of the policy parameters in the continuous space, we show that these properties no longer hold in the discrete space, in the sense of linear interpolation extension and L ♮-convexity. This nonconvexity can lead to failure of optimization techniques based on local optimality to obtain the optimal inventory policies. It can also make certain comparative properties established previously using continuous variables invalid. We revise these properties in the discrete space.
On properties of discrete (r, q) and (s, T) inventory systems
S0377221713002051
This article introduces and solves a new rich routing problem integrated with practical operational constraints. The problem examined calls for the determination of the optimal routes for a vehicle fleet to satisfy a mix of two different request types. Firstly, vehicles must transport three-dimensional, rectangular and stackable boxes from a depot to a set of predetermined customers. In addition, vehicles must also transfer products between pairs of pick-up and delivery locations. Service of both request types is subject to hard time window constraints. In addition, feasible palletization patterns must be identified for the transported products. A practical application of the problem arises in the transportation systems of chain stores, where vehicles replenish the retail points by delivering products stored at a central depot, while they are also responsible for transferring stock between pairs of the retailer network. To solve this very complex combinatorial optimization problem, our major objective was to develop an efficient methodology whose required computational effort is kept within reasonable limits. To this end, we propose a local search-based framework for optimizing vehicle routes, in which feasible loading arrangements are identified via a simple-structured packing heuristic. The algorithmic framework is enhanced with various memory components which store and retrieve useful information gathered through the search process, in order to avoid any duplicate unnecessary calculations. The proposed solution approach is assessed on newly introduced benchmark instances.
Designing vehicle routes for a mix of different request types, under time windows and loading constraints
S0377221713002063
This paper highlights the subject of integrated projects planning (IPP) in contemporary IS departments, and presents a multi-period, multi-project selection and assignment approach (MPPA) to assist the departments in handling continuous project-based IS requests. The MPPA features a model to optimize the selection and assignment of IS projects. In the scope of multi-project, multi-period planning, the model innovatively considers the losses due to (1) the accumulated postponement of a previously unselected IS request and (2) the expected delay of ongoing projects when a new project request is inserted. The MPPA also features an event-based decisional process for cumulative selection and assignment on a multi-period basis. Due to the complex and contextual nature of the data in this paper, a computerized system is implemented to aid the execution of the model and the process. The paper reports on an industrial case to demonstrate the proposed approach. Finally, the paper compares the MPPA with related work to summarize the value and role it may play in the IPP context.
Integrated projects planning in IS departments: A multi-period multi-project selection and assignment approach with a computerized implementation
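A heavily simplified flavour of the selection-and-assignment decision can be written as a small integer program: choose at most one start period per IS project subject to per-period capacity. The data, the discounting of later starts and the omission of the paper's postponement and delay-insertion losses are all assumptions made purely for illustration (the sketch uses the PuLP modelling library).

```python
# Sketch: a toy multi-period IS project selection and assignment model.
# It captures only the basic "which project starts in which period under a
# capacity budget" decision; the paper's postponement and delay-insertion
# losses are not modeled here. Data and parameters are illustrative only.
import pulp

values = {"A": 50, "B": 30, "C": 40, "D": 25}          # project benefit
effort = {"A": 6, "B": 3, "C": 5, "D": 2}              # person-months required
periods = [1, 2, 3]
capacity = {1: 8, 2: 6, 3: 6}                          # capacity per period

prob = pulp.LpProblem("project_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("start", (list(values), periods), cat="Binary")

# Objective: benefits, slightly discounted for later starts (assumed rate).
prob += pulp.lpSum(values[p] * (1 - 0.05 * (t - 1)) * x[p][t]
                   for p in values for t in periods)

# Each project starts at most once over the horizon.
for p in values:
    prob += pulp.lpSum(x[p][t] for t in periods) <= 1

# Effort of projects started in a period must fit that period's capacity.
for t in periods:
    prob += pulp.lpSum(effort[p] * x[p][t] for p in values) <= capacity[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p in values:
    for t in periods:
        if x[p][t].value() > 0.5:
            print(f"project {p} starts in period {t}")
```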
S0377221713002075
Data envelopment analysis (DEA) allows us to evaluate the relative efficiency of each of a set of decision-making units (DMUs). However, the methodology does not permit us to identify specific sources of inefficiency because DEA views the DMU as a “black box” that consumes a mix of inputs and produces a mix of outputs. Thus, DEA does not provide a DMU manager with insight regarding the internal source of the organization’s inefficiency. Recent methodological developments have extended the basic DEA methodology to allow the analyst to “look inside” the DMU and model the network of production processes that comprise the organization. In such models, sub-DMUs consume inputs from outside the DMU and intermediate products from other sub-DMUs to produce outputs that flow out of the DMU and intermediate products that flow into other sub-DMUs. In this paper, we present an unoriented two-stage DEA model to measure efficiency in situations in which analysts seek to simultaneously reduce input quantities and increase output quantities. The methodology extends previous work in which the model must be either input-oriented or output-oriented. The key to the methodology is an iterative algorithm that alternates between an input-oriented “push backward” step and an output-oriented “push forward” step that is characterized by damped oscillations in the intermediate products. We apply the methodology to Major League Baseball teams during the 2009 season to demonstrate how this approach provides a deeper understanding of each team’s operations.
Unoriented two-stage DEA: The case of the oscillating intermediate products
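As background for the network extension described above, the standard input-oriented CCR envelopment model, i.e. the single-stage "black box" formulation, is shown below as a linear program for one DMU. It is not the unoriented two-stage model proposed in the paper; the toy data are assumptions for illustration.

```python
# Sketch: the standard input-oriented CCR envelopment model for one DMU,
# solved as a linear program. This is the classic single-stage "black box"
# formulation, shown only to fix notation; it is not the unoriented two-stage
# model proposed in the paper.
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs), o: index of the DMU under
    evaluation. Returns theta in (0, 1]; theta = 1 means technically efficient."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                        # minimize theta
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro  (i.e. outputs at least y_ro)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [6.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
for o in range(len(X)):
    print(f"DMU {o}: theta = {ccr_input_efficiency(X, Y, o):.3f}")
```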
S0377221713002087
Clusterwise regression consists of finding a number of regression functions, each approximating a subset of the data. In this paper, a new approach for solving clusterwise linear regression problems is proposed based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate a good starting point for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find a global or near-global solution to the problem when the data sets are sufficiently dense. The algorithm is compared with the multistart Späth algorithm on several publicly available data sets for regression analysis.
Nonsmooth nonconvex optimization approach to clusterwise linear regression problems
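For contrast with the nonsmooth nonconvex approach, the classical Späth-style heuristic alternates between assigning each observation to its best-fitting regression function and refitting each function on its assigned points. The sketch below implements that baseline on synthetic data; the initialization, cluster count and data are assumptions, and the routine can stall in poor local minima, which is the kind of weakness a good starting-point procedure is meant to address.

```python
# Sketch: a basic Spath-style alternating heuristic for clusterwise linear
# regression (assign each point to its best-fitting line, refit, repeat).
# Shown only as a reference baseline; it is not the nonsmooth nonconvex
# incremental algorithm proposed in the paper.
import numpy as np

def clusterwise_regression(X, y, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    X1 = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
    labels = rng.integers(k, size=len(X))
    coefs = np.zeros((k, X1.shape[1]))
    for _ in range(n_iter):
        for j in range(k):                              # refit each cluster
            mask = labels == j
            if mask.any():
                coefs[j], *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        residuals = (X1 @ coefs.T - y[:, None]) ** 2    # (n_points, k)
        new_labels = residuals.argmin(axis=1)           # reassign points
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, coefs

# Data drawn from two different lines plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)[:, None]
y = np.where(rng.random(200) < 0.5, 2 * x[:, 0] + 1, -x[:, 0] + 8)
y += rng.normal(scale=0.2, size=200)
labels, coefs = clusterwise_regression(x, y, k=2)
print(np.round(coefs, 2))    # approximately [[2, 1], [-1, 8]] in some order
```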
S0377221713002099
This paper clarifies the relation between decisions of a risk-averse decision maker, based on expected utility theory on the one hand, and spectral risk measures on the other. We first demonstrate that recent approaches to this problem generally do not provide strongly consistent results, i.e. they fail to induce identical preference orders simultaneously with both concepts. Then we detail the relation between risk-averse decisions under the dual theory of choice and spectral risk measures. This relation is identified as the fundamental reason why it is not in general possible to establish a simple one-to-one mapping between expected utility theory and spectral risk measures. We are nonetheless able to use spectral risk measures to model decisions obtained using expected utility theory. Interestingly, this implies that a given utility function corresponds to a whole family of risk spectra.
Consistent modeling of risk averse behavior with spectral risk measures
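For reference, the spectral risk measure of a position X with quantile function F_X^{-1} is recalled below (Acerbi's convention); the risk spectrum φ, non-negative, non-increasing and integrating to one, is the weighting that the paper relates to dual-theory preferences and, indirectly, to utility functions.

```latex
% Spectral risk measure of a position X (profit convention) with risk spectrum
% \phi: [0,1] \to [0,\infty), \phi non-increasing and \int_0^1 \phi(p)\,dp = 1.
\[
  M_{\phi}(X) \;=\; -\int_{0}^{1} \phi(p)\, F_X^{-1}(p)\, \mathrm{d}p .
\]
% Expected Shortfall at level \alpha is the special case
% \phi(p) = \alpha^{-1}\,\mathbf{1}\{p \le \alpha\}.
```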