[55] In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.
[56] Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text.
Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.
[57] In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of activity.
This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately.
Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.
[58] Three broad categories of anomaly detection techniques exist.
[59] Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set.
Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involve training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection).
Semi-supervised anomaly detection techniques construct a model representing normal behavior from a given normal training data set and then test the likelihood of a test instance being generated by the model.
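To make the unsupervised setting concrete, here is a minimal sketch of one simple criterion: flag points that lie several standard deviations from the mean. The z-score rule and the threshold of 3 are illustrative assumptions rather than the only (or best) choice; practical detectors range from density estimates to isolation forests.

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Flag points lying more than `threshold` standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std()
    return z > threshold

# Mostly "normal" data with two injected anomalies at the end.
data = np.concatenate([np.random.normal(0.0, 1.0, size=1000), [8.0, -9.5]])
print(np.where(zscore_outliers(data))[0])  # indices of the flagged points
```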
In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, also known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans.
These robots use guidance mechanisms such as active learning, maturation, motor synergies and imitation.
Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases.
It is intended to identify strong rules discovered in databases using some measure of "interestingness".
[60] Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge.
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system.
This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.
[61] Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.
[62] For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat.
Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements.
In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics.
In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
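To see how the "interestingness" of such a rule is usually quantified, here is a minimal sketch computing support and confidence for the {onions, potatoes} ⇒ {burger} rule over a handful of made-up transactions; the item names echo the example above and the numbers are purely illustrative.

```python
# Toy transactions; each is the set of items in one purchase.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "beer"},
    {"onions", "potatoes", "burger", "beer"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

def support(itemset):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent)."""
    return support(antecedent | consequent) / support(antecedent)

lhs, rhs = {"onions", "potatoes"}, {"burger"}
print(support(lhs | rhs))    # 0.4 -- 2 of 5 transactions contain all three items
print(confidence(lhs, rhs))  # 0.666... -- 2 of the 3 onion+potato baskets also contain burger
```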
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning.
They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.
[63] Inductive logic programming (ILP) is an approach to rule-learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses.
Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples.
Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs.
Inductive logic programming is particularly useful in bioinformatics and natural language processing.
Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting.
[64][65][66] Shapiro built the first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.
[67] The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions.
Various types of models have been used and researched for machine learning systems.
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains.
Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain.
Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another.
An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs.
The connections between artificial neurons are called "edges".
Artificial neurons and edges typically have a weight that adjusts as learning proceeds.
The weight increases or decreases the strength of the signal at a connection.
Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold.
Typically, artificial neurons are aggregated into layers.
Different layers may perform different kinds of transformations on their inputs.
Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
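As a concrete illustration of these ideas, here is a minimal forward-pass sketch in which each layer computes a non-linear function of the weighted sum of its inputs. The layer sizes, the ReLU non-linearity, and the random (untrained) weights are arbitrary choices made only for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # a common non-linear activation

def forward(x, layers):
    """Propagate a signal through the network: each neuron emits a
    non-linear function of the weighted sum of its inputs plus a bias."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

rng = np.random.default_rng(0)
# Input of size 4, two hidden layers, one output neuron (weights untrained).
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),
    (rng.normal(size=(3, 8)), np.zeros(3)),
    (rng.normal(size=(1, 3)), np.zeros(1)),
]
print(forward(rng.normal(size=4), layers))
```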
The original goal of the ANN approach was to solve problems in the same way that a human brain would.
However, over time, attention moved to performing specific tasks, leading to deviations from biology.
Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
Deep learning consists of multiple hidden layers in an artificial neural network.
This approach tries to model the way the human brain processes light and sound into vision and hearing.
Some successful applications of deep learning are computer vision and speech recognition.
[68] Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves).
It is one of the predictive modeling approaches used in statistics, data mining, and machine learning.
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels.
Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees.
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making.
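A minimal sketch of how such a tree is applied: internal nodes test features against thresholds (the branches), and leaves carry the predicted class labels. The features, thresholds, and labels below are hand-picked for illustration, not learned from data.

```python
def classify(item, tree):
    """Walk the tree from the root: follow the branch chosen by each feature
    test until a leaf (a class label) is reached."""
    while isinstance(tree, dict):
        branch = "left" if item[tree["feature"]] <= tree["threshold"] else "right"
        tree = tree[branch]
    return tree

# A hypothetical hand-built classification tree over two numeric features.
tree = {
    "feature": "petal_length", "threshold": 2.5,
    "left": "setosa",
    "right": {
        "feature": "petal_width", "threshold": 1.7,
        "left": "versicolor", "right": "virginica",
    },
}
print(classify({"petal_length": 4.5, "petal_width": 1.4}, tree))  # versicolor
```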
Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression.
Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.
[69] An SVM training algorithm produces a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting.
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
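A minimal sketch of both points, assuming scikit-learn is available: the RBF kernel performs non-linear classification via the kernel trick, and probability=True enables Platt scaling so the classifier can emit class probabilities. The synthetic data is for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic two-class data.
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0, random_state=0)

# Non-linear SVM via the RBF kernel; probability=True fits Platt scaling
# (a logistic model on the SVM scores) to produce class probabilities.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
print(clf.predict_proba(X[:3]))
```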
Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features.
Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares.
The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression.
When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.
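A minimal sketch of the two linear cases on synthetic data: ordinary least squares solved directly, and ridge regression obtained by adding an L2 penalty that shrinks the weights. The regularization strength is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Ordinary least squares: minimize ||Xw - y||^2.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: add an L2 penalty lam * ||w||^2 to mitigate overfitting.
lam = 1.0  # regularization strength (illustrative)
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("OLS weights:  ", w_ols)
print("Ridge weights:", w_ridge)
```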
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG).
For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms.
Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Efficient algorithms exist that perform inference and learning.
Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks.
Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
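For the disease/symptom example, here is a minimal sketch of inference in the smallest possible network, a single Disease → Symptom edge. All probabilities are made up for illustration, and real networks rely on general-purpose inference algorithms rather than a hand-written application of Bayes' rule.

```python
# Two-node Bayesian network: Disease -> Symptom (illustrative probabilities).
p_disease = 0.01                    # prior P(disease)
p_symptom_given_disease = 0.90      # P(symptom | disease)
p_symptom_given_healthy = 0.05      # P(symptom | no disease)

# Inference: given the symptom is observed, compute P(disease | symptom).
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))  # ~0.154
```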
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem.
In machine learning, genetic algorithms were used in the 1980s and 1990s.
[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
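A minimal sketch of the selection, crossover, and mutation loop on a toy problem (maximize the number of 1 bits in a genotype); the population size, mutation rate, and selection scheme are arbitrary illustrative choices.

```python
import random

random.seed(0)
LENGTH = 20

def fitness(genotype):
    return sum(genotype)  # toy objective: count of 1 bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genotype, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in genotype]  # flip bits at random

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:15]  # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

print(fitness(population[0]), population[0])
```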
[73] Machine learning models usually require a large amount of data in order to perform well.
When training a machine learning model, one typically needs to collect a large, representative sample of data from a training set.
Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service.
Overfitting is something to watch out for when training a machine learning model.
Models trained on biased data can produce skewed or undesired predictions.
Algorithmic bias can result from data that has not been fully prepared for training.
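One common way to detect overfitting is to hold out data the model never sees during training and compare the two scores. Below is a minimal sketch, assuming scikit-learn is available and using a deliberately unconstrained decision tree on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# An unconstrained tree can memorize the training set; a large gap between
# training and held-out accuracy is a symptom of overfitting.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```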
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server.
This also increases efficiency by decentralizing the training process to many devices.
For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.
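A minimal sketch of the underlying federated-averaging idea: each client fits a shared model on its own private data, and only the resulting weights (never the raw data) are sent back and averaged by the server. The linear model, synthetic per-client data, and fixed number of rounds are illustrative simplifications; production systems such as Gboard's are far more involved.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's contribution: a few gradient-descent steps on a linear
    model using only that client's local data."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
# Four clients, each holding its own private data set.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds
    client_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(client_weights, axis=0)  # federated averaging on the server

print(global_w)  # approaches true_w without the server ever seeing the raw data
```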
[74] There are many applications for machine learning.
In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%.
A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.
[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and changed its recommendation engine accordingly.
[77] In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.
[78] In 2012, Vinod Khosla, co-founder of Sun Microsystems, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.
[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.
[80] In 2019, Springer Nature published the first research book created using machine learning.
[81] Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.
[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.
[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.
[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.
[87][88] Machine learning has been used as a strategy to update the evidence related to systematic reviews and to address the increased reviewer burden related to the growth of biomedical literature.
While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity of the research findings themselves.
[89] Machine learning approaches in particular can suffer from different data biases.
A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data.