Kernel density estimation
In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy.
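As a minimal illustration (hypothetical function name, assuming NumPy and a Gaussian kernel), the following sketch evaluates the kernel density estimate f̂(x) = (1/(nh)) Σᵢ K((x − xᵢ)/h) on a grid of query points; the bandwidth h is left to the user and is purely illustrative here.

```python
import numpy as np

def gaussian_kde(samples, query_points, bandwidth):
    """Kernel density estimate with a Gaussian kernel:
    f_hat(x) = (1 / (n * h)) * sum_i K((x - x_i) / h),
    where K is the standard normal density and h is the bandwidth."""
    samples = np.asarray(samples, dtype=float)
    query_points = np.asarray(query_points, dtype=float)
    n = samples.size
    # Pairwise scaled differences between query points and samples.
    u = (query_points[:, None] - samples[None, :]) / bandwidth
    kernel_values = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kernel_values.sum(axis=1) / (n * bandwidth)

# Example: estimate the density of a bimodal sample on a grid.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 300)])
grid = np.linspace(-5, 5, 101)
density = gaussian_kde(data, grid, bandwidth=0.3)
print(density.sum() * (grid[1] - grid[0]))  # close to 1, since the estimate integrates to 1
```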
Kernel embedding of distributions
In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space Ω {\displaystyle \Omega } on which a sensible kernel function (measuring similarity between elements of Ω {\displaystyle \Omega } ) may be defined. For example, various kernels have been proposed for learning from data which are: vectors in R d {\displaystyle \mathbb {R} ^{d}} , discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf. Reviews of recent work on the kernel embedding of distributions can be found in the literature. The analysis of distributions is fundamental in machine learning and statistics, and many algorithms in these fields rely on information-theoretic approaches such as entropy, mutual information, or Kullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning or bias-correction strategies which are typically infeasible for high-dimensional data. Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g. Gaussian mixture models), while nonparametric methods like kernel density estimation (note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) or characteristic function representation (via the Fourier transform of the distribution) break down in high-dimensional settings. Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages:
- Data may be modeled without restrictive assumptions about the form of the distributions and the relationships between variables.
- Intermediate density estimation is not needed.
- Practitioners may specify the properties of a distribution most relevant for their problem (incorporating prior knowledge via the choice of kernel).
- If a characteristic kernel is used, the embedding uniquely preserves all information about a distribution, while, thanks to the kernel trick, computations on the potentially infinite-dimensional RKHS can be implemented in practice as simple Gram matrix operations.
- Dimensionality-independent rates of convergence for the empirical kernel mean (estimated using samples from the distribution) to the kernel embedding of the true underlying distribution can be proven.
Learning algorithms based on this framework exhibit good generalization ability and finite sample convergence, while often being simpler and more effective than information-theoretic methods. Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information-theoretic approaches, and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but can also lead to entirely new learning algorithms.
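As an illustration of how Gram matrix operations stand in for computations on the infinite-dimensional embeddings, the following sketch (hypothetical helper names, assuming NumPy and a Gaussian kernel) computes a biased estimate of the maximum mean discrepancy (MMD), i.e. the RKHS distance between two empirical kernel mean embeddings.

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel Gram matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd_squared(X, Y, gamma=1.0):
    """Biased estimate of the squared MMD between samples X and Y.
    MMD^2 = ||mu_X - mu_Y||^2 in the RKHS, where mu_X, mu_Y are the empirical
    kernel mean embeddings; expanding the norm leaves only Gram-matrix terms."""
    Kxx = rbf_gram(X, X, gamma)
    Kyy = rbf_gram(Y, Y, gamma)
    Kxy = rbf_gram(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd_squared(X[:100], X[100:], gamma=0.5))  # small: samples from the same distribution
print(mmd_squared(X, Y, gamma=0.5))              # larger: the second sample is shifted
```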
Knowledge graph embedding
In representation learning, knowledge graph embedding (KGE), also referred to as knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction.
Knowledge integration
Knowledge integration is the process of synthesizing multiple knowledge models (or representations) into a common model (representation). Compared to information integration, which involves merging information having different schemas and representation models, knowledge integration focuses more on synthesizing the understanding of a given subject from different perspectives. For example, multiple interpretations of a set of student grades are possible, each typically reflecting a particular perspective. An overall, integrated view and understanding of this information can be achieved if these interpretations can be put under a common model, say, a student performance index. The Web-based Inquiry Science Environment (WISE), from the University of California at Berkeley, has been developed along the lines of knowledge integration theory. Knowledge integration has also been studied as the process of incorporating new information into a body of existing knowledge with an interdisciplinary approach. This process involves determining how the new information and the existing knowledge interact, how existing knowledge should be modified to accommodate the new information, and how the new information should be modified in light of the existing knowledge. A learning agent that actively investigates the consequences of new information can detect and exploit a variety of learning opportunities; e.g., to resolve knowledge conflicts and to fill knowledge gaps. By exploiting these learning opportunities the learning agent is able to learn beyond the explicit content of the new information. The machine learning program KI, developed by Murray and Porter at the University of Texas at Austin, was created to study the use of automated and semi-automated knowledge integration to assist knowledge engineers in constructing a large knowledge base. A possible technique which can be used is semantic matching. More recently, a technique based on minimal mappings has been presented that is useful for minimizing the effort in mapping validation and visualization. Minimal mappings are high-quality mappings such that i) all the other mappings can be computed from them in time linear in the size of the input graphs, and ii) none of them can be dropped without losing property i). The University of Waterloo operates a Bachelor of Knowledge Integration undergraduate degree program as an academic major or minor. The program started in 2008.
Labeled data
Labeled data is a group of samples that have been tagged with one or more labels. Labeling typically takes a set of unlabeled data and augments each piece of it with informative tags. For example, a data label might indicate whether a photo contains a horse or a cow, which words were uttered in an audio recording, what type of action is being performed in a video, what the topic of a news article is, what the overall sentiment of a tweet is, or whether a dot in an X-ray is a tumor. Labels can be obtained by asking humans to make judgments about a given piece of unlabeled data. Labeled data is significantly more expensive to obtain than the raw unlabeled data. The quality of labeled data directly influences the performance of supervised machine learning models in operation, as these models learn from the provided labels.
Lazy learning
In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize the training data before receiving queries. The primary motivation for employing lazy learning, as in the k-nearest neighbors algorithm used by online recommendation systems ("people who viewed/purchased/listened to this movie/item/tune also ..."), is that the data set is continuously updated with new entries (e.g., new items for sale at Amazon, new movies to view at Netflix, new clips at YouTube, new music at Spotify or Pandora). Because of the continuous updates, the "training data" would be rendered obsolete in a relatively short time, especially in areas like books and movies, where new best-sellers or hit movies/music are published/released continuously. Therefore, one cannot really talk of a "training phase". Lazy classifiers are most useful for large, continuously changing datasets with few attributes that are commonly queried. Specifically, even if a large set of attributes exists - for example, books have a year of publication, author/s, publisher, title, edition, ISBN, selling price, etc. - recommendation queries rely on far fewer attributes - e.g., purchase or viewing co-occurrence data, and user ratings of items purchased/viewed.
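A minimal sketch of a lazy learner (hypothetical class name, assuming NumPy): "training" merely stores the examples, the data set can keep growing, and all generalization work is deferred to query time, as in k-nearest neighbors.

```python
import numpy as np

class LazyKNN:
    """Minimal k-nearest-neighbour classifier: adding data only stores it,
    and all computation happens when a query arrives."""

    def __init__(self, k=3):
        self.k = k
        self.X = np.empty((0, 0))
        self.y = np.empty(0, dtype=int)

    def add(self, X_new, y_new):
        # The data set can grow continuously; no model is refit.
        X_new = np.atleast_2d(X_new)
        self.X = X_new if self.X.size == 0 else np.vstack([self.X, X_new])
        self.y = np.concatenate([self.y, np.atleast_1d(y_new)])

    def predict(self, x):
        # All work happens here, against the current data set.
        d = np.linalg.norm(self.X - np.asarray(x), axis=1)
        nearest = np.argsort(d)[: self.k]
        labels, counts = np.unique(self.y[nearest], return_counts=True)
        return labels[np.argmax(counts)]

model = LazyKNN(k=3)
model.add([[0, 0], [0, 1], [5, 5], [6, 5]], [0, 0, 1, 1])
print(model.predict([0.2, 0.5]))   # 0
model.add([[0.3, 0.4]], [1])       # new entries can arrive at any time
print(model.predict([0.2, 0.5]))   # recomputed against the updated data set
```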
Leakage (machine learning)
In statistics and machine learning, leakage (also known as data leakage or target leakage) is the use of information in the model training process which would not be expected to be available at prediction time, causing the predictive scores (metrics) to overestimate the model's utility when run in a production environment. Leakage is often subtle and indirect, making it hard to detect and eliminate. Leakage can cause a statistician or modeler to select a suboptimal model, which could be outperformed by a leakage-free model.
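As a hedged, synthetic illustration (hypothetical variable names, assuming NumPy and scikit-learn), the sketch below shows target leakage: a feature derived from the label inflates the offline evaluation score, and the apparent skill largely disappears once that information is unavailable, as it would be at prediction time in production.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                                   # legitimate features
y = (x[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# A "leaky" feature: derived from the label itself, so it would not
# actually be available at prediction time in production.
leak = y + rng.normal(scale=0.1, size=n)
X_leaky = np.column_stack([x, leak])

X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("offline score (with leakage):", model.score(X_te, y_te))   # looks excellent

# In production the leaked column is missing; replacing it with its training
# mean shows how much of the apparent skill was an artifact of leakage.
X_te_prod = X_te.copy()
X_te_prod[:, 3] = X_tr[:, 3].mean()
print("production-like score:", model.score(X_te_prod, y_te))     # typically much worse
```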
Learnable function class
In statistical learning theory, a learnable function class is a set of functions for which an algorithm can be devised to asymptotically minimize the expected risk, uniformly over all probability distributions. The concept of a learnable class is closely related to regularization in machine learning, and provides large-sample justifications for certain learning algorithms.
Learning automaton
A learning automaton is one type of machine learning algorithm studied since the 1970s. Learning automata select their current action based on past experiences from the environment. They fall within the scope of reinforcement learning if the environment is stochastic and a Markov decision process (MDP) is used.
Learning curve (machine learning)
In machine learning, a learning curve (or training curve) plots the optimal value of a model's loss function for a training set against this loss function evaluated on a validation data set with the same parameters that produced the optimal function. Synonyms include error curve, experience curve, improvement curve and generalization curve. More abstractly, the learning curve is a curve of (learning effort)-(predictive performance), where usually learning effort means the number of training samples and predictive performance means accuracy on testing samples. The learning curve is useful for many purposes, including comparing different algorithms, choosing model parameters during design, adjusting optimization to improve convergence, and determining the amount of data used for training.
Learning rate
In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model "learns". In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting. While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. Too high a learning rate will make the learning jump over minima, while too low a learning rate will either take too long to converge or get stuck in an undesirable local minimum. To achieve faster convergence, prevent oscillations, and avoid getting stuck in undesirable local minima, the learning rate is often varied during training, either in accordance with a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method. The learning rate is related to the step length determined by inexact line search in quasi-Newton methods and related optimization algorithms. The initial rate can be left as a system default or can be selected using a range of techniques. A learning rate schedule changes the learning rate during learning and is most often changed between epochs/iterations. This is mainly done with two parameters: decay and momentum. There are many different learning rate schedules, but the most common are time-based, step-based and exponential. Decay serves to settle the learning in a good region and avoid oscillations, a situation that may arise when a too-high constant learning rate makes the learning jump back and forth over a minimum; it is controlled by a hyperparameter. Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to a ball's mass which must be chosen manually—too high and the ball will roll over minima which we wish to find, too low and it will not fulfil its purpose. The formula for factoring in the momentum is more complex than for decay but is most often built into deep learning libraries such as Keras. Time-based learning schedules alter the learning rate depending on the learning rate of the previous time iteration. Factoring in the decay, the mathematical formula for the learning rate is: η n + 1 = η n 1 + d n {\displaystyle \eta _{n+1}={\frac {\eta _{n}}{1+dn}}} where η {\displaystyle \eta } is the learning rate, d {\displaystyle d} is a decay parameter and n {\displaystyle n} is the iteration step. Step-based learning schedules change the learning rate according to some predefined steps.
The decay application formula is here defined as: η n = η 0 d ⌊ 1 + n r ⌋ {\displaystyle \eta _{n}=\eta _{0}d^{\left\lfloor {\frac {1+n}{r}}\right\rfloor }} where η n {\displaystyle \eta _{n}} is the learning rate at iteration n {\displaystyle n} , η 0 {\displaystyle \eta _{0}} is the initial learning rate, d {\displaystyle d} is how much the learning rate should change at each drop (0.5 corresponds to a halving) and r {\displaystyle r} corresponds to the drop rate, or how often the rate should be dropped (10 corresponds to a drop every 10 iterations). The floor function ( ⌊ … ⌋ {\displaystyle \lfloor \dots \rfloor } ) rounds its argument down to the nearest integer; in particular, it maps all non-negative inputs smaller than 1 to 0. Exponential learning schedules are similar to step-based, but instead of steps, a decreasing exponential function is used. The mathematical formula for factoring in the decay is: η n = η 0 e − d n {\displaystyle \eta _{n}=\eta _{0}e^{-dn}} where d {\displaystyle d} is a decay parameter. The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms such as Adagrad, Adadelta, RMSprop, and Adam, which are generally built into deep learning libraries such as Keras.
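A minimal sketch of the three schedules defined above (hypothetical function names, assuming NumPy); the initial rate and decay parameters are purely illustrative.

```python
import numpy as np

def time_based(eta0, d, n):
    """Time-based schedule: eta_{n+1} = eta_n / (1 + d * n), applied iteratively from eta0."""
    eta = eta0
    for step in range(n):
        eta = eta / (1.0 + d * step)
    return eta

def step_based(eta0, d, r, n):
    """Step-based schedule: eta_n = eta0 * d ** floor((1 + n) / r)."""
    return eta0 * d ** np.floor((1 + n) / r)

def exponential(eta0, d, n):
    """Exponential schedule: eta_n = eta0 * exp(-d * n)."""
    return eta0 * np.exp(-d * n)

for n in (0, 10, 20, 50):
    print(n,
          time_based(0.1, d=0.01, n=n),
          step_based(0.1, d=0.5, r=10, n=n),   # halves the rate every 10 iterations
          exponential(0.1, d=0.05, n=n))
```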
Learning to rank
Learning to rank or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems. Training data may, for example, consist of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The goal of constructing the ranking model is to rank new, unseen lists in a similar way to rankings in the training data.
Learning with errors
In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear n {\displaystyle n} -ary function f {\displaystyle f} over a finite ring from given samples y i = f ( x i ) {\displaystyle y_{i}=f(\mathbf {x} _{i})} some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography. More precisely, the LWE problem is defined as follows. Let Z q {\displaystyle \mathbb {Z} _{q}} denote the ring of integers modulo q {\displaystyle q} and let Z q n {\displaystyle \mathbb {Z} _{q}^{n}} denote the set of n {\displaystyle n} -vectors over Z q {\displaystyle \mathbb {Z} _{q}} . There exists a certain unknown linear function f : Z q n → Z q {\displaystyle f:\mathbb {Z} _{q}^{n}\rightarrow \mathbb {Z} _{q}} , and the input to the LWE problem is a sample of pairs ( x , y ) {\displaystyle (\mathbf {x} ,y)} , where x ∈ Z q n {\displaystyle \mathbf {x} \in \mathbb {Z} _{q}^{n}} and y ∈ Z q {\displaystyle y\in \mathbb {Z} _{q}} , so that with high probability y = f ( x ) {\displaystyle y=f(\mathbf {x} )} . Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function f {\displaystyle f} , or some close approximation thereof, with high probability. The LWE problem was introduced by Oded Regev in 2005 (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems, such as the ring learning with errors key exchange by Peikert.
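As a toy, non-cryptographic illustration (hypothetical parameters, assuming NumPy), the sketch below draws LWE samples (x, y) with y = ⟨x, s⟩ + e mod q for a small error term e; real schemes use much larger parameters and carefully specified noise distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 8, 20                 # modulus, dimension, number of samples (toy sizes)

secret = rng.integers(0, q, size=n) # the unknown linear function f(x) = <x, s> mod q

def lwe_samples(num):
    """Draw (x, y) pairs with y = <x, secret> + e (mod q), e a small error term."""
    X = rng.integers(0, q, size=(num, n))
    noise = rng.integers(-2, 3, size=num)   # small errors; the exact noise model varies
    y = (X @ secret + noise) % q
    return X, y

X, y = lwe_samples(m)
# Without the noise, Gaussian elimination over Z_q would recover the secret;
# with noise, recovering it is conjectured to be hard for suitable parameters.
print(X.shape, y[:5])
```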
Life-time of correlation
In probability theory and related fields, the life-time of correlation measures the timespan over which there is appreciable autocorrelation or cross-correlation in stochastic processes.
Linear predictor function
In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable. This sort of function usually comes in linear regression, where the coefficients are called regression coefficients. However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis. In many of these models, the coefficients are referred to as "weights". The basic form of a linear predictor function f ( i ) {\displaystyle f(i)} for data point i (consisting of p explanatory variables), for i = 1, ..., n, is f ( i ) = β 0 + β 1 x i 1 + ⋯ + β p x i p , {\displaystyle f(i)=\beta _{0}+\beta _{1}x_{i1}+\cdots +\beta _{p}x_{ip},} where x i k {\displaystyle x_{ik}} , for k = 1, ..., p, is the value of the k-th explanatory variable for data point i, and β 0 , … , β p {\displaystyle \beta _{0},\ldots ,\beta _{p}} are the coefficients (regression coefficients, weights, etc.) indicating the relative effect of a particular explanatory variable on the outcome. It is common to write the predictor function in a more compact form as follows: The coefficients β0, β1, ..., βp are grouped into a single vector β of size p + 1. For each data point i, an additional explanatory pseudo-variable xi0 is added, with a fixed value of 1, corresponding to the intercept coefficient β0. The resulting explanatory variables xi0(= 1), xi1, ..., xip are then grouped into a single vector xi of size p + 1. This makes it possible to write the linear predictor function as follows: f ( i ) = β ⋅ x i {\displaystyle f(i)={\boldsymbol {\beta }}\cdot \mathbf {x} _{i}} using the notation for a dot product between two vectors. An equivalent form using matrix notation is as follows: f ( i ) = β T x i = x i T β {\displaystyle f(i)={\boldsymbol {\beta }}^{\mathrm {T} }\mathbf {x} _{i}=\mathbf {x} _{i}^{\mathrm {T} }{\boldsymbol {\beta }}} where β {\displaystyle {\boldsymbol {\beta }}} and x i {\displaystyle \mathbf {x} _{i}} are assumed to be a (p+1)-by-1 column vectors, β T {\displaystyle {\boldsymbol {\beta }}^{\mathrm {T} }} is the matrix transpose of β {\displaystyle {\boldsymbol {\beta }}} (so β T {\displaystyle {\boldsymbol {\beta }}^{\mathrm {T} }} is a 1-by-(p+1) row vector), and β T x i {\displaystyle {\boldsymbol {\beta }}^{\mathrm {T} }\mathbf {x} _{i}} indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar. An example of the usage of a linear predictor function is in linear regression, where each data point is associated with a continuous outcome yi, and the relationship written y i = f ( i ) + ε i = β T x i + ε i , {\displaystyle y_{i}=f(i)+\varepsilon _{i}={\boldsymbol {\beta }}^{\mathrm {T} }\mathbf {x} _{i}\ +\varepsilon _{i},} where ε i {\displaystyle \varepsilon _{i}} is a disturbance term or error variable — an unobserved random variable that adds noise to the linear relationship between the dependent variable and predictor function. 
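A minimal sketch (hypothetical variable names, assuming NumPy) of the compact form described above: each data point is augmented with the pseudo-variable x_{i0} = 1, so the predictor is a single dot product β · x_i, and a continuous outcome is generated as y_i = f(i) + ε_i.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2
beta = np.array([1.5, -2.0, 0.7])        # beta_0 (intercept), beta_1, beta_2
X = rng.normal(size=(n, p))

# Compact form: add the pseudo-variable x_{i0} = 1 to every data point, so that
# f(i) = beta_0 + beta_1 * x_{i1} + beta_2 * x_{i2} becomes a single dot product.
X_aug = np.column_stack([np.ones(n), X])
f = X_aug @ beta

# Linear regression setting: the observed outcome adds a disturbance term eps_i.
eps = rng.normal(scale=0.3, size=n)
y = f + eps
print(f[:3])
print(y[:3])
```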
In some models (standard linear regression, in particular), the equations for each of the data points i = 1, ..., n are stacked together and written in vector form as y = X β + ε , {\displaystyle \mathbf {y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},\,} where y = ( y 1 y 2 ⋮ y n ) , X = ( x 1 ′ x 2 ′ ⋮ x n ′ ) = ( x 11 ⋯ x 1 p x 21 ⋯ x 2 p ⋮ ⋱ ⋮ x n 1 ⋯ x n p ) , β = ( β 1 ⋮ β p ) , ε = ( ε 1 ε 2 ⋮ ε n ) . {\displaystyle \mathbf {y} ={\begin{pmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{pmatrix}},\quad \mathbf {X} ={\begin{pmatrix}\mathbf {x} '_{1}\\\mathbf {x} '_{2}\\\vdots \\\mathbf {x} '_{n}\end{pmatrix}}={\begin{pmatrix}x_{11}&\cdots &x_{1p}\\x_{21}&\cdots &x_{2p}\\\vdots &\ddots &\vdots \\x_{n1}&\cdots &x_{np}\end{pmatrix}},\quad {\boldsymbol {\beta }}={\begin{pmatrix}\beta _{1}\\\vdots \\\beta _{p}\end{pmatrix}},\quad {\boldsymbol {\varepsilon }}={\begin{pmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\vdots \\\varepsilon _{n}\end{pmatrix}}.} The matrix X is known as the design matrix and encodes all known information about the independent variables. The variables ε i {\displaystyle \varepsilon _{i}} are random variables, which in standard linear regression are distributed according to a standard normal distribution; they express the influence of any unknown factors on the outcome. This makes it possible to find optimal coefficients through the method of least squares using simple matrix operations. In particular, the optimal coefficients β ^ {\displaystyle {\boldsymbol {\hat {\beta }}}} as estimated by least squares can be written as follows: β ^ = ( X T X ) − 1 X T y . {\displaystyle {\boldsymbol {\hat {\beta }}}=(X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }\mathbf {y} .} The matrix ( X T X ) − 1 X T {\displaystyle (X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }} is known as the Moore–Penrose pseudoinverse of X. The use of the matrix inverse in this formula requires that X is of full rank, i.e. there is not perfect multicollinearity among different explanatory variables (i.e. no explanatory variable can be perfectly predicted from the others). In such cases, the singular value decomposition can be used to compute the pseudoinverse. When a fixed set of nonlinear functions are used to transform the value(s) of a data point, these functions are known as basis functions. An example is polynomial regression, which uses a linear predictor function to fit an arbitrary degree polynomial relationship (up to a given order) between two sets of data points (i.e. a single real-valued explanatory variable and a related real-valued dependent variable), by adding multiple explanatory variables corresponding to various powers of the existing explanatory variable. Mathematically, the form looks like this: y i = β 0 + β 1 x i + β 2 x i 2 + ⋯ + β p x i p . {\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i}+\beta _{2}x_{i}^{2}+\cdots +\beta _{p}x_{i}^{p}.} In this case, for each data point i, a set of explanatory variables is created as follows: ( x i 1 = x i , x i 2 = x i 2 , … , x i p = x i p ) {\displaystyle (x_{i1}=x_{i},\quad x_{i2}=x_{i}^{2},\quad \ldots ,\quad x_{ip}=x_{i}^{p})} and then standard linear regression is run. The basis functions in this example would be ϕ ( x ) = ( ϕ 1 ( x ) , ϕ 2 ( x ) , … , ϕ p ( x ) ) = ( x , x 2 , … , x p ) . 
{\displaystyle {\boldsymbol {\phi }}(x)=(\phi _{1}(x),\phi _{2}(x),\ldots ,\phi _{p}(x))=(x,x^{2},\ldots ,x^{p}).} This example shows that a linear predictor function can actually be much more powerful than it first appears: It only really needs to be linear in the coefficients. All sorts of non-linear functions of the explanatory variables can be fit by the model. There is no particular need for the inputs to basis functions to be univariate or single-dimensional (or their outputs, for that matter, although in such a case, a K-dimensional output value is likely to be treated as K separate scalar-output basis functions). An example of this is radial basis functions (RBF's), which compute some transformed version of the distance to some fixed point: ϕ ( x ; c ) = ϕ ( | | x − c | | ) = ϕ ( ( x 1 − c 1 ) 2 + … + ( x K − c K ) 2 ) {\displaystyle \phi (\mathbf {x} ;\mathbf {c} )=\phi (||\mathbf {x} -\mathbf {c} ||)=\phi ({\sqrt {(x_{1}-c_{1})^{2}+\ldots +(x_{K}-c_{K})^{2}}})} An example is the Gaussian RBF, which has the same functional form as the normal distribution: ϕ ( x ; c ) = e − b | | x − c | | 2 {\displaystyle \phi (\mathbf {x} ;\mathbf {c} )=e^{-b||\mathbf {x} -\mathbf {c} ||^{2}}} which drops off rapidly as the distance from c increases. A possible usage of RBF's is to create one for every observed data point. This means that the result of an RBF applied to a new data point will be close to 0 unless the new point is near to the point around which the RBF was applied. That is, the application of the radial basis functions will pick out the nearest point, and its regression coefficient will dominate. The result will be a form of nearest neighbor interpolation, where predictions are made by simply using the prediction of the nearest observed data point, possibly interpolating between multiple nearby data points when they are all similar distances away. This type of nearest neighbor method for prediction is often considered diametrically opposed to the type of prediction used in standard linear regression: But in fact, the transformations that can be applied to the explanatory variables in a linear predictor function are so powerful that even the nearest neighbor method can be implemented as a type of linear regression. It is even possible to fit some functions that appear non-linear in the coefficients by transforming the coefficients into new coefficients that do appear linear. For example, a function of the form a + b 2 x i 1 + c x i 2 {\displaystyle a+b^{2}x_{i1}+{\sqrt {c}}x_{i2}} for coefficients a , b , c {\displaystyle a,b,c} could be transformed into the appropriate linear function by applying the substitutions b ′ = b 2 , c ′ = c , {\displaystyle b'=b^{2},c'={\sqrt {c}},} leading to a + b ′ x i 1 + c ′ x i 2 , {\displaystyle a+b'x_{i1}+c'x_{i2},} which is linear. Linear regression and similar techniques could be applied and will often still find the optimal coefficients, but their error estimates and such will be wrong. The explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (e.g. income, age, blood pressure, etc.) and discrete variables (e.g. sex, race, political party, etc.). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), i.e. 
separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have the given value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" would be converted to separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. Note that, for K categories, not all K dummy variables are independent of each other. For example, in the above blood type example, only three of the four dummy variables are independent, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it's really only necessary to encode three of the four possibilities as dummy variables, and in fact if all four possibilities are encoded, the overall model becomes non-identifiable. This causes problems for a number of methods, such as the simple closed-form solution used in linear regression. The solution is either to avoid such cases by eliminating one of the dummy variables, and/or introduce a regularization constraint (which necessitates a more powerful, typically iterative, method for finding the optimal coefficients).
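The following sketch (hypothetical names, assuming NumPy) ties together two of the constructions above: a polynomial basis fed into ordinary least squares via the pseudoinverse, and dummy (indicator) coding of a categorical variable with one category dropped to keep the model identifiable.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=80)
y = 1.0 - 0.5 * x + 0.8 * x**2 + rng.normal(scale=0.2, size=80)   # quadratic relationship

# Polynomial basis: create explanatory variables x and x^2 plus an intercept column,
# then run ordinary least squares, beta_hat = (X^T X)^{-1} X^T y.
X = np.column_stack([np.ones_like(x), x, x**2])
beta_hat = np.linalg.pinv(X) @ y        # the pseudoinverse also handles rank deficiency
print(beta_hat)                         # approximately [1.0, -0.5, 0.8]

# Dummy (indicator) variables for a four-way categorical variable, dropping one
# category ("O") so the coding stays identifiable alongside the intercept.
blood = np.array(["A", "B", "AB", "O", "A", "AB"])
categories = ["A", "B", "AB"]           # "O" is the reference category
dummies = np.column_stack([(blood == c).astype(float) for c in categories])
print(dummies)
```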
Linear separability
In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane. The problem of determining if a pair of sets is linearly separable and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms exist that are based on this concept.
Local case-control sampling
In machine learning, local case-control sampling is an algorithm used to reduce the complexity of training a logistic regression classifier. The algorithm reduces the training complexity by selecting a small subsample of the original dataset for training. It assumes the availability of a (possibly unreliable) pilot estimate of the parameters. It then performs a single pass over the entire dataset using the pilot estimate to identify the most "surprising" samples. In practice, the pilot may come from prior knowledge or from training on a subsample of the dataset. The algorithm is most effective when the underlying dataset is imbalanced. It exploits the structures of conditional imbalanced datasets more efficiently than alternative methods, such as case-control sampling and weighted case-control sampling.
M-theory (learning framework)
In machine learning and computer vision, M-theory is a learning framework inspired by feed-forward processing in the ventral stream of the visual cortex and originally developed for the recognition and classification of objects in visual scenes. M-theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on a specific instantiation of M-theory, HMAX, achieved human-level performance. The core principle of M-theory is extracting representations that are invariant under various transformations of images (translation, scale, 2D and 3D rotation, and others). In contrast with other approaches using invariant representations, in M-theory they are not hardcoded into the algorithms but learned. M-theory also shares some principles with compressed sensing. The theory proposes a multilayered hierarchical learning architecture, similar to that of the visual cortex.
Machine Learning (journal)
Machine Learning is a peer-reviewed scientific journal, published since 1986. In 2001, forty editors and members of the editorial board of Machine Learning resigned in order to support the Journal of Machine Learning Research (JMLR), saying that in the era of the internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. Instead, they wrote, they supported the model of JMLR, in which authors retained copyright over their papers and archives were freely available on the internet. Following the mass resignation, Kluwer changed their publishing policy to allow authors to self-archive their papers online after peer-review.
Machine learning control
Machine learning control (MLC) is a subfield of machine learning, intelligent control and control theory which solves optimal control problems with methods of machine learning. Key applications are complex nonlinear systems for which linear control theory methods are not applicable.
Machine learning in bioinformatics
Machine learning in bioinformatics is the application of machine learning algorithms to bioinformatics, including genomics, proteomics, microarrays, systems biology, evolution, and text mining. Prior to the emergence of machine learning, bioinformatics algorithms had to be programmed by hand; for problems such as protein structure prediction, this proved difficult. Machine learning techniques, such as deep learning, can learn features of data sets, instead of requiring the programmer to define them individually. The algorithm can further learn how to combine low-level features into more abstract features, and so on. This multi-layered approach allows such systems to make sophisticated predictions when appropriately trained. These methods contrast with other computational biology approaches which, while exploiting existing datasets, do not allow the data to be interpreted and analyzed in unanticipated ways.
Machine learning in earth sciences
Applications of machine learning in earth sciences include geological mapping, gas leakage detection and geological feature identification. Machine learning (ML) is a type of artificial intelligence (AI) that enables computer systems to classify, cluster, identify and analyze vast and complex sets of data while eliminating the need for explicit instructions and programming. Earth science is the study of the origin, evolution, and future of the planet Earth. The Earth system can be subdivided into four major components: the solid earth, atmosphere, hydrosphere and biosphere. A variety of algorithms may be applied depending on the nature of the earth science exploration. Some algorithms may perform significantly better than others for particular objectives. For example, convolutional neural networks (CNNs) are good at interpreting images, while artificial neural networks (ANNs) perform well in soil classification but are more computationally expensive to train than support-vector machine (SVM) learning. The application of machine learning has become popular in recent decades, as the development of other technologies such as unmanned aerial vehicles (UAVs), ultra-high resolution remote sensing technology and high-performance computing units has led to the availability of large high-quality datasets and more advanced algorithms.
Machine learning in physics
Applying classical methods of machine learning to the study of quantum systems is the focus of an emergent area of physics research. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other examples include learning Hamiltonians, learning quantum phase transitions, and automatically generating new quantum experiments. Classical machine learning is effective at processing large amounts of experimental or calculated data in order to characterize an unknown quantum system, making its application useful in contexts including quantum information theory, quantum technologies development, and computational materials design. In this context, it can be used for example as a tool to interpolate pre-calculated interatomic potentials or directly solving the Schrödinger equation with a variational method.
Machine learning in video games
Artificial intelligence and machine learning techniques are used in video games for a wide variety of applications such as non-player character (NPC) control and procedural content generation (PCG). Machine learning is a subset of artificial intelligence that uses historical data to build predictive and analytical models. This is in sharp contrast to traditional methods of artificial intelligence such as search trees and expert systems. Information on machine learning techniques in the field of games is mostly known to the public through research projects, as most gaming companies choose not to publish specific information about their intellectual property. The most publicly known application of machine learning in games is likely the use of deep learning agents that compete with professional human players in complex strategy games. There has been significant application of machine learning to games such as Atari/ALE, Doom, Minecraft, StarCraft, and car racing. Other games that did not originally exist as video games, such as chess and Go, have also been affected by machine learning.
Machine-learned interatomic potential
Beginning in the 1990s, researchers have employed machine learning programs to construct interatomic potentials, mapping atomic structures to their potential energies. These potentials are generally referred to as 'machine-learned interatomic potentials' (MLIPs) or simply 'machine learning potentials' (MLPs). Such machine learning potentials promised to fill the gap between density functional theory, a highly-accurate but computationally-intensive simulation program, and empirically derived or intuitively-approximated potentials, which were far computationally lighter but substantially less accurate. Improvements in artificial intelligence technology have served to heighten the accuracy of MLPs while lowering their computational cost, increasing machine learning's role in fitting potentials. Machine learning potentials began by using neural networks to tackle low dimensional systems. While promising, these models could not systematically account for interatomic energy interactions; they could be applied to small molecules in a vacuum and molecules interacting with frozen surfaces, but not much else, and even in these applications often relied on force fields or potentials derived empirically or with simulations. These models thus remained confined to academia. Modern neural networks construct highly-accurate, computationally-light potentials because theoretical understanding of materials science was increasingly built into their architectures and preprocessing. Almost all are local, accounting for all interactions between an atom and its neighbor up to some cutoff radius. There exist some nonlocal models, but these have been experimental for almost a decade. For most systems, reasonable cutoff radii enable highly accurate results. Almost all neural networks intake atomic coordinates and output potential energies. For some, these atomic coordinates are converted into atom-centered symmetry functions. From this data, a separate atomic neural network is trained for each element; each atomic neural network is evaluated whenever that element occurs in the given structure, and then the results are pooled together at the end. This process - in particular, the atom-centered symmetry functions, which convey translational, rotational, and permutational invariances - has greatly improved machine learning potentials by significantly constraining the neural networks' search space. Other models use a similar process but emphasize bonds over atoms, using pair symmetry functions and training one neural network per atom pair. Still other models, rather than using predetermined symmetry-dictating functions, prefer to learn their own descriptors instead. These models, called message-passing neural networks (MPNNs), are graph neural networks. Treating molecules as three-dimensional graphs (where atoms are nodes and bonds are edges), the model intakes feature vectors describing the atoms, and iteratively updates these feature vectors as information about neighboring atoms is processed through message functions and convolutions. These feature vectors are then used to predict the final potentials. This method gives more flexibility to the artificial intelligences, often resulting in stronger and more generalizable models. In 2017, the first-ever MPNN model, a deep tensor neural network, was used to calculate the properties of small organic molecules. Such technology was commercialized, leading to the development of Matlantis in 2022, which extracts properties through both the forward and backward passes. 
Matlantis, which can simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20,000,000 times faster than density functional theory with almost indistinguishable accuracy, showcases the power of machine learning potentials in the age of artificial intelligence.
Manifold hypothesis
The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space. As a consequence of the manifold hypothesis, many data sets that appear to initially require many variables to describe can actually be described by a comparatively small number of variables, likened to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features. The manifold hypothesis is related to the effectiveness of nonlinear dimensionality reduction techniques in machine learning. Many techniques of dimensionality reduction make the assumption that data lie along a low-dimensional submanifold, such as manifold sculpting, manifold alignment, and manifold regularization. The major implication of this hypothesis is that machine learning models only have to fit relatively simple, low-dimensional, highly structured subspaces within their potential input space (latent manifolds). Within one of these manifolds, it is always possible to interpolate between two inputs, that is to say, morph one into another via a continuous path along which all points fall on the manifold. The ability to interpolate between samples is the key to generalization in deep learning.
Manifold regularization
In machine learning, manifold regularization is a technique for using the shape of a dataset to constrain the functions that should be learned on that dataset. In many machine learning problems, the data to be learned do not cover the entire input space. For example, a facial recognition system may not need to classify any possible image, but only the subset of images that contain faces. The technique of manifold learning assumes that the relevant subset of data comes from a manifold, a mathematical structure with useful properties. The technique also assumes that the function to be learned is smooth: data with different labels are not likely to be close together, and so the labeling function should not change quickly in areas where there are likely to be many data points. Because of this assumption, a manifold regularization algorithm can use unlabeled data to inform where the learned function is allowed to change quickly and where it is not, using an extension of the technique of Tikhonov regularization. Manifold regularization algorithms can extend supervised learning algorithms in semi-supervised learning and transductive learning settings, where unlabeled data are available. The technique has been used for applications including medical imaging, geographical imaging, and object recognition.
The Master Algorithm
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World is a book by Pedro Domingos released in 2015. Domingos wrote the book in order to generate interest from people outside the field.
Matchbox Educable Noughts and Crosses Engine
The Matchbox Educable Noughts and Crosses Engine (sometimes called the Machine Educable Noughts and Crosses Engine or MENACE) was a mechanical computer made from 304 matchboxes designed and built by artificial intelligence researcher Donald Michie in 1961. It was designed to play human opponents in games of noughts and crosses (tic-tac-toe) by returning a move for any given state of play and to refine its strategy through reinforcement learning. Michie did not have a computer readily available, so he worked around this restriction by building it out of matchboxes. The matchboxes used by Michie each represented a single possible layout of a noughts and crosses grid. When the computer first played, it would randomly choose moves based on the current layout. As it played more games, through a reinforcement loop, it disqualified strategies that led to losing games, and supplemented strategies that led to winning games. Michie held a tournament against MENACE in 1961, wherein he experimented with different openings. Following MENACE's maiden tournament against Michie, it demonstrated successful artificial intelligence in its strategy. Michie's essays on MENACE's weight initialisation and the BOXES algorithm used by MENACE became popular in the field of computer science research. Michie was honoured for his contribution to machine learning research, and was twice commissioned to program a MENACE simulation on an actual computer.
Matrix regularization
In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over min x ‖ A x − y ‖ 2 + λ ‖ x ‖ 2 {\displaystyle \min _{x}\left\|Ax-y\right\|^{2}+\lambda \left\|x\right\|^{2}} to find a vector x {\displaystyle x} that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as min X ‖ A X − Y ‖ 2 + λ ‖ X ‖ 2 , {\displaystyle \min _{X}\left\|AX-Y\right\|^{2}+\lambda \left\|X\right\|^{2},} where the vector norm enforcing a regularization penalty on x {\displaystyle x} has been extended to a matrix norm on X {\displaystyle X} . Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning. Consider a matrix W {\displaystyle W} to be learned from a set of examples, S = ( X i t , y i t ) {\displaystyle S=(X_{i}^{t},y_{i}^{t})} , where i {\displaystyle i} goes from 1 {\displaystyle 1} to n {\displaystyle n} , and t {\displaystyle t} goes from 1 {\displaystyle 1} to T {\displaystyle T} . Let each input matrix X i {\displaystyle X_{i}} be ∈ R D T {\displaystyle \in \mathbb {R} ^{DT}} , and let W {\displaystyle W} be of size D × T {\displaystyle D\times T} . A general model for the output y {\displaystyle y} can be posed as y i t = ⟨ W , X i t ⟩ F , {\displaystyle y_{i}^{t}=\left\langle W,X_{i}^{t}\right\rangle _{F},} where the inner product is the Frobenius inner product. For different applications the matrices X i {\displaystyle X_{i}} will have different forms, but for each of these the optimization problem to infer W {\displaystyle W} can be written as min W ∈ H E ( W ) + R ( W ) , {\displaystyle \min _{W\in {\mathcal {H}}}E(W)+R(W),} where E {\displaystyle E} defines the empirical error for a given W {\displaystyle W} , and R ( W ) {\displaystyle R(W)} is a matrix regularization penalty. The function R ( W ) {\displaystyle R(W)} is typically chosen to be convex and is often selected to enforce sparsity (using ℓ 1 {\displaystyle \ell ^{1}} -norms) and/or smoothness (using ℓ 2 {\displaystyle \ell ^{2}} -norms). Finally, W {\displaystyle W} is in the space of matrices H {\displaystyle {\mathcal {H}}} with Frobenius inner product ⟨ … ⟩ F {\displaystyle \langle \dots \rangle _{F}} . In the problem of matrix completion, the matrix X i t {\displaystyle X_{i}^{t}} takes the form X i t = e t ⊗ e i ′ , {\displaystyle X_{i}^{t}=e_{t}\otimes e_{i}',} where ( e t ) t {\displaystyle (e_{t})_{t}} and ( e i ′ ) i {\displaystyle (e_{i}')_{i}} are the canonical basis in R T {\displaystyle \mathbb {R} ^{T}} and R D {\displaystyle \mathbb {R} ^{D}} . In this case the role of the Frobenius inner product is to select individual elements w i t {\displaystyle w_{i}^{t}} from the matrix W {\displaystyle W} . Thus, the output y {\displaystyle y} is a sampling of entries from the matrix W {\displaystyle W} . The problem of reconstructing W {\displaystyle W} from a small set of sampled entries is possible only under certain restrictions on the matrix, and these restrictions can be enforced by a regularization function. 
For example, it might be assumed that W {\displaystyle W} is low-rank, in which case the regularization penalty can take the form of a nuclear norm. R ( W ) = λ ‖ W ‖ ∗ = λ ∑ i | σ i | , {\displaystyle R(W)=\lambda \left\|W\right\|_{*}=\lambda \sum _{i}\left|\sigma _{i}\right|,} where σ i {\displaystyle \sigma _{i}} , with i {\displaystyle i} from 1 {\displaystyle 1} to min D , T {\displaystyle \min D,T} , are the singular values of W {\displaystyle W} . Models used in multivariate regression are parameterized by a matrix of coefficients. In the Frobenius inner product above, each matrix X {\displaystyle X} is X i t = e t ⊗ x i {\displaystyle X_{i}^{t}=e_{t}\otimes x_{i}} such that the output of the inner product is the dot product of one row of the input with one column of the coefficient matrix. The familiar form of such models is Y = X W + b {\displaystyle Y=XW+b} Many of the vector norms used in single variable regression can be extended to the multivariate case. One example is the squared Frobenius norm, which can be viewed as an ℓ 2 {\displaystyle \ell ^{2}} -norm acting either entrywise, or on the singular values of the matrix: R ( W ) = λ ‖ W ‖ F 2 = λ ∑ i ∑ j | w i j | 2 = λ Tr ⁡ ( W ∗ W ) = λ ∑ i σ i 2 . {\displaystyle R(W)=\lambda \left\|W\right\|_{F}^{2}=\lambda \sum _{i}\sum _{j}\left|w_{ij}\right|^{2}=\lambda \operatorname {Tr} \left(W^{*}W\right)=\lambda \sum _{i}\sigma _{i}^{2}.} In the multivariate case the effect of regularizing with the Frobenius norm is the same as the vector case; very complex models will have larger norms, and, thus, will be penalized more. The setup for multi-task learning is almost the same as the setup for multivariate regression. The primary difference is that the input variables are also indexed by task (columns of Y {\displaystyle Y} ). The representation with the Frobenius inner product is then X i t = e t ⊗ x i t . {\displaystyle X_{i}^{t}=e_{t}\otimes x_{i}^{t}.} The role of matrix regularization in this setting can be the same as in multivariate regression, but matrix norms can also be used to couple learning problems across tasks. In particular, note that for the optimization problem min W ‖ X W − Y ‖ 2 2 + λ ‖ W ‖ 2 2 {\displaystyle \min _{W}\left\|XW-Y\right\|_{2}^{2}+\lambda \left\|W\right\|_{2}^{2}} the solutions corresponding to each column of Y {\displaystyle Y} are decoupled. That is, the same solution can be found by solving the joint problem, or by solving an isolated regression problem for each column. The problems can be coupled by adding an additional regularization penalty on the covariance of solutions min W , Ω ‖ X W − Y ‖ 2 2 + λ 1 ‖ W ‖ 2 2 + λ 2 Tr ⁡ ( W T Ω − 1 W ) {\displaystyle \min _{W,\Omega }\left\|XW-Y\right\|_{2}^{2}+\lambda _{1}\left\|W\right\|_{2}^{2}+\lambda _{2}\operatorname {Tr} \left(W^{T}\Omega ^{-1}W\right)} where Ω {\displaystyle \Omega } models the relationship between tasks. This scheme can be used to both enforce similarity of solutions across tasks, and to learn the specific structure of task similarity by alternating between optimizations of W {\displaystyle W} and Ω {\displaystyle \Omega } . When the relationship between tasks is known to lie on a graph, the Laplacian matrix of the graph can be used to couple the learning problems. Regularization by spectral filtering has been used to find stable solutions to problems such as those discussed above by addressing ill-posed matrix inversions (see for example Filter function for Tikhonov regularization). 
In many cases the regularization function acts on the input (or kernel) to ensure a bounded inverse by eliminating small singular values, but it can also be useful to have spectral norms that act on the matrix that is to be learned. There are a number of matrix norms that act on the singular values of the matrix. Frequently used examples include the Schatten p-norms, with p = 1 or 2. For example, matrix regularization with a Schatten 1-norm, also called the nuclear norm, can be used to enforce sparsity in the spectrum of a matrix. This has been used in the context of matrix completion when the matrix in question is believed to have a restricted rank. In this case the optimization problem becomes: min ‖ W ‖ ∗ subject to W i , j = Y i j . {\displaystyle \min \left\|W\right\|_{*}~~{\text{ subject to }}~~W_{i,j}=Y_{ij}.} Spectral Regularization is also used to enforce a reduced rank coefficient matrix in multivariate regression. In this setting, a reduced rank coefficient matrix can be found by keeping just the top n {\displaystyle n} singular values, but this can be extended to keep any reduced set of singular values and vectors. Sparse optimization has become the focus of much research interest as a way to find solutions that depend on a small number of variables (see e.g. the Lasso method). In principle, entry-wise sparsity can be enforced by penalizing the entry-wise ℓ 0 {\displaystyle \ell ^{0}} -norm of the matrix, but the ℓ 0 {\displaystyle \ell ^{0}} -norm is not convex. In practice this can be implemented by convex relaxation to the ℓ 1 {\displaystyle \ell ^{1}} -norm. While entry-wise regularization with an ℓ 1 {\displaystyle \ell ^{1}} -norm will find solutions with a small number of nonzero elements, applying an ℓ 1 {\displaystyle \ell ^{1}} -norm to different groups of variables can enforce structure in the sparsity of solutions. The most straightforward example of structured sparsity uses the ℓ p , q {\displaystyle \ell _{p,q}} norm with p = 2 {\displaystyle p=2} and q = 1 {\displaystyle q=1} : ‖ W ‖ 2 , 1 = ∑ i ‖ w i ‖ 2 . {\displaystyle \left\|W\right\|_{2,1}=\sum _{i}\left\|w_{i}\right\|_{2}.} For example, the ℓ 2 , 1 {\displaystyle \ell _{2,1}} norm is used in multi-task learning to group features across tasks, such that all the elements in a given row of the coefficient matrix can be forced to zero as a group. The grouping effect is achieved by taking the ℓ 2 {\displaystyle \ell ^{2}} -norm of each row, and then taking the total penalty to be the sum of these row-wise norms. This regularization results in rows that will tend to be all zeros, or dense. The same type of regularization can be used to enforce sparsity column-wise by taking the ℓ 2 {\displaystyle \ell ^{2}} -norms of each column. More generally, the ℓ 2 , 1 {\displaystyle \ell _{2,1}} norm can be applied to arbitrary groups of variables: R ( W ) = λ ∑ g G ∑ j | G g | | w g j | 2 = λ ∑ g G ‖ w g ‖ g {\displaystyle R(W)=\lambda \sum _{g}^{G}{\sqrt {\sum _{j}^{|G_{g}|}\left|w_{g}^{j}\right|^{2}}}=\lambda \sum _{g}^{G}\left\|w_{g}\right\|_{g}} where the index g {\displaystyle g} is across groups of variables, and | G g | {\displaystyle |G_{g}|} indicates the cardinality of group g {\displaystyle g} . Algorithms for solving these group sparsity problems extend the more well-known Lasso and group Lasso methods by allowing overlapping groups, for example, and have been implemented via matching pursuit: and proximal gradient methods. 
By writing the proximal gradient with respect to a given coefficient, w g i {\displaystyle w_{g}^{i}} , it can be seen that this norm enforces a group-wise soft threshold: prox λ , R g ⁡ ( w g ) i = ( w g i − λ w g i ‖ w g ‖ g ) 1 ‖ w g ‖ g ≥ λ . {\displaystyle \operatorname {prox} _{\lambda ,R_{g}}\left(w_{g}\right)^{i}=\left(w_{g}^{i}-\lambda {\frac {w_{g}^{i}}{\left\|w_{g}\right\|_{g}}}\right)\mathbf {1} _{\|w_{g}\|_{g}\geq \lambda }.} where 1 ‖ w g ‖ g ≥ λ {\displaystyle \mathbf {1} _{\|w_{g}\|_{g}\geq \lambda }} is the indicator function for group norms ≥ λ {\displaystyle \geq \lambda } . Thus, using ℓ 2 , 1 {\displaystyle \ell _{2,1}} norms it is straightforward to enforce structure in the sparsity of a matrix either row-wise, column-wise, or in arbitrary blocks. By enforcing group norms on blocks in multivariate or multi-task regression, for example, it is possible to find groups of input and output variables, such that defined subsets of output variables (columns in the matrix Y {\displaystyle Y} ) will depend on the same sparse set of input variables. The ideas of structured sparsity and feature selection can be extended to the nonparametric case of multiple kernel learning. This can be useful when there are multiple types of input data (color and texture, for example) with different appropriate kernels for each, or when the appropriate kernel is unknown. If there are two kernels, for example, with feature maps A {\displaystyle A} and B {\displaystyle B} that lie in corresponding reproducing kernel Hilbert spaces H A , H B {\displaystyle {\mathcal {H_{A}}},{\mathcal {H_{B}}}} , then a larger space, H D {\displaystyle {\mathcal {H_{D}}}} , can be created as the sum of two spaces: H D : f = h + h ′ ; h ∈ H A , h ′ ∈ H B {\displaystyle {\mathcal {H_{D}}}:f=h+h';h\in {\mathcal {H_{A}}},h'\in {\mathcal {H_{B}}}} assuming linear independence in A {\displaystyle A} and B {\displaystyle B} . In this case the ℓ 2 , 1 {\displaystyle \ell _{2,1}} -norm is again the sum of norms: ‖ f ‖ H D , 1 = ‖ h ‖ H A + ‖ h ′ ‖ H B {\displaystyle \left\|f\right\|_{{\mathcal {H_{D}}},1}=\left\|h\right\|_{\mathcal {H_{A}}}+\left\|h'\right\|_{\mathcal {H_{B}}}} Thus, by choosing a matrix regularization function as this type of norm, it is possible to find a solution that is sparse in terms of which kernels are used, but dense in the coefficients of each kernel that is used. Multiple kernel learning can also be used as a form of nonlinear variable selection, or as a model aggregation technique (e.g. by taking the sum of squared norms and relaxing sparsity constraints). For example, each kernel can be taken to be the Gaussian kernel with a different width.
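A minimal numpy sketch of one proximal gradient step using the group-wise soft threshold above; the design matrix, the (non-overlapping) groups, the step size and λ are all illustrative assumptions.

```python
import numpy as np

def prox_group_l2(w_g, lam):
    """Proximal operator of lam * ||w_g||_2 (group-wise soft threshold):
    shrinks the whole group toward zero, and sets it exactly to zero
    when the group norm falls below lam."""
    norm = np.linalg.norm(w_g)
    if norm <= lam:
        return np.zeros_like(w_g)
    return (1.0 - lam / norm) * w_g

# One proximal gradient step on 0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2,
# with two hypothetical non-overlapping groups of coordinates.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 6)), rng.normal(size=50)
w = np.zeros(6)
groups = [slice(0, 3), slice(3, 6)]
step, lam = 0.01, 0.3

grad = X.T @ (X @ w - y)                               # gradient of the smooth part
z = w - step * grad                                    # plain gradient step
w_new = np.concatenate([prox_group_l2(z[g], step * lam) for g in groups])
print(w_new)
```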
Maximum inner-product search
Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommendation algorithms and machine learning. Formally, for a database of vectors x i {\displaystyle x_{i}} defined over a set of labels S {\displaystyle S} in an inner product space with an inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } defined on it, MIPS search can be defined as the problem of determining a r g m a x i ∈ S ⟨ x i , q ⟩ {\displaystyle {\underset {i\in S}{\operatorname {arg\,max} }}\ \langle x_{i},q\rangle } for a given query q {\displaystyle q} . Although there is an obvious linear-time implementation, it is generally too slow to be used on practical problems. However, efficient algorithms exist to speed up MIPS search. Under the assumption of all vectors in the set having constant norm, MIPS can be viewed as equivalent to a nearest neighbor search (NNS) problem in which maximizing the inner product is equivalent to minimizing the corresponding distance metric in the NNS problem. Like other forms of NNS, MIPS algorithms may be approximate or exact. MIPS search is used as part of DeepMind's RETRO algorithm.
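The following numpy sketch (with a randomly generated database and query) shows the obvious linear-time MIPS implementation, and checks the stated equivalence with nearest neighbor search when all database vectors share the same norm.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 16))       # database of vectors x_i
q = rng.normal(size=16)                  # query vector

# Exact (linear-time) MIPS: argmax_i <x_i, q>.
best = int(np.argmax(data @ q))

# If every database vector has the same norm, maximising the inner product is
# equivalent to minimising Euclidean distance, because
# ||x - q||^2 = ||x||^2 + ||q||^2 - 2 <x, q>  and  ||x||^2 is constant.
unit = data / np.linalg.norm(data, axis=1, keepdims=True)   # force a constant norm
nn = int(np.argmin(np.linalg.norm(unit - q, axis=1)))       # nearest neighbor search
mips_on_unit = int(np.argmax(unit @ q))                     # MIPS on the normalised set
assert nn == mips_on_unit
print("MIPS result on the original data:", best)
```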
Meta-learning (computer science)
Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation, however the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn. Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not on the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood. By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to the critique of metaheuristic, a possibly related problem. A good analogy to meta-learning, and the inspiration for Jürgen Schmidhuber's early work (1987) and Yoshua Bengio et al.'s work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain. In an open-ended hierarchical meta-learning system using genetic programming, better evolutionary methods can be learned by meta evolution, which itself can be improved by meta meta evolution, etc. A proposed definition for a meta-learning system combines three requirements: The system must include a learning subsystem. Experience is gained by exploiting meta knowledge extracted in a previous learning episode on a single dataset, or from different domains. Learning bias must be chosen dynamically. Bias refers to the assumptions that influence the choice of explanatory hypotheses and not the notion of bias represented in the bias-variance dilemma. Meta-learning is concerned with two aspects of learning bias. Declarative bias specifies the representation of the space of hypotheses, and affects the size of the search space (e.g., represent hypotheses using linear functions only). Procedural bias imposes constraints on the ordering of the inductive hypotheses (e.g., preferring smaller hypotheses). There are three common approaches: using (cyclic) networks with external or internal memory (model-based) learning effective distance metrics (metrics-based) explicitly optimizing model parameters for fast learning (optimization-based). Model-based meta-learning models updates its parameters rapidly with a few training steps, which can be achieved by its internal architecture or controlled by another meta-learner model. A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples. Meta Networks (MetaNet) learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. 
The core idea in metric-based meta-learning is similar to nearest-neighbor algorithms, in which the weight of each example is generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent: it should represent the relationship between inputs in the task space and facilitate problem solving. A Siamese neural network is composed of two twin networks whose outputs are jointly trained, with a function learned on top of them to capture the relationship between pairs of input data samples. The two networks are identical, sharing the same weights and network parameters. Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results. Optimization-based meta-learning algorithms adjust the optimization algorithm itself so that the model can learn well from only a few examples. An LSTM-based meta-learner learns the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. Model-Agnostic Meta-Learning (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent. Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely on meta-optimization through gradient descent and both are model-agnostic. Several other approaches have been viewed as instances of meta-learning. Recurrent neural networks (RNNs) are universal computers. In 1993, Jürgen Schmidhuber showed how "self-referential" RNNs can in principle learn by backpropagation to run their own weight change algorithm, which may be quite different from backpropagation. In 2001, Sepp Hochreiter, A. S. Younger and P. R. Conwell built a successful supervised meta-learner based on long short-term memory RNNs. It learned through backpropagation a learning algorithm for quadratic functions that is much faster than backpropagation. Researchers at DeepMind (Marcin Andrychowicz et al.) extended this approach to optimization in 2017. In the 1990s, meta reinforcement learning (Meta RL) was achieved in Schmidhuber's research group through self-modifying policies written in a universal programming language that contains special instructions for changing the policy itself. There is a single lifelong trial. The goal of the RL agent is to maximize reward, and it learns to accelerate reward intake by continually improving its own learning algorithm, which is part of the "self-referential" policy.
An extreme type of Meta Reinforcement Learning is embodied by the Gödel machine, a theoretical construct which can inspect and modify any part of its own software which also contains a general theorem prover. It can achieve recursive self-improvement in a provably optimal way. Model-Agnostic Meta-Learning (MAML) was introduced in 2017 by Chelsea Finn et al. Given a sequence of tasks, the parameters of a given model are trained such that few iterations of gradient descent with few training data from a new task will lead to good generalization performance on that task. MAML "trains the model to be easy to fine-tune." MAML was successfully applied to few-shot image classification benchmarks and to policy-gradient-based reinforcement learning. Variational Bayes-Adaptive Deep RL (VariBAD) was introduced in 2019. While MAML is optimization-based, VariBAD is a model-based method for meta reinforcement learning, and leverages a variational autoencoder to capture the task information in an internal memory, thus conditioning its decision making on the task. When addressing a set of tasks, most meta learning approaches optimize the average score across all tasks. Hence, certain tasks may be sacrificed in favor of the average score, which is often unacceptable in real-world applications. By contrast, Robust Meta Reinforcement Learning (RoML) focuses on improving low-score tasks, increasing robustness to the selection of task. RoML works as a meta-algorithm, as it can be applied on top of other meta learning algorithms (such as MAML and VariBAD) to increase their robustness. It is applicable to both supervised meta learning and meta reinforcement learning. Discovering meta-knowledge works by inducing knowledge (e.g. rules) that expresses how each learning method will perform on different learning problems. The metadata is formed by characteristics of the data (general, statistical, information-theoretic,... ) in the learning problem, and characteristics of the learning algorithm (type, parameter settings, performance measures,...). Another learning algorithm then learns how the data characteristics relate to the algorithm characteristics. Given a new learning problem, the data characteristics are measured, and the performance of different learning algorithms are predicted. Hence, one can predict the algorithms best suited for the new problem. Stacked generalisation works by combining multiple (different) learning algorithms. The metadata is formed by the predictions of those different algorithms. Another learning algorithm learns from this metadata to predict which combinations of algorithms give generally good results. Given a new learning problem, the predictions of the selected set of algorithms are combined (e.g. by (weighted) voting) to provide the final prediction. Since each algorithm is deemed to work on a subset of problems, a combination is hoped to be more flexible and able to make good predictions. Boosting is related to stacked generalisation, but uses the same algorithm multiple times, where the examples in the training data get different weights over each run. This yields different predictions, each focused on rightly predicting a subset of the data, and combining those predictions leads to better (but more expensive) results. Dynamic bias selection works by altering the inductive bias of a learning algorithm to match the given problem. This is done by altering key aspects of the learning algorithm, such as the hypothesis representation, heuristic formulae, or parameters. 
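As a concrete, minimal illustration of optimization-based meta-learning, the sketch below implements a Reptile-style update on a toy family of one-dimensional linear-regression tasks; the task distribution, the two-parameter model, the number of inner steps and the learning rates are all illustrative assumptions rather than settings taken from the works cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a random linear function y = a*x + b sampled on [-1, 1]."""
    a, b = rng.uniform(-2, 2), rng.uniform(-1, 1)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x + b

def inner_sgd(theta, x, y, steps=5, lr=0.1):
    """A few gradient steps on the MSE loss of the model y_hat = theta[0]*x + theta[1]."""
    w = theta.copy()
    for _ in range(steps):
        err = w[0] * x + w[1] - y
        grad = 2 * np.array([(err * x).mean(), err.mean()])
        w -= lr * grad
    return w

theta = np.zeros(2)                       # meta-initialisation to be learned
meta_lr = 0.1
for _ in range(2000):                     # outer loop over sampled tasks
    x, y = sample_task()
    phi = inner_sgd(theta, x, y)          # adapt to the task from the shared initialisation
    theta += meta_lr * (phi - theta)      # Reptile update: move the init toward the adapted weights

# After meta-training, a new task is fitted from `theta` with only a few gradient steps.
x_new, y_new = sample_task()
print("adapted parameters:", inner_sgd(theta, x_new, y_new, steps=5))
```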
Many different approaches exist. Inductive transfer studies how the learning process can be improved over time. Metadata consists of knowledge about previous learning episodes and is used to efficiently develop an effective hypothesis for a new task. A related approach is called learning to learn, in which the goal is to use acquired knowledge from one domain to help learning in other domains. Other approaches using metadata to improve automatic learning are learning classifier systems, case-based reasoning and constraint satisfaction. Some initial theoretical work has been initiated to use Applied Behavioral Analysis as a foundation for agent-mediated meta-learning about the performance of human learners, and to adjust the instructional course of an artificial agent. A further example is AutoML, such as Google Brain's "AI building AI" project, which according to Google briefly exceeded existing ImageNet benchmarks in 2017.
MLOps
MLOps (machine learning operations) is a set of practices at the intersection of machine learning, DevOps and data engineering that aims to deploy and maintain machine learning models in production reliably and efficiently. It spans the lifecycle of a machine learning system, including data collection and preparation, model training and evaluation, deployment, monitoring, and retraining, and emphasizes automation, reproducibility, and collaboration between data scientists and operations teams.
Mountain car problem
Mountain Car, a standard testing domain in Reinforcement learning, is a problem in which an under-powered car must drive up a steep hill. Since gravity is stronger than the car's engine, even at full throttle, the car cannot simply accelerate up the steep slope. The car is situated in a valley and must learn to leverage potential energy by driving up the opposite hill before the car is able to make it to the goal at the top of the rightmost hill. The domain has been used as a test bed in various reinforcement learning papers. The mountain car problem, although fairly simple, is commonly applied because it requires a reinforcement learning agent to learn on two continuous variables: position and velocity. For any given state (position and velocity) of the car, the agent is given the possibility of driving left, driving right, or not using the engine at all. In the standard version of the problem, the agent receives a negative reward at every time step when the goal is not reached; the agent has no information about the goal until an initial success. The mountain car problem appeared first in Andrew Moore's PhD thesis (1990). It was later more strictly defined in Singh and Sutton's reinforcement learning paper with eligibility traces. The problem became more widely studied when Sutton and Barto added it to their book Reinforcement Learning: An Introduction (1998). Throughout the years many versions of the problem have been used, such as those which modify the reward function, termination condition, and the start state. Q-learning and similar techniques for mapping discrete states to discrete actions need to be extended to be able to deal with the continuous state space of the problem. Approaches often fall into one of two categories, state space discretization or function approximation. In this approach, two continuous state variables are pushed into discrete states by bucketing each continuous variable into multiple discrete states. This approach works with properly tuned parameters but a disadvantage is information gathered from one state is not used to evaluate another state. Tile coding can be used to improve discretization and involves continuous variables mapping into sets of buckets offset from one another. Each step of training has a wider impact on the value function approximation because when the offset grids are summed, the information is diffused. Function approximation is another way to solve the mountain car. By choosing a set of basis functions beforehand, or by generating them as the car drives, the agent can approximate the value function at each state. Unlike the step-wise version of the value function created with discretization, function approximation can more cleanly estimate the true smooth function of the mountain car domain. One aspect of the problem involves the delay of actual reward. The agent is not able to learn about the goal until a successful completion. Given a naive approach for each trial the car can only backup the reward of the goal slightly. This is a problem for naive discretization because each discrete state will only be backed up once, taking a larger number of episodes to learn the problem. This problem can be alleviated via the mechanism of eligibility traces, which will automatically backup the reward given to states before, dramatically increasing the speed of learning. Eligibility traces can be viewed as a bridge from temporal difference learning methods to Monte Carlo methods. The mountain car problem has undergone many iterations. 
This section describes the standard, well-defined version of the problem from Sutton (2008). The state space is two-dimensional and continuous: Velocity = (−0.07, 0.07) {\displaystyle Velocity=(-0.07,0.07)} and Position = (−1.2, 0.6) {\displaystyle Position=(-1.2,0.6)} . The action space is one-dimensional and discrete, motor = (left, neutral, right) {\displaystyle motor=(left,neutral,right)} , encoded as Action = [−1, 0, 1] {\displaystyle Action=[-1,0,1]} . The reward is reward = −1 {\displaystyle reward=-1} for every time step. At every time step the state is updated as Velocity = Velocity + (Action)*0.001 + cos(3*Position)*(−0.0025) {\displaystyle Velocity=Velocity+(Action)*0.001+\cos(3*Position)*(-0.0025)} followed by Position = Position + Velocity {\displaystyle Position=Position+Velocity} . The starting condition is Position = −0.5 {\displaystyle Position=-0.5} and Velocity = 0.0 {\displaystyle Velocity=0.0} ; optionally, many implementations include randomness in both starting parameters to show better generalized learning. The simulation ends when Position ≥ 0.6 {\displaystyle Position\geq 0.6} . There are many versions of the mountain car which deviate in different ways from the standard model. Variations include, but are not limited to, changing the constants of the problem (gravity and steepness), so that specific tuning for specific policies becomes irrelevant, and altering the reward function to affect the agent's ability to learn in a different manner. Examples are changing the reward to be equal to the distance from the goal, or changing the reward to zero everywhere and one at the goal. Additionally, a 3D mountain car, with a 4D continuous state space, can be used.
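A minimal Python sketch of these dynamics follows. Clipping to the stated position and velocity ranges is assumed here (boundary handling, such as resetting the velocity at the left wall, varies between implementations), and the random policy is only there to drive the simulation.

```python
import math
import random

def step(position, velocity, action):
    """One transition of the standard mountain car, with action in {-1, 0, 1}
    for (left, neutral, right)."""
    velocity += action * 0.001 + math.cos(3 * position) * (-0.0025)
    velocity = max(-0.07, min(0.07, velocity))     # clip to the velocity range (assumed)
    position += velocity
    position = max(-1.2, min(0.6, position))       # clip to the position range (assumed)
    reward = -1                                    # -1 for every time step
    done = position >= 0.6                         # goal reached
    return position, velocity, reward, done

# A full episode under a random policy from the standard start state.
position, velocity, done, steps = -0.5, 0.0, False, 0
while not done and steps < 10000:
    action = random.choice([-1, 0, 1])
    position, velocity, reward, done = step(position, velocity, action)
    steps += 1
print("episode length:", steps)
```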
Multi-armed bandit
In probability theory and machine learning, the multi-armed bandit problem (sometimes called the K- or N-armed bandit problem) is a problem in which a decision maker iteratively selects one of multiple fixed choices (i.e., arms or actions) when the properties of each choice are only partially known at the time of allocation, and may become better understood as time passes. A fundamental aspect of bandit problems is that choosing an arm does not affect the properties of the arm or other arms. Instances of the multi-armed bandit problem include the task of iteratively allocating a fixed, limited set of resources between competing (alternative) choices in a way that minimizes the regret. A notable alternative setup for the multi-armed bandit problem include the "best arm identification" problem where the goal is instead to identify the best choice by the end of a finite number of rounds. The multi-armed bandit problem is a classic reinforcement learning problem that exemplifies the exploration–exploitation tradeoff dilemma. In contrast to general RL, the selected actions in bandit problems do not affect the reward distribution of the arms. The name comes from imagining a gambler at a row of slot machines (sometimes known as "one-armed bandits"), who has to decide which machines to play, how many times to play each machine and in which order to play them, and whether to continue with the current machine or try a different machine. The multi-armed bandit problem also falls into the broad category of stochastic scheduling. In the problem, each machine provides a random reward from a probability distribution specific to that machine, that is not known a priori. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the machine that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other machines. The trade-off between exploration and exploitation is also faced in machine learning. In practice, multi-armed bandits have been used to model problems such as managing research projects in a large organization, like a science foundation or a pharmaceutical company. In early versions of the problem, the gambler begins with no initial knowledge about the machines. Herbert Robbins in 1952, realizing the importance of the problem, constructed convergent population selection strategies in "some aspects of the sequential design of experiments". A theorem, the Gittins index, first published by John C. Gittins, gives an optimal policy for maximizing the expected discounted reward. The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (called "exploration") and optimize their decisions based on existing knowledge (called "exploitation"). The agent attempts to balance these competing tasks in order to maximize their total value over the period of time considered. There are many practical applications of the bandit model, for example: clinical trials investigating the effects of different experimental treatments while minimizing patient losses, adaptive routing efforts for minimizing delays in a network, financial portfolio design In these practical examples, the problem requires balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the exploitation vs. 
exploration tradeoff in machine learning. The model has also been used to control dynamic allocation of resources to different projects, answering the question of which project to work on, given uncertainty about the difficulty and payoff of each possibility. Originally considered by Allied scientists in World War II, it proved so intractable that, according to Peter Whittle, the problem was proposed to be dropped over Germany so that German scientists could also waste their time on it. The version of the problem now commonly analyzed was formulated by Herbert Robbins in 1952. The multi-armed bandit (short: bandit or MAB) can be seen as a set of real distributions B = { R 1 , … , R K } {\displaystyle B=\{R_{1},\dots ,R_{K}\}} , each distribution being associated with the rewards delivered by one of the K ∈ N + {\displaystyle K\in \mathbb {N} ^{+}} levers. Let μ 1 , … , μ K {\displaystyle \mu _{1},\dots ,\mu _{K}} be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon H {\displaystyle H} is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regret ρ {\displaystyle \rho } after T {\displaystyle T} rounds is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: ρ = T μ ∗ − ∑ t = 1 T r ^ t {\displaystyle \rho =T\mu ^{*}-\sum _{t=1}^{T}{\widehat {r}}_{t}} , where μ ∗ {\displaystyle \mu ^{*}} is the maximal reward mean, μ ∗ = max k { μ k } {\displaystyle \mu ^{*}=\max _{k}\{\mu _{k}\}} , and r ^ t {\displaystyle {\widehat {r}}_{t}} is the reward in round t. A zero-regret strategy is a strategy whose average regret per round ρ / T {\displaystyle \rho /T} tends to zero with probability 1 when the number of played rounds tends to infinity. Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played. A common formulation is the Binary multi-armed bandit or Bernoulli multi-armed bandit, which issues a reward of one with probability p {\displaystyle p} , and otherwise a reward of zero. Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm is played, the state of that machine advances to a new one, chosen according to the Markov state evolution probabilities. There is a reward depending on the current state of the machine. In a generalization called the "restless bandit problem", the states of non-played arms can also evolve over time. There has also been discussion of systems where the number of choices (about which arm to play) increases over time. Computer science researchers have studied multi-armed bandits under worst-case assumptions, obtaining algorithms to minimize regret in both finite and infinite (asymptotic) time horizons for both stochastic and non-stochastic arm payoffs. An important variation of the classical regret minimization problem in multi-armed bandits is the one of Best Arm Identification (BAI), also known as pure exploration. This problem is crucial in various applications, including clinical trials, adaptive routing, recommendation systems, and A/B testing. In BAI, the objective is to identify the arm having the highest expected reward. 
An algorithm in this setting is characterized by a sampling rule, a decision rule, and a stopping rule, described as follows: Sampling rule: ( a t ) t ≥ 1 {\displaystyle (a_{t})_{t\geq 1}} is a sequence of actions at each time step Stopping rule: τ {\displaystyle \tau } is a (random) stopping time which suggests when to stop collecting samples Decision rule: a ^ τ {\displaystyle {\hat {a}}_{\tau }} is a guess on the best arm based on the data collected up to time τ {\displaystyle \tau } There are two predominant settings in BAI: Fixed confidence setting: Given a confidence level δ ∈ ( 0 , 1 ) {\displaystyle \delta \in (0,1)} , the objective is to identify the arm with the highest expected reward a ⋆ ∈ arg ⁡ max k μ k {\displaystyle a^{\star }\in \arg \max _{k}\mu _{k}} with the least possible amount of trials and with probability of error P ( a ^ τ ≠ a ⋆ ) ≤ δ {\displaystyle \mathbb {P} ({\hat {a}}_{\tau }\neq a^{\star })\leq \delta } . Fixed budget setting: Given a time horizon T ≥ 1 {\displaystyle T\geq 1} , the objective is to identify the arm with the highest expected reward a ⋆ ∈ arg ⁡ max k μ k {\displaystyle a^{\star }\in \arg \max _{k}\mu _{k}} minimizing probability of error δ {\displaystyle \delta } . A major breakthrough was the construction of optimal population selection strategies, or policies (that possess uniformly maximum convergence rate to the population with highest mean) in the work described below. In the paper "Asymptotically efficient adaptive allocation rules", Lai and Robbins (following papers of Robbins and his co-workers going back to Robbins in the year 1952) constructed convergent population selection policies that possess the fastest rate of convergence (to the population with highest mean) for the case that the population reward distributions are the one-parameter exponential family. Then, in Katehakis and Robbins simplifications of the policy and the main proof were given for the case of normal populations with known variances. The next notable progress was obtained by Burnetas and Katehakis in the paper "Optimal adaptive policies for sequential allocation problems", where index based policies with uniformly maximum convergence rate were constructed, under more general conditions that include the case in which the distributions of outcomes from each population depend on a vector of unknown parameters. Burnetas and Katehakis (1996) also provided an explicit solution for the important case in which the distributions of outcomes follow arbitrary (i.e., non-parametric) discrete, univariate distributions. Later in "Optimal adaptive policies for Markov decision processes" Burnetas and Katehakis studied the much larger model of Markov Decision Processes under partial information, where the transition law and/or the expected one period rewards may depend on unknown parameters. In this work, the authors constructed an explicit form for a class of adaptive policies with uniformly maximum convergence rate properties for the total expected finite horizon reward under sufficient assumptions of finite state-action spaces and irreducibility of the transition law. A main feature of these policies is that the choice of actions, at each state and time period, is based on indices that are inflations of the right-hand side of the estimated average reward optimality equations. These inflations have recently been called the optimistic approach in the work of Tewari and Bartlett, Ortner Filippi, Cappé, and Garivier, and Honda and Takemura. 
For Bernoulli multi-armed bandits, Pilarski et al. studied computation methods of deriving fully optimal solutions (not just asymptotically) using dynamic programming in the paper "Optimal Policy for Bernoulli Bandits: Computation and Algorithm Gauge." Via indexing schemes, lookup tables, and other techniques, this work provided practically applicable optimal solutions for Bernoulli bandits provided that time horizons and numbers of arms did not become excessively large. Pilarski et al. later extended this work in "Delayed Reward Bernoulli Bandits: Optimal Policy and Predictive Meta-Algorithm PARDI" to create a method of determining the optimal policy for Bernoulli bandits when rewards may not be immediately revealed following a decision and may be delayed. This method relies upon calculating expected values of reward outcomes which have not yet been revealed and updating posterior probabilities when rewards are revealed. When optimal solutions to multi-arm bandit tasks are used to derive the value of animals' choices, the activity of neurons in the amygdala and ventral striatum encodes the values derived from these policies, and can be used to decode when the animals make exploratory versus exploitative choices. Moreover, optimal policies better predict animals' choice behavior than alternative strategies (described below). This suggests that the optimal solutions to multi-arm bandit problems are biologically plausible, despite being computationally demanding. Many strategies exist which provide an approximate solution to the bandit problem, and can be put into the four broad categories detailed below. Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. All those strategies have in common a greedy behavior where the best lever (based on previous observations) is always pulled except when a (uniformly) random action is taken. Epsilon-greedy strategy: The best lever is selected for a proportion 1 − ϵ {\displaystyle 1-\epsilon } of the trials, and a lever is selected at random (with uniform probability) for a proportion ϵ {\displaystyle \epsilon } . A typical parameter value might be ϵ = 0.1 {\displaystyle \epsilon =0.1} , but this can vary widely depending on circumstances and predilections. Epsilon-first strategy: A pure exploration phase is followed by a pure exploitation phase. For N {\displaystyle N} trials in total, the exploration phase occupies ϵ N {\displaystyle \epsilon N} trials and the exploitation phase ( 1 − ϵ ) N {\displaystyle (1-\epsilon )N} trials. During the exploration phase, a lever is randomly selected (with uniform probability); during the exploitation phase, the best lever is always selected. Epsilon-decreasing strategy: Similar to the epsilon-greedy strategy, except that the value of ϵ {\displaystyle \epsilon } decreases as the experiment progresses, resulting in highly explorative behaviour at the start and highly exploitative behaviour at the finish. Adaptive epsilon-greedy strategy based on value differences (VDBE): Similar to the epsilon-decreasing strategy, except that epsilon is reduced on basis of the learning progress instead of manual tuning (Tokic, 2010). High fluctuations in the value estimates lead to a high epsilon (high exploration, low exploitation); low fluctuations to a low epsilon (low exploration, high exploitation). Further improvements can be achieved by a softmax-weighted action selection in case of exploratory actions (Tokic & Palm, 2011). 
Adaptive epsilon-greedy strategy based on Bayesian ensembles (Epsilon-BMC): An adaptive epsilon adaptation strategy for reinforcement learning similar to VBDE, with monotone convergence guarantees. In this framework, the epsilon parameter is viewed as the expectation of a posterior distribution weighting a greedy agent (that fully trusts the learned reward) and uniform learning agent (that distrusts the learned reward). This posterior is approximated using a suitable Beta distribution under the assumption of normality of observed rewards. In order to address the possible risk of decreasing epsilon too quickly, uncertainty in the variance of the learned reward is also modeled and updated using a normal-gamma model. (Gimelfarb et al., 2019). Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever. Probability matching strategies are also known as Thompson sampling or Bayesian Bandits, and are surprisingly easy to implement if you can sample from the posterior for the mean value of each alternative. Probability matching strategies also admit solutions to so-called contextual bandit problems. Pricing strategies establish a price for each lever. For example, as illustrated with the POKER algorithm, the price can be the sum of the expected reward plus an estimation of extra future rewards that will gain through the additional knowledge. The lever of highest price is always pulled. A useful generalization of the multi-armed bandit is the contextual multi-armed bandit. At each iteration an agent still has to choose between arms, but they also see a d-dimensional feature vector, the context vector they can use together with the rewards of the arms played in the past to make the choice of the arm to play. Over time, the learner's aim is to collect enough information about how the context vectors and rewards relate to each other, so that it can predict the next best arm to play by looking at the feature vectors. Many strategies exist that provide an approximate solution to the contextual bandit problem, and can be put into two broad categories detailed below. LinUCB (Upper Confidence Bound) algorithm: the authors assume a linear dependency between the expected reward of an action and its context and model the representation space using a set of linear predictors. LinRel (Linear Associative Reinforcement Learning) algorithm: Similar to LinUCB, but utilizes Singular-value decomposition rather than Ridge regression to obtain an estimate of confidence. UCBogram algorithm: The nonlinear reward functions are estimated using a piecewise constant estimator called a regressogram in nonparametric regression. Then, UCB is employed on each constant piece. Successive refinements of the partition of the context space are scheduled or chosen adaptively. Generalized linear algorithms: The reward distribution follows a generalized linear model, an extension to linear bandits. KernelUCB algorithm: a kernelized non-linear version of linearUCB, with efficient implementation and finite-time analysis. Bandit Forest algorithm: a random forest is built and analyzed w.r.t the random forest built knowing the joint distribution of contexts and rewards. Oracle-based algorithm: The algorithm reduces the contextual bandit problem into a series of supervised learning problem, and does not rely on typical realizability assumption on the reward function. 
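A small numpy simulation of two of the strategies described above, epsilon-greedy and Thompson sampling (probability matching with Beta posteriors), on a Bernoulli bandit; the arm probabilities, horizon and ε are arbitrary demo values, and regret is computed with the definition ρ = Tμ* − Σ r̂_t given earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])        # unknown arm success probabilities (demo values)
K, T = len(true_p), 5000

def run_epsilon_greedy(eps=0.1):
    counts, sums, total = np.zeros(K), np.zeros(K), 0.0
    for _ in range(T):
        if rng.random() < eps or counts.min() == 0:
            arm = int(rng.integers(K))                 # explore (also until every arm has been tried)
        else:
            arm = int(np.argmax(sums / counts))        # exploit the best empirical mean
        r = float(rng.random() < true_p[arm])          # Bernoulli reward
        counts[arm] += 1; sums[arm] += r; total += r
    return total

def run_thompson():
    alpha, beta, total = np.ones(K), np.ones(K), 0.0   # Beta(1, 1) priors on each arm mean
    for _ in range(T):
        arm = int(np.argmax(rng.beta(alpha, beta)))    # sample each posterior, play the best draw
        r = float(rng.random() < true_p[arm])
        alpha[arm] += r; beta[arm] += 1 - r; total += r
    return total

best_mean = true_p.max()
for name, total in [("eps-greedy", run_epsilon_greedy()), ("thompson", run_thompson())]:
    print(name, "regret ~", T * best_mean - total)     # rho = T*mu* - sum of collected rewards
```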
In practice, there is usually a cost associated with the resource consumed by each action and the total cost is limited by a budget in many applications such as crowdsourcing and clinical trials. Constrained contextual bandit (CCB) is such a model that considers both the time and budget constraints in a multi-armed bandit setting. A. Badanidiyuru et al. first studied contextual bandits with budget constraints, also referred to as Resourceful Contextual Bandits, and show that a O ( T ) {\displaystyle O({\sqrt {T}})} regret is achievable. However, their work focuses on a finite set of policies, and the algorithm is computationally inefficient. A simple algorithm with logarithmic regret is proposed in: UCB-ALP algorithm: The framework of UCB-ALP is shown in the right figure. UCB-ALP is a simple algorithm that combines the UCB method with an Adaptive Linear Programming (ALP) algorithm, and can be easily deployed in practical systems. It is the first work that show how to achieve logarithmic regret in constrained contextual bandits. Although is devoted to a special case with single budget constraint and fixed cost, the results shed light on the design and analysis of algorithms for more general CCB problems. Another variant of the multi-armed bandit problem is called the adversarial bandit, first introduced by Auer and Cesa-Bianchi (1998). In this variant, at each iteration, an agent chooses an arm and an adversary simultaneously chooses the payoff structure for each arm. This is one of the strongest generalizations of the bandit problem as it removes all assumptions of the distribution and a solution to the adversarial bandit problem is a generalized solution to the more specific bandit problems. An example often considered for adversarial bandits is the iterated prisoner's dilemma. In this example, each adversary has two arms to pull. They can either Deny or Confess. Standard stochastic bandit algorithms don't work very well with these iterations. For example, if the opponent cooperates in the first 100 rounds, defects for the next 200, then cooperate in the following 300, etc. then algorithms such as UCB won't be able to react very quickly to these changes. This is because after a certain point sub-optimal arms are rarely pulled to limit exploration and focus on exploitation. When the environment changes the algorithm is unable to adapt or may not even detect the change. EXP3 is a popular algorithm for adversarial multiarmed bandits, suggested and analyzed in this setting by Auer et al. [2002b]. Recently there was an increased interest in the performance of this algorithm in the stochastic setting, due to its new applications to stochastic multi-armed bandits with side information [Seldin et al., 2011] and to multi-armed bandits in the mixed stochastic-adversarial setting [Bubeck and Slivkins, 2012]. The paper presented an empirical evaluation and improved analysis of the performance of the EXP3 algorithm in the stochastic setting, as well as a modification of the EXP3 algorithm capable of achieving "logarithmic" regret in stochastic environment. Parameters: Real γ ∈ ( 0 , 1 ] {\displaystyle \gamma \in (0,1]} Initialisation: ω i ( 1 ) = 1 {\displaystyle \omega _{i}(1)=1} for i = 1 , . . . , K {\displaystyle i=1,...,K} For each t = 1, 2, ..., T 1. Set p i ( t ) = ( 1 − γ ) ω i ( t ) ∑ j = 1 K ω j ( t ) + γ K {\displaystyle p_{i}(t)=(1-\gamma ){\frac {\omega _{i}(t)}{\sum _{j=1}^{K}\omega _{j}(t)}}+{\frac {\gamma }{K}}} i = 1 , . . . , K {\displaystyle i=1,...,K} 2. 
Draw i t {\displaystyle i_{t}} randomly according to the probabilities p 1 ( t ) , . . . , p K ( t ) {\displaystyle p_{1}(t),...,p_{K}(t)} 3. Receive reward x i t ( t ) ∈ [ 0 , 1 ] {\displaystyle x_{i_{t}}(t)\in [0,1]} 4. For j = 1 , . . . , K {\displaystyle j=1,...,K} set: x ^ j ( t ) = { x j ( t ) / p j ( t ) if j = i t 0 , otherwise {\displaystyle {\hat {x}}_{j}(t)={\begin{cases}x_{j}(t)/p_{j}(t)&{\text{if }}j=i_{t}\\0,&{\text{otherwise}}\end{cases}}} ω j ( t + 1 ) = ω j ( t ) exp ⁡ ( γ x ^ j ( t ) / K ) {\displaystyle \omega _{j}(t+1)=\omega _{j}(t)\exp(\gamma {\hat {x}}_{j}(t)/K)} Exp3 chooses an arm at random with probability ( 1 − γ ) {\displaystyle (1-\gamma )} it prefers arms with higher weights (exploit), it chooses with probability γ {\displaystyle \gamma } to uniformly randomly explore. After receiving the rewards the weights are updated. The exponential growth significantly increases the weight of good arms. The (external) regret of the Exp3 algorithm is at most O ( K T l o g ( K ) ) {\displaystyle O({\sqrt {KTlog(K)}})} Parameters: Real η {\displaystyle \eta } Initialisation: ∀ i : R i ( 1 ) = 0 {\displaystyle \forall i:R_{i}(1)=0} For each t = 1,2,...,T 1. For each arm generate a random noise from an exponential distribution ∀ i : Z i ( t ) ∼ E x p ( η ) {\displaystyle \forall i:Z_{i}(t)\sim Exp(\eta )} 2. Pull arm I ( t ) {\displaystyle I(t)} : I ( t ) = a r g max i { R i ( t ) + Z i ( t ) } {\displaystyle I(t)=arg\max _{i}\{R_{i}(t)+Z_{i}(t)\}} Add noise to each arm and pull the one with the highest value 3. Update value: R I ( t ) ( t + 1 ) = R I ( t ) ( t ) + x I ( t ) ( t ) {\displaystyle R_{I(t)}(t+1)=R_{I(t)}(t)+x_{I(t)}(t)} The rest remains the same We follow the arm that we think has the best performance so far adding exponential noise to it to provide exploration. In the original specification and in the above variants, the bandit problem is specified with a discrete and finite number of arms, often indicated by the variable K {\displaystyle K} . In the infinite armed case, introduced by Agrawal (1995), the "arms" are a continuous variable in K {\displaystyle K} dimensions. This framework refers to the multi-armed bandit problem in a non-stationary setting (i.e., in presence of concept drift). In the non-stationary setting, it is assumed that the expected reward for an arm k {\displaystyle k} can change at every time step t ∈ T {\displaystyle t\in {\mathcal {T}}} : μ t − 1 k ≠ μ t k {\displaystyle \mu _{t-1}^{k}\neq \mu _{t}^{k}} . Thus, μ t k {\displaystyle \mu _{t}^{k}} no longer represents the whole sequence of expected (stationary) rewards for arm k {\displaystyle k} . Instead, μ k {\displaystyle \mu ^{k}} denotes the sequence of expected rewards for arm k {\displaystyle k} , defined as μ k = { μ t k } t = 1 T {\displaystyle \mu ^{k}=\{\mu _{t}^{k}\}_{t=1}^{T}} . A dynamic oracle represents the optimal policy to be compared with other policies in the non-stationary setting. The dynamic oracle optimises the expected reward at each step t ∈ T {\displaystyle t\in {\mathcal {T}}} by always selecting the best arm, with expected reward of μ t ∗ {\displaystyle \mu _{t}^{*}} . 
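A direct numpy transcription of the EXP3 pseudocode above; the adversarial reward function used to exercise it is an arbitrary example, and the weights are rescaled each round purely for numerical stability (which leaves the sampling distribution unchanged).

```python
import numpy as np

def exp3(reward_fn, K, T, gamma=0.1, seed=0):
    """EXP3 for adversarial bandits, following the weight-update pseudocode above.
    reward_fn(arm, t) must return a reward in [0, 1]."""
    rng = np.random.default_rng(seed)
    w = np.ones(K)                                     # omega_i(1) = 1
    total = 0.0
    for t in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / K      # mix weights with uniform exploration
        arm = int(rng.choice(K, p=p))
        x = reward_fn(arm, t)                          # observed reward in [0, 1]
        total += x
        x_hat = np.zeros(K)
        x_hat[arm] = x / p[arm]                        # importance-weighted reward estimate
        w *= np.exp(gamma * x_hat / K)                 # exponential weight update
        w /= w.max()                                   # rescale for stability; p is unchanged
    return total

# Example adversary: arm 0 pays 1 for the first half of the horizon, arm 1 afterwards.
T = 10000
reward = exp3(lambda arm, t: float(arm == (0 if t < T // 2 else 1)), K=3, T=T)
print("total reward:", reward)
```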
Thus, the cumulative expected reward D ( T ) {\displaystyle {\mathcal {D}}(T)} for the dynamic oracle at final time step T {\displaystyle T} is defined as: D ( T ) = ∑ t = 1 T μ t ∗ {\displaystyle {\mathcal {D}}(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}} Hence, the regret ρ π ( T ) {\displaystyle \rho ^{\pi }(T)} for policy π {\displaystyle \pi } is computed as the difference between D ( T ) {\displaystyle {\mathcal {D}}(T)} and the cumulative expected reward at step T {\displaystyle T} for policy π {\displaystyle \pi } : ρ π ( T ) = ∑ t = 1 T μ t ∗ − E π μ [ ∑ t = 1 T r t ] = D ( T ) − E π μ [ ∑ t = 1 T r t ] {\displaystyle \rho ^{\pi }(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right]={\mathcal {D}}(T)-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right]} Garivier and Moulines derive some of the first results with respect to bandit problems where the underlying model can change during play. A number of algorithms were presented to deal with this case, including Discounted UCB and Sliding-Window UCB. A similar approach based on Thompson Sampling algorithm is the f-Discounted-Sliding-Window Thompson Sampling (f-dsw TS) proposed by Cavenaghi et al. The f-dsw TS algorithm exploits a discount factor on the reward history and an arm-related sliding window to contrast concept drift in non-stationary environments. Another work by Burtini et al. introduces a weighted least squares Thompson sampling approach (WLS-TS), which proves beneficial in both the known and unknown non-stationary cases. Many variants of the problem have been proposed in recent years. The dueling bandit variant was introduced by Yue et al. (2012) to model the exploration-versus-exploitation tradeoff for relative feedback. In this variant the gambler is allowed to pull two levers at the same time, but they only get a binary feedback telling which lever provided the best reward. The difficulty of this problem stems from the fact that the gambler has no way of directly observing the reward of their actions. The earliest algorithms for this problem are InterleaveFiltering, Beat-The-Mean. The relative feedback of dueling bandits can also lead to voting paradoxes. A solution is to take the Condorcet winner as a reference. More recently, researchers have generalized algorithms from traditional MAB to dueling bandits: Relative Upper Confidence Bounds (RUCB), Relative EXponential weighing (REX3), Copeland Confidence Bounds (CCB), Relative Minimum Empirical Divergence (RMED), and Double Thompson Sampling (DTS). Approaches using multiple bandits that cooperate sharing knowledge in order to better optimize their performance started in 2013 with "A Gang of Bandits", an algorithm relying on a similarity graph between the different bandit problems to share knowledge. The need of a similarity graph was removed in 2014 by the work on the CLUB algorithm. Following this work, several other researchers created algorithms to learn multiple models at the same time under bandit feedback. For example, COFIBA was introduced by Li and Karatzoglou and Gentile (SIGIR 2016), where the classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data. The Combinatorial Multiarmed Bandit (CMAB) problem arises when instead of a single discrete variable to choose from, an agent needs to choose values for a set of variables. Assuming each variable is discrete, the number of possible choices per iteration is exponential in the number of variables. 
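One common ingredient of such non-stationary methods is a discounted (or windowed) estimate of each arm's mean reward. The sketch below shows a generic discounted empirical mean in numpy with an arbitrary drifting two-armed example; it illustrates the idea of down-weighting old observations rather than any specific published algorithm.

```python
import numpy as np

def discounted_means(rewards, arms, K, gamma=0.95):
    """Discounted empirical mean reward per arm: every past observation is
    down-weighted by gamma at each step, so the estimate tracks a drifting mean."""
    num, den = np.zeros(K), np.zeros(K)
    for r, a in zip(rewards, arms):
        num *= gamma; den *= gamma                    # decay all past observations
        num[a] += r; den[a] += 1.0                    # add the new observation
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

# Toy drift: arm 0 is best for the first 500 rounds, arm 1 afterwards.
rng = np.random.default_rng(0)
arms = rng.integers(0, 2, size=1000)
p = np.where(np.arange(1000) < 500,
             np.where(arms == 0, 0.8, 0.2),
             np.where(arms == 0, 0.2, 0.8))
rewards = (rng.random(1000) < p).astype(float)
print(discounted_means(rewards, arms, K=2))           # reflects the post-drift means
```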
Several CMAB settings have been studied in the literature, from settings where the variables are binary to more general settings where each variable can take an arbitrary set of values. Related concepts and strategies include the Gittins index (a powerful, general strategy for analyzing bandit problems), the greedy algorithm, optimal stopping, search theory, and stochastic scheduling. Guha, S.; Munagala, K.; Shi, P. (2010), "Approximation algorithms for restless bandit problems", Journal of the ACM, 58: 1–50, arXiv:0711.3861, doi:10.1145/1870103.1870106. Dayanik, S.; Powell, W.; Yamazaki, K. (2008), "Index policies for discounted bandit problems with availability constraints", Advances in Applied Probability, 40 (2): 377–400, doi:10.1239/aap/1214950209. Powell, Warren B. (2007), "Chapter 10", Approximate Dynamic Programming: Solving the Curses of Dimensionality, New York: John Wiley and Sons, ISBN 978-0-470-17155-4. Robbins, H. (1952), "Some aspects of the sequential design of experiments", Bulletin of the American Mathematical Society, 58 (5): 527–535, doi:10.1090/S0002-9904-1952-09620-8. Sutton, Richard; Barto, Andrew (1998), Reinforcement Learning, MIT Press, ISBN 978-0-262-19398-6. Allesiardo, Robin (2014), "A Neural Networks Committee for the Contextual Bandit Problem", Neural Information Processing – 21st International Conference, ICONIP 2014, Malaysia, November 3–6, 2014, Proceedings, Lecture Notes in Computer Science, vol. 8834, Springer, pp. 374–381, arXiv:1409.8191, doi:10.1007/978-3-319-12637-1_47, ISBN 978-3-319-12636-4. Weber, Richard (1992), "On the Gittins index for multiarmed bandits", Annals of Applied Probability, 2 (4): 1024–1033, doi:10.1214/aoap/1177005588, JSTOR 2959678. Katehakis, M. N.; Derman, C. (1986), "Computing optimal sequential allocation rules in clinical trials", Adaptive Statistical Procedures and Related Topics, Institute of Mathematical Statistics Lecture Notes – Monograph Series, vol. 8, pp. 29–39, doi:10.1214/lnms/1215540286, ISBN 978-0-940600-09-6, JSTOR 4355518. Katehakis, Michael N.; Veinott, Arthur F., Jr. (1987), "The multi-armed bandit problem: decomposition and computation", Mathematics of Operations Research, 12 (2): 262–268, doi:10.1287/moor.12.2.262, JSTOR 3689689.
Multi-task learning
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately. Early versions of MTL were called "hints". In a widely cited 1997 paper, Rich Caruana gave the following characterization:Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam-filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones, for example an English speaker may find that all emails in Russian are spam, not so for Russian speakers. Yet there is a definite commonality in this classification task across users, for example one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification. Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly under sampled. However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks. The key challenge in multi-task learning, is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different task agree with each other, or contradict each other. There are several ways to address this challenge: Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases. Task relatedness can be imposed a priori or learned from the data. Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly. 
For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains. One can attempt learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which builds on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods. Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large scale machine learning projects such as the deep convolutional neural network GoogLeNet, an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Or the pre-trained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task. Traditionally Multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL). Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from previous experience of another learner to quickly adapt to their new environment. Such group-adaptive learning has numerous applications, from predicting financial time-series, through content recommendation systems, to visual understanding for adaptive autonomous agents. Multitask optimization: In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models. Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representation, i.e., the gradients of different tasks point to opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics. These methods include subtracting the projection of conflicted gradients, applying techniques from game theory, and using Bayesian modeling to get a distribution over gradients. 
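As an illustration of gradient aggregation for mitigating negative transfer, the following numpy sketch removes conflicting gradient components before averaging, in the spirit of the projection-based methods mentioned above; the two example gradients are arbitrary.

```python
import numpy as np

def project_conflicts(grads):
    """Gradient-surgery-style aggregation: whenever two task gradients point in
    opposing directions (negative inner product), subtract from one the component
    along the other before averaging into a joint update direction."""
    out = [g.copy() for g in grads]
    for i in range(len(out)):
        for j, h in enumerate(grads):
            if i != j and out[i] @ h < 0:              # conflicting directions
                out[i] -= (out[i] @ h) / (h @ h) * h   # remove the projection onto h
    return np.mean(out, axis=0)                        # joint update direction

g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])                             # conflicts with g1
print(project_conflicts([g1, g2]))
```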
The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015. Suppose the training data set is S t = { ( x i t , y i t ) } i = 1 n t {\displaystyle {\mathcal {S}}_{t}=\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{n_{t}}} , with x i t ∈ X {\displaystyle x_{i}^{t}\in {\mathcal {X}}} , y i t ∈ Y {\displaystyle y_{i}^{t}\in {\mathcal {Y}}} , where t indexes task, and t ∈ 1 , . . . , T {\displaystyle t\in 1,...,T} . Let n = ∑ t = 1 T n t {\displaystyle n=\sum _{t=1}^{T}n_{t}} . In this setting there is a consistent input and output space and the same loss function L : R × R → R + {\displaystyle {\mathcal {L}}:\mathbb {R} \times \mathbb {R} \rightarrow \mathbb {R} _{+}} for each task. This results in the regularized machine learning problem {\displaystyle \min _{f\in {\mathcal {H}}}\sum _{t=1}^{T}{\frac {1}{n_{t}}}\sum _{i=1}^{n_{t}}{\mathcal {L}}(y_{i}^{t},f_{t}(x_{i}^{t}))+\lambda \|f\|_{\mathcal {H}}^{2}} (equation 1), where H {\displaystyle {\mathcal {H}}} is a vector valued reproducing kernel Hilbert space with functions f : X → Y T {\displaystyle f:{\mathcal {X}}\rightarrow {\mathcal {Y}}^{T}} having components f t : X → Y {\displaystyle f_{t}:{\mathcal {X}}\rightarrow {\mathcal {Y}}} . The reproducing kernel for the space H {\displaystyle {\mathcal {H}}} of functions f : X → R T {\displaystyle f:{\mathcal {X}}\rightarrow \mathbb {R} ^{T}} is a symmetric matrix-valued function Γ : X × X → R T × T {\displaystyle \Gamma :{\mathcal {X}}\times {\mathcal {X}}\rightarrow \mathbb {R} ^{T\times T}} , such that Γ ( ⋅ , x ) c ∈ H {\displaystyle \Gamma (\cdot ,x)c\in {\mathcal {H}}} and the following reproducing property holds: ⟨ f ( x ) , c ⟩ R T = ⟨ f , Γ ( ⋅ , x ) c ⟩ H {\displaystyle \langle f(x),c\rangle _{\mathbb {R} ^{T}}=\langle f,\Gamma (\cdot ,x)c\rangle _{\mathcal {H}}} . The reproducing kernel gives rise to a representer theorem showing that any solution to equation 1 has the form f ( x ) = ∑ i = 1 n Γ ( x , x i ) c i {\displaystyle f(x)=\sum _{i=1}^{n}\Gamma (x,x_{i})c_{i}} with coefficient vectors c i ∈ R T {\displaystyle c_{i}\in \mathbb {R} ^{T}} . The form of the kernel Γ induces both the representation of the feature space and structures the output across tasks. A natural simplification is to choose a separable kernel, which factors into separate kernels on the input space X and on the tasks { 1 , . . . , T } {\displaystyle \{1,...,T\}} . In this case the kernel relating scalar components f t {\displaystyle f_{t}} and f s {\displaystyle f_{s}} is given by γ ( ( x i , t ) , ( x j , s ) ) = k ( x i , x j ) k T ( s , t ) = k ( x i , x j ) A s , t {\textstyle \gamma ((x_{i},t),(x_{j},s))=k(x_{i},x_{j})k_{T}(s,t)=k(x_{i},x_{j})A_{s,t}} . For vector valued functions f ∈ H {\displaystyle f\in {\mathcal {H}}} we can write Γ ( x i , x j ) = k ( x i , x j ) A {\displaystyle \Gamma (x_{i},x_{j})=k(x_{i},x_{j})A} , where k is a scalar reproducing kernel, and A is a symmetric positive semi-definite T × T {\displaystyle T\times T} matrix. Henceforth denote S + T = { PSD matrices } ⊂ R T × T {\displaystyle S_{+}^{T}=\{{\text{PSD matrices}}\}\subset \mathbb {R} ^{T\times T}} . This factorization property, separability, implies that the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by A. Methods for non-separable kernels Γ are a current field of research. For the separable case, the representer theorem reduces to f ( x ) = ∑ i = 1 N k ( x , x i ) A c i {\textstyle f(x)=\sum _{i=1}^{N}k(x,x_{i})Ac_{i}} . 
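The separable-kernel prediction rule f(x) = Σ_i k(x, x_i) A c_i can be evaluated directly. The following minimal Python/NumPy sketch does so on a toy problem; the RBF input kernel, the random coefficient vectors, and the particular task matrix A are illustrative assumptions, not quantities taken from the text.

import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Scalar input kernel k(x, z)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def predict(x, X_train, A, C, gamma=1.0):
    """Evaluate f(x) = sum_i k(x, x_i) A c_i, returning one value per task
    (the separable-kernel representer form stated above)."""
    T = A.shape[0]
    f = np.zeros(T)
    for x_i, c_i in zip(X_train, C):
        f += rbf_kernel(x, x_i, gamma) * (A @ c_i)
    return f

# Toy setup: n = 4 training inputs in R^2, T = 3 tasks.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(4, 2))
C = rng.normal(size=(4, 3))              # coefficient vectors c_i in R^T
A = np.eye(3) + 0.1 * np.ones((3, 3))    # symmetric PSD task matrix
print(predict(rng.normal(size=2), X_train, A, C))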
The model output on the training data is then KCA , where K is the n × n {\displaystyle n\times n} empirical kernel matrix with entries K i , j = k ( x i , x j ) {\textstyle K_{i,j}=k(x_{i},x_{j})} , and C is the n × T {\displaystyle n\times T} matrix of rows c i {\displaystyle c_{i}} . With the separable kernel, equation 1 can be rewritten as {\displaystyle \min _{C\in \mathbb {R} ^{n\times T}}V(Y,KCA)+\lambda \,tr(KCAC^{\top })} (problem P), where V is a (weighted) average of L applied entry-wise to Y and KCA. (The weight is zero if Y i t {\displaystyle Y_{i}^{t}} is a missing observation). Note the second term in P can be derived as follows: ‖ f ‖ H 2 = ⟨ ∑ i = 1 n k ( ⋅ , x i ) A c i , ∑ j = 1 n k ( ⋅ , x j ) A c j ⟩ H = ∑ i , j = 1 n ⟨ k ( ⋅ , x i ) A c i , k ( ⋅ , x j ) A c j ⟩ H (bilinearity) = ∑ i , j = 1 n ⟨ k ( x i , x j ) A c i , c j ⟩ R T (reproducing property) = ∑ i , j = 1 n k ( x i , x j ) c i ⊤ A c j = t r ( K C A C ⊤ ) {\displaystyle {\begin{aligned}\|f\|_{\mathcal {H}}^{2}&=\left\langle \sum _{i=1}^{n}k(\cdot ,x_{i})Ac_{i},\sum _{j=1}^{n}k(\cdot ,x_{j})Ac_{j}\right\rangle _{\mathcal {H}}\\&=\sum _{i,j=1}^{n}\langle k(\cdot ,x_{i})Ac_{i},k(\cdot ,x_{j})Ac_{j}\rangle _{\mathcal {H}}&{\text{(bilinearity)}}\\&=\sum _{i,j=1}^{n}\langle k(x_{i},x_{j})Ac_{i},c_{j}\rangle _{\mathbb {R} ^{T}}&{\text{(reproducing property)}}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})c_{i}^{\top }Ac_{j}=tr(KCAC^{\top })\end{aligned}}} There are three largely equivalent ways to represent task structure: through a regularizer, through an output metric, and through an output mapping. Via the regularizer formulation, one can represent a variety of task structures easily. Letting A † = γ I T + ( γ − λ ) 1 T 1 1 ⊤ {\textstyle A^{\dagger }=\gamma I_{T}+(\gamma -\lambda ){\frac {1}{T}}\mathbf {1} \mathbf {1} ^{\top }} (where I T {\displaystyle I_{T}} is the T × T identity matrix, and 1 1 ⊤ {\textstyle \mathbf {1} \mathbf {1} ^{\top }} is the T × T matrix of ones) is equivalent to letting γ control the variance ∑ t | | f t − f ¯ | | H k {\textstyle \sum _{t}||f_{t}-{\bar {f}}||_{{\mathcal {H}}_{k}}} of tasks from their mean 1 T ∑ t f t {\textstyle {\frac {1}{T}}\sum _{t}f_{t}} . For example, blood levels of some biomarker may be taken on T patients at n t {\displaystyle n_{t}} time points during the course of a day and interest may lie in regularizing the variance of the predictions across patients. Letting A † = α I T + ( α − λ ) M {\displaystyle A^{\dagger }=\alpha I_{T}+(\alpha -\lambda )M} , where M t , s = 1 | G r | I ( t , s ∈ G r ) {\displaystyle M_{t,s}={\frac {1}{|G_{r}|}}\mathbb {I} (t,s\in G_{r})} , is equivalent to letting α {\displaystyle \alpha } control the variance measured with respect to a group mean: ∑ r ∑ t ∈ G r | | f t − 1 | G r | ∑ s ∈ G r f s | | {\displaystyle \sum _{r}\sum _{t\in G_{r}}||f_{t}-{\frac {1}{|G_{r}|}}\sum _{s\in G_{r}}f_{s}||} . (Here | G r | {\displaystyle |G_{r}|} is the cardinality of group r, and I {\displaystyle \mathbb {I} } is the indicator function). For example, people in different political parties (groups) might be regularized together with respect to predicting the favorability rating of a politician. Note that this penalty reduces to the first when all tasks are in the same group. Letting A † = δ I T + ( δ − λ ) L {\displaystyle A^{\dagger }=\delta I_{T}+(\delta -\lambda )L} , where L = D − M {\displaystyle L=D-M} is the Laplacian for the graph with adjacency matrix M giving pairwise similarities of tasks. 
This is equivalent to giving a larger penalty to the distance separating tasks t and s when they are more similar (according to the weight M t , s {\displaystyle M_{t,s}} ), i.e., δ {\displaystyle \delta } regularizes ∑ t , s | | f t − f s | | H k 2 M t , s {\displaystyle \sum _{t,s}||f_{t}-f_{s}||_{{\mathcal {H}}_{k}}^{2}M_{t,s}} . All of the above choices of A also induce the additional regularization term λ ∑ t | | f | | H k 2 {\textstyle \lambda \sum _{t}||f||_{{\mathcal {H}}_{k}}^{2}} which penalizes complexity in f more broadly. Learning problem P can be generalized to admit learning the task matrix A by adding a penalty F(A) to the objective, giving {\displaystyle \min _{C\in \mathbb {R} ^{n\times T},\,A\in S_{+}^{T}}V(Y,KCA)+\lambda \,tr(KCAC^{\top })+F(A)} (problem Q). The choice of F : S + T → R + {\displaystyle F:S_{+}^{T}\rightarrow \mathbb {R} _{+}} must be designed to learn matrices A of a given type; see the special cases below. Restricting to the case of convex losses and coercive penalties, Ciliberto et al. have shown that although Q is not convex jointly in C and A, a related problem is jointly convex. Specifically, on the convex set C = { ( C , A ) ∈ R n × T × S + T | R a n g e ( C ⊤ K C ) ⊆ R a n g e ( A ) } {\displaystyle {\mathcal {C}}=\{(C,A)\in \mathbb {R} ^{n\times T}\times S_{+}^{T}|Range(C^{\top }KC)\subseteq Range(A)\}} , the equivalent problem {\displaystyle \min _{(C,A)\in {\mathcal {C}}}V(Y,KC)+\lambda \,tr(A^{\dagger }C^{\top }KC)+F(A)} (problem R) is convex with the same minimum value. And if ( C R , A R ) {\displaystyle (C_{R},A_{R})} is a minimizer for R then ( C R A R † , A R ) {\displaystyle (C_{R}A_{R}^{\dagger },A_{R})} is a minimizer for Q. R may be solved by a barrier method on a closed set, adding the barrier term δ 2 t r ( A † ) {\displaystyle \delta ^{2}tr(A^{\dagger })} to the objective and yielding a perturbed problem S. The perturbation via the barrier δ 2 t r ( A † ) {\displaystyle \delta ^{2}tr(A^{\dagger })} forces the objective functions to be equal to + ∞ {\displaystyle +\infty } on the boundary of R n × T × S + T {\displaystyle R^{n\times T}\times S_{+}^{T}} . S can be solved with a block coordinate descent method, alternating in C and A. This results in a sequence of minimizers ( C m , A m ) {\displaystyle (C_{m},A_{m})} in S that converges to the solution in R as δ m → 0 {\displaystyle \delta _{m}\rightarrow 0} , and hence gives the solution to Q. Spectral penalties - Dinuzzo et al. suggested setting F as the Frobenius norm t r ( A ⊤ A ) {\displaystyle {\sqrt {tr(A^{\top }A)}}} . They optimized Q directly using block coordinate descent, not accounting for difficulties at the boundary of R n × T × S + T {\displaystyle \mathbb {R} ^{n\times T}\times S_{+}^{T}} . Clustered tasks learning - Jacob et al. suggested learning A in the setting where T tasks are organized in R disjoint clusters. In this case let E ∈ { 0 , 1 } T × R {\displaystyle E\in \{0,1\}^{T\times R}} be the matrix with E t , r = I ( task t ∈ group r ) {\displaystyle E_{t,r}=\mathbb {I} ({\text{task }}t\in {\text{group }}r)} . Setting M = I − E † E T {\displaystyle M=I-E^{\dagger }E^{T}} , and U = 1 T 11 ⊤ {\displaystyle U={\frac {1}{T}}\mathbf {11} ^{\top }} , the task matrix A † {\displaystyle A^{\dagger }} can be parameterized as a function of M {\displaystyle M} : A † ( M ) = ϵ M U + ϵ B ( M − U ) + ϵ ( I − M ) {\displaystyle A^{\dagger }(M)=\epsilon _{M}U+\epsilon _{B}(M-U)+\epsilon (I-M)} , with terms that penalize the average, the between-cluster variance, and the within-cluster variance of the task predictions, respectively. The set of feasible M is not convex, but there is a convex relaxation S c = { M ∈ S + T : I − M ∈ S + T ∧ t r ( M ) = r } {\displaystyle {\mathcal {S}}_{c}=\{M\in S_{+}^{T}:I-M\in S_{+}^{T}\land tr(M)=r\}} . In this formulation, F ( A ) = I ( A ( M ) ∈ { A : M ∈ S C } ) {\displaystyle F(A)=\mathbb {I} (A(M)\in \{A:M\in {\mathcal {S}}_{C}\})} . 
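A minimal Python/NumPy sketch of the quantities above: it builds the mean-variance choice of A† from the stated formula, inverts it to obtain A, and evaluates the separable-kernel penalty tr(KCAC⊤); the RBF kernel, random data, and hyperparameter values are illustrative assumptions.

import numpy as np

def mean_variance_A_dagger(T, gamma, lam):
    """A^dagger = gamma * I_T + (gamma - lam) * (1/T) * 11^T, as in the text."""
    ones = np.ones((T, T)) / T
    return gamma * np.eye(T) + (gamma - lam) * ones

def rkhs_penalty(K, C, A):
    """||f||_H^2 = tr(K C A C^T) for the separable-kernel model."""
    return np.trace(K @ C @ A @ C.T)

rng = np.random.default_rng(0)
n, T = 5, 3
X = rng.normal(size=(n, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF Gram matrix
C = rng.normal(size=(n, T))
A = np.linalg.pinv(mean_variance_A_dagger(T, gamma=1.0, lam=0.5))
print(rkhs_penalty(K, C, A))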
Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has a low-rank factorization. However, these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases. Non-separable kernels - Separable kernels are limited; in particular, they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels. A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR) implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning, Multi-Task Learning with Joint Feature Selection, Robust Multi-Task Feature Learning, Trace-Norm Regularized Multi-Task Learning, Alternating Structural Optimization, Incoherent Low-Rank and Sparse Learning, Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning, and Multi-Task Learning with Graph Structures. Other related software and research groups include: the Biosignals Intelligence Group at UIUC; the Washington University in St. Louis Department of Computer Science; the Multi-Task Learning via Structural Regularization Package; and the Online Multi-Task Learning Toolkit (OMT), a general-purpose online multi-task learning toolkit based on conditional random field models and stochastic gradient descent training (C#, .NET).
Multimodal sentiment analysis
Multimodal sentiment analysis is an extension of traditional text-based sentiment analysis that incorporates additional modalities such as audio and visual data. It can be bimodal, which includes different combinations of two modalities, or trimodal, which incorporates three modalities. With the extensive amount of social media data available online in different forms such as videos and images, the conventional text-based sentiment analysis has evolved into more complex models of multimodal sentiment analysis, which can be applied in the development of virtual assistants, analysis of YouTube movie reviews, analysis of news videos, and emotion recognition (sometimes known as emotion detection) such as depression monitoring, among others. Similar to traditional sentiment analysis, one of the most basic tasks in multimodal sentiment analysis is sentiment classification, which classifies different sentiments into categories such as positive, negative, or neutral. The complexity of analyzing text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion. The performance of these fusion techniques and of the classification algorithms applied is influenced by the type of textual, audio, and visual features employed in the analysis.
Multiple instance learning
In machine learning, multiple-instance learning (MIL) is a type of supervised learning. Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. In the simple case of multiple-instance binary classification, a bag may be labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to either (i) induce a concept that will label individual instances correctly or (ii) learn how to label bags without inducing the concept. Babenko (2008) gives a simple example for MIL. Imagine several people, and each of them has a key chain that contains a few keys. Some of these people are able to enter a certain room, and some aren't. The task is then to predict whether a certain key or a certain key chain can get you into that room. To solve this problem we need to find the exact key that is common to all the "positive" key chains. If we can correctly identify this key, we can also correctly classify an entire key chain: positive if it contains the required key, or negative if it doesn't.
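A minimal Python sketch of the standard multiple-instance assumption in this binary setting, phrased in terms of the key-chain example (a bag is positive exactly when it contains at least one positive instance); the example data are illustrative.

def bag_label(instance_labels):
    """Standard multiple-instance assumption: a bag is positive iff it
    contains at least one positive instance."""
    return int(any(instance_labels))

# Key-chain analogy: 1 marks the key that opens the room.
key_chains = {"alice": [0, 1, 0], "bob": [0, 0, 0]}
print({person: bag_label(keys) for person, keys in key_chains.items()})
# {'alice': 1, 'bob': 0}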
Multiplicative weight update method
The multiplicative weights update method is an algorithmic technique most commonly used for decision making and prediction, and also widely deployed in game theory and algorithm design. The simplest use case is the problem of prediction from expert advice, in which a decision maker needs to iteratively decide on an expert whose advice to follow. The method assigns initial weights to the experts (usually identical initial weights), and updates these weights multiplicatively and iteratively according to the feedback of how well an expert performed: reducing a weight in case of poor performance, and increasing it otherwise. It was discovered repeatedly in very diverse fields such as machine learning (AdaBoost, Winnow, Hedge), optimization (solving linear programs), theoretical computer science (devising fast algorithms for LPs and SDPs), and game theory. "Multiplicative weights" refers to the iterative rule used in algorithms derived from the multiplicative weight update method. It goes by different names in the different fields where it was discovered or rediscovered. The earliest known version of this technique was in an algorithm named "fictitious play" which was proposed in game theory in the early 1950s. Grigoriadis and Khachiyan applied a randomized variant of "fictitious play" to solve two-player zero-sum games efficiently using the multiplicative weights algorithm. In this case, the player allocates higher weight to the actions that had a better outcome and chooses his strategy relying on these weights. In machine learning, Littlestone applied the earliest form of the multiplicative weights update rule in his famous winnow algorithm, which is similar to Minsky and Papert's earlier perceptron learning algorithm. Later, he generalized the winnow algorithm to the weighted majority algorithm. Freund and Schapire followed his steps and generalized the winnow algorithm in the form of the hedge algorithm. The multiplicative weights algorithm is also widely applied in computational geometry, such as in Kenneth Clarkson's algorithm for linear programming (LP) with a bounded number of variables in linear time. Later, Bronnimann and Goodrich employed analogous methods to find set covers for hypergraphs with small VC dimension. In the fields of operations research and online statistical decision making, the weighted majority algorithm and its more complicated versions have been found independently. In the computer science field, some researchers have previously observed the close relationships between multiplicative update algorithms used in different contexts. Young discovered the similarities between fast LP algorithms and Raghavan's method of pessimistic estimators for derandomization of randomized rounding algorithms; Klivans and Servedio linked boosting algorithms in learning theory to proofs of Yao's XOR Lemma; Garg and Khandekar defined a common framework for convex optimization problems that contains Garg-Konemann and Plotkin-Shmoys-Tardos as subcases. The Hedge algorithm is a special case of mirror descent. A binary decision needs to be made based on n experts' opinions to attain an associated payoff. In the first round, all experts' opinions have the same weight. The decision maker will make the first decision based on the majority of the experts' predictions. Then, in each successive round, the decision maker will repeatedly update the weight of each expert's opinion depending on the correctness of his prior predictions. 
Real-life examples include predicting whether it will rain tomorrow or whether the stock market will go up or down. Given a sequential game played between an adversary and an aggregator who is advised by N experts, the goal is for the aggregator to make as few mistakes as possible. Assume there is an expert among the N experts who always gives the correct prediction. In the halving algorithm, only the consistent experts are retained. Experts who make mistakes will be dismissed. For every decision, the aggregator decides by taking a majority vote among the remaining experts. Therefore, every time the aggregator makes a mistake, at least half of the remaining experts are dismissed. The aggregator makes at most log2(N) mistakes. Unlike the halving algorithm, which dismisses experts who have made mistakes, the weighted majority algorithm discounts their advice. Given the same "expert advice" setup, suppose we have n decisions, and we need to select one decision for each loop. In each loop, every decision incurs a cost. All costs will be revealed after making the choice. The cost is 0 if the expert is correct, and 1 otherwise. This algorithm's goal is to limit its cumulative losses to roughly the same as the best of the experts. A naive algorithm that makes its choice based on a simple majority vote in every iteration does not work, since the majority of the experts can be wrong consistently every time. The weighted majority algorithm corrects the above trivial algorithm by keeping weights over the experts instead of fixing the cost at either 1 or 0. This makes fewer mistakes compared to the halving algorithm. Initialization: Fix an η ≤ 1 / 2 {\displaystyle \eta \leq 1/2} . For each expert, associate the weight w i 1 {\displaystyle {w_{i}}^{1}} ≔1. For t = 1, 2, ..., T {\displaystyle T} : 1. Make the prediction given by the weighted majority of the experts' predictions based on their weights w 1 t , . . . , w n t {\displaystyle \mathbb {w_{1}} ^{t},...,\mathbb {w_{n}} ^{t}} . That is, choose 0 or 1 depending on which prediction has a higher total weight of experts advising it (breaking ties arbitrarily). 2. For every expert i that predicted wrongly, decrease his weight for the next round by multiplying it by a factor of (1-η): w i t + 1 {\displaystyle w_{i}^{t+1}} = ( 1 − η ) w i t {\displaystyle (1-\eta )w_{i}^{t}} (update rule). If η = 0 {\displaystyle \eta =0} , the weight of the expert's advice will remain the same. When η {\displaystyle \eta } increases, the weight of the expert's advice will decrease. Note that some researchers fix η = 1 / 2 {\displaystyle \eta =1/2} in the weighted majority algorithm. After T {\displaystyle T} steps, let m i T {\displaystyle m_{i}^{T}} be the number of mistakes of expert i and M T {\displaystyle M^{T}} be the number of mistakes our algorithm has made. Then we have the following bound for every i {\displaystyle i} : M T ≤ 2 ( 1 + η ) m i T + 2 ln ⁡ ( n ) η {\displaystyle M^{T}\leq 2(1+\eta )m_{i}^{T}+{\frac {2\ln(n)}{\eta }}} . In particular, this holds for i which is the best expert. Since the best expert will have the least m i T {\displaystyle m_{i}^{T}} , it will give the best bound on the number of mistakes made by the algorithm as a whole. The randomized variant of this algorithm can be motivated as follows: given the same setup with N experts, consider the special situation where the proportions of experts predicting positive and negative, counting the weights, are both close to 50%. Then there might be a tie. 
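Before turning to the randomized variant, here is a minimal Python/NumPy sketch of the deterministic weighted majority algorithm described above; the synthetic expert advice and the choice η = 0.5 are illustrative.

import numpy as np

def weighted_majority(expert_predictions, outcomes, eta=0.5):
    """Deterministic weighted majority algorithm.

    expert_predictions: array of shape (T, n) with 0/1 advice per round.
    outcomes: length-T array of true 0/1 outcomes.
    Returns the number of mistakes made by the aggregator."""
    T, n = expert_predictions.shape
    w = np.ones(n)
    mistakes = 0
    for t in range(T):
        advice = expert_predictions[t]
        # weighted vote: predict 1 if experts advising 1 carry more weight
        prediction = int(w[advice == 1].sum() >= w[advice == 0].sum())
        if prediction != outcomes[t]:
            mistakes += 1
        # penalize every expert that was wrong this round
        w[advice != outcomes[t]] *= (1 - eta)
    return mistakes

rng = np.random.default_rng(1)
preds = rng.integers(0, 2, size=(50, 5))
truth = preds[:, 0]            # expert 0 happens to be always right
print(weighted_majority(preds, truth))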
Following the weight update rule of the weighted majority algorithm, the predictions made by the algorithm are instead randomized. The algorithm calculates the probabilities of experts predicting positive or negative, and then makes a random decision based on the computed fraction: predict f ( x ) = { 1 with probability q 1 W 0 otherwise {\displaystyle f(x)={\begin{cases}1&{\text{with probability}}{\frac {q_{1}}{W}}\\0&{\text{otherwise}}\end{cases}}} where W = ∑ i w i = q 0 + q 1 {\displaystyle W=\sum _{i}{w_{i}}=q_{0}+q_{1}} . The number of mistakes made by the randomized weighted majority algorithm is bounded as: E [ # mistakes of the learner ] ≤ α β ( # mistakes of the best expert ) + c β ln ⁡ ( N ) {\displaystyle E\left[\#{\text{mistakes of the learner}}\right]\leq \alpha _{\beta }\left(\#{\text{ mistakes of the best expert}}\right)+c_{\beta }\ln(N)} where α β = ln ⁡ ( 1 β ) 1 − β {\displaystyle \alpha _{\beta }={\frac {\ln({\frac {1}{\beta }})}{1-\beta }}} and c β = 1 1 − β {\displaystyle c_{\beta }={\frac {1}{1-\beta }}} . Note that only the learning algorithm is randomized. The underlying assumption is that the examples and experts' predictions are not random. The only randomness is the randomness where the learner makes his own prediction. In this randomized algorithm, α β → 1 {\displaystyle \alpha _{\beta }\rightarrow 1} if β → 1 {\displaystyle \beta \rightarrow 1} . Compared to the deterministic weighted majority algorithm, this randomization roughly halves the number of mistakes the algorithm makes. However, it is important to note that in some research, people define η = 1 / 2 {\displaystyle \eta =1/2} in the weighted majority algorithm and allow 0 ≤ η ≤ 1 {\displaystyle 0\leq \eta \leq 1} in the randomized weighted majority algorithm. The multiplicative weights method is usually used to solve a constrained optimization problem. Let each expert be a constraint in the problem, and let the events represent the points in the region of interest. The punishment of an expert corresponds to how well its corresponding constraint is satisfied on the point represented by an event. Suppose we were given the distribution P {\displaystyle P} on experts. Let A {\displaystyle A} = payoff matrix of a finite two-player zero-sum game, with n {\displaystyle n} rows. When the row player p r {\displaystyle p_{r}} uses plan i {\displaystyle i} and the column player p c {\displaystyle p_{c}} uses plan j {\displaystyle j} , the payoff of player p c {\displaystyle p_{c}} is A ( i , j ) {\displaystyle A\left(i,j\right)} ≔ A i j {\displaystyle A_{ij}} , assuming A ( i , j ) ∈ [ 0 , 1 ] {\displaystyle A\left(i,j\right)\in \left[0,1\right]} . If player p r {\displaystyle p_{r}} chooses action i {\displaystyle i} from a distribution P {\displaystyle P} over the rows, then the expected result for player p c {\displaystyle p_{c}} selecting action j {\displaystyle j} is A ( P , j ) = E i ∈ P [ A ( i , j ) ] {\displaystyle A\left(P,j\right)=E_{i\in P}\left[A\left(i,j\right)\right]} . To maximize its payoff, player p c {\displaystyle p_{c}} should choose the plan j {\displaystyle j} maximizing A ( P , j ) {\displaystyle A\left(P,j\right)} . Similarly, if the column player chooses plan j {\displaystyle j} from a distribution P {\displaystyle P} over the columns, the expected payoff is A ( i , P ) = E j ∈ P [ A ( i , j ) ] {\displaystyle A\left(i,P\right)=E_{j\in P}\left[A\left(i,j\right)\right]} ; the row player chooses plan i {\displaystyle i} so as to minimize this payoff. 
By John Von Neumann's Min-Max Theorem, we obtain: min P max j A ( P , j ) = max Q min i A ( i , Q ) {\displaystyle \min _{P}\max _{j}A\left(P,j\right)=\max _{Q}\min _{i}A\left(i,Q\right)} where P ranges over distributions over the rows, Q ranges over distributions over the columns, i ranges over the rows, and j over the columns. Then, let λ ∗ {\displaystyle \lambda ^{*}} denote the common value of the above quantities, also known as the "value of the game". Let δ > 0 {\displaystyle \delta >0} be an error parameter. To solve the zero-sum game up to an additive error of δ {\displaystyle \delta } , one seeks mixed strategies p and q such that λ ∗ − δ ≤ min i A ( i , q ) {\displaystyle \lambda ^{*}-\delta \leq \min _{i}A\left(i,q\right)} and max j A ( p , j ) ≤ λ ∗ + δ {\displaystyle \max _{j}A\left(p,j\right)\leq \lambda ^{*}+\delta } . So there is an algorithm solving the zero-sum game up to an additive factor of δ using O(log(n)/ δ 2 {\displaystyle \delta ^{2}} ) calls to ORACLE, with an additional processing time of O(n) per call. Bailey and Piliouras showed that although the time average behavior of multiplicative weights update converges to Nash equilibria in zero-sum games, the day-to-day (last iterate) behavior diverges away from it. In machine learning, Littlestone and Warmuth generalized the winnow algorithm to the weighted majority algorithm. Later, Freund and Schapire generalized it in the form of the hedge algorithm. The AdaBoost algorithm formulated by Yoav Freund and Robert Schapire also employs the multiplicative weight update method. Based on current knowledge in algorithms, the multiplicative weight update method was first used in Littlestone's winnow algorithm. It is used in machine learning to solve a linear program. We are given m {\displaystyle m} labeled examples ( a 1 , l 1 ) , … , ( a m , l m ) {\displaystyle \left(a_{1},l_{1}\right),{\text{…}},\left(a_{m},l_{m}\right)} , where a j ∈ R n {\displaystyle a_{j}\in \mathbb {R} ^{n}} are feature vectors and l j ∈ { − 1 , 1 } {\displaystyle l_{j}\in \left\{-1,1\right\}\quad } are their labels. The aim is to find non-negative weights such that for all examples, the sign of the weighted combination of the features matches its label. That is, require that l j a j x ≥ 0 {\displaystyle l_{j}a_{j}x\geq 0} for all j {\displaystyle j} . Without loss of generality, assume the total weight is 1 so that the weights form a distribution. Thus, for notational convenience, redefining a j {\displaystyle a_{j}} to be l j a j {\displaystyle l_{j}a_{j}} , the problem reduces to finding a solution to the following LP: ∀ j = 1 , 2 , … , m : a j x ≥ 0 {\displaystyle \forall j=1,2,{\text{…}},m:a_{j}x\geq 0} , 1 ∗ x = 1 {\displaystyle 1*x=1} , ∀ i : x i ≥ 0 {\displaystyle \forall i:x_{i}\geq 0} . This is the general form of an LP. The hedge algorithm is similar to the weighted majority algorithm. However, their exponential update rules are different. It is generally used to solve the problem of binary allocation in which we need to allocate different portions of resources among N different options. The loss with every option is available at the end of every iteration. The goal is to reduce the total loss suffered for a particular allocation. The allocation for the following iteration is then revised, based on the total loss suffered in the current iteration, using the multiplicative update. Assume the learning rate η > 0 {\displaystyle \eta >0} and for t ∈ [ T ] {\displaystyle t\in [T]} , p t {\displaystyle p^{t}} is picked by Hedge. 
Then for all experts i {\displaystyle i} , ∑ t ≤ T p t m t ≤ ∑ t ≤ T m i t + ln ⁡ ( N ) η + η T {\displaystyle \sum _{t\leq T}p^{t}m^{t}\leq \sum _{t\leq T}m_{i}^{t}+{\frac {\ln(N)}{\eta }}+\eta T} Initialization: Fix an η > 0 {\displaystyle \eta >0} . For each expert, associate the weight w i 1 {\displaystyle w_{i}^{1}} ≔1 For t=1,2,...,T: 1. Pick the distribution p i t = w i t Φ t {\displaystyle p_{i}^{t}={\frac {w_{i}^{t}}{\Phi t}}} where Φ t = ∑ i w i t {\displaystyle \Phi t=\sum _{i}w_{i}^{t}} . 2. Observe the cost of the decision m t {\displaystyle m^{t}} . 3. Set w i t + 1 = w i t exp ⁡ ( − η m i t {\displaystyle w_{i}^{t+1}=w_{i}^{t}\exp(-\eta m_{i}^{t}} ). This algorithm maintains a set of weights w t {\displaystyle w^{t}} over the training examples. On every iteration t {\displaystyle t} , a distribution p t {\displaystyle p^{t}} is computed by normalizing these weights. This distribution is fed to the weak learner WeakLearn which generates a hypothesis h t {\displaystyle h_{t}} that (hopefully) has small error with respect to the distribution. Using the new hypothesis h t {\displaystyle h_{t}} , AdaBoost generates the next weight vector w t + 1 {\displaystyle w^{t+1}} . The process repeats. After T such iterations, the final hypothesis h f {\displaystyle h_{f}} is the output. The hypothesis h f {\displaystyle h_{f}} combines the outputs of the T weak hypotheses using a weighted majority vote. Input: Sequence of N {\displaystyle N} labeled examples ( x 1 {\displaystyle x_{1}} , y 1 {\displaystyle y_{1}} ),...,( x N {\displaystyle x_{N}} , y N {\displaystyle y_{N}} ) Distribution D {\displaystyle D} over the N {\displaystyle N} examples Weak learning algorithm "'WeakLearn"' Integer T {\displaystyle T} specifying number of iterations Initialize the weight vector: w i 1 = D ( i ) {\displaystyle w_{i}^{1}=D(i)} for i = 1 , 2 , . . . , N {\displaystyle i=1,2,...,N} . Do for t = 1 , 2 , . . . , T {\displaystyle t=1,2,...,T} 1. Set p t = w t ∑ i = 1 N w i t {\displaystyle p^{t}={\frac {w^{t}}{\sum _{i=1}^{N}w_{i}^{t}}}} . 2. Call WeakLearn, providing it with the distribution p t {\displaystyle p^{t}} ; get back a hypothesis h t : X → {\displaystyle h_{t}:X\rightarrow } [0,1]. 3. Calculate the error of h t : ϵ t = ∑ i = 1 N p i t | h t ( x i ) − y i | {\displaystyle h_{t}:\epsilon _{t}=\sum _{i=1}^{N}p_{i}^{t}|h_{t}(x_{i})-y_{i}|} . 4. Set β t = ϵ t 1 − ϵ t {\displaystyle \beta _{t}={\frac {\epsilon _{t}}{1-\epsilon _{t}}}} . 5. Set the new weight vector to be w i t + 1 = w i t β t 1 − | h t ( x i ) − y i | {\displaystyle w_{i}^{t+1}=w_{i}^{t}\beta _{t}^{1-|h_{t}(x_{i})-y_{i}|}} . Output the hypothesis: f ( x ) = h f ( x ) = { 1 if ∑ t = 1 T ( log ⁡ ( 1 / β t ) ) h t ( x ) ≥ 1 2 ∑ t = 1 T log ⁡ ( 1 / β t ) 0 otherwise {\displaystyle f(x)=h_{f}(x)={\begin{cases}1&{\text{if}}\sum _{t=1}^{T}(\log(1/\beta _{t}))h_{t}(x)\geq {\frac {1}{2}}\sum _{t=1}^{T}\log(1/\beta _{t})\\0&{\text{otherwise}}\end{cases}}} Given a m × n {\displaystyle m\times n} matrix A {\displaystyle A} and b ∈ R n {\displaystyle b\in \mathbb {R} ^{n}} , is there a x {\displaystyle x} such that A x ≥ b {\displaystyle Ax\geq b} ? ∃ ? x : A x ≥ b {\displaystyle \exists ?x:Ax\geq b} (1) Using the oracle algorithm in solving zero-sum problem, with an error parameter ϵ > 0 {\displaystyle \epsilon >0} , the output would either be a point x {\displaystyle x} such that A x ≥ b − ϵ {\displaystyle Ax\geq b-\epsilon } or a proof that x {\displaystyle x} does not exist, i.e., there is no solution to this linear system of inequalities. 
Given a vector p ∈ Δ n {\displaystyle p\in \Delta _{n}} , the oracle solves the following relaxed problem: ∃ ? x : p T A x ≥ p T b {\displaystyle \exists ?x:p^{\textsf {T}}\!\!Ax\geq p^{\textsf {T}}\!b} (2). If there exists an x satisfying (1), then x satisfies (2) for all p ∈ Δ n {\displaystyle p\in \Delta _{n}} . The contrapositive of this statement is also true. Suppose that whenever the oracle returns a feasible solution for some p {\displaystyle p} , the solution x {\displaystyle x} it returns has bounded width max i | ( A x ) i − b i | ≤ 1 {\displaystyle \max _{i}|{(Ax)}_{i}-b_{i}|\leq 1} . Then, if there is a solution to (1), there is an algorithm whose output x satisfies the system (2) up to an additive error of 2 ϵ {\displaystyle 2\epsilon } . The algorithm makes at most ln ⁡ ( m ) ϵ 2 {\displaystyle {\frac {\ln(m)}{\epsilon ^{2}}}} calls to a width-bounded oracle for the problem (2). The contrapositive holds as well. The multiplicative update rule is applied in the algorithm in this case. In evolutionary game theory, multiplicative weights update is the discrete-time variant of the replicator equation (replicator dynamics), which is a commonly used model in evolutionary game theory. It converges to Nash equilibrium when applied to a congestion game. In operations research and online statistical decision making, the weighted majority algorithm and its more complicated versions have been found independently. In computational geometry, the multiplicative weights algorithm is also widely applied, for example in Clarkson's algorithm for linear programming (LP) with a bounded number of variables in linear time; later, Bronnimann and Goodrich employed analogous methods to find set covers for hypergraphs with small VC dimension. Other applications and related topics include the gradient descent method; the matrix multiplicative weights update; the Plotkin, Shmoys, Tardos framework for packing/covering LPs; approximating multi-commodity flow problems; O(log n)-approximations for many NP-hard problems; learning theory and boosting; hard-core sets and the XOR lemma; Hannan's algorithm and multiplicative weights; and online convex optimization. The Game Theory of Life is a Quanta Magazine article describing the application of the method to evolutionary biology in a paper by Erick Chastain, Adi Livnat, Christos Papadimitriou, and Umesh Vazirani.
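A minimal Python/NumPy sketch tying together the Hedge update rule and the zero-sum-game application described earlier in this article: the row player runs the Hedge update against a best-responding column player (standing in for the ORACLE), and the time-averaged row strategy approximates the minimax strategy and the game value; the payoff matrix, learning rate, and number of rounds are illustrative assumptions.

import numpy as np

def hedge_update(w, cost, eta):
    """One step of the Hedge rule described earlier: w_i <- w_i * exp(-eta * m_i)."""
    return w * np.exp(-eta * cost)

def approx_zero_sum(A, rounds=3000, eta=0.05):
    """Approximately solve the zero-sum game with payoff matrix A (entries in
    [0, 1], payoffs to the column player): the row player plays Hedge over the
    rows while the column player best-responds each round.  The time-average
    of the row player's distributions approaches a minimax strategy, and the
    returned number approaches the game value lambda*."""
    n, _ = A.shape
    w = np.ones(n)
    avg_p = np.zeros(n)
    for _ in range(rounds):
        p = w / w.sum()
        j = int(np.argmax(p @ A))          # column player's best response to p
        w = hedge_update(w, A[:, j], eta)  # rows that paid out a lot lose weight
        avg_p += p
    avg_p /= rounds
    return avg_p, float(np.max(avg_p @ A))

# Matching-pennies-style game: value 0.5, minimax row strategy (0.5, 0.5).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(approx_zero_sum(A))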
Multitask optimization
Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics. The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes, the search progress on one task can be transferred to substantially accelerate the search on the others. The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks. In practice, one may intentionally attempt to solve a more difficult task and, in doing so, unintentionally solve several smaller problems. There is a direct relationship between multitask optimization and multi-objective optimization.
Multivariate adaptive regression spline
In statistics, multivariate adaptive regression splines (MARS) is a form of regression analysis introduced by Jerome H. Friedman in 1991. It is a non-parametric regression technique and can be seen as an extension of linear models that automatically models nonlinearities and interactions between variables. The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".
Native-language identification
Native-language identification (NLI) is the task of determining an author's native language (L1) based only on their writings in a second language (L2). NLI works through identifying language-usage patterns that are common to specific L1 groups and then applying this knowledge to predict the native language of previously unseen texts. This is motivated in part by applications in second-language acquisition, language teaching and forensic linguistics, amongst others.
Nature Machine Intelligence
Nature Machine Intelligence is a monthly peer-reviewed scientific journal published by Nature Portfolio covering machine learning and artificial intelligence. The editor-in-chief is Liesbeth Venema.
Neural modeling fields
Neural modeling field (NMF) is a mathematical framework for machine learning which combines ideas from neural networks, fuzzy logic, and model based recognition. It has also been referred to as modeling fields, modeling fields theory (MFT), Maximum likelihood artificial neural networks (MLANS). This framework has been developed by Leonid Perlovsky at the AFRL. NMF is interpreted as a mathematical description of the mind's mechanisms, including concepts, emotions, instincts, imagination, thinking, and understanding. NMF is a multi-level, hetero-hierarchical system. At each level in NMF there are concept-models encapsulating the knowledge; they generate so-called top-down signals, interacting with input, bottom-up signals. These interactions are governed by dynamic equations, which drive concept-model learning, adaptation, and formation of new concept-models for better correspondence to the input, bottom-up signals.
Neural network quantum states
Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum systems. Given a many-body quantum state | Ψ ⟩ {\displaystyle |\Psi \rangle } comprising N {\displaystyle N} degrees of freedom and a choice of associated quantum numbers s 1 … s N {\displaystyle s_{1}\ldots s_{N}} , then an NQS parameterizes the wave-function amplitudes ⟨ s 1 … s N | Ψ ; W ⟩ = F ( s 1 … s N ; W ) , {\displaystyle \langle s_{1}\ldots s_{N}|\Psi ;W\rangle =F(s_{1}\ldots s_{N};W),} where F ( s 1 … s N ; W ) {\displaystyle F(s_{1}\ldots s_{N};W)} is an artificial neural network of parameters (weights) W {\displaystyle W} , N {\displaystyle N} input variables ( s 1 … s N {\displaystyle s_{1}\ldots s_{N}} ) and one complex-valued output corresponding to the wave-function amplitude. This variational form is used in conjunction with specific stochastic learning approaches to approximate quantum states of interest.
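A minimal Python/NumPy sketch of such a parameterization, using a restricted-Boltzmann-machine-style network in the spirit of early NQS work; the parameters here are random and untrained, purely to show how a configuration of quantum numbers is mapped to a complex wave-function amplitude.

import numpy as np

def rbm_amplitude(s, a, b, W):
    """RBM-style ansatz for a wave-function amplitude:
    F(s; W) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ji s_i)."""
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(0)
N, M = 6, 8                                  # visible spins, hidden units
a = rng.normal(size=N) + 1j * rng.normal(size=N)
b = rng.normal(size=M) + 1j * rng.normal(size=M)
W = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
s = rng.choice([-1, 1], size=N)              # one configuration s_1 ... s_N
print(rbm_amplitude(s, a, b, W))             # complex amplitude <s|Psi; W>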
Novelty detection
Novelty detection is the mechanism by which an intelligent organism is able to identify an incoming sensory pattern as being hitherto unknown. If the pattern is sufficiently salient or associated with a high positive or strong negative utility, it will be given computational resources for effective future processing. The principle is long known in neurophysiology, with roots in the orienting response research by E. N. Sokolov in the 1950s. The reverse phenomenon is habituation, i.e., the phenomenon that known patterns yield a less marked response. Early neural modeling attempts were by Yehuda Salu. An increasing body of knowledge has been collected concerning the corresponding mechanisms in the brain. In technology, the principle became important for radar detection methods during the Cold War, where unusual aircraft-reflection patterns could indicate an attack by a new type of aircraft. Today, the phenomenon plays an important role in machine learning and data science, where the corresponding methods are known as anomaly detection or outlier detection. An extensive methodological overview is given by Markou and Singh.
NSynth
NSYNC ( en-SINK, in-; also stylized as *NSYNC or 'N Sync) are an American vocal group and boy band formed by Chris Kirkpatrick in Orlando, Florida, in 1995 and launched in Germany by BMG Ariola Munich. The group consists of Kirkpatrick, Justin Timberlake, Joey Fatone, Lance Bass, and JC Chasez. Their self-titled debut album was successfully released to European countries in 1997, and later debuted in the U.S. market with the single "I Want You Back". After heavily publicized legal battles with their former manager Lou Pearlman and former record label Bertelsmann Music Group, the group's second album, No Strings Attached (2000), sold over one million copies in one day and 2.4 million copies in one week, which was a record for over fifteen years. NSYNC's first two studio albums were both certified Diamond by the Recording Industry Association of America (RIAA). Celebrity (2001) debuted with 1.8 million copies in its first week in the US. Singles such as "Girlfriend", "Pop", "Gone" and "It's Gonna Be Me" reached the top 10 in several international charts, with the last being a US Billboard Hot 100 number one. In addition to eight Grammy Award nominations, NSYNC performed at the Super Bowl and sang the national anthem at the Olympic Games and World Series. They have also sung or recorded with Elton John, Stevie Wonder, Michael and Janet Jackson, Britney Spears, Phil Collins, Celine Dion, Aerosmith, Nelly, Lisa "Left Eye" Lopes, Mary J. Blige, country music band Alabama, and Gloria Estefan. NSYNC went on a hiatus in 2002 and reunited in 2023 to release the single "Better Place" for the DreamWorks animated film Trolls Band Together (2023). Over the course of their hiatus, the five members reunited occasionally, including at the 2013 MTV Video Music Awards and to receive a star on the Hollywood Walk of Fame in 2018. The band completed five nationwide concert tours and has sold over 70 million records, becoming one of the best-selling boy bands of all time. Rolling Stone recognized their instant success as one of the Top 25 Teen Idol Breakout Moments of all time.
Overfitting
In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. For example, in polynomial regression the number of parameters corresponds to the degree of the polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure. Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions. Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known as shrinkage). In particular, the value of the coefficient of determination will shrink relative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
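A minimal Python/NumPy sketch of the phenomenon: fitting polynomials of increasing degree to a small noisy sample typically drives the training error toward zero while the error on held-out data grows; the data-generating function, noise level, and degrees shown are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0, 1, 200)
f = lambda x: np.sin(2 * np.pi * x)
y_train = f(x_train) + rng.normal(scale=0.2, size=x_train.shape)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
# The degree-9 fit (10 parameters for 10 points) typically drives the training
# error toward zero while the held-out error grows -- the memorization
# described above.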
Paraphrasing (computational linguistics)
Paraphrase or paraphrasing in computational linguistics is the natural language processing task of detecting and generating paraphrases. Applications of paraphrasing are varied including information retrieval, question answering, text summarization, and plagiarism detection. Paraphrasing is also useful in the evaluation of machine translation, as well as semantic parsing and generation of new samples to expand existing corpora.
Parity learning
Parity learning is a problem in machine learning. An algorithm that solves this problem must find a function ƒ, given some samples (x, ƒ(x)) and the assurance that ƒ computes the parity of bits at some fixed locations. The samples are generated using some distribution over the input. The problem is easy to solve using Gaussian elimination provided that a sufficient number of samples (from a distribution which is not too skewed) are provided to the algorithm.
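A minimal Python/NumPy sketch of the Gaussian-elimination approach in the noiseless case, working over GF(2); the sample size and the secret parity locations are illustrative.

import numpy as np

def learn_parity(X, y):
    """Recover a parity function from noiseless samples by Gaussian
    elimination over GF(2).  X is an (m, n) 0/1 matrix of inputs and y the
    observed parities; returns one secret vector s consistent with
    y = X @ s (mod 2), or None if the system is inconsistent."""
    A = np.concatenate([X % 2, (y % 2).reshape(-1, 1)], axis=1).astype(int)
    m, n = X.shape
    row, pivots = 0, []
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]       # move pivot row up
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]                  # XOR-eliminate this column
        pivots.append(col)
        row += 1
    if any(A[r, :-1].sum() == 0 and A[r, -1] for r in range(row, m)):
        return None                             # inconsistent samples
    s = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        s[col] = A[r, -1]                       # free variables set to 0
    return s

rng = np.random.default_rng(0)
secret = np.array([1, 0, 1, 1, 0])              # parity of bits 0, 2 and 3
X = rng.integers(0, 2, size=(20, 5))
y = X @ secret % 2
print(learn_parity(X, y))                       # recovers a consistent secret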
Pattern language (formal languages)
In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar. The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules. In computer science, formal languages are used, among others, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way. The field of formal language theory studies primarily the purely syntactic aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.
Pattern recognition
Pattern recognition is the task of assigning a class to an observation based on patterns extracted from data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM) which may possess (PR) capabilities but their primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power. Pattern recognition systems are commonly trained from labeled "training" data. When no labeled data are available, other algorithms can be used to discover previously unknown patterns. KDD and data mining have a larger focus on unsupervised methods and stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition and signal processing into consideration. It originated in engineering, and the term is popular in the context of computer vision: a leading computer vision conference is named Conference on Computer Vision and Pattern Recognition. In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is "spam"). Pattern recognition is a more general problem that encompasses other types of output as well. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); and parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence. Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm is regular expression matching, which looks for patterns of a given sort in textual data and is included in the search capabilities of many text editors and word processors. A modern definition of pattern recognition is: The field of pattern recognition is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories. Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value. Supervised learning assumes that a set of training data (the training set) has been provided, consisting of a set of instances that have been properly labeled by hand with the correct output. 
A learning procedure then generates a model that attempts to meet two sometimes conflicting objectives: Perform as well as possible on the training data, and generalize as well as possible to new data (usually, this means being as simple as possible, for some technical definition of "simple", in accordance with Occam's Razor, discussed below). Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. A combination of the two that has been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data (typically a small set of labeled data combined with a large amount of unlabeled data). In cases of unsupervised learning, there may be no training data at all. Sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output. The unsupervised equivalent of classification is normally known as clustering, based on the common perception of the task as involving no training data to speak of, and of grouping the input data into clusters based on some inherent similarity measure (e.g. the distance between instances, considered as vectors in a multi-dimensional vector space), rather than assigning each input instance into one of a set of pre-defined classes. In some fields, the terminology is different. In community ecology, the term classification is used to refer to what is commonly known as "clustering". The piece of input data for which an output value is generated is formally termed an instance. The instance is formally described by a vector of features, which together constitute a description of all known characteristics of the instance. These feature vectors can be seen as defining points in an appropriate multidimensional space, and methods for manipulating vectors in vector spaces can be correspondingly applied to them, such as computing the dot product or the angle between two vectors. Features typically are either categorical (also known as nominal, i.e., consisting of one of a set of unordered items, such as a gender of "male" or "female", or a blood type of "A", "B", "AB" or "O"), ordinal (consisting of one of a set of ordered items, e.g., "large", "medium" or "small"), integer-valued (e.g., a count of the number of occurrences of a particular word in an email) or real-valued (e.g., a measurement of blood pressure). Often, categorical and ordinal data are grouped together, and this is also the case for integer-valued and real-valued data. Many algorithms work only in terms of categorical data and require that real-valued or integer-valued data be discretized into groups (e.g., less than 5, between 5 and 10, or greater than 10). Many common pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best label for a given instance. Unlike other algorithms, which simply output a "best" label, often probabilistic algorithms also output a probability of the instance being described by the given label. In addition, many probabilistic algorithms output a list of the N-best labels with associated probabilities, for some value of N, instead of simply a single best label. When the number of possible labels is fairly small (e.g., in the case of classification), N may be set so that the probability of all possible labels is output. 
Probabilistic algorithms have many advantages over non-probabilistic algorithms: They output a confidence value associated with their choice. (Note that some other algorithms may also output confidence values, but in general, only for probabilistic algorithms is this value mathematically grounded in probability theory. Non-probabilistic confidence values can in general not be given any specific meaning, and can only be used to compare against other confidence values output by the same algorithm.) Correspondingly, they can abstain when the confidence of choosing any particular output is too low. Because of the probabilities output, probabilistic pattern-recognition algorithms can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation. Feature selection algorithms attempt to directly prune out redundant or irrelevant features. A general introduction to feature selection, which summarizes approaches and challenges, has been given. Because of its non-monotonous character, feature selection is an optimization problem where, given a total of n {\displaystyle n} features, the powerset consisting of all 2 n − 1 {\displaystyle 2^{n}-1} non-empty subsets of features needs to be explored. The Branch-and-Bound algorithm does reduce this complexity but is intractable for medium to large values of the number of available features n {\displaystyle n} . Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm. Feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal components analysis (PCA). The distinction between feature selection and feature extraction is that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features. The problem of pattern recognition can be stated as follows: Given an unknown function g : X → Y {\displaystyle g:{\mathcal {X}}\rightarrow {\mathcal {Y}}} (the ground truth) that maps input instances x ∈ X {\displaystyle {\boldsymbol {x}}\in {\mathcal {X}}} to output labels y ∈ Y {\displaystyle y\in {\mathcal {Y}}} , along with training data D = { ( x 1 , y 1 ) , … , ( x n , y n ) } {\displaystyle \mathbf {D} =\{({\boldsymbol {x}}_{1},y_{1}),\dots ,({\boldsymbol {x}}_{n},y_{n})\}} assumed to represent accurate examples of the mapping, produce a function h : X → Y {\displaystyle h:{\mathcal {X}}\rightarrow {\mathcal {Y}}} that approximates as closely as possible the correct mapping g {\displaystyle g} . (For example, if the problem is filtering spam, then x i {\displaystyle {\boldsymbol {x}}_{i}} is some representation of an email and y {\displaystyle y} is either "spam" or "non-spam"). In order for this to be a well-defined problem, "approximates as closely as possible" needs to be defined rigorously. In decision theory, this is defined by specifying a loss function or cost function that assigns a specific value to "loss" resulting from producing an incorrect label. The goal then is to minimize the expected loss, with the expectation taken over the probability distribution of X {\displaystyle {\mathcal {X}}} . 
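A minimal Python/NumPy sketch of the feature-extraction step mentioned above, projecting feature vectors onto their top principal components via the singular value decomposition; the synthetic data and the number of retained components are illustrative.

import numpy as np

def pca_extract(X, k):
    """Project feature vectors onto their top-k principal components,
    a common feature-extraction step."""
    X_centered = X - X.mean(axis=0)
    # right singular vectors give the principal directions
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6))   # 6 features, rank 2
Z = pca_extract(X, k=2)
print(Z.shape)          # (100, 2): smaller, less redundant feature vectors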
In practice, neither the distribution of X {\displaystyle {\mathcal {X}}} nor the ground truth function g : X → Y {\displaystyle g:{\mathcal {X}}\rightarrow {\mathcal {Y}}} are known exactly, but can be computed only empirically by collecting a large number of samples of X {\displaystyle {\mathcal {X}}} and hand-labeling them using the correct value of Y {\displaystyle {\mathcal {Y}}} (a time-consuming process, which is typically the limiting factor in the amount of data of this sort that can be collected). The particular loss function depends on the type of label being predicted. For example, in the case of classification, the simple zero-one loss function is often sufficient. This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes the error rate on independent test data (i.e. counting up the fraction of instances that the learned function h : X → Y {\displaystyle h:{\mathcal {X}}\rightarrow {\mathcal {Y}}} labels wrongly, which is equivalent to maximizing the number of correctly classified instances). The goal of the learning procedure is then to minimize the error rate (maximize the correctness) on a "typical" test set. For a probabilistic pattern recognizer, the problem is instead to estimate the probability of each possible output label given a particular input instance, i.e., to estimate a function of the form p ( l a b e l | x , θ ) = f ( x ; θ ) {\displaystyle p({\rm {label}}|{\boldsymbol {x}},{\boldsymbol {\theta }})=f\left({\boldsymbol {x}};{\boldsymbol {\theta }}\right)} where the feature vector input is x {\displaystyle {\boldsymbol {x}}} , and the function f is typically parameterized by some parameters θ {\displaystyle {\boldsymbol {\theta }}} . In a discriminative approach to the problem, f is estimated directly. In a generative approach, however, the inverse probability p ( x | l a b e l ) {\displaystyle p({{\boldsymbol {x}}|{\rm {label}}})} is instead estimated and combined with the prior probability p ( l a b e l | θ ) {\displaystyle p({\rm {label}}|{\boldsymbol {\theta }})} using Bayes' rule, as follows: p ( l a b e l | x , θ ) = p ( x | l a b e l , θ ) p ( l a b e l | θ ) ∑ L ∈ all labels p ( x | L ) p ( L | θ ) . {\displaystyle p({\rm {label}}|{\boldsymbol {x}},{\boldsymbol {\theta }})={\frac {p({{\boldsymbol {x}}|{\rm {label,{\boldsymbol {\theta }}}}})p({\rm {label|{\boldsymbol {\theta }}}})}{\sum _{L\in {\text{all labels}}}p({\boldsymbol {x}}|L)p(L|{\boldsymbol {\theta }})}}.} When the labels are continuously distributed (e.g., in regression analysis), the denominator involves integration rather than summation: p ( l a b e l | x , θ ) = p ( x | l a b e l , θ ) p ( l a b e l | θ ) ∫ L ∈ all labels p ( x | L ) p ( L | θ ) d ⁡ L . {\displaystyle p({\rm {label}}|{\boldsymbol {x}},{\boldsymbol {\theta }})={\frac {p({{\boldsymbol {x}}|{\rm {label,{\boldsymbol {\theta }}}}})p({\rm {label|{\boldsymbol {\theta }}}})}{\int _{L\in {\text{all labels}}}p({\boldsymbol {x}}|L)p(L|{\boldsymbol {\theta }})\operatorname {d} L}}.} The value of θ {\displaystyle {\boldsymbol {\theta }}} is typically learned using maximum a posteriori (MAP) estimation. This finds the best value that simultaneously meets two conflicting objects: To perform as well as possible on the training data (smallest error-rate) and to find the simplest possible model. Essentially, this combines maximum likelihood estimation with a regularization procedure that favors simpler models over more complex models. 
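As an illustration of the generative approach and the Bayes' rule computation given above, the following sketch combines class-conditional Gaussian densities with class priors to obtain p(label | x); all parameter values are toy assumptions, not estimates from real data.

```python
# Sketch of a generative classifier: class-conditional Gaussians and class
# priors are combined with Bayes' rule to give p(label | x).
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

priors = {"spam": 0.3, "non-spam": 0.7}                             # p(label | theta)
likelihood_params = {"spam": (8.0, 2.0), "non-spam": (2.0, 1.5)}    # p(x | label, theta)

def posterior(x):
    joint = {label: gaussian_pdf(x, *likelihood_params[label]) * priors[label]
             for label in priors}                    # numerator of Bayes' rule
    evidence = sum(joint.values())                   # denominator: sum over all labels
    return {label: value / evidence for label, value in joint.items()}

print(posterior(6.0))   # e.g. feature = number of suspicious words in an email
```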
In a Bayesian context, the regularization procedure can be viewed as placing a prior probability p ( θ ) {\displaystyle p({\boldsymbol {\theta }})} on different values of θ {\displaystyle {\boldsymbol {\theta }}} . Mathematically: θ ∗ = arg ⁡ max θ p ( θ | D ) {\displaystyle {\boldsymbol {\theta }}^{*}=\arg \max _{\boldsymbol {\theta }}p({\boldsymbol {\theta }}|\mathbf {D} )} where θ ∗ {\displaystyle {\boldsymbol {\theta }}^{*}} is the value used for θ {\displaystyle {\boldsymbol {\theta }}} in the subsequent evaluation procedure, and p ( θ | D ) {\displaystyle p({\boldsymbol {\theta }}|\mathbf {D} )} , the posterior probability of θ {\displaystyle {\boldsymbol {\theta }}} , is given by p ( θ | D ) = [ ∏ i = 1 n p ( y i | x i , θ ) ] p ( θ ) . {\displaystyle p({\boldsymbol {\theta }}|\mathbf {D} )=\left[\prod _{i=1}^{n}p(y_{i}|{\boldsymbol {x}}_{i},{\boldsymbol {\theta }})\right]p({\boldsymbol {\theta }}).} In the Bayesian approach to this problem, instead of choosing a single parameter vector θ ∗ {\displaystyle {\boldsymbol {\theta }}^{*}} , the probability of a given label for a new instance x {\displaystyle {\boldsymbol {x}}} is computed by integrating over all possible values of θ {\displaystyle {\boldsymbol {\theta }}} , weighted according to the posterior probability: p ( l a b e l | x ) = ∫ p ( l a b e l | x , θ ) p ( θ | D ) d ⁡ θ . {\displaystyle p({\rm {label}}|{\boldsymbol {x}})=\int p({\rm {label}}|{\boldsymbol {x}},{\boldsymbol {\theta }})p({\boldsymbol {\theta }}|\mathbf {D} )\operatorname {d} {\boldsymbol {\theta }}.} The first pattern classifier – the linear discriminant presented by Fisher – was developed in the frequentist tradition. The frequentist approach entails that the model parameters are considered unknown, but objective. The parameters are then computed (estimated) from the collected data. For the linear discriminant, these parameters are precisely the mean vectors and the covariance matrix. Also the probability of each class p ( l a b e l | θ ) {\displaystyle p({\rm {label}}|{\boldsymbol {\theta }})} is estimated from the collected dataset. Note that the usage of 'Bayes rule' in a pattern classifier does not make the classification approach Bayesian. Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known – before observation – and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities p ( l a b e l | θ ) {\displaystyle p({\rm {label}}|{\boldsymbol {\theta }})} can be chosen by the user, which are then a priori. Moreover, experience quantified as a priori parameter values can be weighted with empirical observations – using e.g., the Beta- (conjugate prior) and Dirichlet-distributions. The Bayesian approach facilitates a seamless intermixing between expert knowledge in the form of subjective probabilities, and objective observations. Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian approach. Within medical science, pattern recognition is the basis for computer-aided diagnosis (CAD) systems. CAD describes a procedure that supports the doctor's interpretations and findings. 
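Returning to the MAP estimate θ* = arg max p(θ | D) defined earlier in this section, the following toy sketch evaluates the log-posterior (log-likelihood plus log-prior) on a grid for a single Gaussian parameter; the model, prior, and data are illustrative assumptions only.

```python
# Sketch of MAP estimation: the posterior over a single parameter theta is
# proportional to likelihood x prior, and theta* is its argmax.
import numpy as np

data = np.array([1.2, 0.8, 1.5, 1.1])          # observed y_i
sigma = 1.0                                     # known observation noise
prior_mean, prior_std = 0.0, 1.0                # p(theta): favours small ("simple") theta

thetas = np.linspace(-3, 3, 2001)               # grid of candidate parameter values

def log_posterior(theta):
    log_lik = -0.5 * np.sum((data - theta) ** 2) / sigma**2       # sum_i log p(y_i | theta)
    log_prior = -0.5 * (theta - prior_mean) ** 2 / prior_std**2   # log p(theta)
    return log_lik + log_prior

theta_map = thetas[np.argmax([log_posterior(t) for t in thetas])]
print(theta_map)   # pulled from the sample mean (1.15) towards the prior mean (0)
```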
Other typical applications of pattern recognition techniques are automatic speech recognition, speaker identification, classification of text into several categories (e.g., spam or non-spam email messages), the automatic recognition of handwriting on postal envelopes, automatic recognition of images of human faces, or handwriting image extraction from medical forms. The last two examples form the subtopic image analysis of pattern recognition that deals with digital images as input to pattern recognition systems. Optical character recognition is an example of the application of a pattern classifier. Starting in 1990, the act of signing one's name began to be captured with a stylus and overlay. The strokes, speed, relative minima and maxima, acceleration and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers. Pattern recognition has many real-world applications in image processing. Some examples include: identification and authentication, e.g., license plate recognition, fingerprint analysis, face detection/verification, and voice-based authentication; medical diagnosis, e.g., screening for cervical cancer (Papnet), breast tumors or heart sounds; defense, e.g., navigation and guidance systems, target recognition systems, and shape recognition technology; and mobility, e.g., advanced driver assistance systems and autonomous vehicle technology. In psychology, pattern recognition is used to make sense of and identify objects, and is closely related to perception. This explains how the sensory inputs humans receive are made meaningful. Pattern recognition can be thought of in two different ways: the first concerns template matching and the second concerns feature detection. A template is a pattern used to produce items of the same proportions. The template-matching hypothesis suggests that incoming stimuli are compared with templates in long-term memory; if there is a match, the stimulus is identified. Feature detection models, such as the Pandemonium system for classifying letters (Selfridge, 1959), suggest that the stimuli are broken down into their component parts for identification. For example, a capital E can be broken down into three horizontal lines and one vertical line. Algorithms for pattern recognition depend on the type of label output, on whether learning is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in nature. Statistical algorithms can further be categorized as generative or discriminative. Parametric: Linear discriminant analysis Quadratic discriminant analysis Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class.)
Nonparametric: Decision trees, decision lists Kernel estimation and K-nearest-neighbor algorithms Naive Bayes classifier Neural networks (multi-layer perceptrons) Perceptrons Support vector machines Gene expression programming Categorical mixture models Hierarchical clustering (agglomerative or divisive) K-means clustering Correlation clustering Kernel principal component analysis (Kernel PCA) Boosting (meta-algorithm) Bootstrap aggregating ("bagging") Ensemble averaging Mixture of experts, hierarchical mixture of experts Bayesian networks Markov random fields Unsupervised: Multilinear principal component analysis (MPCA) Kalman filters Particle filters Gaussian process regression (kriging) Linear regression and extensions Independent component analysis (ICA) Principal components analysis (PCA) Conditional random fields (CRFs) Hidden Markov models (HMMs) Maximum entropy Markov models (MEMMs) Recurrent neural networks (RNNs) Dynamic time warping (DTW) Fukunaga, Keinosuke (1990). Introduction to Statistical Pattern Recognition (2nd ed.). Boston: Academic Press. ISBN 978-0-12-269851-4. Hornegger, Joachim; Paulus, Dietrich W. R. (1999). Applied Pattern Recognition: A Practical Introduction to Image and Speech Processing in C++ (2nd ed.). San Francisco: Morgan Kaufmann Publishers. ISBN 978-3-528-15558-2. Schuermann, Juergen (1996). Pattern Classification: A Unified View of Statistical and Neural Approaches. New York: Wiley. ISBN 978-0-471-13534-0. Toussaint, Godfried T., ed. (1988). Computational Morphology. Amsterdam: North-Holland Publishing Company. ISBN 9781483296722. Kulikowski, Casimir A.; Weiss, Sholom M. (1991). Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. San Francisco: Morgan Kaufmann Publishers. ISBN 978-1-55860-065-2. Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000). Pattern Classification (2nd ed.). Wiley-Interscience. ISBN 978-0-471-05669-0. Jain, Anil K.; Duin, Robert P. W.; Mao, Jianchang (2000). "Statistical pattern recognition: a review". IEEE Transactions on Pattern Analysis and Machine Intelligence. 22 (1): 4–37. CiteSeerX 10.1.1.123.8151. doi:10.1109/34.824819. S2CID 192934. Kovalevsky, V. A. (1980). Image Pattern Recognition. New York, NY: Springer New York. ISBN 978-1-4612-6033-2. OCLC 852790446.
Perceiver
Perceiver is a transformer adapted to be able to process non-textual data, such as images, sounds and video, and spatial data. Transformers underlie other notable systems such as BERT and GPT-3, which preceded Perceiver. It adopts an asymmetric attention mechanism to distill inputs into a latent bottleneck, allowing it to learn from large amounts of heterogeneous data. Perceiver matches or outperforms specialized models on classification tasks. Perceiver was introduced in June 2021 by DeepMind. It was followed by Perceiver IO in August 2021.
PHerc. Paris. 4
PHerc. Paris. 4 is a carbonized papyrus scroll dating from the 1st century BC to the 1st century AD. Part of a corpus known as the Herculaneum papyri, it was buried by hot ash in the Roman city of Herculaneum during the eruption of Mount Vesuvius in 79 AD and was subsequently discovered in excavations of the Villa of the Papyri from 1752–1754. Held by the Institut de France in its rolled state, it became a cornerstone example of non-invasive reading when, in February 2024, it was announced that the scroll's contents could be unveiled using non-invasive imaging and machine learning, paving the way for the decipherment and scanning of other Herculaneum papyri and otherwise heavily damaged texts.
Phi coefficient
In statistics, the phi coefficient (also called the mean square contingency coefficient, denoted by φ or rφ) is a measure of association for two binary variables. In machine learning, it is known as the Matthews correlation coefficient (MCC) and is used as a measure of the quality of binary (two-class) classifications; this usage was introduced by biochemist Brian W. Matthews in 1975. The coefficient itself was introduced by Karl Pearson, and is also known as the Yule phi coefficient following its introduction by Udny Yule in 1912; it is similar to the Pearson correlation coefficient in its interpretation.
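The text above does not spell out the formula, but the standard phi/MCC value for a 2x2 confusion matrix is (TP*TN - FP*FN) divided by the square root of (TP+FP)(TP+FN)(TN+FP)(TN+FN); the sketch below computes it for invented counts.

```python
# Phi coefficient / Matthews correlation coefficient for a 2x2 confusion matrix.
# The counts below are invented purely for illustration.
import math

def phi_coefficient(tp, fp, fn, tn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0   # common convention: 0 when a marginal is empty

print(phi_coefficient(tp=90, fp=10, fn=20, tn=80))   # about 0.70 for this toy matrix
```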
Predictive learning
Predictive learning is a machine learning technique where an artificial intelligence model is fed new data to develop an understanding of its environment, capabilities, and limitations. The fields of neuroscience, business, robotics, computer vision, and other fields employ this technique extensively. This concept was developed and expanded by French computer scientist Yann LeCun in 1988 during his career at Bell Labs, where he trained models to detect handwriting so that financial companies could automate check processing. The mathematical foundation for predictive learning dates back to the 17th century, where the British insurance company Lloyd's used predictive analytics models to make a profit. Starting out as a mathematical concept, this concept expanded the possibilities of artificial intelligence. Predictive learning is an attempt to learn with a minimum of pre-existing mental structure. It was inspired by Piaget's account of children constructing knowledge of the world through interaction. Gary Drescher's book 'Made-up Minds' was crucial to the development of this concept. The idea that predictions and unconscious inference are used by the brain to construct a model of the world, in which it can identify causes of percepts, goes back even further to Hermann von Helmholtz's iteration of this study. Those ideas were later picked up in the field of predictive coding. Another related predictive learning theory is Jeff Hawkins' memory-prediction framework, which is laid out in his book On Intelligence. Similar to machine learning, predictive learning aims to extrapolate the value of an unknown dependent variable y, given independent input data x = (x1, x2, … , xn). A set of attributes can be classified into categorical data (immeasurable factors such as race, sex, or affiliation) and numerical data (measurable values such as temperature, annual income, and average speed). Every set of input values is fed into a neural network to predict a value y. In order to predict the output accurately, the weights of the neural network (representing how much each predictor variable affects the outcome) must be incrementally adjusted using stochastic gradient descent to make estimates closer to the actual data. Once a machine learning model is given enough adjustments and training to predict values closer to the actual values, it should be able to correctly predict outputs of the new data with little error ε, (usually ε < 0.001) compared to the actual data. In order to ensure maximum accuracy for a predictive learning model, the predicted values ŷ = F(x), compared to the actual values y, must not exceed a certain error threshold by the risk formula R ( F ) = E x y {\displaystyle R(F)=E_{xy}} L ( y , F ( x ) ) {\displaystyle L(y,F(x))} , where L represents the loss function, y is the actual data, and F(x) is the predicted data. This error function is then used to make incremental adjustments to the model's weights to eventually reach a well-trained prediction of F ∗ ( x ) = a r g m i n E x y {\displaystyle F^{*}(x)=argminE_{xy}} L ( y , F ( x ) ) {\displaystyle L(y,F(x))} . Even if you continuously train a machine learning model, it is impossible to achieve zero error. But if the error is negligible enough, then the model is said to be converged and future predictions will be accurate a vast majority of the time. In some cases, using a singular machine learning approach is not enough to create an accurate estimate for certain data. 
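Before turning to ensembles, here is a minimal sketch of the incremental weight adjustment described above: stochastic gradient descent on a squared loss L(y, F(x)) for a linear predictor F(x) = w · x. The synthetic data and learning rate are chosen purely for illustration.

```python
# Stochastic gradient descent on a squared loss for a linear predictor.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ true_w + 0.01 * rng.normal(size=200)   # nearly noise-free targets

w = np.zeros(2)                                # start from an unadjusted predictor
lr = 0.05
for epoch in range(50):
    for x_i, y_i in zip(X, y):
        error = (w @ x_i) - y_i                # F(x_i) - y_i
        w -= lr * error * x_i                  # gradient step on 0.5 * error**2

print(w)   # close to [2, -1]; the remaining error epsilon is small but never exactly zero
```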
Ensemble learning is a combination of several machine learning algorithms to create a more accurate estimate. Each machine learning model is represented by the function F(x) = a0+ ∑ m = 1 M {\displaystyle \sum _{m=1}^{M}} amfm(x), where M is the number of methods used, a0 is the bias, am is the weight corresponding to each mth variable, and fm(x) is the activation function corresponding to each variable. An ensemble learning model is represented as a linear combination of the predictions from each constituent approach, {âm} = arg min ∑ i = 1 N {\displaystyle \sum _{i=1}^{N}} L(yi,a0+ ∑ m = 1 M {\displaystyle \sum _{m=1}^{M}} amfm(xi)) + λ ∑ i = 1 N {\displaystyle \sum _{i=1}^{N}} |am|, where yi is the actual value, the second parameter is the value predicted by each constituent method, and λ is a coefficient representing each model's variation for a certain predictor variable. Sensorimotor signals are neural impulses sent to the brain upon physical touch. Using predictive learning to detect sensorimotor signals plays a key role in early cognitive development, as the human brain represents sensorimotor signals in a predictive manner, (it attempts to minimize prediction error between incoming sensory signals and top–down prediction). In order to update an unadjusted predictor, it must be trained through sensorimotor experiences because it does not inherently have prediction ability. In a recent research paper, Dr. Yukie Nagai suggested a new architecture in predictive learning to predict sensorimotor signals based on a two-module approach: a sensorimotor system which interacts with the environment and a predictor which simulates the sensorimotor system in the brain. Computers use predictive learning in spatiotemporal memory to completely create an image given constituent frames. This implementation uses predictive recurrent neural networks, which are neural networks designed to work with sequential data, such as a time series. Using predictive learning in conjunction with computer vision enables computers to create images of their own, which can be helpful when replicating sequential phenomena such as replicating DNA strands, face recognition, or even creating X-ray images. In a recent study, data on consumer behavior was collected from various social media platforms such as Facebook, Twitter, LinkedIn, YouTube, Instagram, and Pinterest. The usage of predictive learning analytics led researchers to discover various trends in consumer behavior, such as determining how successful a campaign could be, estimating a fair price for a product to attract consumers, assessing how secure data is, and analyzing the specific audience of the consumers they could target for specific products. Reinforcement learning Predictive coding
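Returning to the ensemble formulation F(x) = a0 + Σ am fm(x) given earlier in this section, the sketch below combines two invented constituent predictors linearly; for brevity the weights are fitted by plain least squares and the λ Σ |am| penalty from the formula is omitted, so this is an illustrative simplification rather than the procedure of any cited study.

```python
# Linear combination of two constituent models f_m(x) with fitted weights a_m.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=100)
y = np.sin(x) + 0.1 * rng.normal(size=100)      # target to predict

f1 = x                                           # constituent model 1: linear term
f2 = x - x**3 / 6                                # constituent model 2: cubic Taylor approximation
design = np.column_stack([np.ones_like(x), f1, f2])   # columns [1, f1(x), f2(x)]

a, *_ = np.linalg.lstsq(design, y, rcond=None)   # fitted [a0, a1, a2]
ensemble = design @ a
print(a, np.mean((ensemble - y) ** 2))           # weights and ensemble squared error
```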
Predictive state representation
In computer science, a predictive state representation (PSR) is a way to model the state of a controlled dynamical system from a history of actions taken and resulting observations. A PSR captures the state of a system as a vector of predictions for future tests (experiments) that can be done on the system. A test is a sequence of action-observation pairs, and its prediction is the probability of the test's observation sequence occurring if the test's action sequence were executed on the system. One of the advantages of using PSRs is that the predictions are directly related to observable quantities. This is in contrast to other models of dynamical systems, such as partially observable Markov decision processes (POMDPs), where the state of the system is represented as a probability distribution over unobserved nominal states.
Preference learning
Preference learning is a subfield in machine learning, which is a classification method based on observed preference information. In the view of supervised learning, preference learning trains on a set of items which have preferences toward labels or other items and predicts the preferences for all items. While the concept of preference learning has been emerged for some time in many fields such as economics, it's a relatively new topic in Artificial Intelligence research. Several workshops have been discussing preference learning and related topics in the past decade. The main task in preference learning concerns problems in "learning to rank". According to different types of preference information observed, the tasks are categorized as three main problems in the book Preference Learning: In label ranking, the model has an instance space X = { x i } {\displaystyle X=\{x_{i}\}\,\!} and a finite set of labels Y = { y i | i = 1 , 2 , ⋯ , k } {\displaystyle Y=\{y_{i}|i=1,2,\cdots ,k\}\,\!} . The preference information is given in the form y i ≻ x y j {\displaystyle y_{i}\succ _{x}y_{j}\,\!} indicating instance x {\displaystyle x\,\!} shows preference in y i {\displaystyle y_{i}\,\!} rather than y j {\displaystyle y_{j}\,\!} . A set of preference information is used as training data in the model. The task of this model is to find a preference ranking among the labels for any instance. It was observed some conventional classification problems can be generalized in the framework of label ranking problem: if a training instance x {\displaystyle x\,\!} is labeled as class y i {\displaystyle y_{i}\,\!} , it implies that ∀ j ≠ i , y i ≻ x y j {\displaystyle \forall j\neq i,y_{i}\succ _{x}y_{j}\,\!} . In the multi-label case, x {\displaystyle x\,\!} is associated with a set of labels L ⊆ Y {\displaystyle L\subseteq Y\,\!} and thus the model can extract a set of preference information { y i ≻ x y j | y i ∈ L , y j ∈ Y ∖ L } {\displaystyle \{y_{i}\succ _{x}y_{j}|y_{i}\in L,y_{j}\in Y\backslash L\}\,\!} . Training a preference model on this preference information and the classification result of an instance is just the corresponding top ranking label. Instance ranking also has the instance space X {\displaystyle X\,\!} and label set Y {\displaystyle Y\,\!} . In this task, labels are defined to have a fixed order y 1 ≻ y 2 ≻ ⋯ ≻ y k {\displaystyle y_{1}\succ y_{2}\succ \cdots \succ y_{k}\,\!} and each instance x l {\displaystyle x_{l}\,\!} is associated with a label y l {\displaystyle y_{l}\,\!} . Giving a set of instances as training data, the goal of this task is to find the ranking order for a new set of instances. Object ranking is similar to instance ranking except that no labels are associated with instances. Given a set of pairwise preference information in the form x i ≻ x j {\displaystyle x_{i}\succ x_{j}\,\!} and the model should find out a ranking order among instances. There are two practical representations of the preference information A ≻ B {\displaystyle A\succ B\,\!} . One is assigning A {\displaystyle A\,\!} and B {\displaystyle B\,\!} with two real numbers a {\displaystyle a\,\!} and b {\displaystyle b\,\!} respectively such that a > b {\displaystyle a>b\,\!} . Another one is assigning a binary value V ( A , B ) ∈ { 0 , 1 } {\displaystyle V(A,B)\in \{0,1\}\,\!} for all pairs ( A , B ) {\displaystyle (A,B)\,\!} denoting whether A ≻ B {\displaystyle A\succ B\,\!} or B ≻ A {\displaystyle B\succ A\,\!} . 
Corresponding to these two different representations, there are two different techniques applied to the learning process. If we can find a mapping from data to real numbers, ranking the data can be solved by ranking the real numbers. This mapping is called utility function. For label ranking the mapping is a function f : X × Y → R {\displaystyle f:X\times Y\rightarrow \mathbb {R} \,\!} such that y i ≻ x y j ⇒ f ( x , y i ) > f ( x , y j ) {\displaystyle y_{i}\succ _{x}y_{j}\Rightarrow f(x,y_{i})>f(x,y_{j})\,\!} . For instance ranking and object ranking, the mapping is a function f : X → R {\displaystyle f:X\rightarrow \mathbb {R} \,\!} . Finding the utility function is a regression learning problem which is well developed in machine learning. The binary representation of preference information is called preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned by conventional supervising learning approach. Fürnkranz and Hüllermeier proposed this approach in label ranking problem. For object ranking, there is an early approach by Cohen et al. Using preference relations to predict the ranking will not be so intuitive. Since preference relation is not transitive, it implies that the solution of ranking satisfying those relations would sometimes be unreachable, or there could be more than one solution. A more common approach is to find a ranking solution which is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification. Preference learning can be used in ranking search results according to feedback of user preference. Given a query and a set of documents, a learning model is used to find the ranking of documents corresponding to the relevance with this query. More discussions on research in this field can be found in Tie-Yan Liu's survey paper. Another application of preference learning is recommender systems. Online store may analyze customer's purchase record to learn a preference model and then recommend similar products to customers. Internet content providers can make use of user's ratings to provide more user preferred contents. Learning to rank Preference Learning site
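One simple, purely illustrative way to realise the utility-function approach to object ranking described above is a perceptron-style update on pairwise preferences, as in the sketch below; the features, the hidden preference-generating weights, and the update rule are assumptions made for the example, not methods prescribed by the works cited above.

```python
# Learning a linear utility f(x) = w . x from pairwise preferences so that
# x_i preferred to x_j implies f(x_i) > f(x_j).
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(20, 3))                 # feature vectors of 20 objects
true_w = np.array([1.0, -2.0, 0.5])              # hidden "taste" generating the preferences
prefs = []                                       # pairs (i, j) meaning item i is preferred to item j
for _ in range(300):
    i, j = rng.integers(0, 20, size=2)
    if items[i] @ true_w > items[j] @ true_w:
        prefs.append((i, j))

w = np.zeros(3)
for _ in range(20):                              # a few passes over the preference pairs
    for i, j in prefs:
        if items[i] @ w <= items[j] @ w:         # observed ranking is violated
            w += 0.1 * (items[i] - items[j])     # nudge the utility of i above that of j

ranking = np.argsort(-(items @ w))               # predicted order, best first
print(ranking[:5])
```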
Prior knowledge for pattern recognition
Pattern recognition is a very active field of research intimately bound to machine learning. Also known as classification or statistical classification, pattern recognition aims at building a classifier that can determine the class of an input pattern. This procedure, known as training, corresponds to learning an unknown decision function based only on a set of input-output pairs ( x i , y i ) {\displaystyle ({\boldsymbol {x}}_{i},y_{i})} that form the training data (or training set). Nonetheless, in real-world applications such as character recognition, a certain amount of information about the problem is usually known beforehand. Incorporating this prior knowledge into the training is the key element that allows an increase in performance in many applications.
Proactive learning
Proactive learning is a generalization of active learning designed to relax unrealistic assumptions and thereby reach practical applications. "Active learning seeks to select the most informative unlabeled instances and ask an omniscient oracle for their labels, so as to retrain a learning algorithm maximizing accuracy. However, the oracle is assumed to be infallible (never wrong), indefatigable (always answers), individual (only one oracle), and insensitive to costs (always free or always charges the same)." "In real life, it is possible and more general to have multiple sources of information with differing reliabilities or areas of expertise. Active learning also assumes that the single oracle is perfect, always providing a correct answer when requested. In reality, though, an "oracle" (if we generalize the term to mean any source of expert information) may be incorrect (fallible) with a probability that should be a function of the difficulty of the question. Moreover, an oracle may be reluctant – it may refuse to answer if it is too uncertain or too busy. Finally, active learning presumes the oracle is either free or charges uniform cost in label elicitation. Such an assumption is naive since cost is likely to be regulated by difficulty (amount of work required to formulate an answer) or other factors." Proactive learning relaxes all four of these assumptions, relying on a decision-theoretic approach to jointly select the optimal oracle and instance, by casting the problem as a utility optimization problem subject to a budget constraint.
Proaftn
Proaftn is a fuzzy classification method that belongs to the class of supervised learning algorithms. The acronym Proaftn stands for PROcédure d'Affectation Floue pour la problématique du Tri Nominal, which means in English: Fuzzy Assignment Procedure for Nominal Sorting. The method determines the fuzzy indifference relations by generalizing the indices (concordance and discordance) used in the ELECTRE III method. To determine the fuzzy indifference relations, Proaftn uses the general scheme of the discretization technique described in, which establishes a set of pre-classified cases called a training set. To solve classification problems, Proaftn proceeds in the following stages: Stage 1. Modeling of classes: In this stage, the prototypes of the classes are built in the following two steps: Step 1. Structuring: The prototypes and their parameters (thresholds, weights, etc.) are established using the available knowledge given by the expert. Step 2. Validation: One of the following two techniques is used to validate or adjust the parameters obtained in the first step against the assignment examples known as the training set. Direct technique: the parameters are adjusted on the training set with expert intervention. Indirect technique: the parameters are fitted without expert intervention, as in machine learning approaches. In multicriteria classification problems, the indirect technique is known as preference disaggregation analysis. This technique requires less cognitive effort than the former; it uses an automatic method to determine the optimal parameters, which minimize the classification errors. Furthermore, several heuristics and metaheuristics have been used to learn the multicriteria classification method Proaftn. Stage 2. Assignment: After the prototypes have been built, Proaftn assigns new objects to specific classes.
Probabilistic numerics
Probabilistic numerics is an active field of study at the intersection of applied mathematics, statistics, and machine learning centering on the concept of uncertainty in computation. In probabilistic numerics, tasks in numerical analysis, such as finding numerical solutions to problems in integration, linear algebra, optimization, simulation, and differential equations, are seen as problems of statistical, probabilistic, or Bayesian inference.
Probability matching
Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, then an observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances. The optimal Bayesian decision strategy (to maximize the number of correct predictions, see Duda, Hart & Stork (2001)) in such a case is to always predict "positive" (i.e., predict the majority category in the absence of other information), which has a 60% chance of being correct, whereas matching has only a 52% chance (if p is the probability of a positive realization, the expected accuracy of matching is p 2 + ( 1 − p ) 2 {\displaystyle p^{2}+(1-p)^{2}} , here .6 × .6 + .4 × .4 {\displaystyle .6\times .6+.4\times .4} = .52). The probability-matching strategy is of psychological interest because it is frequently employed by human subjects in decision and classification studies (where it may be related to Thompson sampling). The only case in which probability matching yields the same result as the Bayesian decision strategy mentioned above is when all class base rates are equal. So, if in the training set positive examples are observed 50% of the time, then the Bayesian strategy yields 50% accuracy (1 × .5), just as probability matching does (.5 × .5 + .5 × .5).
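The comparison above can be checked numerically; the short sketch below evaluates the expected accuracy of probability matching, p² + (1 − p)², against always predicting the majority class.

```python
# Expected accuracy of probability matching versus always predicting the majority class.
def matching_accuracy(p):
    return p**2 + (1 - p)**2

def majority_accuracy(p):
    return max(p, 1 - p)

for p in (0.5, 0.6, 0.9):
    print(p, matching_accuracy(p), majority_accuracy(p))
# 0.5 -> 0.50 vs 0.50 (the only case where the two strategies coincide)
# 0.6 -> 0.52 vs 0.60
# 0.9 -> 0.82 vs 0.90
```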
Product of experts
Product of experts (PoE) is a machine learning technique. It models a probability distribution by combining the output from several simpler distributions. It was proposed by Geoffrey Hinton in 1999, along with an algorithm for training the parameters of such a system. The core idea is to combine several probability distributions ("experts") by multiplying their density functions—making the PoE classification similar to an "and" operation. This allows each expert to make decisions on the basis of a few dimensions without having to cover the full dimensionality of a problem: P ( y | { x k } ) = 1 Z ∏ j = 1 M f j ( y | { x k } ) {\displaystyle P(y|\{x_{k}\})={\frac {1}{Z}}\prod _{j=1}^{M}f_{j}(y|\{x_{k}\})} where f j {\displaystyle f_{j}} are unnormalized expert densities and Z = ∫ d y ∏ j = 1 M f j ( y | { x k } ) {\displaystyle Z=\int {\mbox{d}}y\prod _{j=1}^{M}f_{j}(y|\{x_{k}\})} is a normalization constant (see partition function (statistical mechanics)). This is related to (but quite different from) a mixture model, where several probability distributions p j ( y | { x j } ) {\displaystyle p_{j}(y|\{x_{j}\})} are combined via an "or" operation, which is a weighted sum of their density functions: P ( y | { x k } ) = ∑ j = 1 M α j p j ( y | { x k } ) , {\displaystyle P(y|\{x_{k}\})=\sum _{j=1}^{M}\alpha _{j}p_{j}(y|\{x_{k}\}),} with ∑ j α j = 1. {\displaystyle \sum _{j}\alpha _{j}=1.} The experts may be understood as each being responsible for enforcing a constraint in a high-dimensional space. A data point is considered likely iff none of the experts say that the point violates a constraint. To optimize it, he proposed the contrastive divergence minimization algorithm. This algorithm is most often used for learning restricted Boltzmann machines.
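As a concrete, purely illustrative instance of a product of experts, the sketch below multiplies two one-dimensional Gaussian expert densities on a grid, renormalises the product, and compares the result with the closed-form precision-weighted combination; the parameters are arbitrary choices, not taken from any particular model.

```python
# Product of two one-dimensional Gaussian "experts": multiplying their
# densities and renormalising gives a Gaussian whose precision is the sum of
# the experts' precisions and whose mean is precision-weighted.
import numpy as np

y = np.linspace(-5, 5, 2001)
dy = y[1] - y[0]

def gauss(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

f1 = gauss(y, mu=-1.0, sigma=1.0)        # expert 1
f2 = gauss(y, mu=2.0, sigma=0.5)         # expert 2

product = f1 * f2
Z = product.sum() * dy                   # normalisation constant Z (Riemann sum)
poe = product / Z

# Closed form for comparison: precisions add, means are precision-weighted.
prec = 1 / 1.0**2 + 1 / 0.5**2
mean = (-1.0 / 1.0**2 + 2.0 / 0.5**2) / prec
print(mean, (y * poe).sum() * dy)        # 1.4 and approximately 1.4
```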
Programming by example
In computer science, programming by example (PbE), also termed programming by demonstration or, more generally, demonstrational programming, is an end-user development technique for teaching a computer new behavior by demonstrating actions on concrete examples. The system records user actions and infers a generalized program that can be used on new examples. PbE is intended to be easier than traditional computer programming, which generally requires learning and using a programming language. Many PbE systems have been developed as research prototypes, but few have found widespread real-world application. More recently, PbE has proved to be a useful paradigm for creating scientific workflows. PbE is used in two independent clients for the BioMOBY protocol: Seahawk and Gbrowse moby. The term programming by demonstration (PbD) has mostly been adopted by robotics researchers for teaching new behaviors to a robot through a physical demonstration of the task. The usual distinction in the literature between these terms is that in PbE the user gives a prototypical product of the computer's execution, such as a row in the desired results of a query, while in PbD the user performs a sequence of actions that the computer must repeat, generalizing it for use on different data sets. For end users wishing to automate a workflow in a complex tool (e.g. Photoshop), the simplest case of PbD is the macro recorder.
Prompt engineering
Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text language model can be a query such as "what is Fermat's little theorem?", a command such as "write a poem about leaves falling", or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, providing relevant context or assigning a role to the AI such as "Act as a native French speaker". A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog), an approach called few-shot learning. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style, layout, lighting, and aesthetic.
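The few-shot pattern above can be illustrated with a few lines of Python that merely assemble the prompt string; sending the prompt to an actual model would depend on whichever API or library is being used and is not shown here.

```python
# Building a few-shot prompt: worked examples are placed before the query.
examples = [("maison", "house"), ("chat", "cat")]
query = "chien"

prompt_lines = [f"{src} -> {dst}" for src, dst in examples]
prompt_lines.append(f"{query} ->")
prompt = "\n".join(prompt_lines)

print(prompt)
# maison -> house
# chat -> cat
# chien ->        (the expected completion being "dog")
```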
Proximal gradient methods for learning
Proximal gradient (forward backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is ℓ 1 {\displaystyle \ell _{1}} regularization (also known as Lasso) of the form min w ∈ R d 1 n ∑ i = 1 n ( y i − ⟨ w , x i ⟩ ) 2 + λ ‖ w ‖ 1 , where x i ∈ R d and y i ∈ R . {\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \|w\|_{1},\quad {\text{ where }}x_{i}\in \mathbb {R} ^{d}{\text{ and }}y_{i}\in \mathbb {R} .} Proximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application. Such customized penalties can help to induce certain structure in problem solutions, such as sparsity (in the case of lasso) or group structure (in the case of group lasso). Proximal gradient methods are applicable in a wide variety of scenarios for solving convex optimization problems of the form min x ∈ H F ( x ) + R ( x ) , {\displaystyle \min _{x\in {\mathcal {H}}}F(x)+R(x),} where F {\displaystyle F} is convex and differentiable with Lipschitz continuous gradient, R {\displaystyle R} is a convex, lower semicontinuous function which is possibly nondifferentiable, and H {\displaystyle {\mathcal {H}}} is some set, typically a Hilbert space. The usual criterion of x {\displaystyle x} minimizes F ( x ) + R ( x ) {\displaystyle F(x)+R(x)} if and only if ∇ ( F + R ) ( x ) = 0 {\displaystyle \nabla (F+R)(x)=0} in the convex, differentiable setting is now replaced by 0 ∈ ∂ ( F + R ) ( x ) , {\displaystyle 0\in \partial (F+R)(x),} where ∂ φ {\displaystyle \partial \varphi } denotes the subdifferential of a real-valued, convex function φ {\displaystyle \varphi } . Given a convex function φ : H → R {\displaystyle \varphi :{\mathcal {H}}\to \mathbb {R} } an important operator to consider is its proximal operator prox φ : H → H {\displaystyle \operatorname {prox} _{\varphi }:{\mathcal {H}}\to {\mathcal {H}}} defined by prox φ ⁡ ( u ) = arg ⁡ min x ∈ H φ ( x ) + 1 2 ‖ u − x ‖ 2 2 , {\displaystyle \operatorname {prox} _{\varphi }(u)=\operatorname {arg} \min _{x\in {\mathcal {H}}}\varphi (x)+{\frac {1}{2}}\|u-x\|_{2}^{2},} which is well-defined because of the strict convexity of the ℓ 2 {\displaystyle \ell _{2}} norm. The proximal operator can be seen as a generalization of a projection. We see that the proximity operator is important because x ∗ {\displaystyle x^{*}} is a minimizer to the problem min x ∈ H F ( x ) + R ( x ) {\displaystyle \min _{x\in {\mathcal {H}}}F(x)+R(x)} if and only if x ∗ = prox γ R ⁡ ( x ∗ − γ ∇ F ( x ∗ ) ) , {\displaystyle x^{*}=\operatorname {prox} _{\gamma R}\left(x^{*}-\gamma \nabla F(x^{*})\right),} where γ > 0 {\displaystyle \gamma >0} is any positive real number. One important technique related to proximal gradient methods is the Moreau decomposition, which decomposes the identity operator as the sum of two proximity operators. Namely, let φ : X → R {\displaystyle \varphi :{\mathcal {X}}\to \mathbb {R} } be a lower semicontinuous, convex function on a vector space X {\displaystyle {\mathcal {X}}} . We define its Fenchel conjugate φ ∗ : X → R {\displaystyle \varphi ^{*}:{\mathcal {X}}\to \mathbb {R} } to be the function φ ∗ ( u ) := sup x ∈ X ⟨ x , u ⟩ − φ ( x ) . 
{\displaystyle \varphi ^{*}(u):=\sup _{x\in {\mathcal {X}}}\langle x,u\rangle -\varphi (x).} The general form of Moreau's decomposition states that for any x ∈ X {\displaystyle x\in {\mathcal {X}}} and any γ > 0 {\displaystyle \gamma >0} that x = prox γ φ ⁡ ( x ) + γ prox φ ∗ / γ ⁡ ( x / γ ) , {\displaystyle x=\operatorname {prox} _{\gamma \varphi }(x)+\gamma \operatorname {prox} _{\varphi ^{*}/\gamma }(x/\gamma ),} which for γ = 1 {\displaystyle \gamma =1} implies that x = prox φ ⁡ ( x ) + prox φ ∗ ⁡ ( x ) {\displaystyle x=\operatorname {prox} _{\varphi }(x)+\operatorname {prox} _{\varphi ^{*}}(x)} . The Moreau decomposition can be seen to be a generalization of the usual orthogonal decomposition of a vector space, analogous with the fact that proximity operators are generalizations of projections. In certain situations it may be easier to compute the proximity operator for the conjugate φ ∗ {\displaystyle \varphi ^{*}} instead of the function φ {\displaystyle \varphi } , and therefore the Moreau decomposition can be applied. This is the case for group lasso. Consider the regularized empirical risk minimization problem with square loss and with the ℓ 1 {\displaystyle \ell _{1}} norm as the regularization penalty: min w ∈ R d 1 n ∑ i = 1 n ( y i − ⟨ w , x i ⟩ ) 2 + λ ‖ w ‖ 1 , {\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \|w\|_{1},} where x i ∈ R d and y i ∈ R . {\displaystyle x_{i}\in \mathbb {R} ^{d}{\text{ and }}y_{i}\in \mathbb {R} .} The ℓ 1 {\displaystyle \ell _{1}} regularization problem is sometimes referred to as lasso (least absolute shrinkage and selection operator). Such ℓ 1 {\displaystyle \ell _{1}} regularization problems are interesting because they induce sparse solutions, that is, solutions w {\displaystyle w} to the minimization problem have relatively few nonzero components. Lasso can be seen to be a convex relaxation of the non-convex problem min w ∈ R d 1 n ∑ i = 1 n ( y i − ⟨ w , x i ⟩ ) 2 + λ ‖ w ‖ 0 , {\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \|w\|_{0},} where ‖ w ‖ 0 {\displaystyle \|w\|_{0}} denotes the ℓ 0 {\displaystyle \ell _{0}} "norm", which is the number of nonzero entries of the vector w {\displaystyle w} . Sparse solutions are of particular interest in learning theory for interpretability of results: a sparse solution can identify a small number of important factors. For simplicity we restrict our attention to the problem where λ = 1 {\displaystyle \lambda =1} . To solve the problem min w ∈ R d 1 n ∑ i = 1 n ( y i − ⟨ w , x i ⟩ ) 2 + ‖ w ‖ 1 , {\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\|w\|_{1},} we consider our objective function in two parts: a convex, differentiable term F ( w ) = 1 n ∑ i = 1 n ( y i − ⟨ w , x i ⟩ ) 2 {\displaystyle F(w)={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}} and a convex function R ( w ) = ‖ w ‖ 1 {\displaystyle R(w)=\|w\|_{1}} . Note that R {\displaystyle R} is not strictly convex. Let us compute the proximity operator for R ( w ) {\displaystyle R(w)} . First we find an alternative characterization of the proximity operator prox R ⁡ ( x ) {\displaystyle \operatorname {prox} _{R}(x)} as follows: u = prox R ⁡ ( x ) ⟺ 0 ∈ ∂ ( R ( u ) + 1 2 ‖ u − x ‖ 2 2 ) ⟺ 0 ∈ ∂ R ( u ) + u − x ⟺ x − u ∈ ∂ R ( u ) . 
{\displaystyle {\begin{aligned}u=\operatorname {prox} _{R}(x)\iff &0\in \partial \left(R(u)+{\frac {1}{2}}\|u-x\|_{2}^{2}\right)\\\iff &0\in \partial R(u)+u-x\\\iff &x-u\in \partial R(u).\end{aligned}}} For R ( w ) = ‖ w ‖ 1 {\displaystyle R(w)=\|w\|_{1}} it is easy to compute ∂ R ( w ) {\displaystyle \partial R(w)} : the i {\displaystyle i} th entry of ∂ R ( w ) {\displaystyle \partial R(w)} is precisely ∂ | w i | = { 1 , w i > 0 − 1 , w i < 0 [ − 1 , 1 ] , w i = 0. {\displaystyle \partial |w_{i}|={\begin{cases}1,&w_{i}>0\\-1,&w_{i}<0\\\left[-1,1\right],&w_{i}=0.\end{cases}}} Using the recharacterization of the proximity operator given above, for the choice of R ( w ) = ‖ w ‖ 1 {\displaystyle R(w)=\|w\|_{1}} and γ > 0 {\displaystyle \gamma >0} we have that prox γ R ⁡ ( x ) {\displaystyle \operatorname {prox} _{\gamma R}(x)} is defined entrywise by ( prox γ R ⁡ ( x ) ) i = { x i − γ , x i > γ 0 , | x i | ≤ γ x i + γ , x i < − γ , {\displaystyle \left(\operatorname {prox} _{\gamma R}(x)\right)_{i}={\begin{cases}x_{i}-\gamma ,&x_{i}>\gamma \\0,&|x_{i}|\leq \gamma \\x_{i}+\gamma ,&x_{i}<-\gamma ,\end{cases}}} which is known as the soft thresholding operator S γ ( x ) = prox γ ‖ ⋅ ‖ 1 ⁡ ( x ) {\displaystyle S_{\gamma }(x)=\operatorname {prox} _{\gamma \|\cdot \|_{1}}(x)} . To finally solve the lasso problem we consider the fixed point equation shown earlier: x ∗ = prox γ R ⁡ ( x ∗ − γ ∇ F ( x ∗ ) ) . {\displaystyle x^{*}=\operatorname {prox} _{\gamma R}\left(x^{*}-\gamma \nabla F(x^{*})\right).} Given that we have computed the form of the proximity operator explicitly, then we can define a standard fixed point iteration procedure. Namely, fix some initial w 0 ∈ R d {\displaystyle w^{0}\in \mathbb {R} ^{d}} , and for k = 1 , 2 , … {\displaystyle k=1,2,\ldots } define w k + 1 = S γ ( w k − γ ∇ F ( w k ) ) . {\displaystyle w^{k+1}=S_{\gamma }\left(w^{k}-\gamma \nabla F\left(w^{k}\right)\right).} Note here the effective trade-off between the empirical error term F ( w ) {\displaystyle F(w)} and the regularization penalty R ( w ) {\displaystyle R(w)} . This fixed point method has decoupled the effect of the two different convex functions which comprise the objective function into a gradient descent step ( w k − γ ∇ F ( w k ) {\displaystyle w^{k}-\gamma \nabla F\left(w^{k}\right)} ) and a soft thresholding step (via S γ {\displaystyle S_{\gamma }} ). Convergence of this fixed point scheme is well-studied in the literature and is guaranteed under appropriate choice of step size γ {\displaystyle \gamma } and loss function (such as the square loss taken here). Accelerated methods were introduced by Nesterov in 1983 which improve the rate of convergence under certain regularity assumptions on F {\displaystyle F} . Such methods have been studied extensively in previous years. For more general learning problems where the proximity operator cannot be computed explicitly for some regularization term R {\displaystyle R} , such fixed point schemes can still be carried out using approximations to both the gradient and the proximity operator. There have been numerous developments within the past decade in convex optimization techniques which have influenced the application of proximal gradient methods in statistical learning theory. Here we survey a few important topics which can greatly improve practical algorithmic performance of these methods. 
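A minimal NumPy sketch of the fixed-point (ISTA) iteration just described is given below; the synthetic data, the value of λ, and the step size γ (taken from the Lipschitz constant of ∇F) are illustrative choices rather than prescriptions from the text.

```python
# ISTA for the lasso: a gradient step on the smooth squared-loss term F(w)
# followed by soft thresholding (the proximity operator of gamma * lambda * ||.||_1).
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [3.0, -2.0, 1.5]                               # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.1                                                   # regularisation weight
L = 2.0 / n * np.linalg.norm(X, 2) ** 2                     # Lipschitz constant of grad F
gamma = 1.0 / L                                             # step size

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(d)
for _ in range(500):
    grad = 2.0 / n * X.T @ (X @ w - y)                      # grad F(w)
    w = soft_threshold(w - gamma * grad, gamma * lam)       # proximal step

print(np.round(w, 2))   # close to w_true, with most entries exactly zero
```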
In the fixed point iteration scheme w k + 1 = prox γ R ⁡ ( w k − γ ∇ F ( w k ) ) , {\displaystyle w^{k+1}=\operatorname {prox} _{\gamma R}\left(w^{k}-\gamma \nabla F\left(w^{k}\right)\right),} one can allow variable step size γ k {\displaystyle \gamma _{k}} instead of a constant γ {\displaystyle \gamma } . Numerous adaptive step size schemes have been proposed throughout the literature. Applications of these schemes suggest that these can offer substantial improvement in number of iterations required for fixed point convergence. Elastic net regularization offers an alternative to pure ℓ 1 {\displaystyle \ell _{1}} regularization. The problem of lasso ( ℓ 1 {\displaystyle \ell _{1}} ) regularization involves the penalty term R ( w ) = ‖ w ‖ 1 {\displaystyle R(w)=\|w\|_{1}} , which is not strictly convex. Hence, solutions to min w F ( w ) + R ( w ) , {\displaystyle \min _{w}F(w)+R(w),} where F {\displaystyle F} is some empirical loss function, need not be unique. This is often avoided by the inclusion of an additional strictly convex term, such as an ℓ 2 {\displaystyle \ell _{2}} norm regularization penalty. For example, one can consider the problem min w ∈ R d 1 n ∑ i = 1 n ( y i − ⟨ w , x i ⟩ ) 2 + λ ( ( 1 − μ ) ‖ w ‖ 1 + μ ‖ w ‖ 2 2 ) , {\displaystyle \min _{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-\langle w,x_{i}\rangle )^{2}+\lambda \left((1-\mu )\|w\|_{1}+\mu \|w\|_{2}^{2}\right),} where x i ∈ R d and y i ∈ R . {\displaystyle x_{i}\in \mathbb {R} ^{d}{\text{ and }}y_{i}\in \mathbb {R} .} For 0 < μ ≤ 1 {\displaystyle 0<\mu \leq 1} the penalty term λ ( ( 1 − μ ) ‖ w ‖ 1 + μ ‖ w ‖ 2 2 ) {\displaystyle \lambda \left((1-\mu )\|w\|_{1}+\mu \|w\|_{2}^{2}\right)} is now strictly convex, and hence the minimization problem now admits a unique solution. It has been observed that for sufficiently small μ > 0 {\displaystyle \mu >0} , the additional penalty term μ ‖ w ‖ 2 2 {\displaystyle \mu \|w\|_{2}^{2}} acts as a preconditioner and can substantially improve convergence while not adversely affecting the sparsity of solutions. Proximal gradient methods provide a general framework which is applicable to a wide variety of problems in statistical learning theory. Certain problems in learning can often involve data which has additional structure that is known a priori. In the past several years there have been new developments which incorporate information about group structure to provide methods which are tailored to different applications. Here we survey a few such methods. Group lasso is a generalization of the lasso method when features are grouped into disjoint blocks. Suppose the features are grouped into blocks { w 1 , … , w G } {\displaystyle \{w_{1},\ldots ,w_{G}\}} . Here we take as a regularization penalty R ( w ) = ∑ g = 1 G ‖ w g ‖ 2 , {\displaystyle R(w)=\sum _{g=1}^{G}\|w_{g}\|_{2},} which is the sum of the ℓ 2 {\displaystyle \ell _{2}} norm on corresponding feature vectors for the different groups. A similar proximity operator analysis as above can be used to compute the proximity operator for this penalty. Where the lasso penalty has a proximity operator which is soft thresholding on each individual component, the proximity operator for the group lasso is soft thresholding on each group. 
For the group w g {\displaystyle w_{g}} we have that proximity operator of λ γ ( ∑ g = 1 G ‖ w g ‖ 2 ) {\displaystyle \lambda \gamma \left(\sum _{g=1}^{G}\|w_{g}\|_{2}\right)} is given by S ~ λ γ ( w g ) = { w g − λ γ w g ‖ w g ‖ 2 , ‖ w g ‖ 2 > λ γ 0 , ‖ w g ‖ 2 ≤ λ γ {\displaystyle {\widetilde {S}}_{\lambda \gamma }(w_{g})={\begin{cases}w_{g}-\lambda \gamma {\frac {w_{g}}{\|w_{g}\|_{2}}},&\|w_{g}\|_{2}>\lambda \gamma \\0,&\|w_{g}\|_{2}\leq \lambda \gamma \end{cases}}} where w g {\displaystyle w_{g}} is the g {\displaystyle g} th group. In contrast to lasso, the derivation of the proximity operator for group lasso relies on the Moreau decomposition. Here the proximity operator of the conjugate of the group lasso penalty becomes a projection onto the ball of a dual norm. In contrast to the group lasso problem, where features are grouped into disjoint blocks, it may be the case that grouped features are overlapping or have a nested structure. Such generalizations of group lasso have been considered in a variety of contexts. For overlapping groups one common approach is known as latent group lasso which introduces latent variables to account for overlap. Nested group structures are studied in hierarchical structure prediction and with directed acyclic graphs. Convex analysis Proximal gradient method Regularization Statistical learning theory
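The group soft-thresholding operator given above can be written in a few lines; the group structure and test vector below are arbitrary illustrative choices.

```python
# Group-lasso proximity operator: soft thresholding applied block by block.
import numpy as np

def group_soft_threshold(w, groups, t):
    """Apply the group soft-thresholding operator with threshold t = lambda * gamma."""
    out = np.zeros_like(w)
    for g in groups:                              # each g is a list of indices forming one block
        norm = np.linalg.norm(w[g])
        if norm > t:
            out[g] = w[g] - t * w[g] / norm       # shrink the whole block towards zero
        # otherwise the whole block is set to zero
    return out

w = np.array([3.0, 4.0, 0.1, -0.2, 1.0])
groups = [[0, 1], [2, 3], [4]]
print(group_soft_threshold(w, groups, t=1.0))
# the block [3, 4] has norm 5 and is shrunk to [2.4, 3.2]; the block [0.1, -0.2]
# is zeroed out; the singleton block behaves like ordinary soft thresholding (-> 0.0)
```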
Pythia (machine learning)
Pythia is an ancient text restoration model that recovers missing characters from a damaged text input using deep neural networks. It was created by Yannis Assael, Thea Sommerschield, and Jonathan Prag, researchers from Google DeepMind and the University of Oxford. To study the society and the history of ancient civilisations, ancient history relies on disciplines such as epigraphy, the study of ancient inscribed texts. Hundreds of thousands of these texts, known as inscriptions, have survived to our day, but are often damaged over the centuries. Illegible parts of the text must then be restored by specialists, called epigraphists, in order to extract meaningful information from the text and use it to expand our knowledge of the context in which the text was written. Pythia takes as input the damaged text, and is trained to return hypothesised restorations of ancient Greek inscriptions, working as an assistive aid for ancient historians. Its neural network architecture works at both the character- and word-level, thereby effectively handling long-term context information, and dealing efficiently with incomplete word representations. Pythia is applicable to any discipline dealing with ancient texts (philology, papyrology, codicology) and can work in any language (ancient or modern).
Quantification (machine learning)
In machine learning and data mining, quantification (variously called learning to quantify, supervised prevalence estimation, or class prior estimation) is the task of using supervised learning in order to train models (quantifiers) that estimate the relative frequencies (also known as prevalence values) of the classes of interest in a sample of unlabelled data items. For instance, in a sample of 100,000 unlabelled tweets known to express opinions about a certain political candidate, a quantifier may be used to estimate the percentage of these 100,000 tweets which belong to class 'Positive' (i.e., which manifest a positive stance towards this candidate), and to do the same for classes 'Neutral' and 'Negative'. Quantification may also be viewed as the task of training predictors that estimate a (discrete) probability distribution, i.e., that generate a predicted distribution that approximates the unknown true distribution of the items across the classes of interest. Quantification is different from classification, since the goal of classification is to predict the class labels of individual data items, while the goal of quantification is to predict the class prevalence values of sets of data items. Quantification is also different from regression, since in regression the training data items have real-valued labels, while in quantification the training data items have class labels. It has been shown in multiple research works that performing quantification by classifying all unlabelled instances and then counting the instances that have been attributed to each class (the 'classify and count' method) usually leads to suboptimal quantification accuracy. This suboptimality may be seen as a direct consequence of 'Vapnik's principle', which states: If you possess a restricted amount of information for solving some problem, try to solve the problem directly and never solve a more general problem as an intermediate step. It is possible that the available information is sufficient for a direct solution but is insufficient for solving a more general intermediate problem. In this case, the problem to be solved directly is quantification, while the more general intermediate problem is classification. As a result of the suboptimality of the 'classify and count' method, quantification has evolved as a task in its own right, different (in goals, methods, techniques, and evaluation measures) from classification.
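The 'classify and count' baseline discussed above, together with the standard adjusted variant that corrects for a classifier's true- and false-positive rates, can be illustrated with a small simulation; the prevalence and the simulated classifier behaviour below are toy assumptions.

```python
# 'Classify and count' versus the adjusted correction based on TPR and FPR.
import numpy as np

rng = np.random.default_rng(0)
true_prevalence = 0.2                        # fraction of Positive items in the unlabelled sample
tpr, fpr = 0.8, 0.1                          # classifier behaviour, assumed known from validation data

labels = rng.random(100_000) < true_prevalence
flip = rng.random(100_000)
predictions = np.where(labels, flip < tpr, flip < fpr)   # simulate an imperfect classifier

cc = predictions.mean()                      # classify and count
acc = (cc - fpr) / (tpr - fpr)               # adjusted classify and count
print(true_prevalence, round(cc, 3), round(acc, 3))
# cc is biased towards 0.2*0.8 + 0.8*0.1 = 0.24; the adjusted estimate is close to 0.2
```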
Quantum machine learning
Quantum machine learning is the integration of quantum algorithms within machine learning programs. The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning. While machine learning algorithms are used to compute immense quantities of data, quantum machine learning utilizes qubits and quantum operations or specialized quantum systems to improve the computational speed and data storage achieved by algorithms in a program. This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device. These routines can be more complex in nature and executed faster on a quantum computer. Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data. Beyond quantum computing, the term "quantum machine learning" is also associated with classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning the phase transitions of a quantum system or creating new quantum experiments. Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa. Furthermore, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory". Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit reveals the result of a binary classification task. While many proposals of quantum machine learning algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special-purpose quantum devices. Associative memories (or content-addressable memories) are able to recognize stored content on the basis of a similarity measure, rather than fixed addresses as in random access memories. As such they must be able to retrieve both incomplete and corrupted patterns, the essential machine learning task of pattern recognition. Typical classical associative memories store p patterns in the O ( n 2 ) {\displaystyle O(n^{2})} interactions (synapses) of a real, symmetric energy matrix over a network of n artificial neurons. The encoding is such that the desired patterns are local minima of the energy functional and retrieval is done by minimizing the total energy, starting from an initial configuration. Unfortunately, classical associative memories are severely limited by the phenomenon of cross-talk. When too many patterns are stored, spurious memories appear which quickly proliferate, so that the energy landscape becomes disordered and retrieval is no longer possible. 
The number of storable patterns is typically limited by a linear function of the number of neurons, p ≤ O ( n ) {\displaystyle p\leq O(n)} . Quantum associative memories (in their simplest realization) store patterns in a unitary matrix U acting on the Hilbert space of n qubits. Retrieval is realized by the unitary evolution of a fixed initial state to a quantum superposition of the desired patterns, with the probability distribution peaked on the pattern most similar to a given input. By its very quantum nature, the retrieval process is thus probabilistic. Because quantum associative memories are free from cross-talk, however, spurious memories are never generated. Correspondingly, they have a larger capacity than classical ones. The number of parameters in the unitary matrix U is O ( p n ) {\displaystyle O(pn)} . One can thus have efficient, spurious-memory-free quantum associative memories for any polynomial number of patterns. A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, associating the amplitudes of a quantum state with the inputs and outputs of computations. Since a state of n {\displaystyle n} qubits is described by 2 n {\displaystyle 2^{n}} complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector. The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose resources grow polynomially in the number of qubits n {\displaystyle n} , which amounts to a logarithmic time complexity in the number of amplitudes and thereby in the dimension of the input. Many quantum machine learning algorithms in this category are based on variations of the quantum algorithm for linear systems of equations (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a Hamiltonian whose entries correspond to those of the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse or low rank. For reference, any known classical algorithm for matrix inversion requires a number of operations that grows more than quadratically in the dimension of the matrix (e.g. O ( n 2.373 ) {\displaystyle O{\mathord {\left(n^{2.373}\right)}}} ), but classical algorithms are not restricted to sparse matrices. Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a linear system of equations, for example in least-squares linear regression, the least-squares version of support vector machines, and Gaussian processes. A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialise a quantum system in a state whose amplitudes reflect the features of the entire dataset. Although efficient methods for state preparation are known for specific cases, this step easily hides the complexity of the task. 
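As a concrete illustration of what amplitude encoding (and hence state preparation) asks for, the following numpy sketch (illustrative only; it simulates the state vector classically and says nothing about how the state would be prepared on hardware) normalises a classical vector and pads it to length 2^n, so that n qubits suffice to hold 2^n numbers as amplitudes.

```python
# Sketch of amplitude encoding: a classical vector becomes the amplitude
# vector of an n-qubit state (classically simulated, for illustration only).
import numpy as np

def amplitude_encode(x):
    x = np.asarray(x, dtype=complex)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))   # qubits needed for len(x) entries
    padded = np.zeros(2 ** n_qubits, dtype=complex)
    padded[: len(x)] = x                                # pad up to a power of two
    return padded / np.linalg.norm(padded), n_qubits    # normalise to a valid state

state, n = amplitude_encode([3.0, 1.0, 2.0, 0.0, 5.0])  # 5 numbers -> 3 qubits
print(n, np.sum(np.abs(state) ** 2))                     # 3, squared amplitudes sum to ~1
```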
Variational quantum algorithms (VQAs) are among the most studied quantum algorithms, as researchers expect that many of the envisaged applications of quantum computers will rely on them and that they offer a path towards demonstrating quantum supremacy. VQAs are a hybrid quantum-classical approach in which the quantum processor prepares quantum states and performs measurements, while the optimization is carried out by a classical computer. VQAs are considered well suited to NISQ devices, since they are more noise-tolerant than other algorithms and may offer a quantum advantage with only a few hundred qubits. Researchers have studied circuit-based algorithms to solve optimization problems and to find the ground state energy of complex systems, problems that are difficult to solve, or require very long computation times, on a classical computer. Variational quantum circuits (VQCs), also known as parametrized quantum circuits (PQCs), are based on VQAs. A VQC consists of three parts: preparation of the initial state, the quantum circuit itself, and measurement. Researchers are studying VQCs extensively, as they use the power of quantum computation to learn in a short time and require fewer parameters than their classical counterparts. It has been shown theoretically and numerically that non-linear functions, like those used in neural networks, can be approximated on quantum circuits. Because of these advantages, VQCs have been used in place of neural networks in reinforcement learning tasks and generative algorithms. The susceptibility of quantum devices to decoherence, random gate errors and measurement errors has a high potential to limit the training of variational circuits. Training VQCs on classical devices before employing them on quantum devices helps to mitigate the decoherence noise accumulated over the many repetitions required for training. Pattern recognition is one of the important tasks of machine learning, and binary classification is one of the tools used to find patterns. Binary classification is used in both supervised learning and unsupervised learning. In quantum machine learning, classical bits are converted to qubits and mapped to a Hilbert space; complex-valued data are used in a quantum binary classifier to exploit the structure of Hilbert space. By exploiting quantum-mechanical properties such as superposition, entanglement and interference, a quantum binary classifier can produce accurate results in a short period of time. Another approach to improving classical machine learning with quantum information processing uses amplitude amplification methods based on Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the k-medians and the k-nearest neighbors algorithms. Other applications include quadratic speedups in the training of perceptrons and in the computation of attention. An example of amplitude amplification being used in a machine learning algorithm is minimization based on Grover's search algorithm, in which a subroutine uses Grover's search to find an element smaller than some previously defined element. This can be done with an oracle that determines whether or not a state with a corresponding element is less than the predefined one. Grover's algorithm can then find an element such that this condition is met. The minimization is initialized with some random element of the data set, and iteratively applies this subroutine to find the minimum element in the data set. 
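The control flow of this minimization loop can be sketched classically. In the fragment below (a classical stand-in, not a quantum implementation), the Grover-search subroutine that would locate an element below the current threshold with quadratic speedup is replaced by ordinary random sampling, so only the structure of the method is illustrated.

```python
# Classical stand-in for Grover-search-based minimization (illustrative only):
# the quantum search step is replaced by plain random sampling.
import random

def find_smaller(data, threshold):
    """Stand-in for the Grover subroutine: return some element < threshold, or None."""
    candidates = [x for x in data if x < threshold]
    return random.choice(candidates) if candidates else None

def minimum_via_repeated_search(data, seed=0):
    random.seed(seed)
    threshold = random.choice(data)        # initialize with a random element of the data set
    while True:
        smaller = find_smaller(data, threshold)
        if smaller is None:                # no element below the threshold remains
            return threshold
        threshold = smaller                # tighten the threshold and repeat

print(minimum_via_repeated_search([7, 3, 9, 1, 4, 8]))   # 1
```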
This Grover-based minimization is notably used in quantum k-medians, and it has a speedup of at least O ( n / k ) {\displaystyle O({\sqrt {n/k}})} compared to classical versions of k-medians, where n {\displaystyle n} is the number of data points and k {\displaystyle k} is the number of clusters. Amplitude amplification is often combined with quantum walks to achieve the same quadratic speedup. Quantum walks have been proposed to enhance Google's PageRank algorithm as well as the performance of reinforcement learning agents in the projective simulation framework. Reinforcement learning is a branch of machine learning distinct from supervised and unsupervised learning, which also admits quantum enhancements. In quantum-enhanced reinforcement learning, a quantum agent interacts with a classical or quantum environment and occasionally receives rewards for its actions, which allows the agent to adapt its behavior; in other words, to learn what to do in order to gain more rewards. In some situations, either because of the quantum processing capability of the agent, or due to the possibility of probing the environment in superpositions, a quantum speedup may be achieved. Implementations of these kinds of protocols have been proposed for systems of trapped ions and superconducting circuits. A quantum speedup of the agent's internal decision-making time has been experimentally demonstrated in trapped ions, while a quantum speedup of the learning time in a fully coherent ('quantum') interaction between agent and environment has been experimentally realized in a photonic setup. Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate solutions. It is a method of discretizing a function with many local minima or maxima in order to determine the observables of the function. The process is distinguished from simulated annealing by the quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent Schrödinger equation guides the time evolution of the system, serving to affect the amplitude of each state as time increases. Eventually, the system can reach the ground state of its instantaneous Hamiltonian. As the depth of a quantum circuit grows on NISQ devices, the noise level rises, posing a significant challenge to accurately computing costs and gradients when training models. Noise tolerance may be improved by using quantum perceptrons and quantum algorithms suited to the currently accessible quantum hardware. A regular connection of similar components known as neurons forms the basis of even the most complex brain networks. Typically, a neuron has two operations: an inner product and an activation function. As opposed to the activation function, which is typically nonlinear, the inner product is a linear process. Linear processes are easily accomplished with quantum computing, and, due to its simplicity of implementation, the threshold function is the activation function preferred by the majority of quantum neurons. Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. 
Examples include deep learning, probabilistic programming, and other machine learning and artificial intelligence applications. A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, they have recently been recognized as potential candidates to speed up computations that rely on sampling by exploiting quantum effects. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset. The D-Wave 2X system hosted at NASA Ames Research Center has recently been used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures. Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks. The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets. In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is a quantum speedup in sampling applications. Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward. Reverse annealing has also been used to solve a fully connected quantum restricted Boltzmann machine. Inspired by the success of Boltzmann machines based on the classical Boltzmann distribution, a new machine learning approach based on the quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed. Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. It is possible that a specific type of quantum Boltzmann machine has been trained on the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines. Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing. 
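For reference, the classical sampling-based learning rule that such quantum samplers aim to accelerate can be sketched in a few lines. The fragment below (a minimal illustration with invented sizes and toy data; bias terms are omitted) trains a restricted Boltzmann machine with one step of Gibbs sampling (contrastive divergence, CD-1); a quantum annealer or a prepare-and-measure device would be used to draw the model samples instead of the Gibbs step.

```python
# Minimal classical restricted Boltzmann machine trained with CD-1 (illustrative).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))     # couplings between units

prototypes = np.array([[1, 1, 1, 0, 0, 0],                 # toy binary training data
                       [0, 0, 0, 1, 1, 1]], dtype=float)
data = prototypes[rng.integers(0, 2, size=100)]

for _ in range(200):
    v0 = data
    h0 = sigmoid(v0 @ W)                                   # positive phase (data average)
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T)                           # one Gibbs sampling step
    h1 = sigmoid(v1 @ W)                                   # negative phase (model average)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)          # CD-1 update of the couplings
```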
The same prepare-and-measure quantum methods also permit efficient training of full Boltzmann machines and of multi-layer, fully connected models, and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template. This provides an exponential reduction in computational complexity in probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware. Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models. Quantum neural networks are often defined as an expansion on Deutsch's model of a quantum computational network. Within this model, nonlinear and irreversible gates, dissimilar to the Hamiltonian operator, are deployed to process the given data set. Such gates make certain phases unobservable and generate specific oscillations. Quantum neural networks apply the principles of quantum information and quantum computation to classical neurocomputing. Current research shows that QNNs can exponentially increase the amount of computing power and the degrees of freedom of a computer, which for a classical computer are limited by its size. A quantum neural network has the computational capability to decrease the number of steps, the number of qubits used, and the computation time. The wave function is to quantum mechanics what the neuron is to neural networks. To test quantum applications in a neural network, quantum dot molecules are deposited on a substrate of GaAs or similar material to record how they communicate with one another. Each quantum dot can be referred to as an island of electric activity, and when such dots are close enough (approximately 10-20 nm) electrons can tunnel underneath the islands. An even distribution across the substrate in sets of two creates dipoles and, ultimately, two spin states, up or down. These states are commonly known as qubits, with corresponding states of | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } in Dirac notation. The quantum convolutional neural network (QCNN) is a design for multi-dimensional vectors that uses quantum circuits as convolution filters. It was inspired by the advantages of CNNs and the power of QML. It is made using a combination of a variational quantum circuit (VQC) and a deep neural network (DNN), fully utilizing the power of extremely parallel processing on a superposition of a quantum state with a finite number of qubits. The main strategy is to carry out an iterative optimization process on NISQ devices without the negative impact of noise, which may be incorporated into the circuit parameters, and without the need for quantum error correction. The quantum circuit must effectively handle spatial information in order for the QCNN to function as a CNN. The convolution filter is the most basic technique for making use of spatial information. One or more quantum convolutional filters make up a quantum convolutional neural network (QCNN), and each of these filters transforms input data using a quantum circuit that can be created in an organized or randomized way. 
The quantum convolutional filter is made up of three parts: the encoder, the parameterized quantum circuit (PQC), and the measurement. It can be seen as an extension of the filter in the traditional CNN because it is designed with trainable parameters. Quantum neural networks take advantage of hierarchical structures: for each subsequent layer, the number of qubits from the preceding layer is decreased by a factor of two. For n input qubits, these structures have O(log(n)) layers, allowing for shallow circuit depth. Additionally, they are able to avoid the "barren plateau" problem, one of the most significant issues with PQC-based algorithms, ensuring trainability. Even though the QCNN model does not include the corresponding quantum operation, the fundamental idea of the pooling layer is also adopted to ensure validity. In the QCNN architecture, the pooling layer is typically placed between successive convolutional layers. Its function is to shrink the spatial size of the representation while preserving crucial features, which allows it to reduce the number of parameters, streamline network computation, and manage over-fitting. Such a process can be accomplished by applying full tomography on the state to reduce it all the way down to one qubit, which is then processed in subsequent stages. The most frequently used unit type in the pooling layer is max pooling, although there are other types as well. Similar to conventional feed-forward neural networks, the last module is a fully connected layer with full connections to all activations in the preceding layer. Translational invariance, which requires identical blocks of parameterized quantum gates within a layer, is a distinctive feature of the QCNN architecture. Dissipative QNNs (DQNNs) are constructed from layers of qubits coupled by perceptrons, called building blocks, which have an arbitrary unitary design. Each node in a network layer of a DQNN is given a distinct collection of qubits, and each qubit is given a unique quantum perceptron unitary to characterize it. The information of the input states is transported through the network in a feed-forward fashion, with layer-to-layer transition maps acting on the qubits of the two adjacent layers, as the name implies. The term "dissipative" also refers to the fact that the output layer is formed by ancillary qubits, while the input layers are dropped by tracing them out when producing the final layer. When performing a broad supervised learning task, DQNNs are used to learn a unitary matrix connecting the input and output quantum states. The training data for this task consist of quantum states and the corresponding classical labels. Inspired by the extremely successful classical generative adversarial network (GAN), the dissipative quantum generative adversarial network (DQGAN) was introduced for unsupervised learning of unlabeled training data. The generator and the discriminator are the two DQNNs that make up a single DQGAN. The generator's goal is to create fake training states that the discriminator cannot differentiate from the genuine ones, while the discriminator's objective is to separate the real training states from the fake states created by the generator. By training the two networks in an alternating, adversarial fashion, the generator learns the relevant features of the training set, which aids in producing sets that extend the training set. The DQGAN has a fully quantum architecture and is trained on quantum data. 
Hidden quantum Markov models (HQMMs) are a quantum-enhanced version of classical hidden Markov models (HMMs), which are typically used to model sequential data in various fields like robotics and natural language processing. Unlike the approach taken by other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well. Where classical HMMs use probability vectors to represent hidden 'belief' states, HQMMs use the quantum analogue: density matrices. Recent work has shown that these models can be successfully learned by maximizing the log-likelihood of the given data via classical optimization, and there is some empirical evidence that these models can better model sequential data compared to classical HMMs in practice, although further work is needed to determine exactly when and how these benefits are derived. Additionally, since classical HMMs are a particular kind of Bayes net, an exciting aspect of HQMMs is that the techniques used show how we can perform quantum-analogous Bayesian inference, which should allow for the general construction of the quantum versions of probabilistic graphical models. In the most general case of quantum machine learning, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic. One class of problem that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would require only classical learning. However, one can show that a fully quantum approach is strictly superior in this case. (This also relates to work on quantum pattern matching.) The problem of learning unitary transformations can be approached in a similar way. Going beyond the specific problem of learning states and transformations, the task of clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data points and the information processing device which runs the algorithm are quantum. Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting has been introduced, where it was also shown that the possibility of probing the environment in superpositions permits a quantum speedup in reinforcement learning. Such a speedup in the reinforcement-learning paradigm has been experimentally demonstrated in a photonic setup. The need for models that can be understood by humans emerges in quantum machine learning in analogy to classical machine learning and drives the research field of explainable quantum machine learning (or XQML, in analogy to XAI/XML). These efforts are often also referred to as interpretable machine learning (IML, and by extension IQML). XQML/IQML can be considered as an alternative research direction instead of finding a quantum advantage. For example, XQML has been used in the context of mobile malware detection and classification. 
Quantum Shapley values have also been proposed to interpret gates within a circuit based on a game-theoretic approach. For this purpose, gates instead of features act as players in a coalitional game with a value function that depends on measurements of the quantum circuit of interest. Additionally, a quantum version of the classical technique known as LIME (Linear Interpretable Model-Agnostic Explanations) has also been proposed, known as Q-LIME. The term "quantum machine learning" sometimes refers to classical machine learning performed on data from quantum systems. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other applications include learning Hamiltonians and automatically generating quantum experiments. Quantum learning theory pursues a mathematical analysis of the quantum generalizations of classical learning models and of the possible speed-ups or other improvements that they may provide. The framework is very similar to that of classical computational learning theory, but the learner in this case is a quantum information processing device, while the data may be either classical or quantum. Quantum learning theory should be contrasted with the quantum-enhanced machine learning discussed above, where the goal was to consider specific problems and to use quantum protocols to improve the time complexity of classical algorithms for these problems. Although quantum learning theory is still under development, partial results in this direction have been obtained. The starting point in learning theory is typically a concept class, a set of possible concepts. Usually a concept is a function on some domain, such as { 0 , 1 } n {\displaystyle \{0,1\}^{n}} . For example, the concept class could be the set of disjunctive normal form (DNF) formulas on n bits or the set of Boolean circuits of some constant depth. The goal for the learner is to learn (exactly or approximately) an unknown target concept from this concept class. The learner may be actively interacting with the target concept, or passively receiving samples from it. In active learning, a learner can make membership queries to the target concept c, asking for its value c(x) on inputs x chosen by the learner. The learner then has to reconstruct the exact target concept, with high probability. In the model of quantum exact learning, the learner can make membership queries in quantum superposition. If the complexity of the learner is measured by the number of membership queries it makes, then quantum exact learners can be polynomially more efficient than classical learners for some concept classes, but not more. If complexity is measured by the amount of time the learner uses, then there are concept classes that can be learned efficiently by quantum learners but not by classical learners (under plausible complexity-theoretic assumptions). A natural model of passive learning is Valiant's probably approximately correct (PAC) learning. Here the learner receives random examples (x,c(x)), where x is distributed according to some unknown distribution D. The learner's goal is to output a hypothesis function h such that h(x)=c(x) with high probability when x is drawn according to D. The learner has to be able to produce such an 'approximately correct' h for every D and every target concept c in its concept class. We can consider replacing the random examples by potentially more powerful quantum examples ∑ x D ( x ) | x , c ( x ) ⟩ {\displaystyle \sum _{x}{\sqrt {D(x)}}|x,c(x)\rangle } . 
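The quantum example state above can be written out explicitly for a small case. The following numpy sketch (illustrative; the distribution D and the 2-bit AND concept are invented for the example) lists the amplitudes sqrt(D(x)) attached to the basis labels |x, c(x)⟩ and checks that they form a normalised state.

```python
# Sketch: amplitudes of a quantum example state sum_x sqrt(D(x)) |x, c(x)>
# for a toy 2-bit concept (illustrative choices of D and c).
import numpy as np

n = 2
xs = [format(i, f"0{n}b") for i in range(2 ** n)]     # all inputs x in {0,1}^2
D = np.array([0.1, 0.4, 0.2, 0.3])                    # an example distribution D(x)
c = lambda x: str(int(x[0]) & int(x[1]))              # target concept: c(x) = AND of the bits

amplitudes = {}
for x, d in zip(xs, D):
    amplitudes[x + c(x)] = np.sqrt(d)                 # amplitude on basis state |x, c(x)>

print(amplitudes)                                     # e.g. {'000': 0.316..., '010': 0.632..., ...}
print(sum(a ** 2 for a in amplitudes.values()))       # 1.0: the state is normalised
```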
In the PAC model (and the related agnostic model), replacing random examples with quantum examples doesn't significantly reduce the number of examples needed: for every concept class, classical and quantum sample complexity are the same up to constant factors. However, for learning under some fixed distribution D, quantum examples can be very helpful, for example for learning DNF under the uniform distribution. When considering time complexity, there exist concept classes that can be PAC-learned efficiently by quantum learners, even from classical examples, but not by classical learners (again, under plausible complexity-theoretic assumptions). This passive learning type is also the most common scheme in supervised learning: a learning algorithm typically takes the training examples as fixed, without the ability to query the labels of unlabelled examples. Outputting a hypothesis h is a step of induction. Classically, an inductive model splits into a training and an application phase: the model parameters are estimated in the training phase, and the learned model is applied arbitrarily many times in the application phase. In the asymptotic limit of the number of applications, this splitting of phases is also present with quantum resources. The earliest experiments were conducted using the adiabatic D-Wave quantum computer, for instance, to detect cars in digital images using regularized boosting with a nonconvex objective function in a demonstration in 2009. Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of quantum machine learning for future technological implementations. In 2013, Google Research, NASA, and the Universities Space Research Association launched the Quantum Artificial Intelligence Lab, which explores the use of the adiabatic D-Wave quantum computer. A more recent example trained probabilistic generative models with arbitrary pairwise connectivity, showing that such models are capable of generating handwritten digits as well as reconstructing noisy images of bars and stripes and handwritten digits. Using a different annealing technology based on nuclear magnetic resonance (NMR), a quantum Hopfield network was implemented in 2009 that mapped the input data and memorized data to Hamiltonians, allowing the use of adiabatic quantum computation. NMR technology also enables universal quantum computing, and it was used for the first experimental implementation of a quantum support vector machine to distinguish the handwritten digits '6' and '9' on a liquid-state quantum computer in 2015. The training data involved pre-processing of the images, which maps them to normalized 2-dimensional vectors in order to represent the images as the states of a qubit. The two entries of the vector are the vertical and horizontal ratios of the pixel intensity of the image. Once the vectors were defined on the feature space, the quantum support vector machine was implemented to classify an unknown input vector. The readout avoids costly quantum tomography by reading out the final state in terms of the direction (up/down) of the NMR signal. Photonic implementations are attracting more attention, not least because they do not require extensive cooling. Simultaneous spoken digit and speaker recognition and chaotic time-series prediction were demonstrated at data rates beyond 1 gigabyte per second in 2013. 
Using non-linear photonics to implement an all-optical linear classifier, a perceptron model was capable of learning the classification boundary iteratively from training data through a feedback rule. A core building block in many learning algorithms is the calculation of the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015. Recently, based on a neuromimetic approach, a novel ingredient has been added to the field of quantum machine learning, in the form of a so-called quantum memristor, a quantized model of the standard classical memristor. This device can be constructed by means of a tunable resistor, weak measurements on the system, and a classical feed-forward mechanism. An implementation of a quantum memristor in superconducting circuits has been proposed, and an experiment with quantum dots performed. A quantum memristor would implement nonlinear interactions in the quantum dynamics, which would aid the search for a fully functional quantum neural network. In 2016, IBM launched an online cloud-based platform for quantum software developers, called the IBM Q Experience. The platform consists of several fully operational quantum processors accessible via the IBM Web API. In doing so, the company is encouraging software developers to pursue new algorithms through a development environment with quantum capabilities. New architectures are being explored on an experimental basis, up to 32 qubits, using both trapped-ion and superconductive quantum computing methods. In October 2019, it was noted that the introduction of quantum random number generators (QRNGs) to machine learning models, including neural networks and convolutional neural networks for random initial weight distribution and random forests for splitting processes, had a profound effect on their performance when compared to the classical method of pseudorandom number generators (PRNGs). However, in a more recent publication from 2021, these claims could not be reproduced for neural network weight initialization and no significant advantage of using QRNGs over PRNGs was found. The work also demonstrated that the generation of fair random numbers with a gate quantum computer is a non-trivial task on NISQ devices, and QRNGs are therefore typically much more difficult to use in practice than PRNGs. A paper published in December 2018 reported on an experiment using a trapped-ion system demonstrating a quantum speedup of the deliberation time of reinforcement learning agents employing internal quantum hardware. In March 2021, a team of researchers from Austria, the Netherlands, the US and Germany reported the experimental demonstration of a quantum speedup of the learning time of reinforcement learning agents interacting fully quantumly with the environment. The relevant degrees of freedom of both agent and environment were realized on a compact and fully tunable integrated nanophotonic processor. While machine learning itself is now not only a research field but an economically significant and fast-growing industry, and quantum computing is a well-established field of both theoretical and experimental research, quantum machine learning remains a purely theoretical field of study. Attempts to experimentally demonstrate concepts of quantum machine learning remain insufficient. 
Many of the leading scientists who publish extensively in the field of quantum machine learning warn about the extensive hype around the topic and are very restrained when asked about its practical uses in the foreseeable future. Sophia Chen collected some of the statements made by well-known scientists in the field: "I think we haven't done our homework yet. This is an extremely new scientific field," - physicist Maria Schuld of Canada-based quantum computing startup Xanadu. "When mixing machine learning with 'quantum,' you catalyse a hype-condensate." - Jacob Biamonte, a contributor to the theory of quantum computation. "There is a lot more work that needs to be done before claiming quantum machine learning will actually work," - computer scientist Iordanis Kerenidis, the head of quantum algorithms at the Silicon Valley-based quantum computing startup QC Ware. "I have not seen a single piece of evidence that there exists a meaningful [machine learning] task for which it would make sense to use a quantum computer and not a classical computer," - physicist Ryan Sweke of the Free University of Berlin in Germany. "Don't fall for the hype!" - Frank Zickert, author of probably the most practical book related to the subject, who cautions that "quantum computers are far away from advancing machine learning for their representation ability", and that even with regard to evaluation and optimization for any kind of useful task, quantum supremacy has not yet been achieved. Furthermore, nobody among the active researchers in the field makes any forecasts about when it could possibly become practical. See also: Differentiable programming, Quantum computing, Quantum algorithm for linear systems of equations, Quantum annealing, Quantum neural network, Quantum image.
Rademacher complexity
In computational learning theory (machine learning and theory of computation), Rademacher complexity, named after Hans Rademacher, measures the richness of a class of sets with respect to a probability distribution. The concept can also be extended to real-valued functions.
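The definition lends itself to a direct Monte Carlo estimate. The sketch below (illustrative; the sample, the class of threshold functions and the number of trials are invented for the example) estimates the empirical Rademacher complexity of a finite function class on a fixed sample by averaging the supremum of the correlation between the functions' values and random +/-1 signs.

```python
# Monte Carlo estimate of empirical Rademacher complexity (illustrative):
#   R_S(F) = E_sigma [ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ],
# where sigma_i are independent +/-1 "Rademacher" signs.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.random(n))                               # a fixed sample on [0, 1]
# F: threshold functions f_t(x) = +1 if x >= t else -1, over a grid of thresholds t
F = np.array([np.where(x >= t, 1.0, -1.0) for t in np.linspace(0, 1, 21)])

def empirical_rademacher(F, n_trials=5000):
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=F.shape[1])  # random signs
        total += np.max(F @ sigma) / F.shape[1]           # sup over f of (1/n) sum sigma_i f(x_i)
    return total / n_trials

print(empirical_rademacher(F))    # richer function classes give larger values
```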
Reciprocal human machine learning
Reciprocal Human Machine Learning (RHML) is an interdisciplinary approach to designing human-AI interaction systems. RHML aims to enable continual learning between humans and machine learning models by having them learn from each other. This approach keeps the human expert "in the loop" to oversee and enhance machine learning performance while simultaneously supporting the human expert's own continued learning.
Relational data mining
Relational data mining is the data mining technique for relational databases. Unlike traditional data mining algorithms, which look for patterns in a single table (propositional patterns), relational data mining algorithms look for patterns among multiple tables (relational patterns). For most types of propositional patterns, there are corresponding relational patterns. For example, there are relational classification rules (relational classification), relational regression trees, and relational association rules. There are several approaches to relational data mining: Inductive Logic Programming (ILP), Statistical Relational Learning (SRL), graph mining, propositionalization, and multi-view learning.
Right to explanation
In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for." Some such legal rights already exist, while the scope of a general "right to explanation" is a matter of ongoing debate. It has been argued that a "social right to explanation" is a crucial foundation for an information society, particularly as the institutions of that society will need to use digital technologies, artificial intelligence, and machine learning; in other words, automated decision-making systems that provide explanations would be more trustworthy and transparent. Without this right, which could be constituted both legally and through professional standards, the public will be left without much recourse to challenge the decisions of automated systems.
Robot learning
Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, provides at the same time specific difficulties (e.g. high-dimensionality, real-time constraints for collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives). Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping and active object categorization, as well as interactive skills such as joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as for example in robot learning by imitation. Robot learning can be closely related to adaptive control, reinforcement learning as well as developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, these applications are usually not referred to as "robot learning".
Robotic process automation
Robotic process automation (RPA) is a form of business process automation that is based on software robots (bots) or artificial intelligence (AI) agents. RPA should not be confused with artificial intelligence, as it is based on automation technology following a predefined workflow. It is sometimes referred to as software robotics (not to be confused with robot software). In traditional workflow automation tools, a software developer produces a list of actions to automate a task and interfaces to the back-end system using internal application programming interfaces (APIs) or a dedicated scripting language. In contrast, RPA systems develop the action list by watching the user perform that task in the application's graphical user interface (GUI), and then perform the automation by repeating those tasks directly in the GUI. This can lower the barrier to the use of automation in products that might not otherwise feature APIs for this purpose. RPA tools have strong technical similarities to graphical user interface testing tools. These tools also automate interactions with the GUI, and often do so by repeating a set of demonstration actions performed by a user. RPA tools differ from such systems in that they allow data to be handled in and between multiple applications, for instance, receiving an email containing an invoice, extracting the data, and then typing that into a bookkeeping system.
Rule induction
Rule induction is an area of machine learning in which formal rules are extracted from a set of observations. The rules extracted may represent a full scientific model of the data, or merely represent local patterns in the data. Data mining in general, and rule induction in particular, attempt to create algorithms without human programming by analyzing existing data structures. In the simplest case, a rule is expressed with if-then statements and is created with the ID3 algorithm for decision tree learning. Rule learning algorithms take training data as input and create rules by partitioning the table with cluster analysis. A possible alternative to the ID3 algorithm is genetic programming, which evolves a program until it fits the data. Creating different algorithms and testing them with input data can be done in the WEKA software. Additional tools are machine learning libraries for Python, such as scikit-learn.
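As a brief illustration of extracting if-then rules from data with such a library, the following sketch uses scikit-learn (note that its decision trees implement the CART algorithm rather than ID3; the dataset and tree depth are chosen only for the example) to fit a small tree and print its learned partitioning as nested if-then rules.

```python
# Sketch: induce if-then rules from data via a shallow decision tree (illustrative).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned partitioning as nested "if feature <= value" rules
print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))
```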
Sample complexity
The sample complexity of a machine learning algorithm represents the number of training samples that it needs in order to successfully learn a target function. More precisely, the sample complexity is the number of training samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1. There are two variants of sample complexity: The weak variant fixes a particular input-output distribution; The strong variant takes the worst-case sample complexity over all input-output distributions. The no-free-lunch theorem proves that, in general, the strong sample complexity is infinite, i.e. that there is no algorithm that can learn the globally optimal target function using a finite number of training samples. However, if we are only interested in a particular class of target functions (e.g., only linear functions) then the sample complexity is finite, and it depends linearly on the VC dimension of the class of target functions.
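The weak variant can be illustrated with a small simulation. In the sketch below (illustrative only; the uniform distribution, the threshold concept and the simple learner are invented for the example), the error of a learner for one-dimensional threshold functions, a class of VC dimension 1, is measured as the number of training samples grows.

```python
# Sketch: error of a simple learner for 1-D threshold concepts vs. sample size.
import numpy as np

rng = np.random.default_rng(0)
true_threshold = 0.37                                 # the unknown target concept

def learn_and_test(m, n_test=100_000):
    x = rng.random(m)                                 # m training samples, uniform on [0, 1]
    labels = x >= true_threshold
    positives = x[labels]
    # learner: place the threshold at the smallest positive example seen (or 1.0 if none)
    h = positives.min() if len(positives) else 1.0
    x_test = rng.random(n_test)
    return np.mean((x_test >= h) != (x_test >= true_threshold))

for m in [10, 100, 1000, 10000]:
    print(m, round(learn_and_test(m), 4))             # error shrinks roughly like 1/m
```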
Self-supervised learning
Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on external labels provided by humans. In the context of neural networks, self-supervised learning aims to leverage inherent structures or relationships within the input data to create meaningful training signals. SSL tasks are designed so that solving them requires capturing essential features or relationships in the data. The input data is typically augmented or transformed in a way that creates pairs of related samples. One sample serves as the input, and the other is used to formulate the supervisory signal. This augmentation can involve introducing noise, cropping, rotation, or other transformations. Self-supervised learning more closely imitates the way humans learn to classify objects. The typical SSL method is based on an artificial neural network or other model such as a decision list. The model learns in two steps. First, the task is solved based on an auxiliary or pretext classification task using pseudo-labels, which help to initialize the model parameters. Second, the actual task is performed with supervised or unsupervised learning. Other auxiliary tasks involve pattern completion from masked input patterns (silent pauses in speech or image portions masked in black). Self-supervised learning has produced promising results in recent years and has found practical application in audio processing, and is being used by Facebook and others for speech recognition. Autoassociative self-supervised learning is a specific category of self-supervised learning where a neural network is trained to reproduce or reconstruct its own input data. In other words, the model is tasked with learning a representation of the data that captures its essential features or structure, allowing it to regenerate the original input. The term "autoassociative" comes from the fact that the model is essentially associating the input data with itself. This is often achieved using autoencoders, which are a type of neural network architecture used for representation learning. Autoencoders consist of an encoder network that maps the input data to a lower-dimensional representation (latent space), and a decoder network that reconstructs the input data from this representation. The training process involves presenting the model with input data and requiring it to reconstruct the same data as closely as possible. The loss function used during training typically penalizes the difference between the original input and the reconstructed output. By minimizing this reconstruction error, the autoencoder learns a meaningful representation of the data in its latent space. For a binary classification task, training data can be divided into positive examples and negative examples. Positive examples are those that match the target. For example, if the task is learning to identify birds, the positive training data are those pictures that contain birds. Negative examples are those that do not. Contrastive self-supervised learning uses both positive and negative examples. Contrastive learning's loss function minimizes the distance between positive sample pairs while maximizing the distance between negative sample pairs. 
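A small numerical sketch makes the contrastive objective concrete. The function below (illustrative; the temperature value and the random embeddings are invented, and it is not a faithful reproduction of any specific published loss) scores pairs of embeddings so that matching views of the same sample are pulled together while the other samples in the batch act as negatives.

```python
# Sketch of a simple contrastive loss over pairs of embedded "views" (illustrative).
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """z1[i] and z2[i] are embeddings of two augmented views of the same sample."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature           # sim[i, j]: similarity of view i of sample i with view j
    # for row i, the positive pair is sim[i, i]; all other columns act as negatives
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
print(contrastive_loss(z + 0.01 * rng.standard_normal(z.shape), z))  # small: views agree
print(contrastive_loss(rng.standard_normal((8, 16)), z))             # larger: no alignment
```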
Non-contrastive self-supervised learning (NCSSL) uses only positive examples. Counterintuitively, NCSSL converges on a useful local minimum rather than reaching a trivial solution with zero loss. For the example of binary classification, it would trivially learn to classify each example as positive. Effective NCSSL requires an extra predictor on the online side that does not back-propagate on the target side. SSL belongs to supervised learning methods insofar as the goal is to generate a classified output from the input. At the same time, however, it does not require the explicit use of labeled input-output pairs. Instead, correlations, metadata embedded in the data, or domain knowledge present in the input are implicitly and autonomously extracted from the data. These supervisory signals, generated from the data, can then be used for training. SSL is similar to unsupervised learning in that it does not require labels in the sample data. Unlike unsupervised learning, however, learning is not done using inherent data structures. Semi-supervised learning combines supervised and unsupervised learning, requiring only a small portion of the learning data to be labeled. In transfer learning, a model designed for one task is reused on a different task. Training an autoencoder intrinsically constitutes a self-supervised process, because the output pattern needs to become an optimal reconstruction of the input pattern itself. However, in current jargon, the term 'self-supervised' has become associated with classification tasks that are based on a pretext-task training setup. This involves the (human) design of such pretext task(s), unlike the case of fully self-contained autoencoder training. In reinforcement learning, self-supervised learning from a combination of losses can create abstract representations where only the most important information about the state is kept in a compressed way. Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other. Google's Bidirectional Encoder Representations from Transformers (BERT) model is used to better understand the context of search queries. OpenAI's GPT-3 is an autoregressive language model that can be used in language processing. It can be used to translate texts or answer questions, among other things. Bootstrap Your Own Latent (BYOL) is a NCSSL that produced excellent results on ImageNet and on transfer and semi-supervised benchmarks. The Yarowsky algorithm is an example of self-supervised learning in natural language processing. From a small number of labeled examples, it learns to predict which word sense of a polysemous word is being used at a given point in text. DirectPred is a NCSSL that directly sets the predictor weights instead of learning them via gradient updates. Self-GenomeNet is an example of self-supervised learning in genomics.
Semantic analysis (machine learning)
In machine learning, semantic analysis of a corpus is the task of building structures that approximate concepts from a large set of documents. It generally does not involve prior semantic understanding of the documents. A metalanguage based on predicate logic can analyze the speech of humans. Another strategy for understanding the semantics of a text is symbol grounding: if language is grounded, it is equal to recognizing a machine-readable meaning. For the restricted domain of spatial analysis, a computer-based language understanding system was demonstrated. Latent semantic analysis (sometimes latent semantic indexing) is a class of techniques where documents are represented as vectors in term space. A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it.
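A brief sketch of latent semantic analysis with scikit-learn (the toy corpus and the number of components are invented for the example) shows the two steps: documents are represented as TF-IDF vectors in term space and then projected by truncated SVD into a low-dimensional latent space in which related documents lie close together.

```python
# Sketch of latent semantic analysis: TF-IDF term vectors + truncated SVD (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "a cat and a kitten played",
    "stock markets fell sharply",
    "investors sold shares as markets dropped",
]
tfidf = TfidfVectorizer().fit_transform(docs)        # documents as vectors in term space
lsa = TruncatedSVD(n_components=2, random_state=0)   # latent semantic dimensions
print(lsa.fit_transform(tfidf))                      # 2-D coordinates; the animal and finance documents separate
```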
Semantic folding
Semantic folding theory describes a procedure for encoding the semantics of natural language text in a semantically grounded binary representation. This approach provides a framework for modelling how language data is processed by the neocortex. Semantic folding theory draws inspiration from Douglas R. Hofstadter's Analogy as the Core of Cognition, which suggests that the brain makes sense of the world by identifying and applying analogies. The theory hypothesises that semantic data must therefore be introduced to the neocortex in such a form as to allow the application of a similarity measure, and offers, as a solution, the sparse binary vector employing a two-dimensional topographic semantic space as a distributional reference frame. The theory builds on the computational theory of the human cortex known as hierarchical temporal memory (HTM), and positions itself as a complementary theory for the representation of language semantics. A particular strength claimed by this approach is that the resulting binary representation enables complex semantic operations to be performed simply and efficiently at the most basic computational level. Analogous to the structure of the neocortex, Semantic Folding theory posits the implementation of a semantic space as a two-dimensional grid. This grid is populated by context-vectors in such a way as to place similar context-vectors closer to each other, for instance, by using competitive learning principles. This vector space model is presented in the theory as an equivalence to the well-known word space model described in the information retrieval literature. Given a semantic space (implemented as described above), a word-vector can be obtained for any given word Y by employing the following algorithm: for each position X in the semantic map (where X represents cartesian coordinates), if the word Y is contained in the context-vector at position X, then add 1 to the corresponding position in the word-vector for Y; otherwise add 0 to the corresponding position in the word-vector for Y. The result of this process will be a word-vector containing all the contexts in which the word Y appears, and it will therefore be representative of the semantics of that word in the semantic space. It can be seen that the resulting word-vector is also in a sparse distributed representation (SDR) format [Schütze, 1993] & [Sahlgreen, 2006]. Some properties of word-SDRs that are of particular interest with respect to computational semantics are: high noise resistance (as a result of similar contexts being placed closer together in the underlying map, word-SDRs are highly tolerant of false or shifted "bits"); boolean logic (it is possible to manipulate word-SDRs in a meaningful way using boolean (OR, AND, exclusive-OR) and/or arithmetical (SUBtract) functions); sub-sampling (word-SDRs can be sub-sampled to a high degree without any appreciable loss of semantic information); and topological two-dimensional representation (the SDR representation maintains the topological distribution of the underlying map, so words with similar meanings will have similar word-vectors). This suggests that a variety of measures can be applied to the calculation of semantic similarity, from a simple overlap of vector elements to a range of distance measures such as Euclidean distance, Hamming distance, Jaccard distance, cosine similarity, Levenshtein distance, the Sørensen-Dice index, etc. 
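The word-vector procedure described above can be written out directly. The following Python sketch (illustrative; the tiny 2x2 semantic map and its context-vectors are invented for the example) builds the sparse binary word-vector for a word by marking every grid position whose context-vector contains it.

```python
# Sketch of the semantic-folding word-vector algorithm on a toy 2x2 semantic map.
import numpy as np

semantic_map = [                                   # positions X on a 2-D grid
    [{"dog", "cat", "pet"},    {"dog", "leash", "walk"}],
    [{"car", "engine", "road"}, {"car", "jaguar", "speed"}],
]

def word_vector(word):
    rows, cols = len(semantic_map), len(semantic_map[0])
    vec = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):                      # position X = (i, j)
            if word in semantic_map[i][j]:         # word Y contained in the context-vector at X
                vec[i, j] = 1
    return vec.flatten()                           # a sparse binary word-SDR

print(word_vector("dog"))    # [1 1 0 0]
print(word_vector("car"))    # [0 0 1 1]  -> no overlap with "dog", as expected
```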
Semantic spaces in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language: vocabulary mismatch (the fact that the same meaning can be expressed in many ways) and the ambiguity of natural language (the fact that the same term can have several meanings). The application of semantic spaces in natural language processing (NLP) aims at overcoming the limitations of rule-based or model-based approaches operating on the keyword level. The main drawback of these approaches is their brittleness and the large manual effort required to create either rule-based NLP systems or training corpora for model learning. Rule-based and machine learning-based models are fixed at the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models. Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that attracted a lot of attention around the general idea of creating semantic spaces: latent semantic analysis from Microsoft and Hyperspace Analogue to Language from the University of California. However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words (e.g. "spider-web", "lighter-cigarette", as opposed to synonymous relations such as "whale-dolphin", "astronaut-driver") was achieved by explicit semantic analysis (ESA) in 2007. ESA was a novel approach, not based on machine learning, that represented words in the form of vectors with 100,000 dimensions (where each dimension represents an article in Wikipedia). However, practical applications of the approach are limited by the large number of dimensions required in the vectors. More recently, advances in neural network techniques, in combination with other new approaches (tensors), led to a host of new developments: Word2vec from Google and GloVe from Stanford University. Semantic folding represents a novel, biologically inspired approach to semantic spaces in which each word is represented as a sparse binary vector with 16,000 dimensions (a semantic fingerprint) in a 2D semantic map (the semantic universe). Sparse binary representations are advantageous in terms of computational efficiency and allow for the storage of very large numbers of possible patterns. The topological distribution over a two-dimensional grid (outlined above) lends itself to a bitmap-type visualization of the semantics of any word or text, where each active semantic feature can be displayed as, for example, a pixel. This representation allows for a direct visual comparison of the semantics of two (or more) linguistic items: two disparate terms such as "dog" and "car" have, as expected, very obviously different semantics, whereas only one of the meaning contexts of "jaguar", that of Jaguar the car, overlaps with the meaning of Porsche (indicating partial similarity), while other meaning contexts of "jaguar", such as the animal, clearly have different, non-overlapping contexts. The visualization of semantic similarity using semantic folding bears a strong resemblance to the fMRI images produced in a research study conducted by A.G.
Huth et al., in which it is claimed that words are grouped in the brain by meaning. Voxels, small volume segments of the brain, were found to follow a pattern in which semantic information is represented along the boundary of the visual cortex, with visual and linguistic categories represented on the posterior and anterior sides respectively.
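Building on the word-vector sketch above, the following minimal sketch compares two sparse binary fingerprints by simple overlap and by Jaccard distance, two of the measures mentioned earlier; the example vectors are illustrative only:

```python
# Minimal sketch of comparing two sparse binary fingerprints by overlap.
import numpy as np

def overlap(a, b):
    """Number of shared active bits between two binary fingerprints."""
    return int(np.sum(a & b))

def jaccard_distance(a, b):
    """1 - |intersection| / |union| of the active bits."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return 1.0 - inter / union if union else 0.0

dog = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0], dtype=np.uint8)
car = np.array([0, 0, 0, 1, 1, 0, 1, 1, 0], dtype=np.uint8)

print(overlap(dog, car))           # 1 shared context
print(jaccard_distance(dog, car))  # ~0.83 -> mostly dissimilar semantics
```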
Semi-supervised learning
Weak supervision is a paradigm in machine learning whose relevance and notability increased with the advent of large language models, owing to the large amount of data required to train them. It is characterized by combining a small amount of human-labeled data (used exclusively in the more expensive and time-consuming supervised learning paradigm) with a large amount of unlabeled data (used exclusively in the unsupervised learning paradigm). In other words, the desired output values are provided only for a subset of the training data; the remaining data is unlabeled or imprecisely labeled. Intuitively, the learning problem can be seen as an exam and the labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam. Technically, it could be viewed as performing clustering and then labeling the clusters with the labeled data, pushing the decision boundary away from high-density regions, or learning an underlying low-dimensional manifold where the data reside. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process may thus render large, fully labeled training sets infeasible, whereas the acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning. More formally, semi-supervised learning assumes a set of l {\displaystyle l} independently and identically distributed examples x 1 , … , x l ∈ X {\displaystyle x_{1},\dots ,x_{l}\in X} with corresponding labels y 1 , … , y l ∈ Y {\displaystyle y_{1},\dots ,y_{l}\in Y} , together with u {\displaystyle u} unlabeled examples x l + 1 , … , x l + u ∈ X {\displaystyle x_{l+1},\dots ,x_{l+u}\in X} . Semi-supervised learning combines this information to surpass the classification performance that can be obtained either by discarding the unlabeled data and doing supervised learning or by discarding the labels and doing unsupervised learning. Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data x l + 1 , … , x l + u {\displaystyle x_{l+1},\dots ,x_{l+u}} only. The goal of inductive learning is to infer the correct mapping from X {\displaystyle X} to Y {\displaystyle Y} . It is unnecessary (and, according to Vapnik's principle, imprudent) to perform transductive learning by way of inferring a classification rule over the entire input space; however, in practice, algorithms formally designed for transduction or induction are often used interchangeably. In order to make any use of unlabeled data, some relationship to the underlying distribution of data must exist. Semi-supervised learning algorithms make use of at least one of the following assumptions. Smoothness assumption: points that are close to each other are more likely to share a label. This is also generally assumed in supervised learning and yields a preference for geometrically simple decision boundaries.
In the case of semi-supervised learning, the smoothness assumption additionally yields a preference for decision boundaries in low-density regions, so that few points are close to each other but in different classes. Cluster assumption: the data tend to form discrete clusters, and points in the same cluster are more likely to share a label (although data that shares a label may spread across multiple clusters). This is a special case of the smoothness assumption and gives rise to feature learning with clustering algorithms. Manifold assumption: the data lie approximately on a manifold of much lower dimension than the input space. In this case, learning the manifold using both the labeled and unlabeled data can avoid the curse of dimensionality. Then learning can proceed using distances and densities defined on the manifold. The manifold assumption is practical when high-dimensional data are generated by some process that may be hard to model directly, but which has only a few degrees of freedom. For instance, the human voice is controlled by a few vocal folds, and images of various facial expressions are controlled by a few muscles. In these cases, it is better to consider distances and smoothness in the natural space of the generating problem, rather than in the space of all possible acoustic waves or images, respectively. The heuristic approach of self-training (also known as self-learning or self-labeling) is historically the oldest approach to semi-supervised learning, with examples of applications starting in the 1960s. The transductive learning framework was formally introduced by Vladimir Vapnik in the 1970s. Interest in inductive learning using generative models also began in the 1970s. A probably approximately correct learning bound for semi-supervised learning of a Gaussian mixture was demonstrated by Ratsaby and Venkatesh in 1995. Generative approaches to statistical learning first seek to estimate p ( x | y ) {\displaystyle p(x|y)} , the distribution of data points belonging to each class. The probability p ( y | x ) {\displaystyle p(y|x)} that a given point x {\displaystyle x} has label y {\displaystyle y} is then proportional to p ( x | y ) p ( y ) {\displaystyle p(x|y)p(y)} by Bayes' rule. Semi-supervised learning with generative models can be viewed either as an extension of supervised learning (classification plus information about p ( x ) {\displaystyle p(x)} ) or as an extension of unsupervised learning (clustering plus some labels). Generative models assume that the distributions take some particular form p ( x | y , θ ) {\displaystyle p(x|y,\theta )} parameterized by the vector θ {\displaystyle \theta } . If these assumptions are incorrect, the unlabeled data may actually decrease the accuracy of the solution relative to what would have been obtained from labeled data alone. However, if the assumptions are correct, then the unlabeled data necessarily improves performance. The unlabeled data are distributed according to a mixture of individual-class distributions. In order to learn the mixture distribution from the unlabeled data, it must be identifiable, that is, different parameters must yield different summed distributions. Gaussian mixture distributions are identifiable and commonly used for generative models. The parameterized joint distribution can be written as p ( x , y | θ ) = p ( y | θ ) p ( x | y , θ ) {\displaystyle p(x,y|\theta )=p(y|\theta )p(x|y,\theta )} by using the chain rule.
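As a minimal illustration of this generative approach, the following sketch assumes Gaussian class-conditional densities, fits p(x|y) and p(y) from a toy labeled sample, and scores p(y|x) proportional to p(x|y)p(y); all data and names are illustrative assumptions:

```python
# Minimal sketch of generative classification via Bayes' rule, assuming
# Gaussian class-conditional densities. Data and names are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

# Toy labeled data: two classes in 2D.
X = np.array([[0.0, 0.1], [0.2, -0.1], [2.9, 3.1], [3.1, 2.8]])
y = np.array([0, 0, 1, 1])

classes = np.unique(y)
priors, dists = {}, {}
for c in classes:
    Xc = X[y == c]
    priors[c] = len(Xc) / len(X)                     # p(y)
    dists[c] = multivariate_normal(                  # p(x|y), fitted Gaussian
        Xc.mean(axis=0),
        np.cov(Xc.T) + 1e-3 * np.eye(2))             # small ridge keeps the covariance positive definite

def predict(x):
    # p(y|x) is proportional to p(x|y) p(y); pick the argmax over classes.
    scores = {c: dists[c].pdf(x) * priors[c] for c in classes}
    return max(scores, key=scores.get)

print(predict(np.array([0.1, 0.0])))  # -> 0
print(predict(np.array([3.0, 3.0])))  # -> 1
```

In a semi-supervised variant of this idea, the unlabeled points would additionally enter the fit, for example via expectation-maximization over the mixture p(x).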
Each parameter vector θ {\displaystyle \theta } is associated with a decision function f θ ( x ) = argmax y p ( y | x , θ ) {\displaystyle f_{\theta }(x)={\underset {y}{\operatorname {argmax} }}\ p(y|x,\theta )} . The parameter is then chosen based on fit to both the labeled and unlabeled data, weighted by λ {\displaystyle \lambda } : argmax θ ( log ⁡ p ( { x i , y i } i = 1 l | θ ) + λ log ⁡ p ( { x i } i = l + 1 l + u | θ ) ) {\displaystyle {\underset {\theta }{\operatorname {argmax} }}\left(\log p(\{x_{i},y_{i}\}_{i=1}^{l}|\theta )+\lambda \log p(\{x_{i}\}_{i=l+1}^{l+u}|\theta )\right)} Another major class of methods attempts to place boundaries in regions with few data points (labeled or unlabeled). One of the most commonly used algorithms is the transductive support vector machine, or TSVM (which, despite its name, may be used for inductive learning as well). Whereas support vector machines for supervised learning seek a decision boundary with maximal margin over the labeled data, the goal of TSVM is a labeling of the unlabeled data such that the decision boundary has maximal margin over all of the data. In addition to the standard hinge loss ( 1 − y f ( x ) ) + {\displaystyle (1-yf(x))_{+}} for labeled data, a loss function ( 1 − | f ( x ) | ) + {\displaystyle (1-|f(x)|)_{+}} is introduced over the unlabeled data by letting y = sign ⁡ f ( x ) {\displaystyle y=\operatorname {sign} {f(x)}} . TSVM then selects f ∗ ( x ) = h ∗ ( x ) + b {\displaystyle f^{*}(x)=h^{*}(x)+b} from a reproducing kernel Hilbert space H {\displaystyle {\mathcal {H}}} by minimizing the regularized empirical risk: f ∗ = argmin f ( ∑ i = 1 l ( 1 − y i f ( x i ) ) + + λ 1 ‖ h ‖ H 2 + λ 2 ∑ i = l + 1 l + u ( 1 − | f ( x i ) | ) + ) {\displaystyle f^{*}={\underset {f}{\operatorname {argmin} }}\left(\displaystyle \sum _{i=1}^{l}(1-y_{i}f(x_{i}))_{+}+\lambda _{1}\|h\|_{\mathcal {H}}^{2}+\lambda _{2}\sum _{i=l+1}^{l+u}(1-|f(x_{i})|)_{+}\right)} An exact solution is intractable due to the non-convex term ( 1 − | f ( x ) | ) + {\displaystyle (1-|f(x)|)_{+}} , so research focuses on useful approximations. Other approaches that implement low-density separation include Gaussian process models, information regularization, and entropy minimization (of which TSVM is a special case). Laplacian regularization has historically been approached through the graph Laplacian. Graph-based methods for semi-supervised learning use a graph representation of the data, with a node for each labeled and unlabeled example. The graph may be constructed using domain knowledge or similarity of examples; two common methods are to connect each data point to its k {\displaystyle k} nearest neighbors or to examples within some distance ϵ {\displaystyle \epsilon } . The weight W i j {\displaystyle W_{ij}} of an edge between x i {\displaystyle x_{i}} and x j {\displaystyle x_{j}} is then set to e − ‖ x i − x j ‖ 2 / ϵ 2 {\displaystyle e^{-\|x_{i}-x_{j}\|^{2}/\epsilon ^{2}}} . Within the framework of manifold regularization, the graph serves as a proxy for the manifold. A term is added to the standard Tikhonov regularization problem to enforce smoothness of the solution relative to the manifold (in the intrinsic space of the problem) as well as relative to the ambient input space.
The minimization problem becomes argmin f ∈ H ( 1 l ∑ i = 1 l V ( f ( x i ) , y i ) + λ A ‖ f ‖ H 2 + λ I ∫ M ‖ ∇ M f ( x ) ‖ 2 d p ( x ) ) {\displaystyle {\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left({\frac {1}{l}}\displaystyle \sum _{i=1}^{l}V(f(x_{i}),y_{i})+\lambda _{A}\|f\|_{\mathcal {H}}^{2}+\lambda _{I}\int _{\mathcal {M}}\|\nabla _{\mathcal {M}}f(x)\|^{2}dp(x)\right)} where H {\displaystyle {\mathcal {H}}} is a reproducing kernel Hilbert space and M {\displaystyle {\mathcal {M}}} is the manifold on which the data lie. The regularization parameters λ A {\displaystyle \lambda _{A}} and λ I {\displaystyle \lambda _{I}} control smoothness in the ambient and intrinsic spaces respectively. The graph is used to approximate the intrinsic regularization term. Defining the graph Laplacian L = D − W {\displaystyle L=D-W} where D i i = ∑ j = 1 l + u W i j {\displaystyle D_{ii}=\sum _{j=1}^{l+u}W_{ij}} and f {\displaystyle \mathbf {f} } is the vector [ f ( x 1 ) … f ( x l + u ) ] {\displaystyle [f(x_{1})\dots f(x_{l+u})]} , we have f T L f = 1 2 ∑ i , j = 1 l + u W i j ( f i − f j ) 2 ≈ ∫ M ‖ ∇ M f ( x ) ‖ 2 d p ( x ) {\displaystyle \mathbf {f} ^{T}L\mathbf {f} ={\frac {1}{2}}\displaystyle \sum _{i,j=1}^{l+u}W_{ij}(f_{i}-f_{j})^{2}\approx \int _{\mathcal {M}}\|\nabla _{\mathcal {M}}f(x)\|^{2}dp(x)} . The graph-based approach to Laplacian regularization can be seen as related to the finite difference method. The Laplacian can also be used to extend supervised learning algorithms such as regularized least squares and support vector machines (SVM) to semi-supervised versions, Laplacian regularized least squares and Laplacian SVM. Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examples x 1 , … , x l + u {\displaystyle x_{1},\dots ,x_{l+u}} may inform a choice of representation, distance metric, or kernel for the data in an unsupervised first step. Then supervised learning proceeds from only the labeled examples. In this vein, some methods learn a low-dimensional representation using the supervised data and then apply either low-density separation or graph-based methods to the learned representation. Iteratively refining the representation and then performing semi-supervised learning on said representation may further improve performance. Self-training is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm. Generally, only the labels the classifier is most confident in are added at each step. In natural language processing, a common self-training algorithm is the Yarowsky algorithm for problems like word sense disambiguation, accent restoration, and spelling correction. Co-training is an extension of self-training in which multiple classifiers are trained on different (ideally disjoint) sets of features and generate labeled examples for one another. Human responses to formal semi-supervised learning problems have yielded varying conclusions about the degree of influence of the unlabeled data. More natural learning problems may also be viewed as instances of semi-supervised learning. Much of human concept learning involves a small amount of direct instruction (e.g.
parental labeling of objects during childhood) combined with large amounts of unlabeled experience (e.g. observation of objects without naming or counting them, or at least without feedback). Human infants are sensitive to the structure of unlabeled natural categories such as images of dogs and cats or male and female faces. Infants and children take into account not only unlabeled examples, but the sampling process from which labeled examples arise.
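As a concrete illustration of the graph-based methods described earlier in this section, the following minimal sketch uses scikit-learn's LabelSpreading, which builds an RBF-weighted similarity graph over labeled and unlabeled points (the latter marked with -1) and propagates labels along its edges; the toy data and parameter values are illustrative assumptions:

```python
# Minimal sketch of graph-based semi-supervised learning via label propagation.
# Unlabeled points are marked with -1; the RBF-weighted graph is analogous to
# the edge weights exp(-||x_i - x_j||^2 / eps^2) described above (here
# parameterized by gamma). Toy data and parameters are illustrative.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
# Two clusters in 2D; only one point per cluster is labeled.
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(3.0, 0.3, size=(20, 2))])
y = np.full(40, -1)          # -1 marks unlabeled examples
y[0], y[20] = 0, 1           # a single labeled example per class

model = LabelSpreading(kernel="rbf", gamma=5.0)
model.fit(X, y)

print(model.transduction_[:5])   # inferred labels in the first cluster -> 0s
print(model.transduction_[-5:])  # inferred labels in the second cluster -> 1s
```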
Sequence labeling
In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part-of-speech tagging, which seeks to assign a part of speech to each word in an input sentence or document. Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence. However, accuracy is generally improved by making the optimal label for a given element dependent on the choices of nearby elements, using special algorithms to choose the globally best set of labels for the entire sequence at once. As an example of why finding the globally best label sequence might produce better results than labeling one item at a time, consider the part-of-speech tagging task just described. Many words are members of multiple parts of speech, and the correct label of such a word can often be deduced from the correct label of the word to its immediate left or right. For example, the word "sets" can be either a noun or a verb. In a phrase like "he sets the books down", the word "he" is unambiguously a pronoun and "the" unambiguously a determiner, and using either of these labels, "sets" can be deduced to be a verb, since nouns very rarely follow pronouns and are less likely to precede determiners than verbs are. But in other cases, only one of the adjacent words is similarly helpful. In "he sets and then knocks over the table", only the word "he" to the left is helpful (cf. "...picks up the sets and then knocks over..."). Conversely, in "... and also sets the table" only the word "the" to the right is helpful (cf. "... and also sets of books were ..."). An algorithm that proceeds from left to right, labeling one word at a time, can only use the tags of left-adjacent words and might fail in the second example above; vice versa for an algorithm that proceeds from right to left. Most sequence labeling algorithms are probabilistic in nature, relying on statistical inference to find the best sequence. The most common statistical models in use for sequence labeling make a Markov assumption, i.e. that the choice of label for a particular word is directly dependent only on the immediately adjacent labels; hence the set of labels forms a Markov chain. This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and the conditional random field.
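The Markov assumption above makes exact decoding tractable with dynamic programming. The following minimal Viterbi sketch finds the globally best tag sequence for a toy HMM; the tag set, probabilities, and example sentence are illustrative assumptions, not drawn from any standard tagger:

```python
# Minimal Viterbi decoding sketch for a toy POS-tagging HMM.
# Tags, probabilities, and the example sentence are illustrative assumptions.
import numpy as np

tags = ["PRON", "VERB", "NOUN", "DET"]
# Transition probabilities P(tag_t | tag_{t-1}); rows = previous tag.
trans = np.array([
    [0.05, 0.80, 0.10, 0.05],   # after PRON, a VERB is likely
    [0.05, 0.05, 0.40, 0.50],   # after VERB, a NOUN or DET is likely
    [0.10, 0.40, 0.30, 0.20],
    [0.02, 0.03, 0.90, 0.05],   # after DET, a NOUN is likely
])
start = np.array([0.4, 0.1, 0.3, 0.2])
# Emission probabilities P(word | tag) for a tiny vocabulary.
emit = {
    "he":    np.array([0.90, 0.01, 0.01, 0.01]),
    "sets":  np.array([0.01, 0.50, 0.40, 0.01]),  # ambiguous: verb or noun
    "the":   np.array([0.01, 0.01, 0.01, 0.90]),
    "table": np.array([0.01, 0.05, 0.55, 0.01]),
}

def viterbi(words):
    n, k = len(words), len(tags)
    score = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    score[0] = np.log(start) + np.log(emit[words[0]])
    for t in range(1, n):
        for j in range(k):
            cand = score[t - 1] + np.log(trans[:, j]) + np.log(emit[words[t]][j])
            back[t, j] = np.argmax(cand)
            score[t, j] = np.max(cand)
    # Backtrack the globally best tag sequence.
    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi(["he", "sets", "the", "table"]))  # ['PRON', 'VERB', 'DET', 'NOUN']
```

Decoding the ambiguous word "sets" as a verb here relies on the tags of the neighbouring words, as discussed above.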
Similarity learning
Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
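As an informal illustration of one common formulation (not described in the text above), a bilinear similarity score s(x, y) = x^T W y can be learned from (anchor, positive, negative) triplets with a margin-based hinge loss; the data, margin, and learning rate in the following sketch are illustrative assumptions:

```python
# Minimal sketch of similarity learning with a bilinear score s(x, y) = x^T W y
# trained on (anchor, positive, negative) triplets with a margin hinge loss.
# Data, margin, and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = np.eye(d)                      # start from plain dot-product similarity

def score(x, y, W):
    return x @ W @ y

# Toy triplets: the anchor should be scored as more similar to the positive
# example than to the negative one.
anchors   = rng.normal(size=(100, d))
positives = anchors + 0.1 * rng.normal(size=(100, d))   # near the anchor
negatives = rng.normal(size=(100, d)) + 2.0              # far away

margin, lr = 1.0, 0.01
for epoch in range(20):
    for a, p, n in zip(anchors, positives, negatives):
        loss = margin - score(a, p, W) + score(a, n, W)
        if loss > 0:                        # hinge: only update on violations
            W -= lr * np.outer(a, n - p)    # gradient of the violated margin term

# After training, anchors should score higher with positives than negatives.
print(np.mean([score(a, p, W) > score(a, n, W)
               for a, p, n in zip(anchors, positives, negatives)]))
```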
Socially assistive robot
A socially assistive robot (SAR) aids users through social engagement and support rather than through physical tasks and interactions.
Lynda Soderholm
Lynda Soderholm is a physical chemist at the U.S. Department of Energy's (DOE) Argonne National Laboratory with a specialty in f-block elements. She is a senior scientist and the lead of the Actinide, Geochemistry & Separation Sciences Theme within Argonne's Chemical Sciences and Engineering Division. Her specific role is the Separation Science group leader within Heavy Element Chemistry and Separation Science (HESS), directing basic research focused on low-energy methods for isolating lanthanide and actinide elements from complex mixtures. She has made fundamental contributions to understanding f-block chemistry and characterizing f-block elements. Soderholm became a Fellow of the American Association for the Advancement of Science (AAAS) in 2013, and is also an Argonne Distinguished Fellow.
Solomonoff's theory of inductive inference
Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory. Solomonoff proved that this induction is incomputable, but noted that "this incomputability is of a very benign kind", and that it "in no way inhibits its use for practical prediction". Solomonoff's induction naturally formalizes Occam's razor by assigning larger prior credences to theories that require a shorter algorithmic description.
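Solomonoff's induction itself is incomputable, but its structure can be illustrated with a toy approximation in which a small, hand-picked set of hypotheses stands in for the space of all computable theories and each hypothesis receives a prior proportional to 2 to the power of minus its description length; the hypotheses, description lengths, and observed sequence below are illustrative assumptions only:

```python
# Toy illustration of Solomonoff-style induction: weight each hypothesis by a
# prior proportional to 2^(-description length), keep only hypotheses
# consistent with the observed sequence, and predict the next symbol.
# This finite, hand-picked hypothesis set is a crude stand-in for the
# (incomputable) sum over all programs.

# Each hypothesis is (name, description length in bits, generator function).
hypotheses = [
    ("all zeros",               2, lambda n: [0] * n),
    ("all ones",                2, lambda n: [1] * n),
    ("alternating 0101...",     4, lambda n: [i % 2 for i in range(n)]),
    ("0,1,0,1 then all ones",  10, lambda n: ([0, 1, 0, 1] + [1] * n)[:n]),
]

observed = [0, 1, 0, 1]

def posterior(observed, hypotheses):
    weights = {}
    for name, length, gen in hypotheses:
        if gen(len(observed)) == observed:      # consistent with the data?
            weights[name] = 2.0 ** (-length)    # universal-prior-style weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def predict_next(observed, hypotheses):
    post = posterior(observed, hypotheses)
    prob_one = 0.0
    for name, length, gen in hypotheses:
        if name in post:
            prob_one += post[name] * gen(len(observed) + 1)[-1]
    return prob_one

print(posterior(observed, hypotheses))    # the shorter consistent theory dominates
print(predict_next(observed, hypotheses)) # ~0.015: the next bit is predicted to be 0
```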