Dataset Viewer
page_title | page_text
---|---
Bayesian learning mechanisms | Bayesian learning mechanisms are probabilistic causal models used in computer science to research the fundamental underpinnings of machine learning, and in cognitive neuroscience, to model conceptual development.
Bayesian learning mechanisms have also been used in economics and cognitive psychology to study social learning in theoretical models of herd behavior. |
Outline of machine learning | The following outline is provided as an overview of and topical guide to machine learning:
Machine learning – a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". Machine learning involves the study and construction of algorithms that can learn from and make predictions on data. These algorithms operate by building a model from an example training set of input observations to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.
An academic discipline
A branch of science
An applied science
A subfield of computer science
A branch of artificial intelligence
A subfield of soft computing
Application of statistics
Supervised learning – where the model is trained on labeled data (a minimal sketch follows this list).
Unsupervised learning – where the model tries to identify patterns in unlabeled data.
Reinforcement learning – where the model learns to make decisions by receiving rewards or penalties.
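As a rough illustration of the supervised paradigm, the following is a minimal sketch using scikit-learn; the dataset and classifier are arbitrary choices for demonstration, not prescribed by this outline:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a model from an example training set, then make data-driven
# predictions on held-out inputs rather than following static instructions.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on unseen examples
```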
Applications of machine learning
Bioinformatics
Biomedical informatics
Computer vision
Customer relationship management
Data mining
Earth sciences
Email filtering
Inverted pendulum – balance and equilibrium system.
Natural language processing (NLP)
Named Entity Recognition
Automatic summarization
Automatic taxonomy construction
Dialog system
Grammar checker
Language recognition
Handwriting recognition
Optical character recognition
Speech recognition
Text to Speech Synthesis (TTS)
Speech Emotion Recognition (SER)
Machine translation
Question answering
Speech synthesis
Text mining
Term frequency–inverse document frequency (tf–idf)
Text simplification
Pattern recognition
Facial recognition system
Handwriting recognition
Image recognition
Optical character recognition
Speech recognition
Recommendation system
Collaborative filtering
Content-based filtering
Hybrid recommender systems (Collaborative and content-based filtering)
Search engine
Search engine optimization
Social Engineering
Graphics processing unit
Tensor processing unit
Vision processing unit
Comparison of deep learning software
Amazon Machine Learning
Microsoft Azure Machine Learning Studio
DistBelief – replaced by TensorFlow
Apache Singa
Apache MXNet
Caffe
PyTorch
mlpack
TensorFlow
Torch
CNTK
Accord.Net
Jax
MLJ.jl – A machine learning framework for Julia
Deeplearning4j
Theano
scikit-learn
Keras
Almeida–Pineda recurrent backpropagation
ALOPEX
Backpropagation
Bootstrap aggregating
CN2 algorithm
Constructing skill trees
Dehaene–Changeux model
Diffusion map
Dominance-based rough set approach
Dynamic time warping
Error-driven learning
Evolutionary multimodal optimization
Expectation–maximization algorithm
FastICA
Forward–backward algorithm
GeneRec
Genetic Algorithm for Rule Set Production
Growing self-organizing map
Hyper basis function network
IDistance
k-nearest neighbors algorithm
Kernel methods for vector output
Kernel principal component analysis
Leabra
Linde–Buzo–Gray algorithm
Local outlier factor
Logic learning machine
LogitBoost
Manifold alignment
Markov chain Monte Carlo (MCMC)
Minimum redundancy feature selection
Mixture of experts
Multiple kernel learning
Non-negative matrix factorization
Online machine learning
Out-of-bag error
Prefrontal cortex basal ganglia working memory
PVLV
Q-learning
Quadratic unconstrained binary optimization
Query-level feature
Quickprop
Radial basis function network
Randomized weighted majority algorithm
Reinforcement learning
Repeated incremental pruning to produce error reduction (RIPPER)
Rprop
Rule-based machine learning
Skill chaining
Sparse PCA
State–action–reward–state–action
Stochastic gradient descent
Structured kNN
T-distributed stochastic neighbor embedding
Temporal difference learning
Wake-sleep algorithm
Weighted majority algorithm (machine learning)
K-nearest neighbors algorithm (KNN)
Learning vector quantization (LVQ)
Self-organizing map (SOM)
Logistic regression
Ordinary least squares regression (OLSR)
Linear regression
Stepwise regression
Multivariate adaptive regression splines (MARS)
Regularization algorithm
Ridge regression
Least Absolute Shrinkage and Selection Operator (LASSO)
Elastic net
Least-angle regression (LARS)
Classifiers
Probabilistic classifier
Naive Bayes classifier
Binary classifier
Linear classifier
Hierarchical classifier
Dimensionality reduction
Canonical correlation analysis (CCA)
Factor analysis
Feature extraction
Feature selection
Independent component analysis (ICA)
Linear discriminant analysis (LDA)
Multidimensional scaling (MDS)
Non-negative matrix factorization (NMF)
Partial least squares regression (PLSR)
Principal component analysis (PCA)
Principal component regression (PCR)
Projection pursuit
Sammon mapping
t-distributed stochastic neighbor embedding (t-SNE)
Ensemble learning
AdaBoost
Boosting
Bootstrap aggregating (Bagging)
Ensemble averaging – process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model. Frequently an ensemble of models performs better than any individual model, because the various errors of the models "average out."
Gradient boosted decision tree (GBDT)
Gradient boosting machine (GBM)
Random Forest
Stacked Generalization (blending)
Meta-learning
Inductive bias
Metadata
Reinforcement learning
Q-learning
State–action–reward–state–action (SARSA)
Temporal difference learning (TD)
Learning Automata
Supervised learning
Averaged one-dependence estimators (AODE)
Artificial neural network
Case-based reasoning
Gaussian process regression
Gene expression programming
Group method of data handling (GMDH)
Inductive logic programming
Instance-based learning
Lazy learning
Learning Automata
Learning Vector Quantization
Logistic Model Tree
Minimum message length (decision trees, decision graphs, etc.)
Nearest Neighbor Algorithm
Analogical modeling
Probably approximately correct (PAC) learning
Ripple down rules, a knowledge acquisition methodology
Symbolic machine learning algorithms
Support vector machines
Random Forests
Ensembles of classifiers
Bootstrap aggregating (bagging)
Boosting (meta-algorithm)
Ordinal classification
Conditional Random Field
ANOVA
Quadratic classifiers
k-nearest neighbor
Boosting
SPRINT
Bayesian networks
Naive Bayes
Hidden Markov models
Hierarchical hidden Markov model
Bayesian statistics
Bayesian knowledge base
Naive Bayes
Gaussian Naive Bayes
Multinomial Naive Bayes
Averaged One-Dependence Estimators (AODE)
Bayesian Belief Network (BBN)
Bayesian Network (BN)
Decision tree algorithm
Decision tree
Classification and regression tree (CART)
Iterative Dichotomiser 3 (ID3)
C4.5 algorithm
C5.0 algorithm
Chi-squared Automatic Interaction Detection (CHAID)
Decision stump
Conditional decision tree
ID3 algorithm
Random forest
SLIQ
Linear classifier
Fisher's linear discriminant
Linear regression
Logistic regression
Multinomial logistic regression
Naive Bayes classifier
Perceptron
Support vector machine
Unsupervised learning
Expectation-maximization algorithm
Vector Quantization
Generative topographic map
Information bottleneck method
Association rule learning algorithms
Apriori algorithm
Eclat algorithm
Artificial neural network
Feedforward neural network
Extreme learning machine
Convolutional neural network
Recurrent neural network
Long short-term memory (LSTM)
Logic learning machine
Self-organizing map
Association rule learning
Apriori algorithm
Eclat algorithm
FP-growth algorithm
Hierarchical clustering
Single-linkage clustering
Conceptual clustering
Cluster analysis
BIRCH
DBSCAN
Expectation–maximization (EM)
Fuzzy clustering
Hierarchical clustering
k-means clustering
k-medians
Mean-shift
OPTICS algorithm
Anomaly detection
k-nearest neighbors algorithm (k-NN)
Local outlier factor
Semi-supervised learning
Active learning – special case of semi-supervised learning in which a learning algorithm is able to interactively query the user (or some other information source) to obtain the desired outputs at new data points.
Generative models
Low-density separation
Graph-based methods
Co-training
Transduction
Deep learning
Deep belief networks
Deep Boltzmann machines
Deep Convolutional neural networks
Deep Recurrent neural networks
Hierarchical temporal memory
Generative Adversarial Network
Style transfer
Transformer
Stacked Auto-Encoders
Anomaly detection
Association rules
Bias-variance dilemma
Classification
Multi-label classification
Clustering
Data Pre-processing
Empirical risk minimization
Feature engineering
Feature learning
Learning to rank
Occam learning
Online machine learning
PAC learning
Regression
Reinforcement Learning
Semi-supervised learning
Statistical learning
Structured prediction
Graphical models
Bayesian network
Conditional random field (CRF)
Hidden Markov model (HMM)
Unsupervised learning
VC theory
List of artificial intelligence projects
List of datasets for machine learning research
History of machine learning
Timeline of machine learning
Machine learning projects
DeepMind
Google Brain
OpenAI
Meta AI
Machine learning organizations
Artificial Intelligence and Security (AISec) (co-located workshop with CCS)
Conference on Neural Information Processing Systems (NIPS)
ECML PKDD
International Conference on Machine Learning (ICML)
ML4ALL (Machine Learning For All)
Mathematics for Machine Learning
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow
The Hundred-Page Machine Learning Book
Machine Learning
Journal of Machine Learning Research (JMLR)
Neural Computation
Alberto Broggi
Andrei Knyazev
Andrew McCallum
Andrew Ng
Anuraag Jain
Armin B. Cremers
Ayanna Howard
Barney Pell
Ben Goertzel
Ben Taskar
Bernhard Schölkopf
Brian D. Ripley
Christopher G. Atkeson
Corinna Cortes
Demis Hassabis
Douglas Lenat
Eric Xing
Ernst Dickmanns
Geoffrey Hinton – co-inventor of the backpropagation and contrastive divergence training algorithms
Hans-Peter Kriegel
Hartmut Neven
Heikki Mannila
Ian Goodfellow – inventor of generative adversarial networks
Jacek M. Zurada
Jaime Carbonell
Jeremy Slovak
Jerome H. Friedman
John D. Lafferty
John Platt – invented SMO and Platt scaling
Julie Beth Lovins
Jürgen Schmidhuber
Karl Steinbuch
Katia Sycara
Leo Breiman – invented bagging and random forests
Lise Getoor
Luca Maria Gambardella
Léon Bottou
Marcus Hutter
Mehryar Mohri
Michael Collins
Michael I. Jordan
Michael L. Littman
Nando de Freitas
Ofer Dekel
Oren Etzioni
Pedro Domingos
Peter Flach
Pierre Baldi
Pushmeet Kohli
Ray Kurzweil
Rayid Ghani
Ross Quinlan
Salvatore J. Stolfo
Sebastian Thrun
Selmer Bringsjord
Sepp Hochreiter
Shane Legg
Stephen Muggleton
Steve Omohundro
Tom M. Mitchell
Trevor Hastie
Vasant Honavar
Vladimir Vapnik – co-inventor of the SVM and VC theory
Yann LeCun – invented convolutional neural networks
Yasuo Matsuyama
Yoshua Bengio
Zoubin Ghahramani
Outline of artificial intelligence
Outline of computer vision
Outline of robotics
Accuracy paradox
Action model learning
Activation function
Activity recognition
ADALINE
Adaptive neuro fuzzy inference system
Adaptive resonance theory
Additive smoothing
Adjusted mutual information
AIVA
AIXI
AlchemyAPI
AlexNet
Algorithm selection
Algorithmic inference
Algorithmic learning theory
AlphaGo
AlphaGo Zero
Alternating decision tree
Apprenticeship learning
Causal Markov condition
Competitive learning
Concept learning
Decision tree learning
Differentiable programming
Distribution learning theory
Eager learning
End-to-end reinforcement learning
Error tolerance (PAC learning)
Explanation-based learning
Feature
GloVe
Hyperparameter
Inferential theory of learning
Learning automata
Learning classifier system
Learning rule
Learning with errors
M-Theory (learning framework)
Machine learning control
Machine learning in bioinformatics
Margin
Markov chain geostatistics
Markov chain Monte Carlo (MCMC)
Markov information source
Markov logic network
Markov model
Markov random field
Markovian discrimination
Maximum-entropy Markov model
Multi-armed bandit
Multi-task learning
Multilinear subspace learning
Multimodal learning
Multiple instance learning
Multiple-instance learning
Never-Ending Language Learning
Offline learning
Parity learning
Population-based incremental learning
Predictive learning
Preference learning
Proactive learning
Proximal gradient methods for learning
Semantic analysis
Similarity learning
Sparse dictionary learning
Stability (learning theory)
Statistical learning theory
Statistical relational learning
Tanagra
Transfer learning
Variable-order Markov model
Version space learning
Waffles
Weka
Loss function
Loss functions for classification
Mean squared error (MSE)
Mean squared prediction error (MSPE)
Taguchi loss function
Low-energy adaptive clustering hierarchy
Anne O'Tate
Ant colony optimization algorithms
Anthony Levandowski
Anti-unification (computer science)
Apache Flume
Apache Giraph
Apache Mahout
Apache SINGA
Apache Spark
Apache SystemML
Aphelion (software)
Arabic Speech Corpus
Archetypal analysis
Arthur Zimek
Artificial ants
Artificial bee colony algorithm
Artificial development
Artificial immune system
Astrostatistics
Averaged one-dependence estimators
Bag-of-words model
Balanced clustering
Ball tree
Base rate
Bat algorithm
Baum–Welch algorithm
Bayesian hierarchical modeling
Bayesian interpretation of kernel regularization
Bayesian optimization
Bayesian structural time series
Bees algorithm
Behavioral clustering
Bernoulli scheme
Bias–variance tradeoff
Biclustering
BigML
Binary classification
Bing Predicts
Bio-inspired computing
Biogeography-based optimization
Biplot
Bondy's theorem
Bongard problem
Bradley–Terry model
BrownBoost
Brown clustering
Burst error
CBCL (MIT)
CIML community portal
CMA-ES
CURE data clustering algorithm
Cache language model
Calibration (statistics)
Canonical correspondence analysis
Canopy clustering algorithm
Cascading classifiers
Category utility
CellCognition
Cellular evolutionary algorithm
Chi-square automatic interaction detection
Chromosome (genetic algorithm)
Classifier chains
Cleverbot
Clonal selection algorithm
Cluster-weighted modeling
Clustering high-dimensional data
Clustering illusion
CoBoosting
Cobweb (clustering)
Cognitive computer
Cognitive robotics
Collostructional analysis
Common-method variance
Complete-linkage clustering
Computer-automated design
Concept class
Concept drift
Conference on Artificial General Intelligence
Conference on Knowledge Discovery and Data Mining
Confirmatory factor analysis
Confusion matrix
Congruence coefficient
Connect (computer system)
Consensus clustering
Constrained clustering
Constrained conditional model
Constructive cooperative coevolution
Correlation clustering
Correspondence analysis
Cortica
Coupled pattern learner
Cross-entropy method
Cross-validation (statistics)
Crossover (genetic algorithm)
Cuckoo search
Cultural algorithm
Cultural consensus theory
Curse of dimensionality
DADiSP
DARPA LAGR Program
Darkforest
Dartmouth workshop
DarwinTunes
Data Mining Extensions
Data exploration
Data pre-processing
Data stream clustering
Dataiku
Davies–Bouldin index
Decision boundary
Decision list
Decision tree model
Deductive classifier
DeepArt
DeepDream
Deep Web Technologies
Defining length
Dendrogram
Dependability state model
Detailed balance
Determining the number of clusters in a data set
Detrended correspondence analysis
Developmental robotics
Diffbot
Differential evolution
Discrete phase-type distribution
Discriminative model
Dissociated press
Distributed R
Dlib
Document classification
Documenting Hate
Domain adaptation
Doubly stochastic model
Dual-phase evolution
Dunn index
Dynamic Bayesian network
Dynamic Markov compression
Dynamic topic model
Dynamic unobserved effects model
EDLUT
ELKI
Edge recombination operator
Effective fitness
Elastic map
Elastic matching
Elbow method (clustering)
Emergent (software)
Encog
Entropy rate
Erkki Oja
Eurisko
European Conference on Artificial Intelligence
Evaluation of binary classifiers
Evolution strategy
Evolution window
Evolutionary Algorithm for Landmark Detection
Evolutionary algorithm
Evolutionary art
Evolutionary music
Evolutionary programming
Evolvability (computer science)
Evolved antenna
Evolver (software)
Evolving classification function
Expectation propagation
Exploratory factor analysis
F1 score
FLAME clustering
Factor analysis of mixed data
Factor graph
Factor regression model
Factored language model
Farthest-first traversal
Fast-and-frugal trees
Feature Selection Toolbox
Feature hashing
Feature scaling
Feature vector
Firefly algorithm
First-difference estimator
First-order inductive learner
Fish School Search
Fisher kernel
Fitness approximation
Fitness function
Fitness proportionate selection
Fluentd
Folding@home
Formal concept analysis
Forward algorithm
Fowlkes–Mallows index
Frederick Jelinek
Frrole
Functional principal component analysis
GATTO
GLIMMER
Gary Bryce Fogel
Gaussian adaptation
Gaussian process
Gaussian process emulator
Gene prediction
General Architecture for Text Engineering
Generalization error
Generalized canonical correlation
Generalized filtering
Generalized iterative scaling
Generalized multidimensional scaling
Generative adversarial network
Generative model
Genetic algorithm
Genetic algorithm scheduling
Genetic algorithms in economics
Genetic fuzzy systems
Genetic memory (computer science)
Genetic operator
Genetic programming
Genetic representation
Geographical cluster
Gesture Description Language
Geworkbench
Glossary of artificial intelligence
Glottochronology
Golem (ILP)
Google matrix
Grafting (decision trees)
Gramian matrix
Grammatical evolution
Granular computing
GraphLab
Graph kernel
Gremlin (programming language)
Growth function
HUMANT (HUManoid ANT) algorithm
Hammersley–Clifford theorem
Harmony search
Hebbian theory
Hidden Markov random field
Hidden semi-Markov model
Hierarchical hidden Markov model
Higher-order factor analysis
Highway network
Hinge loss
Holland's schema theorem
Hopkins statistic
Hoshen–Kopelman algorithm
Huber loss
IRCF360
Ian Goodfellow
Ilastik
Ilya Sutskever
Immunocomputing
Imperialist competitive algorithm
Inauthentic text
Incremental decision tree
Induction of regular languages
Inductive bias
Inductive probability
Inductive programming
Influence diagram
Information Harvesting
Information gain in decision trees
Information gain ratio
Inheritance (genetic algorithm)
Instance selection
Intel RealSense
Interacting particle system
Interactive machine translation
International Joint Conference on Artificial Intelligence
International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics
International Semantic Web Conference
Iris flower data set
Island algorithm
Isotropic position
Item response theory
Iterative Viterbi decoding
JOONE
Jabberwacky
Jaccard index
Jackknife variance estimates for random forest
Java Grammatical Evolution
Joseph Nechvatal
Jubatus
Julia (programming language)
Junction tree algorithm
k-SVD
k-means++
k-medians clustering
k-medoids
KNIME
KXEN Inc.
k q-flats
Kaggle
Kalman filter
Katz's back-off model
Kernel adaptive filter
Kernel density estimation
Kernel eigenvoice
Kernel embedding of distributions
Kernel method
Kernel perceptron
Kernel random forest
Kinect
Klaus-Robert Müller
Kneser–Ney smoothing
Knowledge Vault
Knowledge integration
LIBSVM
LPBoost
Labeled data
LanguageWare
Language identification in the limit
Language model
Large margin nearest neighbor
Latent Dirichlet allocation
Latent class model
Latent semantic analysis
Latent variable
Latent variable model
Lattice Miner
Layered hidden Markov model
Learnable function class
Least squares support vector machine
Leslie P. Kaelbling
Linear genetic programming
Linear predictor function
Linear separability
Lingyun Gu
Linkurious
Lior Ron (business executive)
List of genetic algorithm applications
List of metaphor-based metaheuristics
List of text mining software
Local case-control sampling
Local independence
Local tangent space alignment
Locality-sensitive hashing
Log-linear model
Logistic model tree
Low-rank approximation
Low-rank matrix approximations
MATLAB
MIMIC (immunology)
MXNet
Mallet (software project)
Manifold regularization
Margin-infused relaxed algorithm
Margin classifier
Mark V. Shaney
Massive Online Analysis
Matrix regularization
Matthews correlation coefficient
Mean shift
Mean squared error
Mean squared prediction error
Measurement invariance
Medoid
MeeMix
Melomics
Memetic algorithm
Meta-optimization
Mexican International Conference on Artificial Intelligence
Michael Kearns (computer scientist)
MinHash
Mixture model
Mlpy
Models of DNA evolution
Moral graph
Mountain car problem
Movidius
Multi-armed bandit
Multi-label classification
Multi expression programming
Multiclass classification
Multidimensional analysis
Multifactor dimensionality reduction
Multilinear principal component analysis
Multiple correspondence analysis
Multiple discriminant analysis
Multiple factor analysis
Multiple sequence alignment
Multiplicative weight update method
Multispectral pattern recognition
Mutation (genetic algorithm)
MysteryVibe
N-gram
NOMINATE (scaling method)
Native-language identification
Natural Language Toolkit
Natural evolution strategy
Nearest-neighbor chain algorithm
Nearest centroid classifier
Nearest neighbor search
Neighbor joining
Nest Labs
NetMiner
NetOwl
Neural Designer
Neural Engineering Object
Neural modeling fields
Neural network software
NeuroSolutions
Neuroevolution
Neuroph
Niki.ai
Noisy channel model
Noisy text analytics
Nonlinear dimensionality reduction
Novelty detection
Nuisance variable
One-class classification
Onnx
OpenNLP
Optimal discriminant analysis
Oracle Data Mining
Orange (software)
Ordination (statistics)
Overfitting
PROGOL
PSIPRED
Pachinko allocation
PageRank
Parallel metaheuristic
Parity benchmark
Part-of-speech tagging
Particle swarm optimization
Path dependence
Pattern language (formal languages)
Peltarion Synapse
Perplexity
Persian Speech Corpus
Picas (app)
Pietro Perona
Pipeline Pilot
Piranha (software)
Pitman–Yor process
Plate notation
Polynomial kernel
Pop music automation
Population process
Portable Format for Analytics
Predictive Model Markup Language
Predictive state representation
Preference regression
Premature convergence
Principal geodesic analysis
Prior knowledge for pattern recognition
Prisma (app)
Probabilistic Action Cores
Probabilistic context-free grammar
Probabilistic latent semantic analysis
Probabilistic soft logic
Probability matching
Probit model
Product of experts
Programming with Big Data in R
Proper generalized decomposition
Pruning (decision trees)
Pushpak Bhattacharyya
Q methodology
Qloo
Quality control and genetic algorithms
Quantum Artificial Intelligence Lab
Queueing theory
Quick, Draw!
R (programming language)
Rada Mihalcea
Rademacher complexity
Radial basis function kernel
Rand index
Random indexing
Random projection
Random subspace method
Ranking SVM
RapidMiner
Rattle GUI
Raymond Cattell
Reasoning system
Regularization perspectives on support vector machines
Relational data mining
Relationship square
Relevance vector machine
Relief (feature selection)
Renjin
Repertory grid
Representer theorem
Reward-based selection
Richard Zemel
Right to explanation
RoboEarth
Robust principal component analysis
RuleML Symposium
Rule induction
Rules extraction system family
SAS (software)
SNNS
SPSS Modeler
SUBCLU
Sample complexity
Sample exclusion dimension
Santa Fe Trail problem
Savi Technology
Schema (genetic algorithms)
Search-based software engineering
Selection (genetic algorithm)
Self-Service Semantic Suite
Semantic folding
Semantic mapping (statistics)
Semidefinite embedding
Sense Networks
Sensorium Project
Sequence labeling
Sequential minimal optimization
Shattered set
Shogun (toolbox)
Silhouette (clustering)
SimHash
SimRank
Similarity measure
Simple matching coefficient
Simultaneous localization and mapping
Sinkov statistic
Sliced inverse regression
Snakes and Ladders
Soft independent modelling of class analogies
Soft output Viterbi algorithm
Solomonoff's theory of inductive inference
SolveIT Software
Spectral clustering
Spike-and-slab variable selection
Statistical machine translation
Statistical parsing
Statistical semantics
Stefano Soatto
Stephen Wolfram
Stochastic block model
Stochastic cellular automaton
Stochastic diffusion search
Stochastic grammar
Stochastic matrix
Stochastic universal sampling
Stress majorization
String kernel
Structural equation modeling
Structural risk minimization
Structured sparsity regularization
Structured support vector machine
Subclass reachability
Sufficient dimension reduction
Sukhotin's algorithm
Sum of absolute differences
Sum of absolute transformed differences
Swarm intelligence
Switching Kalman filter
Symbolic regression
Synchronous context-free grammar
Syntactic pattern recognition
TD-Gammon
TIMIT
Teaching dimension
Teuvo Kohonen
Textual case-based reasoning
Theory of conjoint measurement
Thomas G. Dietterich
Thurstonian model
Topic model
Tournament selection
Training, test, and validation sets
Transiogram
Trax Image Recognition
Trigram tagger
Truncation selection
Tucker decomposition
UIMA
UPGMA
Ugly duckling theorem
Uncertain data
Uniform convergence in probability
Unique negative dimension
Universal portfolio algorithm
User behavior analytics
VC dimension
VIGRA
Validation set
Vapnik–Chervonenkis theory
Variable-order Bayesian network
Variable kernel density estimation
Variable rules analysis
Variational message passing
Varimax rotation
Vector quantization
Vicarious (company)
Viterbi algorithm
Vowpal Wabbit
WACA clustering algorithm
WPGMA
Ward's method
Weasel program
Whitening transformation
Winnow (algorithm)
Win–stay, lose–switch
Witness set
Wolfram Language
Wolfram Mathematica
Writer invariant
Xgboost
Yooreeka
Zeroth (software)
Trevor Hastie, Robert Tibshirani and Jerome H. Friedman (2001). The Elements of Statistical Learning, Springer. ISBN 0-387-95284-5.
Pedro Domingos (September 2015), The Master Algorithm, Basic Books, ISBN 978-0-465-06570-7
Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012). Foundations of Machine Learning, The MIT Press. ISBN 978-0-262-01825-8.
Ian H. Witten and Eibe Frank (2011). Data Mining: Practical machine learning tools and techniques Morgan Kaufmann, 664pp., ISBN 978-0-12-374856-0.
David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1
Richard O. Duda, Peter E. Hart, David G. Stork (2001) Pattern classification (2nd edition), Wiley, New York, ISBN 0-471-05669-3.
Christopher Bishop (1995). Neural Networks for Pattern Recognition, Oxford University Press. ISBN 0-19-853864-2.
Vladimir Vapnik (1998). Statistical Learning Theory. Wiley-Interscience, ISBN 0-471-03003-1.
Ray Solomonoff, An Inductive Inference Machine, IRE Convention Record, Section on Information Theory, Part 2, pp., 56–62, 1957.
Ray Solomonoff, "An Inductive Inference Machine" A privately circulated report from the 1956 Dartmouth Summer Research Conference on AI.
Data Science: Data to Insights from MIT (machine learning)
Popular online course by Andrew Ng, at Coursera. It uses GNU Octave. The course is a free version of Stanford University's actual course taught by Ng, available for free at see.stanford.edu/Course/CS229.
mloss is an academic database of open-source machine learning software. |
80 Million Tiny Images | 80 Million Tiny Images is a dataset intended for training machine learning systems. It contains 79,302,017 32×32 pixel color images, scaled down from images extracted from the World Wide Web in 2008 using automated web search queries on a set of 75,062 non-abstract nouns derived from WordNet. The words in the search terms were then used as labels for the images. The researchers used seven web search resources for this purpose: Altavista, Ask.com, Flickr, Cydral, Google, Picsearch and Webshots.
The 80 Million Tiny Images dataset was retired from use by its creators in 2020, after a paper by researchers Abeba Birhane and Vinay Prabhu found that some of the labeling of several publicly available image datasets, including 80 Million Tiny Images, contained racist and misogynistic slurs which were causing models trained on them to exhibit racial and sexual bias. Birhane and Prabhu also found that the dataset contained a number of offensive images.
Following the release of the paper, the dataset's creators removed the dataset from distribution, and requested that other researchers not use it for further research and to delete their copies of the dataset.
The CIFAR-10 dataset uses a subset of the images in this dataset, but with independently generated labels. |
Accelerated Linear Algebra | Accelerated Linear Algebra (XLA) is an advanced optimization framework within TensorFlow, a popular machine learning library developed by Google. XLA is designed to improve the performance of TensorFlow models by optimizing the computation graph at a lower level, making it particularly useful for large-scale computations and high-performance machine learning models. Key features of TensorFlow XLA include:
Compilation of TensorFlow Graphs: Compiles TensorFlow computation graphs into efficient machine code.
Optimization Techniques: Applies operation fusion, memory optimization, and other techniques.
Hardware Support: Optimizes models for various hardware including GPUs and TPUs.
Improved Model Execution Time: Aims to reduce TensorFlow models' execution time for both training and inference.
Seamless Integration: Can be used with existing TensorFlow code with minimal changes.
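As a rough sketch of that integration, a function can opt into XLA compilation in TensorFlow 2.x via the jit_compile flag of tf.function; any speedup depends on the model and hardware:

```python
import tensorflow as tf

# Opt a function into XLA compilation; the operations inside become
# candidates for fusion and other graph-level optimizations.
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 4])
w = tf.random.normal([4, 2])
b = tf.zeros([2])
print(dense_relu(x, w, b))
```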
TensorFlow XLA represents a significant step in optimizing machine learning models, providing developers with tools to enhance computational efficiency and performance. |
Action model learning | Action model learning (sometimes abbreviated action learning) is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as the input for automated planners.
Learning action models is important when goals change. Once an agent has acted for a while, it can use its accumulated knowledge about actions in the domain to make better decisions. Thus, learning action models differs from reinforcement learning: it enables reasoning about actions instead of expensive trials in the world. Action model learning is a form of inductive reasoning, where new knowledge is generated based on the agent's observations. It differs from standard supervised learning in that correct input/output pairs are never presented, nor are imprecise action models explicitly corrected.
The usual motivation for action model learning is that manual specification of action models for planners is often a difficult, time-consuming, and error-prone task (especially in complex environments). |
Active learning (machine learning) | Active learning is a special case of machine learning in which a learning algorithm can interactively query a human user (or some other information source), to label new data points with the desired outputs. The human user must possess knowledge/expertise in the problem domain, including the ability to consult/research authoritative sources when necessary. In statistics literature, it is sometimes also called optimal experimental design. The information source is also called teacher or oracle.
There are situations in which unlabeled data is abundant but manual labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples to learn a concept can often be much lower than the number required in normal supervised learning. With this approach, there is a risk that the algorithm is overwhelmed by uninformative examples. Recent developments are dedicated to multi-label active learning, hybrid active learning and active learning in a single-pass (on-line) context, combining concepts from the field of machine learning (e.g. conflict and ignorance) with adaptive, incremental learning policies in the field of online machine learning. Using active learning can thus allow faster development of a machine learning algorithm when exhaustive labeling and retraining would be computationally prohibitive.
Large-scale active learning projects may benefit from crowdsourcing frameworks such as Amazon Mechanical Turk that include many humans in the active learning loop.
Let T be the total set of all data under consideration. For example, in a protein engineering problem, T would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity.
During each iteration i, T is broken up into three subsets:
T_{K,i}: Data points where the label is known.
T_{U,i}: Data points where the label is unknown.
T_{C,i}: A subset of T_{U,i} that is chosen to be labeled.
Most of the current research in active learning involves the best method to choose the data points for T_{C,i}.
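A minimal sketch of one such iteration loop follows, with least-confident selection standing in for whichever query strategy is used; the function and parameter names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X, y_oracle, n_init=10, budget=50, batch=5, seed=0):
    """Pool-based loop over the T_{K,i} / T_{U,i} / T_{C,i} split above."""
    rng = np.random.default_rng(seed)
    known = list(rng.choice(len(X), n_init, replace=False))   # T_{K,i}
    unknown = [i for i in range(len(X)) if i not in known]    # T_{U,i}
    model = LogisticRegression(max_iter=1000)
    while len(known) < n_init + budget and unknown:
        model.fit(X[known], y_oracle[known])
        probs = model.predict_proba(X[unknown])
        # Choose T_{C,i}: the `batch` points the model is least confident on.
        chosen = [unknown[j] for j in np.argsort(probs.max(axis=1))[:batch]]
        known += chosen          # the oracle/teacher supplies these labels
        unknown = [i for i in unknown if i not in chosen]
    return model
```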
Pool-Based Sampling: In this approach, which is the most well known scenario, the learning algorithm attempts to evaluate the entire dataset before selecting data points (instances) for labeling. It is often initially trained on a fully labeled subset of the data using a machine-learning method such as logistic regression or SVM that yields class-membership probabilities for individual data instances. The candidate instances are those for which the prediction is most ambiguous. Instances are drawn from the entire data pool and assigned a confidence score, a measurement of how well the learner "understands" the data. The system then selects the instances for which it is the least confident and queries the teacher for the labels. The theoretical drawback of pool-based sampling is that it is memory-intensive and is therefore limited in its capacity to handle enormous datasets, but in practice, the rate-limiting factor is that the teacher is typically a (fatiguable) human expert who must be paid for their effort, rather than computer memory.
Stream-Based Selective Sampling: Here, each consecutive unlabeled instance is examined one at a time with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint. As contrasted with Pool-based sampling, the obvious drawback of stream-based methods is that the learning algorithm does not have sufficient information, early in the process, to make a sound assign-label-vs ask-teacher decision, and it does not capitalize as efficiently on the presence of already labeled data. Therefore, the teacher is likely to spend more effort in supplying labels than with the pool-based approach.
Membership Query Synthesis: This is where the learner generates synthetic data from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and query if this appendage belongs to an animal or human. This is particularly useful if the dataset is small. The challenge here, as with all synthetic-data-generation efforts, is in ensuring that the synthetic data is consistent in terms of meeting the constraints on real data. As the number of variables/features in the input data increases, and strong dependencies between variables exist, it becomes increasingly difficult to generate synthetic data with sufficient fidelity. For example, to create a synthetic data set for human laboratory-test values, the sum of the various white blood cell (WBC) components in a White Blood Cell differential must equal 100, since the component numbers are really percentages. Similarly, the enzymes Alanine Transaminase (ALT) and Aspartate Transaminase (AST) measure liver function (though AST is also produced by other tissues, e.g., lung, pancreas). A synthetic data point with AST at the lower limit of normal range (8–33 Units/L) with an ALT several times above normal range (4–35 Units/L) in a simulated chronically ill patient would be physiologically impossible.
Algorithms for determining which data points should be labeled can be organized into a number of different categories, based upon their purpose:
Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between the exploration and the exploitation over the data space representation. This strategy manages this compromise by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al. propose a sequential algorithm named Active Thompson Sampling (ATS), which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point label.
Expected model change: label those points that would most change the current model.
Expected error reduction: label those points that would most reduce the model's generalization error.
Exponentiated Gradient Exploration for Active Learning: the authors propose a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by an optimal random exploration.
Random Sampling: a sample is randomly selected.
Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be (the three variants below are sketched in code after this list).
Entropy Sampling: The entropy formula is used on each sample, and the sample with the highest entropy is considered to be the least certain.
Margin Sampling: The sample with the smallest difference between the two highest class probabilities is considered to be the most uncertain.
Least Confident Sampling: The sample with the smallest best probability is considered to be the most uncertain.
Query by committee: a variety of models are trained on the current labeled data and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most.
Querying from diverse subspaces or partitions: When the underlying model is a forest of trees, the leaf nodes might represent (overlapping) partitions of the original feature space. This offers the possibility of selecting instances from non-overlapping or minimally overlapping partitions for labeling.
Variance reduction: label those points that would minimize output variance, which is one of the components of error.
Conformal prediction: predicts that a new data point will have a label similar to old data points in some specified way, and the degree of similarity within the old examples is used to estimate the confidence in the prediction.
Mismatch-first farthest-traversal: The primary selection criterion is the prediction mismatch between the current model and nearest-neighbour prediction; it targets wrongly predicted data points. The second selection criterion is the distance to previously selected data, farthest first; it aims at optimizing the diversity of selected data.
User Centered Labeling Strategies: Learning is accomplished by applying dimensionality reduction to graphs and figures like scatter plots. Then the user is asked to label the compiled data (categorical, numerical, relevance scores, relation between two instances).
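The entropy, margin, and least-confident variants of uncertainty sampling listed above can be sketched as score functions over predicted class probabilities; this is an illustrative sketch, not a reference implementation:

```python
import numpy as np

def uncertainty_scores(probs, method="entropy"):
    """Score samples by predictive uncertainty; higher = more uncertain.

    probs has shape (n_samples, n_classes), e.g. from predict_proba.
    """
    if method == "entropy":
        # Entropy sampling: highest predictive entropy is least certain.
        return -np.sum(probs * np.log(probs + 1e-12), axis=1)
    if method == "margin":
        # Margin sampling: smallest gap between the two highest class probabilities.
        top2 = np.sort(probs, axis=1)[:, -2:]
        return -(top2[:, 1] - top2[:, 0])
    if method == "least_confident":
        # Least confident: smallest best-class probability.
        return 1.0 - probs.max(axis=1)
    raise ValueError(f"unknown method: {method}")

def query_indices(probs, k=10, method="entropy"):
    # Return the k most uncertain pool indices to pass to the teacher.
    return np.argsort(uncertainty_scores(probs, method))[-k:]
```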
A wide variety of algorithms have been studied that fall into these categories. While traditional AL strategies can achieve remarkable performance, it is often challenging to predict in advance which strategy is the most suitable in a particular situation. In recent years, meta-learning algorithms have been gaining in popularity; some have been proposed to learn AL strategies instead of relying on manually designed ones. A benchmark comparing meta-learning approaches to active learning against traditional heuristic-based active learning may give intuitions as to whether 'learning active learning' is at a crossroads.
Some active learning algorithms are built upon support-vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin, W, of each unlabeled datum in T_{U,i} and treat W as an n-dimensional distance from that datum to the separating hyperplane.
Minimum Marginal Hyperplane methods assume that the data with the smallest W are those that the SVM is most uncertain about and therefore should be placed in T_{C,i} to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest W. Tradeoff methods choose a mix of the smallest and largest Ws.
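A minimal sketch of Minimum Marginal Hyperplane selection follows, treating the absolute value of scikit-learn's decision_function as the margin W; the toy data is purely illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_known = rng.normal(size=(40, 5))
y_known = (X_known[:, 0] > 0).astype(int)
X_unknown = rng.normal(size=(200, 5))

# Treat |decision_function| as each unlabeled datum's margin W: the points
# closest to the separating hyperplane are the ones the SVM is least sure of.
svm = LinearSVC().fit(X_known, y_known)
W = np.abs(svm.decision_function(X_unknown))
T_C = np.argsort(W)[:10]  # smallest W first -> send to the teacher for labels
print(T_C)
```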
List of datasets for machine learning research
Sample complexity
Bayesian Optimization
Reinforcement learning
Improving Generalization with Active Learning, David Cohn, Les Atlas & Richard Ladner, Machine Learning 15, 201–221 (1994). https://doi.org/10.1007/BF00993277
Balcan, Maria-Florina; Hanneke, Steve; Wortman, Jennifer (2008). The True Sample Complexity of Active Learning. 45–56. https://link.springer.com/article/10.1007/s10994-010-5174-y
Active Learning and Bayesian Optimization: a Unified Perspective to Learn with a Goal, Francesco Di Fiore, Michela Nardelli, Laura Mainini, https://arxiv.org/abs/2303.01560v2
Learning how to Active Learn: A Deep Reinforcement Learning Approach, Meng Fang, Yuan Li, Trevor Cohn, https://arxiv.org/abs/1708.02383v1 |
Adversarial machine learning | Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks. A survey from May 2020 exposes the fact that practitioners report a dire need for better protecting machine learning systems in industrial applications.
Most machine learning techniques are designed to work on specific problem sets, under the assumption that the training and test data are generated from the same statistical distribution (IID). However, this assumption is often dangerously violated in practical high-stakes applications, where users may intentionally supply fabricated data that violates the statistical assumption.
The most common attacks in adversarial machine learning include evasion attacks, data poisoning attacks, Byzantine attacks and model extraction.
At the MIT Spam Conference in January 2004, John Graham-Cumming showed that a machine-learning spam filter could be used to defeat another machine-learning spam filter by automatically learning which words to add to a spam email to get the email classified as not spam.
In 2004, Nilesh Dalvi and others noted that linear classifiers used in spam filters could be defeated by simple "evasion attacks" as spammers inserted "good words" into their spam emails. (Around 2007, some spammers added random noise to fuzz words within "image spam" in order to defeat OCR-based filters.) In 2006, Marco Barreno and others published "Can Machine Learning Be Secure?", outlining a broad taxonomy of attacks. As late as 2013 many researchers continued to hope that non-linear classifiers (such as support vector machines and neural networks) might be robust to adversaries, until Battista Biggio and others demonstrated the first gradient-based attacks on such machine-learning models (2012–2013). In 2012, deep neural networks began to dominate computer vision problems; starting in 2014, Christian Szegedy and others demonstrated that deep neural networks could be fooled by adversaries, again using a gradient-based attack to craft adversarial perturbations.
Recently, it was observed that adversarial attacks are harder to produce in the practical world due to the different environmental constraints that cancel out the effect of noise. For example, any small rotation or slight change in illumination of an adversarial image can destroy its adversarial effect. In addition, researchers such as Google Brain's Nicholas Frosst point out that it is much easier to make self-driving cars miss stop signs by physically removing the sign itself, rather than creating adversarial examples. Frosst also believes that the adversarial machine learning community incorrectly assumes models trained on a certain data distribution will also perform well on a completely different data distribution. He suggests that a new approach to machine learning should be explored, and is currently working on a unique neural network that has characteristics more similar to human perception than state-of-the-art approaches.
While adversarial machine learning continues to be heavily rooted in academia, large tech companies such as Google, Microsoft, and IBM have begun curating documentation and open source code bases to allow others to concretely assess the robustness of machine learning models and minimize the risk of adversarial attacks.
Examples include attacks in spam filtering, where spam messages are obfuscated through the misspelling of "bad" words or the insertion of "good" words; attacks in computer security, such as obfuscating malware code within network packets or modifying the characteristics of a network flow to mislead intrusion detection; attacks in biometric recognition where fake biometric traits may be exploited to impersonate a legitimate user; or to compromise users' template galleries that adapt to updated traits over time.
Researchers showed that by changing only one pixel it was possible to fool deep learning algorithms. Others 3-D printed a toy turtle with a texture engineered to make Google's object detection AI classify it as a rifle regardless of the angle from which the turtle was viewed. Creating the turtle required only low-cost commercially available 3-D printing technology.
A machine-tweaked image of a dog was shown to look like a cat to both computers and humans. A 2019 study reported that humans can guess how machines will classify adversarial images. Researchers discovered methods for perturbing the appearance of a stop sign such that an autonomous vehicle classified it as a merge or speed limit sign.
McAfee attacked Tesla's former Mobileye system, fooling it into driving 50 mph over the speed limit, simply by adding a two-inch strip of black tape to a speed limit sign.
Adversarial patterns on glasses or clothing designed to deceive facial-recognition systems or license-plate readers, have led to a niche industry of "stealth streetwear".
An adversarial attack on a neural network can allow an attacker to inject algorithms into the target system. Researchers can also create adversarial audio inputs to disguise commands to intelligent assistants in benign-seeming audio; a parallel literature explores human perception of such stimuli.
Clustering algorithms are used in security applications. Malware and computer virus analysis aims to identify malware families, and to generate specific detection signatures.
Attacks against (supervised) machine learning algorithms have been categorized along three primary axes: influence on the classifier, the security violation and their specificity.
Classifier influence: An attack can influence the classifier by disrupting the classification phase. This may be preceded by an exploration phase to identify vulnerabilities. The attacker's capabilities might be restricted by the presence of data manipulation constraints.
Security violation: An attack can supply malicious data that gets classified as legitimate. Malicious data supplied during training can cause legitimate data to be rejected after training.
Specificity: A targeted attack attempts to allow a specific intrusion/disruption. Alternatively, an indiscriminate attack creates general mayhem.
This taxonomy has been extended into a more comprehensive threat model that allows explicit assumptions about the adversary's goal, knowledge of the attacked system, capability of manipulating the input data/system components, and on attack strategy. This taxonomy has further been extended to include dimensions for defense strategies against adversarial attacks.
Below are some of the most commonly encountered attack scenarios.
Poisoning consists of contaminating the training dataset with data designed to increase errors in the output. Given that learning algorithms are shaped by their training datasets, poisoning can effectively reprogram algorithms with potentially malicious intent. Concerns have been raised especially for user-generated training data, e.g. for content recommendation or natural language models. The ubiquity of fake accounts offers many opportunities for poisoning. Facebook reportedly removes around 7 billion fake accounts per year. Poisoning has been reported as the leading concern for industrial applications.
On social media, disinformation campaigns attempt to bias recommendation and moderation algorithms to push certain content over others.
A particular case of data poisoning is the backdoor attack, which aims to teach a specific behavior for inputs with a given trigger, e.g. a small defect on images, sounds, videos or texts.
For instance, intrusion detection systems are often trained using collected data. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining.
Data poisoning techniques can also be applied to text-to-image models to alter their output.
Data poisoning can also happen unintentionally through model collapse, where models are trained on synthetic data.
As machine learning is scaled, it often relies on multiple computing machines. In federated learning, for instance, edge devices collaborate with a central server, typically by sending gradients or model parameters. However, some of these devices may deviate from their expected behavior, e.g. to harm the central server's model or to bias algorithms towards certain behaviors (e.g., amplifying the recommendation of disinformation content). On the other hand, if the training is performed on a single machine, then the model is very vulnerable to a failure of the machine, or an attack on the machine; the machine is a single point of failure. In fact, the machine owner may themselves insert provably undetectable backdoors.
The current leading solutions to make (distributed) learning algorithms provably resilient to a minority of malicious (a.k.a. Byzantine) participants are based on robust gradient aggregation rules. The robust aggregation rules do not always work, especially when the data across participants has a non-IID distribution. Nevertheless, in the context of heterogeneous honest participants, such as users with different consumption habits for recommendation algorithms or writing styles for language models, there are provable impossibility theorems on what any robust learning algorithm can guarantee.
Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems.
Evasion attacks can be generally split into two different categories: black box attacks and white box attacks.
Model extraction involves an adversary probing a black box machine learning system in order to extract the data it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model extraction could be used to extract a proprietary stock trading model which the adversary could then use for their own financial benefit.
In the extreme case, model extraction can lead to model stealing, which corresponds to extracting a sufficient amount of data from the model to enable the complete reconstruction of the model.
On the other hand, membership inference is a targeted model extraction attack, which infers the owner of a data point, often by leveraging the overfitting resulting from poor machine learning practices. Concerningly, this is sometimes achievable even without knowledge or access to a target model's parameters, raising security concerns for models trained on sensitive data, including but not limited to medical records and/or personally identifiable information. With the emergence of transfer learning and public accessibility of many state-of-the-art machine learning models, tech companies are increasingly drawn to create models based on public ones, giving attackers freely accessible information about the structure and type of model being used.
Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area, some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies.
Adversarial attacks on speech recognition have been introduced for speech-to-text applications, in particular for Mozilla's implementation of DeepSpeech.
There is a growing literature about adversarial attacks in linear models. Indeed, since the seminal work by Goodfellow et al., studying adversarial attacks in linear models has been an important tool to understand how such attacks affect machine learning models.
The analysis of these models is tractable because the computation of adversarial attacks simplifies in linear regression and classification problems. Moreover, adversarial training is convex in this case.
Linear models allow for analytical analysis while still reproducing phenomena observed in state-of-the-art models.
One prime example of that is how this model can be used to explain the trade-off between robustness and accuracy.
Diverse work indeed provides analysis of adversarial attacks in linear models, including asymptotic analysis for classification and for linear regression, as well as finite-sample analysis based on Rademacher complexity.
There are a large variety of different adversarial attacks that can be used against machine learning systems. Many of these work on both deep learning systems as well as traditional machine learning models such as SVMs and linear regression. A high-level sample of these attack types includes:
Adversarial Examples
Trojan Attacks / Backdoor Attacks
Model Inversion
Membership Inference
An adversarial example refers to specially crafted input that is designed to look "normal" to humans but causes misclassification by a machine learning model. Often, a form of specially designed "noise" is used to elicit the misclassifications. Below are some current techniques for generating adversarial examples in the literature (by no means an exhaustive list); a minimal FGSM sketch follows the list.
Gradient-based evasion attack
Fast Gradient Sign Method (FGSM)
Projected Gradient Descent (PGD)
Carlini and Wagner (C&W) attack
Adversarial patch attack
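As a rough illustration of the gradient-based family, below is a minimal sketch of the Fast Gradient Sign Method; `model`, `x`, and `y` are assumed stand-ins for any differentiable Keras classifier and a batch of its inputs and integer labels:

```python
import tensorflow as tf

def fgsm(model, x, y, eps=0.01):
    """Fast Gradient Sign Method sketch: one signed-gradient step.

    Assumes `model` maps images in [0, 1] to class probabilities and
    `y` holds integer labels; both are illustrative, not a fixed API.
    """
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    # Perturb each pixel by eps in the direction that increases the loss.
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)
```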
Black box attacks in adversarial machine learning assume that the adversary can only get outputs for provided inputs and has no knowledge of the model structure or parameters. In this case, the adversarial example is generated either using a model created from scratch, or without any model at all (excluding the ability to query the original model). In either case, the objective of these attacks is to create adversarial examples that are able to transfer to the black box model in question.
The Square Attack was introduced in 2020 as a black box evasion adversarial attack based on querying classification scores without the need for gradient information. As a score-based black box attack, this adversarial approach is able to query probability distributions across model output classes, but has no other access to the model itself. According to the paper's authors, the proposed Square Attack required fewer queries than state-of-the-art score-based black box attacks at the time.
To describe the function objective, the attack defines the classifier as {\textstyle f:[0,1]^{d}\rightarrow \mathbb {R} ^{K}}, with {\textstyle d} representing the dimensions of the input and {\textstyle K} as the total number of output classes. {\textstyle f_{k}(x)} returns the score (or a probability between 0 and 1) that the input {\textstyle x} belongs to class {\textstyle k}, which allows the classifier's class output for any input {\textstyle x} to be defined as {\textstyle {\text{argmax}}_{k=1,...,K}f_{k}(x)}. The goal of this attack is as follows:
{\displaystyle {\text{argmax}}_{k=1,...,K}f_{k}({\hat {x}})\neq y,||{\hat {x}}-x||_{p}\leq \epsilon {\text{ and }}{\hat {x}}\in [0,1]^{d}}
In other words, finding some perturbed adversarial example {\textstyle {\hat {x}}} such that the classifier incorrectly classifies it to some other class under the constraint that {\textstyle {\hat {x}}} and {\textstyle x} are similar. The paper then defines loss {\textstyle L} as
{\textstyle L(f({\hat {x}}),y)=f_{y}({\hat {x}})-\max _{k\neq y}f_{k}({\hat {x}})}
and proposes the solution to finding adversarial example {\textstyle {\hat {x}}} as solving the below constrained optimization problem:
{\displaystyle \min _{{\hat {x}}\in [0,1]^{d}}L(f({\hat {x}}),y),{\text{ s.t. }}||{\hat {x}}-x||_{p}\leq \epsilon }
The result in theory is an adversarial example that is highly confident in the incorrect class but is also very similar to the original image. To find such an example, Square Attack utilizes iterative random search to randomly perturb the image in hopes of improving the objective function. In each step, the algorithm perturbs only a small square section of pixels (hence the name Square Attack) and terminates as soon as an adversarial example is found, in order to improve query efficiency. Finally, since the attack algorithm uses scores and not gradient information, the authors of the paper indicate that this approach is not affected by gradient masking, a common technique formerly used to prevent evasion attacks.
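A minimal Python sketch of this random search loop is given below; the `scores` oracle, patch size, and query budget are illustrative assumptions, and the published attack uses more refined sampling schedules than shown here.

```python
# Minimal sketch of the Square Attack's random search loop, assuming a
# score-based black box oracle scores(x) returning class probabilities.
# The margin loss L = f_y(x) - max_{k != y} f_k(x) follows the paper's definition.
import numpy as np

def margin_loss(scores, x, y):
    s = scores(x)
    return s[y] - np.max(np.delete(s, y))

def square_attack(scores, x, y, eps=0.05, side=4, n_queries=1000, seed=0):
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    x_adv = np.clip(x + eps * rng.choice([-1, 1], size=x.shape), 0, 1)
    best = margin_loss(scores, x_adv, y)
    for _ in range(n_queries):
        if best < 0:               # misclassified: adversarial example found
            break
        cand = x_adv.copy()
        i, j = rng.integers(0, h - side), rng.integers(0, w - side)
        # Perturb one random square patch to a corner of the eps-ball.
        cand[i:i+side, j:j+side] = np.clip(
            x[i:i+side, j:j+side] + eps * rng.choice([-1, 1], size=(1, 1, c)), 0, 1)
        loss = margin_loss(scores, cand, y)
        if loss < best:            # keep the change only if the margin shrinks
            x_adv, best = cand, loss
    return x_adv
```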
This black box attack was also proposed as a query-efficient attack, but one that relies solely on access to any input's predicted output class. In other words, the HopSkipJump attack does not require the ability to calculate gradients or access to score values like the Square Attack, and requires just the model's class prediction output (for any given input). The proposed attack is split into two different settings, targeted and untargeted, but both are built from the general idea of adding minimal perturbations that lead to a different model output. In the targeted setting, the goal is to cause the model to misclassify the perturbed image to a specific target label (that is not the original label). In the untargeted setting, the goal is to cause the model to misclassify the perturbed image to any label that is not the original label. The attack objectives for both are as follows, where {\textstyle x} is the original image, {\textstyle x^{\prime }} is the adversarial image, {\textstyle d} is a distance function between images, {\textstyle c^{*}} is the target label, and {\textstyle C} is the model's classification class label function:
{\displaystyle {\textbf {Targeted:}}\min _{x^{\prime }}d(x^{\prime },x){\text{ subject to }}C(x^{\prime })=c^{*}}
{\displaystyle {\textbf {Untargeted:}}\min _{x^{\prime }}d(x^{\prime },x){\text{ subject to }}C(x^{\prime })\neq C(x)}
To solve this problem, the attack proposes the following boundary function {\textstyle S} for both the untargeted and targeted settings:
{\displaystyle S(x^{\prime }):={\begin{cases}\max _{c\neq C(x)}{F(x^{\prime })_{c}}-F(x^{\prime })_{C(x)},&{\text{(Untargeted)}}\\F(x^{\prime })_{c^{*}}-\max _{c\neq c^{*}}{F(x^{\prime })_{c}},&{\text{(Targeted)}}\end{cases}}}
This can be further simplified to better visualize the boundary between different potential adversarial examples:
{\displaystyle S(x^{\prime })>0\iff {\begin{cases}{\text{argmax}}_{c}F(x^{\prime })\neq C(x),&{\text{(Untargeted)}}\\{\text{argmax}}_{c}F(x^{\prime })=c^{*},&{\text{(Targeted)}}\end{cases}}}
With this boundary function, the attack then follows an iterative algorithm to find adversarial examples {\textstyle x^{\prime }} for a given image {\textstyle x} that satisfy the attack objectives.
Initialize {\textstyle x} to some point where {\textstyle S(x)>0}
Iterate below
Boundary search
Gradient update
Compute the gradient
Find the step size
Boundary search uses a modified binary search to find the point at which the boundary (as defined by {\textstyle S}) intersects the line between {\textstyle x} and {\textstyle x^{\prime }}. The next step involves calculating the gradient for {\textstyle x}, and updating the original {\textstyle x} using this gradient and a pre-chosen step size. The HopSkipJump authors prove that this iterative algorithm will converge, leading {\textstyle x} to a point right along the boundary that is very close in distance to the original image.
However, since HopSkipJump is a proposed black box attack and the iterative algorithm above requires the calculation of a gradient in the second iterative step (which black box attacks do not have access to), the authors propose a solution to gradient calculation that requires only the model's output predictions. By generating many random vectors in all directions, denoted as {\textstyle u_{b}}, an approximation of the gradient can be calculated using the average of these random vectors weighted by the sign of the boundary function on the image {\textstyle x^{\prime }+\delta _{u_{b}}}, where {\textstyle \delta _{u_{b}}} is the size of the random vector perturbation:
{\displaystyle \nabla S(x^{\prime },\delta )\approx {\frac {1}{B}}\sum _{b=1}^{B}\phi (x^{\prime }+\delta _{u_{b}})u_{b}}
The result of the equation above gives a close approximation of the gradient required in step 2 of the iterative algorithm, completing HopSkipJump as a black box attack.
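The Monte Carlo estimate above translates directly into code. Below is a minimal Python sketch, assuming a decision-only oracle `phi` that returns +1 or -1 (the sign of the boundary function); all parameter values are illustrative.

```python
# Sketch of HopSkipJump's Monte Carlo gradient estimate, assuming a
# decision-only oracle phi(x) in {-1, +1} (the sign of the boundary function S).
import numpy as np

def estimate_gradient(phi, x_adv, delta=0.1, B=100, seed=0):
    rng = np.random.default_rng(seed)
    # Draw B random unit directions u_b.
    u = rng.normal(size=(B,) + x_adv.shape)
    u /= np.sqrt((u ** 2).sum(axis=tuple(range(1, u.ndim)), keepdims=True))
    # Weight each direction by the decision at the perturbed point.
    signs = np.array([phi(x_adv + delta * u_b) for u_b in u])
    grad = (signs.reshape((B,) + (1,) * (u.ndim - 1)) * u).mean(axis=0)
    return grad
```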
White box attacks assume that the adversary has access to model parameters on top of being able to get labels for provided inputs.
One of the first attacks proposed for generating adversarial examples came from Google researchers Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. The attack was called the fast gradient sign method (FGSM), and it consists of adding a linear amount of imperceptible noise to the image, causing a model to misclassify it. This noise is calculated by multiplying the sign of the gradient with respect to the image we want to perturb by a small constant epsilon. As epsilon increases, the model is more likely to be fooled, but the perturbations become easier to identify as well. Shown below is the equation to generate an adversarial example, where {\textstyle x} is the original image, {\textstyle \epsilon } is a very small number, {\textstyle \nabla _{x}} is the gradient with respect to the input, {\textstyle J} is the loss function, {\textstyle \theta } are the model weights, and {\textstyle y} is the true label.
{\displaystyle adv_{x}=x+\epsilon \cdot {\text{sign}}(\nabla _{x}J(\theta ,x,y))}
One important property of this equation is that the gradient is calculated with respect to the input image, since the goal is to generate an image that maximizes the loss for the original image of true label {\textstyle y}. In traditional gradient descent (for model training), the gradient is used to update the weights of the model, since the goal is to minimize the loss for the model on a ground truth dataset. The fast gradient sign method was proposed as a fast way to generate adversarial examples to evade the model, based on the hypothesis that neural networks cannot resist even linear amounts of perturbation to the input. FGSM has been shown to be effective in adversarial attacks for image classification and skeletal action recognition.
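For a model with a closed-form input gradient, FGSM fits in a few lines. The sketch below uses logistic regression so the gradient can be written by hand; in a deep learning framework the same gradient would come from automatic differentiation, and all weights and data here are illustrative.

```python
# Minimal FGSM sketch for logistic regression, where the input gradient has a
# closed form.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """x: input, y: true label in {0, 1}, (w, b): model weights, eps: step size."""
    # Cross-entropy loss J(theta, x, y); its gradient w.r.t. the INPUT x is
    # (p - y) * w, where p is the predicted probability of class 1.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0, 1)

rng = np.random.default_rng(0)
w, b = rng.normal(size=10), 0.0
x, y = rng.uniform(size=10), 1
x_adv = fgsm(x, y, w, b, eps=0.1)   # perturbed input that increases the loss
```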
In an effort to analyze existing adversarial attacks and defenses, researchers at the University of California, Berkeley, Nicholas Carlini and David Wagner, in 2016 proposed a faster and more robust method to generate adversarial examples.
The attack proposed by Carlini and Wagner begins with trying to solve a difficult non-linear optimization equation:
{\displaystyle \min(||\delta ||_{p}){\text{ subject to }}C(x+\delta )=t,x+\delta \in [0,1]^{n}}
Here the objective is to minimize the noise ({\textstyle \delta }) added to the original input {\textstyle x}, such that the machine learning algorithm ({\textstyle C}) predicts the original input with delta (or {\textstyle x+\delta }) as some other class {\textstyle t}. However, instead of directly solving the above equation, Carlini and Wagner propose using a new function {\textstyle f} such that:
{\displaystyle C(x+\delta )=t\iff f(x+\delta )\leq 0}
This condenses the first equation to the problem below:
{\displaystyle \min(||\delta ||_{p}){\text{ subject to }}f(x+\delta )\leq 0,x+\delta \in [0,1]^{n}}
and even more to the equation below:
{\displaystyle \min(||\delta ||_{p}+c\cdot f(x+\delta )),x+\delta \in [0,1]^{n}}
Carlini and Wagner then propose the use of the below function in place of {\textstyle f}, using {\textstyle Z}, a function that determines class probabilities for given input {\textstyle x}. When substituted in, this equation can be thought of as finding a target class that is more confident than the next likeliest class by some constant amount:
{\displaystyle f(x)=([\max _{i\neq t}Z(x)_{i}]-Z(x)_{t})^{+}}
When solved using gradient descent, this equation produces stronger adversarial examples than the fast gradient sign method, and these examples are also able to bypass defensive distillation, a defense that was once proposed to be effective against adversarial examples.
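As a rough illustration of how the relaxed objective is minimized by gradient descent, the Python sketch below applies it to a linear logit model Z(x) = Wx, where the gradient of f can be written by hand; the model, constants, and update rule are illustrative simplifications of the published attack.

```python
# Sketch of the Carlini-Wagner objective ||delta||^2 + c * f(x + delta)
# minimized by gradient descent on a linear logit model Z(x) = W x, for a
# chosen target class t. For deep networks the gradient would come from autodiff.
import numpy as np

def cw_attack(W, x, t, c=1.0, lr=0.01, steps=500):
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = W @ (x + delta)
        i = np.argmax(np.delete(z, t))          # runner-up among i != t
        i = i if i < t else i + 1               # undo index shift from delete
        f = max(z[i] - z[t], 0.0)
        # Gradient of ||delta||_2^2 + c * f(x + delta) w.r.t. delta.
        grad = 2 * delta + (c * (W[i] - W[t]) if f > 0 else 0.0)
        delta -= lr * grad
        delta = np.clip(x + delta, 0, 1) - x    # keep x + delta in [0, 1]^n
    return x + delta

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                     # 3 classes, 4 features
x = rng.uniform(size=4)
x_adv = cw_attack(W, x, t=2)
print(np.argmax(W @ x), np.argmax(W @ x_adv))   # class before and after
```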
Researchers have proposed a multi-step approach to protecting machine learning.
Threat modeling – Formalize the attacker's goals and capabilities with respect to the target system.
Attack simulation – Formalize the optimization problem the attacker tries to solve according to possible attack strategies.
Attack impact evaluation
Countermeasure design
Noise detection (For evasion based attack)
Information laundering – Alter the information received by adversaries (for model stealing attacks)
A number of defense mechanisms against evasion, poisoning, and privacy attacks have been proposed, including:
Secure learning algorithms
Byzantine-resilient algorithms
Multiple classifier systems
AI-written algorithms.
AIs that explore the training environment; for example, in image recognition, actively navigating a 3D environment rather than passively scanning a fixed set of 2D images.
Privacy-preserving learning
Ladder algorithm for Kaggle-style competitions
Game theoretic models
Sanitizing training data
Adversarial training
Backdoor detection algorithms
Gradient masking/obfuscation techniques: to prevent the adversary exploiting the gradient in white-box attacks. This family of defenses is deemed unreliable as these models are still vulnerable to black-box attacks or can be circumvented in other ways.
Ensembles of models have been proposed in the literature, but caution should be applied when relying on them: while ensembling weak classifiers usually yields a more accurate model, this does not seem to hold in the adversarial context.
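Among these defenses, adversarial training is perhaps the most widely studied: the model is trained on adversarially perturbed inputs rather than clean ones. A minimal Python sketch for logistic regression follows; the FGSM inner step, learning rate, and epsilon are illustrative choices, not a recipe from any specific paper.

```python
# Minimal sketch of adversarial training for logistic regression: each update
# step fits the model on FGSM-perturbed inputs instead of the clean ones.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    w, b = rng.normal(size=X.shape[1]) * 0.01, 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Craft FGSM examples against the current weights.
        X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
        p_adv = sigmoid(X_adv @ w + b)
        # Gradient step on the adversarial batch.
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * (p_adv - y).mean()
    return w, b

X = np.random.default_rng(1).normal(size=(100, 5))
y = (X[:, 0] > 0).astype(float)
w, b = adversarial_train(X, y)
```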
Pattern recognition
Fawkes (image cloaking software)
Generative adversarial network
MITRE ATLAS: Adversarial Threat Landscape for Artificial-Intelligence Systems
NIST 8269 Draft: A Taxonomy and Terminology of Adversarial Machine Learning
NIPS 2007 Workshop on Machine Learning in Adversarial Environments for Computer Security
AlfaSVMLib Archived 2020-09-24 at the Wayback Machine – Adversarial Label Flip Attacks against Support Vector Machines
Laskov, Pavel; Lippmann, Richard (2010). "Machine learning in adversarial environments". Machine Learning. 81 (2): 115–119. doi:10.1007/s10994-010-5207-6. S2CID 12567278.
Dagstuhl Perspectives Workshop on "Machine Learning Methods for Computer Security"
Workshop on Artificial Intelligence and Security, (AISec) Series |
AIXI | AIXI ['ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence.
It combines Solomonoff induction with sequential decision theory.
AIXI was first proposed by Marcus Hutter in 2000 and several results regarding AIXI are proved in Hutter's 2005 book Universal Artificial Intelligence.
AIXI is a reinforcement learning (RL) agent. It maximizes the expected total rewards received from the environment. Intuitively, it simultaneously considers every computable hypothesis (or environment). In each time step, it looks at every possible program and evaluates how many rewards that program generates depending on the next action taken. The promised rewards are then weighted by the subjective belief that this program constitutes the true environment. This belief is computed from the length of the program: longer programs are considered less likely, in line with Occam's razor. AIXI then selects the action that has the highest expected total reward in the weighted sum of all these programs.
AIXI is a reinforcement learning agent that interacts with some stochastic and unknown but computable environment {\displaystyle \mu }. The interaction proceeds in time steps, from {\displaystyle t=1} to {\displaystyle t=m}, where {\displaystyle m\in \mathbb {N} } is the lifespan of the AIXI agent. At time step t, the agent chooses an action {\displaystyle a_{t}\in {\mathcal {A}}} (e.g. a limb movement) and executes it in the environment, and the environment responds with a "percept" {\displaystyle e_{t}\in {\mathcal {E}}={\mathcal {O}}\times \mathbb {R} }, which consists of an "observation" {\displaystyle o_{t}\in {\mathcal {O}}} (e.g., a camera image) and a reward {\displaystyle r_{t}\in \mathbb {R} }, distributed according to the conditional probability {\displaystyle \mu (o_{t}r_{t}|a_{1}o_{1}r_{1}...a_{t-1}o_{t-1}r_{t-1}a_{t})}, where {\displaystyle a_{1}o_{1}r_{1}...a_{t-1}o_{t-1}r_{t-1}a_{t}} is the "history" of actions, observations and rewards. The environment {\displaystyle \mu } is thus mathematically represented as a probability distribution over "percepts" (observations and rewards) which depend on the full history, so there is no Markov assumption (as opposed to other RL algorithms). Note again that this probability distribution is unknown to the AIXI agent. Furthermore, note again that {\displaystyle \mu } is computable, that is, the observations and rewards received by the agent from the environment {\displaystyle \mu } can be computed by some program (which runs on a Turing machine), given the past actions of the AIXI agent.
The only goal of the AIXI agent is to maximise {\displaystyle \sum _{t=1}^{m}r_{t}}, that is, the sum of rewards from time step 1 to m.
The AIXI agent is associated with a stochastic policy {\displaystyle \pi :({\mathcal {A}}\times {\mathcal {E}})^{*}\rightarrow {\mathcal {A}}}, which is the function it uses to choose actions at every time step, where {\displaystyle {\mathcal {A}}} is the space of all possible actions that AIXI can take and {\displaystyle {\mathcal {E}}} is the space of all possible "percepts" that can be produced by the environment. The environment (or probability distribution) {\displaystyle \mu } can also be thought of as a stochastic policy (which is a function): {\displaystyle \mu :({\mathcal {A}}\times {\mathcal {E}})^{*}\times {\mathcal {A}}\rightarrow {\mathcal {E}}}, where the {\displaystyle *} is the Kleene star operation.
In general, at time step {\displaystyle t} (which ranges from 1 to m), AIXI, having previously executed actions {\displaystyle a_{1}\dots a_{t-1}} (which is often abbreviated in the literature as {\displaystyle a_{<t}}) and having observed the history of percepts {\displaystyle o_{1}r_{1}...o_{t-1}r_{t-1}} (which can be abbreviated as {\displaystyle e_{<t}}), chooses and executes in the environment the action, {\displaystyle a_{t}}, defined as follows
{\displaystyle a_{t}:=\arg \max _{a_{t}}\sum _{o_{t}r_{t}}\ldots \max _{a_{m}}\sum _{o_{m}r_{m}}[r_{t}+\ldots +r_{m}]\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}}
or, using parentheses to disambiguate the precedences:
{\displaystyle a_{t}:=\arg \max _{a_{t}}\left(\sum _{o_{t}r_{t}}\ldots \left(\max _{a_{m}}\sum _{o_{m}r_{m}}[r_{t}+\ldots +r_{m}]\left(\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}\right)\right)\right)}
Intuitively, in the definition above, AIXI considers the sum of the total reward over all possible "futures" up to {\displaystyle m-t} time steps ahead (that is, from {\displaystyle t} to {\displaystyle m}), weighs each of them by the complexity of programs {\displaystyle q} (that is, by {\displaystyle 2^{-{\textrm {length}}(q)}}) consistent with the agent's past (that is, the previously executed actions, {\displaystyle a_{<t}}, and received percepts, {\displaystyle e_{<t}}) that can generate that future, and then picks the action that maximises expected future rewards.
Let us break this definition down in order to attempt to fully understand it. {\displaystyle o_{t}r_{t}} is the "percept" (which consists of the observation {\displaystyle o_{t}} and reward {\displaystyle r_{t}}) received by the AIXI agent at time step {\displaystyle t} from the environment (which is unknown and stochastic). Similarly, {\displaystyle o_{m}r_{m}} is the percept received by AIXI at time step {\displaystyle m} (the last time step where AIXI is active).
{\displaystyle r_{t}+\ldots +r_{m}} is the sum of rewards from time step {\displaystyle t} to time step {\displaystyle m}, so AIXI needs to look into the future to choose its action at time step {\displaystyle t}.
{\displaystyle U} denotes a monotone universal Turing machine, and {\displaystyle q} ranges over all (deterministic) programs on the universal machine {\displaystyle U}, which receives as input the program {\displaystyle q} and the sequence of actions {\displaystyle a_{1}\dots a_{m}} (that is, all actions), and produces the sequence of percepts {\displaystyle o_{1}r_{1}\ldots o_{m}r_{m}}. The universal Turing machine {\displaystyle U} is thus used to "simulate" or compute the environment responses or percepts, given the program {\displaystyle q} (which "models" the environment) and all actions of the AIXI agent: in this sense, the environment is "computable" (as stated above). Note that, in general, the program which "models" the current and actual environment (where AIXI needs to act) is unknown because the current environment is also unknown.
{\displaystyle {\textrm {length}}(q)} is the length of the program {\displaystyle q} (which is encoded as a string of bits). Note that {\displaystyle 2^{-{\textrm {length}}(q)}={\frac {1}{2^{{\textrm {length}}(q)}}}}. Hence, in the definition above, {\displaystyle \sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}} should be interpreted as a mixture (in this case, a sum) over all computable environments (which are consistent with the agent's past), each weighted by its complexity {\displaystyle 2^{-{\textrm {length}}(q)}}. Note that {\displaystyle a_{1}\ldots a_{m}} can also be written as {\displaystyle a_{1}\ldots a_{t-1}a_{t}\ldots a_{m}}, and {\displaystyle a_{1}\ldots a_{t-1}=a_{<t}} is the sequence of actions already executed in the environment by the AIXI agent. Similarly, {\displaystyle o_{1}r_{1}\ldots o_{m}r_{m}=o_{1}r_{1}\ldots o_{t-1}r_{t-1}o_{t}r_{t}\ldots o_{m}r_{m}}, and {\displaystyle o_{1}r_{1}\ldots o_{t-1}r_{t-1}} is the sequence of percepts produced by the environment so far.
Let us now put all these components together in order to understand this equation or definition.
At time step t, AIXI chooses the action {\displaystyle a_{t}} where the function {\displaystyle \sum _{o_{t}r_{t}}\ldots \max _{a_{m}}\sum _{o_{m}r_{m}}[r_{t}+\ldots +r_{m}]\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}} attains its maximum.
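Since AIXI itself is incomputable, any executable illustration must shrink the hypothesis class drastically. The Python toy below replaces the sum over all programs on a universal Turing machine with three hand-written deterministic "environment programs" carrying made-up lengths, and performs the expectimax by brute force; it is meant only to make the structure of the definition concrete, not to approximate AIXI.

```python
# Toy illustration of AIXI's expectimax over a FINITE set of candidate
# deterministic environment programs. Everything here is illustrative: real AIXI
# sums over all programs on a universal Turing machine and is incomputable.
from itertools import product

ACTIONS = [0, 1]
HORIZON = 3

# Each "program" q maps the action sequence so far to a reward; its prior
# weight is 2^(-length(q)), here faked with a hand-assigned length.
programs = [
    {"length": 2, "reward": lambda acts: float(acts[-1] == 0)},        # likes 0s
    {"length": 3, "reward": lambda acts: float(acts[-1] == 1)},        # likes 1s
    {"length": 4, "reward": lambda acts: float(len(acts) % 2 == 0)},   # periodic
]

def expected_return(actions):
    """Prior-weighted total reward of an action sequence across environments."""
    total = 0.0
    for q in programs:
        weight = 2.0 ** -q["length"]
        total += weight * sum(q["reward"](actions[:t + 1])
                              for t in range(len(actions)))
    return total

# AIXI-style choice: take the first action of the best full action sequence.
best = max(product(ACTIONS, repeat=HORIZON), key=expected_return)
print("chosen action:", best[0])
```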
The parameters to AIXI are the universal Turing machine U and the agent's lifetime m, which need to be chosen. The latter parameter can be removed by the use of discounting.
According to Hutter, the word "AIXI" can have several interpretations. AIXI can stand for AI based on Solomonoff's distribution, denoted by {\displaystyle \xi } (which is the Greek letter xi), or e.g. it can stand for AI "crossed" (X) with induction (I). There are other interpretations.
AIXI's performance is measured by the expected total number of rewards it receives.
AIXI has been proven to be optimal in the following ways.
Pareto optimality: there is no other agent that performs at least as well as AIXI in all environments while performing strictly better in at least one environment.
Balanced Pareto optimality: like Pareto optimality, but considering a weighted sum of environments.
Self-optimizing: a policy p is called self-optimizing for an environment {\displaystyle \mu } if the performance of p approaches the theoretical maximum for {\displaystyle \mu } when the length of the agent's lifetime (not time) goes to infinity. For environment classes where self-optimizing policies exist, AIXI is self-optimizing.
It was later shown by Hutter and Jan Leike that balanced Pareto optimality is subjective and that any policy can be considered Pareto optimal, which they describe as undermining all previous optimality claims for AIXI.
However, AIXI does have limitations. It is restricted to maximizing rewards based on percepts as opposed to external states. It also assumes it interacts with the environment solely through action and percept channels, preventing it from considering the possibility of being damaged or modified. Colloquially, this means that it doesn't consider itself to be contained by the environment it interacts with. It also assumes the environment is computable.
Like Solomonoff induction, AIXI is incomputable. However, there are computable approximations of it. One such approximation is AIXItl, which performs at least as well as the provably best time t and space l limited agent. Another approximation to AIXI with a restricted environment class is MC-AIXI (FAC-CTW) (which stands for Monte Carlo AIXI FAC-Context-Tree Weighting), which has had some success playing simple games such as partially observable Pac-Man.
Gödel machine
"Universal Algorithmic Intelligence: A mathematical top->down approach", Marcus Hutter, arXiv:cs/0701125; also in Artificial General Intelligence, eds. B. Goertzel and C. Pennachin, Springer, 2007, ISBN 9783540237334, pp. 227–290, doi:10.1007/978-3-540-68677-4_8. |
Algorithm selection | In computer science, a selection algorithm is an algorithm for finding the {\displaystyle k}th smallest value in a collection of ordered values, such as numbers. The value that it finds is called the {\displaystyle k}th order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Selection algorithms include quickselect, and the median of medians algorithm. When applied to a collection of {\displaystyle n} values, these algorithms take linear time, {\displaystyle O(n)}, as expressed using big O notation. For data that is already structured, faster algorithms may be possible; as an extreme case, selection in an already-sorted array takes time {\displaystyle O(1)}. |
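As an illustration of the expected linear-time behavior mentioned above, here is a minimal quickselect sketch in Python (k is 0-indexed):

```python
# Minimal quickselect: expected linear-time selection of the k-th smallest value.
import random

def quickselect(values, k):
    values = list(values)
    while True:
        pivot = random.choice(values)
        lows = [v for v in values if v < pivot]
        pivots = [v for v in values if v == pivot]
        highs = [v for v in values if v > pivot]
        if k < len(lows):
            values = lows                      # answer is among smaller values
        elif k < len(lows) + len(pivots):
            return pivot                       # answer is the pivot itself
        else:
            k -= len(lows) + len(pivots)       # answer is among larger values
            values = highs

print(quickselect([7, 1, 5, 3, 9], 2))  # 5, the 3rd smallest
```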
Algorithmic bias | Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (proposed 2018) and the Artificial Intelligence Act (proposed 2021, approved 2024).
As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.
Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service. |
Algorithmic inference | Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. Cornerstones in this field are computational learning theory, granular computing, bioinformatics, and, long ago, structural probability (Fraser 1966).
The main focus is on the algorithms which compute statistics rooting the study of a random phenomenon, along with the amount of data they must feed on to produce reliable results. This shifts the interest of mathematicians from the study of the distribution laws to the functional properties of the statistics, and the interest of computer scientists from the algorithms for processing data to the information they process.
Concerning the identification of the parameters of a distribution law, the mature reader may recall lengthy disputes in the mid 20th century about the interpretation of their variability in terms of fiducial distribution (Fisher 1956), structural probabilities (Fraser 1966), priors/posteriors (Ramsey 1925), and so on. From an epistemology viewpoint, this entailed a companion dispute as to the nature of probability: is it a physical feature of phenomena to be described through random variables or a way of synthesizing data about a phenomenon? Opting for the latter, Fisher defines a fiducial distribution law of parameters of a given random variable that he deduces from a sample of its specifications. With this law he computes, for instance "the probability that μ (mean of a Gaussian variable – our note) is less than any assigned value, or the probability that it lies between any assigned values, or, in short, its probability distribution, in the light of the sample observed".
Fisher fought hard to defend the difference and superiority of his notion of parameter distribution in comparison to
analogous notions, such as Bayes' posterior distribution, Fraser's constructive probability and Neyman's confidence intervals. For half a century, Neyman's confidence intervals won out for all practical purposes, crediting the phenomenological nature of probability. With this perspective, when you deal with a Gaussian variable, its mean μ is fixed by the physical features of the phenomenon you are observing, where the observations are random operators, hence the observed values are specifications of a random sample. Because of their randomness, you may compute from the sample specific intervals containing the fixed μ with a given probability that you denote confidence.
Let X be a Gaussian variable with parameters {\displaystyle \mu } and {\displaystyle \sigma ^{2}} and {\displaystyle \{X_{1},\ldots ,X_{m}\}} a sample drawn from it. Working with statistics {\displaystyle S_{\mu }=\sum _{i=1}^{m}X_{i}} and {\displaystyle S_{\sigma ^{2}}=\sum _{i=1}^{m}(X_{i}-{\overline {X}})^{2},{\text{ where }}{\overline {X}}={\frac {S_{\mu }}{m}}} is the sample mean, we recognize that {\displaystyle T={\frac {S_{\mu }-m\mu }{\sqrt {S_{\sigma ^{2}}}}}{\sqrt {\frac {m-1}{m}}}={\frac {{\overline {X}}-\mu }{\sqrt {S_{\sigma ^{2}}/(m(m-1))}}}} follows a Student's t distribution (Wilks 1962) with parameter (degrees of freedom) m − 1, so that {\displaystyle f_{T}(t)={\frac {\Gamma (m/2)}{\Gamma ((m-1)/2)}}{\frac {1}{\sqrt {\pi (m-1)}}}\left(1+{\frac {t^{2}}{m-1}}\right)^{-m/2}.} Gauging T between two quantiles and inverting its expression as a function of {\displaystyle \mu }, you obtain confidence intervals for {\displaystyle \mu }.
With the sample specification: {\displaystyle \mathbf {x} =\{7.14,6.3,3.9,6.46,0.2,2.94,4.14,4.69,6.02,1.58\}} having size m = 10, you compute the statistics {\displaystyle s_{\mu }=43.37} and {\displaystyle s_{\sigma ^{2}}=46.07}, and obtain a 0.90 confidence interval for {\displaystyle \mu } with extremes (3.03, 5.65).
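The interval above can be reproduced numerically. The following Python sketch uses SciPy's Student's t quantiles with m − 1 = 9 degrees of freedom:

```python
# Reproducing the 0.90 confidence interval above with Student's t quantiles.
import numpy as np
from scipy.stats import t

x = np.array([7.14, 6.3, 3.9, 6.46, 0.2, 2.94, 4.14, 4.69, 6.02, 1.58])
m = len(x)
s_mu = x.sum()                          # 43.37
s_sigma2 = ((x - x.mean()) ** 2).sum()  # 46.07
se = np.sqrt(s_sigma2 / (m * (m - 1)))
q = t.ppf(0.95, m - 1)                  # 0.90 interval: 5% in each tail
print(x.mean() - q * se, x.mean() + q * se)  # approximately (3.03, 5.65)
```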
From a modeling perspective the entire dispute looks like a chicken-egg dilemma: either fixed data by first and probability distribution of their properties as a consequence, or fixed properties by first and probability distribution of the observed data as a corollary.
The classic solution has one benefit and one drawback. The former was appreciated particularly back when people still did computations with paper and pencil. Per se, the task of computing a Neyman confidence interval for the fixed parameter θ is hard: you do not know θ, but you try to place around it an interval with a possibly very low probability of failing. The analytical solution is allowed for a very limited number of theoretical cases. Vice versa, a large variety of instances may be quickly solved in an approximate way via the central limit theorem, in terms of a confidence interval around a Gaussian distribution – that's the benefit.
The drawback is that the central limit theorem is applicable when the sample size is sufficiently large. Therefore, it is less and less applicable with the sample involved in modern inference instances. The fault is not in the sample size on its own part. Rather, this size is not sufficiently large because of the complexity of the inference problem.
With the availability of large computing facilities, scientists refocused from isolated parameters inference to complex functions inference, i.e. regarding sets of highly nested parameters identifying functions. In these cases we speak about learning of functions (in terms for instance of regression, neuro-fuzzy systems or computational learning) on the basis of highly informative samples. A first effect of having a complex structure linking data is the reduction of the number of sample degrees of freedom, i.e. the burning of a part of the sample points, so that the effective sample size to be considered in the central limit theorem is too small. Focusing on the sample size ensuring a limited learning error with a given confidence level, the consequence is that the lower bound on this size grows with complexity indices such as the VC dimension or the detail of a class to which the function we want to learn belongs.
A sample of 1,000 independent bits is enough to ensure an absolute error of at most 0.081 on the estimation of the parameter p of the underlying Bernoulli variable with a confidence of at least 0.99. The same size cannot guarantee a threshold less than 0.088 with the same confidence 0.99 when the error is identified with the probability that a 20-year-old man living in New York does not fit the ranges of height, weight and waistline observed on 1,000 Big Apple inhabitants. The accuracy shortage occurs because both the VC dimension and the detail of the class of parallelepipeds, among which the one observed from the 1,000 inhabitants' ranges falls, are equal to 6.
With insufficiently large samples, the approach: fixed sample – random properties suggests inference procedures in three steps:
For a random variable and a sample drawn from it, a compatible distribution is a distribution having the same sampling mechanism {\displaystyle {\mathcal {M}}_{X}=(Z,g_{\boldsymbol {\theta }})} of X with a value {\displaystyle {\boldsymbol {\theta }}} of the random parameter {\displaystyle \mathbf {\Theta } } derived from a master equation rooted on a well-behaved statistic s.
You may find the distribution law of the Pareto parameters A and K as an implementation example of the population bootstrap method as in the figure on the left.
Implementing the twisting argument method, you get the distribution law {\displaystyle F_{M}(\mu )} of the mean M of a Gaussian variable X on the basis of the statistic {\displaystyle s_{M}=\sum _{i=1}^{m}x_{i}} when {\displaystyle \Sigma ^{2}} is known to be equal to {\displaystyle \sigma ^{2}} (Apolloni, Malchiodi & Gaito 2006). Its expression is: {\displaystyle F_{M}(\mu )=\Phi \left({\frac {m\mu -s_{M}}{\sigma {\sqrt {m}}}}\right),} shown in the figure on the right, where {\displaystyle \Phi } is the cumulative distribution function of a standard normal distribution.
Computing a confidence interval for M given its distribution function is straightforward: we need only find two quantiles (for instance the {\displaystyle \delta /2} and {\displaystyle 1-\delta /2} quantiles, in case we are interested in a confidence interval of level δ symmetric in the tail's probabilities) as indicated on the left in the diagram showing the behavior of the two bounds for different values of the statistic sm.
The Achilles heel of Fisher's approach lies in the joint distribution of more than one parameter, say mean and variance of a Gaussian distribution. On the contrary, with the last approach (and above-mentioned methods: population bootstrap and twisting argument) we may learn the joint distribution of many parameters. For instance, focusing on the distribution of two or many more parameters, in the figures below we report two confidence regions where the function to be learnt falls with a confidence of 90%. The former concerns the probability with which an extended support vector machine attributes a binary label 1 to the points of the {\displaystyle (x,y)} plane. The two surfaces are drawn on the basis of a set of sample points in turn labelled according to a specific distribution law (Apolloni et al. 2008). The latter concerns the confidence region of the hazard rate of breast cancer recurrence computed from a censored sample (Apolloni, Malchiodi & Gaito 2006).
Fraser, D. A. S. (1966), "Structural probability and generalization", Biometrika, 53 (1/2): 1–9, doi:10.2307/2334048, JSTOR 2334048.
Fisher, M. A. (1956), Statistical Methods and Scientific Inference, Edinburgh and London: Oliver and Boyd
Apolloni, B.; Malchiodi, D.; Gaito, S. (2006), Algorithmic Inference in Machine Learning, International Series on Advanced Intelligence, vol. 5 (2nd ed.), Adelaide: Magill, Advanced Knowledge International
Apolloni, B.; Bassis, S.; Malchiodi, D.; Witold, P. (2008), The Puzzle of Granular Computing, Studies in Computational Intelligence, vol. 138, Berlin: Springer, ISBN 9783540798637
Ramsey, F. P. (1925), "The Foundations of Mathematics", Proceedings of the London Mathematical Society: 338–384, doi:10.1112/plms/s2-25.1.338.
Wilks, S.S. (1962), Mathematical Statistics, Wiley Publications in Statistics, New York: John Wiley |
Anomaly detection | In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well defined notion of normal behavior. Such examples may arouse suspicions of being generated by a different mechanism, or appear inconsistent with the remainder of that set of data.
Anomaly detection finds application in many domains including cybersecurity, medicine, machine vision, statistics, neuroscience, law enforcement and financial fraud, to name only a few. Anomalies were initially identified for clear rejection or omission from the data to aid statistical analysis, for example to compute the mean or standard deviation. They were also removed to improve predictions from models such as linear regression, and more recently their removal aids the performance of machine learning algorithms. However, in many applications anomalies themselves are of interest and are the observations of greatest significance in the entire data set, which need to be identified and separated from noise or irrelevant outliers.
Three broad categories of anomaly detection techniques exist. Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involve training a classifier. However, this approach is rarely used in anomaly detection due to the general unavailability of labelled data and the inherently unbalanced nature of the classes. Semi-supervised anomaly detection techniques assume that some portion of the data is labelled. This may be any combination of the normal or anomalous data, but more often than not, the techniques construct a model representing normal behavior from a given normal training data set, and then test the likelihood of a test instance being generated by the model. Unsupervised anomaly detection techniques assume the data is unlabelled and are by far the most commonly used due to their wider applicability. |
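As a minimal illustration of the unsupervised case, the Python sketch below flags outliers with scikit-learn's IsolationForest; the synthetic data and contamination rate are illustrative.

```python
# Minimal unsupervised anomaly detection sketch with scikit-learn's
# IsolationForest; points flagged -1 are predicted anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))        # "normal behavior" cluster
outliers = rng.uniform(-6, 6, size=(5, 2))      # a few anomalous points
X = np.vstack([normal, outliers])

labels = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
print("flagged as anomalous:", np.where(labels == -1)[0])
```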
Aporia (company) | Aporia is a machine learning observability platform based in Tel Aviv, Israel. The company has a US office located in San Jose, California.
Aporia has developed software for monitoring and controlling undetected defects and failures; it is used by other companies to detect and report anomalies and to warn in the early stages of faults. |
Apprenticeship learning | In artificial intelligence, apprenticeship learning (or learning from demonstration or imitation learning) is the process of learning by observing an expert. It can be viewed as a form of supervised learning, where the training dataset consists of task executions by a demonstration teacher.
Mapping methods try to mimic the expert by forming a direct mapping either from states to actions, or from states to reward values. For example, in 2002 researchers used such an approach to teach an AIBO robot basic soccer skills.
Inverse reinforcement learning (IRL) is the process of deriving a reward function from observed behavior. While ordinary "reinforcement learning" involves using rewards and punishments to learn behavior, in IRL the direction is reversed, and a robot observes a person's behavior to figure out what goal that behavior seems to be trying to achieve. The IRL problem can be defined as:
Given 1) measurements of an agent's behaviour over time, in a variety of circumstances; 2) measurements of the sensory inputs to that agent; 3) a model of the physical environment (including the agent's body): Determine the reward function that the agent is optimizing.
IRL researcher Stuart J. Russell proposes that IRL might be used to observe humans and attempt to codify their complex "ethical values", in an effort to create "ethical robots" that might someday know "not to cook your cat" without needing to be explicitly told. The scenario can be modeled as a "cooperative inverse reinforcement learning game", where a "person" player and a "robot" player cooperate to secure the person's implicit goals, despite these goals not being explicitly known by either the person or the robot.
In 2017, OpenAI and DeepMind applied deep learning to cooperative inverse reinforcement learning in simple domains such as Atari games and straightforward robot tasks such as backflips. The human role was limited to answering queries from the robot as to which of two different actions was preferred. The researchers found evidence that the techniques may be economically scalable to modern systems.
Apprenticeship via inverse reinforcement learning (AIRP) was developed in 2004 by Pieter Abbeel, Professor in Berkeley's EECS department, and Andrew Ng, Associate Professor in Stanford University's Computer Science Department. AIRP deals with a "Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform". AIRP has been used to model reward functions of highly dynamic scenarios where there is no intuitively obvious reward function. Take the task of driving, for example: there are many different objectives working simultaneously, such as maintaining safe following distance, a good speed, not changing lanes too often, etc. This task may seem easy at first glance, but a trivial reward function may not converge to the policy wanted.
One domain where AIRP has been used extensively is helicopter control. While simple trajectories can be intuitively derived, complicated tasks like aerobatics for shows have been learned successfully. These include aerobatic maneuvers like in-place flips, in-place rolls, loops, hurricanes and even auto-rotation landings. This work was developed by Pieter Abbeel, Adam Coates, and Andrew Ng – "Autonomous Helicopter Aerobatics through Apprenticeship Learning".
System models try to mimic the expert by modeling world dynamics.
The system learns rules to associate preconditions and postconditions with each action. In one 1994 demonstration, a humanoid learns a generalized plan from only two demonstrations of a repetitive ball collection task.
Learning from demonstration is often explained from the perspective that the working robot control system is available and the human demonstrator is using it. And indeed, if the software works, the human operator takes the robot arm, makes a move with it, and the robot will reproduce the action later. For example, the operator teaches the robot arm how to put a cup under a coffeemaker and press the start button. In the replay phase, the robot imitates this behavior 1:1. But that is not how the system works internally; it is only what the audience can observe. In reality, learning from demonstration is much more complex. One of the first works on learning by robot apprentices (anthropomorphic robots learning by imitation) was Adrian Stoica's PhD thesis in 1995.
In 1997, robotics expert Stefan Schaal was working on the Sarcos robot arm. The goal was simple: solve the pendulum swing-up task. The robot itself can execute a movement, and as a result, the pendulum moves. The problem is that it is unclear which actions will result in which movement. It is an optimal control problem which can be described with mathematical formulas but is hard to solve. Schaal's idea was not to use a brute-force solver, but to record the movements of a human demonstration. The angle of the pendulum is logged over three seconds on the y-axis. This results in a diagram which shows a pattern.
In computer animation, the principle is called spline animation. That means, on the x-axis the time is given, for example 0.5 seconds, 1.0 seconds, 1.5 seconds, while on the y-axis is the variable given. In most cases it's the position of an object. In the inverted pendulum it is the angle.
The overall task consists of two parts: recording the angle over time and reproducing the recorded motion. The reproducing step is surprisingly simple. As an input, we know which angle the pendulum must have at each time step. Bringing the system to a given state is called "tracking control" or PID control. That means we have a trajectory over time and must find control actions to map the system to this trajectory. Other authors call the principle "steering behavior", because the aim is to bring a robot to a given line.
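A minimal Python sketch of such tracking control is given below: a PD controller drives a toy pendulum's angle along a recorded reference trajectory. The dynamics, gains, and trajectory are all illustrative assumptions.

```python
# Minimal tracking control sketch: a PD controller follows a recorded
# reference trajectory for a toy pendulum angle. All constants are illustrative.
import numpy as np

dt, kp, kd = 0.01, 40.0, 8.0
steps = 300
reference = np.sin(np.linspace(0, np.pi, steps))   # recorded angle over time

theta, omega = 0.0, 0.0                            # pendulum state
for i in range(steps):
    error = reference[i] - theta
    torque = kp * error - kd * omega               # control action
    alpha = torque - 9.81 * np.sin(theta)          # toy pendulum dynamics
    omega += alpha * dt
    theta += omega * dt
print("final tracking error:", abs(reference[-1] - theta))
```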
Inverse reinforcement learning |
Artificial intelligence in hiring | Artificial intelligence (AI) in hiring involves the use of technology to automate aspects of the hiring process. Advances in artificial intelligence, such as the advent of machine learning and the growth of big data, enable AI to be utilized to recruit, screen, and predict the success of applicants. Proponents of artificial intelligence in hiring claim it reduces bias, assists with finding qualified candidates, and frees up human resource workers' time for other tasks, while opponents worry that AI perpetuates inequalities in the workplace and will eliminate jobs. Despite the potential benefits, the ethical implications of AI in hiring remain a subject of debate, with concerns about algorithmic transparency, accountability, and the need for ongoing oversight to ensure fair and unbiased decision-making throughout the recruitment process. |
Astrostatistics | Astrostatistics is a discipline which spans astrophysics, statistical analysis and data mining. It is used to process the vast amount of data produced by automated scanning of the cosmos, to characterize complex datasets, and to link astronomical data to astrophysical theory. Many branches of statistics are involved in astronomical analysis including nonparametrics, multivariate regression and multivariate classification, time series analysis, and especially Bayesian inference. The field is closely related to astroinformatics. |
Attention (machine learning) | The machine learning-based attention method simulates how human attention works by assigning varying levels of importance to different components of a sequence. In natural language processing, this usually means assigning different levels of importance to different words in a sentence. It assigns importance to each word by calculating "soft" weights for the word's numerical representation, known as its embedding, within a specific section of the sentence called the context window. The calculation of these weights can occur simultaneously in models called transformers, or one by one in models known as recurrent neural networks. Unlike "hard" weights, which are predetermined and fixed during training, "soft" weights can adapt and change with each use of the model.
Attention was developed to address the weaknesses of leveraging information from the hidden layers of recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to be attenuated. Attention allows the calculation of the hidden representation of a token equal access to any part of a sentence directly, rather than only through the previous hidden state.
Earlier uses attached this mechanism to a serial recurrent neural network's language translation system (below), but later uses in transformers' large language models removed the recurrent neural network and relied heavily on the faster parallel attention scheme. |
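A minimal NumPy sketch of scaled dot-product attention (the parallel scheme used in transformers) is shown below; the random embeddings are illustrative, and each row of the softmax output is the vector of "soft" weights one token assigns to every other token.

```python
# Minimal scaled dot-product attention sketch in NumPy.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # soft weights, one row per query
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 token embeddings of dimension 8
out, w = attention(X, X, X)        # self-attention: Q = K = V = X
print(w.sum(axis=-1))              # each row of soft weights sums to 1
```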
Audio inpainting | Audio inpainting (also known as audio interpolation) is an audio restoration task which deals with the reconstruction of missing or corrupted portions of a digital audio signal. Inpainting techniques are employed when parts of the audio have been lost due to various factors such as transmission errors, data corruption or errors during recording.
The goal of audio inpainting is to fill in the gaps (i.e., the missing portions) in the audio signal seamlessly, making the reconstructed portions indistinguishable from the original content and avoiding the introduction of audible distortions or alterations.
Many techniques have been proposed to solve the audio inpainting problem and this is usually achieved by analyzing the temporal and spectral information surrounding each missing portion of the considered audio signal.
Classic methods employ statistical models or digital signal processing algorithms to predict and synthesize the missing or damaged sections. Recent solutions, instead, take advantage of deep learning models, thanks to the growing trend of exploiting data-driven methods in the context of audio restoration.
Depending on the extent of the lost information, the inpainting task can be divided into three categories.
Short inpainting refers to the reconstruction of a few milliseconds (approximately less than 10) of missing signal, which occurs in the case of short distortions such as clicks or clipping.
In this case, the goal of the reconstruction is to recover the lost information exactly.
In long inpainting instead, with gaps in the order of hundreds of milliseconds or even seconds, this goal becomes unrealistic, since restoration techniques cannot rely on local information.
Therefore, besides providing a coherent reconstruction, the algorithms need to generate new information that has to be semantically compatible with the surrounding context (i.e., the audio signal surrounding the gaps).
The case of medium duration gaps lies between short and long inpainting.
It refers to the reconstruction of tens of milliseconds of missing data, a scale where the non-stationary characteristics of audio already become important. |
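As a minimal illustration of the short-gap case, the Python sketch below fills a few missing milliseconds of a pure tone by linear interpolation; real systems use the statistical and deep learning methods described above, and the signal and gap here are illustrative.

```python
# Minimal sketch of short-gap audio inpainting by linear interpolation across
# the missing samples.
import numpy as np

sr = 8000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone
gap = slice(4000, 4020)                        # ~2.5 ms of lost samples

corrupted = signal.copy()
corrupted[gap] = np.nan
missing = np.isnan(corrupted)
corrupted[missing] = np.interp(np.flatnonzero(missing),
                               np.flatnonzero(~missing),
                               corrupted[~missing])
print("max reconstruction error:", np.abs(corrupted - signal).max())
```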
Automated machine learning | Automated machine learning (AutoML) is the process of automating the tasks of applying machine learning to real-world problems. It is the combination of automation and ML.
AutoML potentially includes every stage from beginning with a raw dataset to building a machine learning model ready for deployment. AutoML was proposed as an artificial intelligence-based solution to the growing challenge of applying machine learning. The high degree of automation in AutoML aims to allow non-experts to make use of machine learning models and techniques without requiring them to become experts in machine learning. Automating the process of applying machine learning end-to-end additionally offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models.
Common techniques used in AutoML include hyperparameter optimization, meta-learning and neural architecture search.
In a typical machine learning application, practitioners have a set of input data points to be used for training. The raw data may not be in a form that all algorithms can be applied to. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods. After these steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert.
Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively.
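As a small taste of one technique AutoML automates, hyperparameter optimization (mentioned above), the Python sketch below runs a plain scikit-learn grid search; the model family, grid, and dataset are illustrative.

```python
# Sketch of the hyperparameter optimization step that AutoML systems automate,
# here as a plain scikit-learn grid search over a random forest.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
grid = {"n_estimators": [10, 50, 100], "max_depth": [2, 4, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```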
AutoML plays an important role within the broader approach of automating data science, which also includes challenging tasks such as data engineering, data exploration and model interpretation and prediction.
Automated machine learning can target various stages of the machine learning process. Steps to automate are:
Data preparation and ingestion (from raw data and miscellaneous formats)
Column type detection; e.g., Boolean, discrete numerical, continuous numerical, or text
Column intent detection; e.g., target/label, stratification field, numerical feature, categorical text feature, or free text feature
Task detection; e.g., binary classification, regression, clustering, or ranking
Feature engineering
Feature selection
Feature extraction
Meta-learning and transfer learning
Detection and handling of skewed data and/or missing values
Model selection - choosing which machine learning algorithm to use, often including multiple competing software implementations
Ensembling - a form of consensus where using multiple models often gives better results than any single model
Hyperparameter optimization of the learning algorithm and featurization
Neural architecture search
Pipeline selection under time, memory, and complexity constraints
Selection of evaluation metrics and validation procedures
Problem checking
Leakage detection
Misconfiguration detection
Analysis of obtained results
Creating user interfaces and visualizations
There are a number of key challenges being tackled around automated machine learning. A big issue surrounding the field is referred to as "development as a cottage industry". This phrase refers to the issue in machine learning where development relies on manual decisions and the biases of experts. This is contrasted with the goal of machine learning, which is to create systems that can learn and improve from their own usage and analysis of the data. In essence, it is the tension between how much experts should be involved in the learning of the systems versus how much freedom they should give the machines. However, experts and developers must help create and guide these machines to prepare them for their own learning. Creating such a system requires labor-intensive work with knowledge of machine learning algorithms and system design.
Other open challenges include meta-learning and the allocation of computational resources.
Neural architecture search
Neuroevolution
Self-tuning
Neural Network Intelligence
AutoAI
ModelOps
Hyperparameter optimization
"Open Source AutoML Tools: AutoGluon, TransmogrifAI, Auto-sklearn, and NNI". Bizety. 2020-06-16.
Ferreira, Luís, et al. "A comparison of AutoML tools for machine learning, deep learning and XGBoost." 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. https://repositorium.sdum.uminho.pt/bitstream/1822/74125/1/automl_ijcnn.pdf
Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., & Hutter, F. (2015). Efficient and robust automated machine learning. Advances in neural information processing systems, 28. https://proceedings.neurips.cc/paper_files/paper/2015/file/11d0e6287202fced83f79975ec59a3a6-Paper.pdf |
Automation in construction | Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance. Some systems may be fielded as a direct response to increasing skilled labor shortages in some countries. Opponents claim that increased automation may lead to less construction jobs and that software leaves heavy equipment vulnerable to hackers.
Research insights on this subject are published in several journals, such as Automation in Construction by Elsevier. |
Bag-of-words model | The bag-of-words model (BoW) is a model of text which uses a representation of text that is based on an unordered collection (a "bag") of words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most of syntax or grammar) but captures multiplicity.
The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier. It has also been used for computer vision.
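As a minimal illustration (the two-document corpus is invented for the example), the following Python sketch builds bag-of-words count vectors by hand, showing that word order is discarded while multiplicity is kept:

```python
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat on the log"]

# Vocabulary: every distinct word across the corpus, in a fixed order.
vocab = sorted(set(word for doc in docs for word in doc.split()))

def bag_of_words(doc):
    counts = Counter(doc.split())            # multiplicity is kept
    return [counts[word] for word in vocab]  # word order is discarded

for doc in docs:
    print(bag_of_words(doc))
```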
An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure. |
Ball tree | In computer science, a ball tree, balltree or metric tree, is a space partitioning data structure for organizing points in a multi-dimensional space. A ball tree partitions data points into a nested set of balls. The resulting data structure has characteristics that make it useful for a number of applications, most notably nearest neighbor search. |
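For illustration, scikit-learn ships a BallTree implementation; this sketch (with randomly generated points) builds the tree and runs a 5-nearest-neighbor query:

```python
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3))   # 1000 points in 3-D space

tree = BallTree(points)          # builds the nested-ball partition
query = rng.random((1, 3))

# Distances and indices of the 5 nearest neighbors of the query point.
dist, idx = tree.query(query, k=5)
print(idx, dist)
```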
Base rate | In probability and statistics, the base rate (also known as prior probability) is the class of probabilities unconditional on "featural evidence" (likelihoods).
It is the proportion of individuals in a population who have a certain characteristic or trait. For example, if 1% of the population were medical professionals, and the remaining 99% were not, then the base rate of medical professionals would be 1%. The method for integrating base rates and featural evidence is given by Bayes' rule.
In the sciences, including medicine, the base rate is critical for comparison. In medicine, a treatment's effectiveness is clear only when the base rate is available. For example, if the control group, using no treatment at all, had its own base rate of 1/20 recoveries within 1 day and a treatment had a 1/100 base rate of recovery within 1 day, we see that the treatment actively decreases the recovery rate.
The base rate is an important concept in statistical inference, particularly in Bayesian statistics. In Bayesian analysis, the base rate is combined with the observed data to update our belief about the probability of the characteristic or trait of interest. The updated probability is known as the posterior probability and is denoted as P(A|B), where B represents the observed data. For example, suppose we are interested in estimating the prevalence of a disease in a population. The base rate would be the proportion of individuals in the population who have the disease. If we observe a positive test result for a particular individual, we can use Bayesian analysis to update our belief about the probability that the individual has the disease. The updated probability would be a combination of the base rate and the likelihood of the test result given the disease status.
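The disease-testing example can be made concrete with a short calculation; the base rate, sensitivity and false-positive rate below are invented for illustration:

```python
# Posterior probability of disease given a positive test,
# combining the base rate (prior) with the test's likelihoods.
base_rate = 0.01       # P(disease): 1% of the population
sensitivity = 0.95     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(round(posterior, 3))  # ~0.161: a low base rate keeps the posterior low
```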
The base rate is also important in decision-making, particularly in situations where the cost of false positives and false negatives are different. For example, in medical testing, a false negative (failing to diagnose a disease) could be much more costly than a false positive (incorrectly diagnosing a disease). In such cases, the base rate can help inform decisions about the appropriate threshold for a positive test result. |
Bayesian interpretation of kernel regularization | Within Bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature, but it is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be an inner product space, but instead a more general reproducing kernel Hilbert space. In Bayesian probability, kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the input space is usually a space of vectors while the output space is a space of scalars. More recently these methods have been extended to problems that deal with multiple outputs, such as in multi-task learning.
A mathematical equivalence between the regularization and the Bayesian point of view is easily proved in cases where the reproducing kernel Hilbert space is finite-dimensional. The infinite-dimensional case raises subtle mathematical issues; we will consider here the finite-dimensional case. We start with a brief review of the main ideas underlying kernel methods for scalar learning, and briefly introduce the concepts of regularization and Gaussian processes. We then show how both points of view arrive at essentially equivalent estimators, and show the connection that ties them together. |
Bayesian optimization | Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions. |
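As an illustrative sketch (not a production implementation), the following Python loop performs Bayesian optimization of a toy one-dimensional function: a Gaussian process surrogate is refitted after each evaluation, and the next point is chosen by maximizing the expected-improvement acquisition function over a fixed grid. The objective, kernel choice and budget are all arbitrary assumptions for the example.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    # Toy stand-in for an expensive black-box objective (to be minimized).
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, (3, 1))        # a few initial design points
y = f(X).ravel()

grid = np.linspace(0, 5, 500).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero at sampled points
    best = y.min()
    z = (best - mu) / sigma          # expected improvement for minimization
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best value:", y.min())
```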
Bayesian regret | In stochastic game theory, Bayesian regret is the expected difference ("regret") between the utility of a Bayesian strategy and that of the optimal strategy (the one with the highest expected payoff).
The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. |
Bayesian structural time series | Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data.
The model also has promising applications in analytical marketing. In particular, it can be used to assess how much different marketing campaigns have contributed to changes in web search volumes, product sales, brand popularity and other relevant indicators. Difference-in-differences models and interrupted time series designs are alternatives to this approach. "In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls." |
Bias–variance tradeoff | In statistics and machine learning, the bias–variance tradeoff describes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as the number of tunable parameters in a model increases, it becomes more flexible and can better fit a training data set; it is said to have lower error, or bias. However, for more flexible models there will tend to be greater variance in the model fit each time we take a set of samples to create a new training data set; it is said that there is greater variance in the model's estimated parameters.
The bias–variance dilemma or bias–variance problem is the conflict in trying to simultaneously minimize these two sources of error that prevent supervised learning algorithms from generalizing beyond their training set:
The bias error is an error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).
The variance is an error from sensitivity to small fluctuations in the training set. High variance may result from an algorithm modeling the random noise in the training data (overfitting).
The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the irreducible error, resulting from noise in the problem itself. |
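The decomposition can be demonstrated empirically; the sketch below (toy sine-plus-noise data, polynomial models of increasing degree, all settings invented for the example) estimates the squared bias and the variance of the prediction at one point by refitting on many simulated training sets:

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(x)
noise_sd = 0.3
x0 = 1.0  # point at which the expected error is decomposed

def fit_and_predict(degree, n=30):
    """Fit a polynomial on a fresh training set; predict at x0."""
    x = rng.uniform(0, np.pi, n)
    y = true_f(x) + rng.normal(0, noise_sd, n)
    return np.polyval(np.polyfit(x, y, degree), x0)

for degree in (1, 3, 7):
    preds = np.array([fit_and_predict(degree) for _ in range(2000)])
    bias2 = (preds.mean() - true_f(x0)) ** 2
    variance = preds.var()
    print(f"degree {degree}: bias^2={bias2:.4f}, variance={variance:.4f}")
```

Higher-degree (more flexible) models typically show smaller squared bias and larger variance, while the irreducible error stays at noise_sd squared regardless of the model.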
Binary classification | Binary classification is the task of classifying the elements of a set into one of two groups (each called a class). Typical binary classification problems include:
Medical testing to determine if a patient has a certain disease or not;
Quality control in industry, deciding whether a specification has been met;
In information retrieval, deciding whether a page should be in the result set of a search or not;
In administration, deciding whether someone should be issued with a driving licence or not;
In cognition, deciding whether an object is food or not food.
When measuring the accuracy of a binary classifier, the simplest way is to count the errors. But in the real world, often one of the two classes is more important, so that the numbers of the two different types of errors are of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative). |
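A minimal sketch of counting the two error types separately (labels and predictions invented for the example):

```python
# 1 = disease present, 0 = disease absent.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarm
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed case

print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
print(f"accuracy={(tp + tn) / len(y_true):.2f}")
```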
Bioserenity | BioSerenity is a medtech company created in 2014 that develops ambulatory medical devices to help diagnose and monitor patients with chronic diseases such as epilepsy. The medical devices are composed of medical sensors, smart clothing, a smartphone app for Patient Reported Outcome, and a web platform to perform data analysis through Medical Artificial Intelligence for detection of digital biomarkers. The company initially focused on Neurology, a domain in which it reported contributing to the diagnosis of 30 000 patients per year. It now also operates in Sleep Disorders and Cardiology. BioSerenity reported it provides pharmaceutical companies with solutions for companion diagnostics. |
Bradley–Terry model | The Bradley–Terry model is a probability model for the outcome of pairwise comparisons between items, teams, or objects. Given a pair of items i and j drawn from some population, it estimates the probability that the pairwise comparison i > j turns out true, as

$$\Pr(i > j) = \frac{p_i}{p_i + p_j},$$

where $p_i$ is a positive real-valued score assigned to individual i. The comparison i > j can be read as "i is preferred to j", "i ranks higher than j", or "i beats j", depending on the application.
For example, $p_i$ might represent the skill of a team in a sports tournament and $\Pr(i > j)$ the probability that i wins a game against j. Or $p_i$ might represent the quality or desirability of a commercial product and $\Pr(i > j)$ the probability that a consumer will prefer product i over product j.
The Bradley–Terry model can be used in the forward direction to predict outcomes, as described, but is more commonly used in reverse to infer the scores $p_i$ given an observed set of outcomes. In this type of application $p_i$ represents some measure of the strength or quality of $i$, and the model lets us estimate the strengths from a series of pairwise comparisons. In a survey of wine preferences, for instance, it might be difficult for respondents to give a complete ranking of a large set of wines, but relatively easy for them to compare sample pairs of wines and say which they feel is better. Based on a set of such pairwise comparisons, the Bradley–Terry model can then be used to derive a full ranking of the wines.
Once the values of the scores $p_i$ have been calculated, the model can then also be used in the forward direction, for instance to predict the likely outcome of comparisons that have not yet actually occurred. In the wine survey example, for instance, one could calculate the probability that someone will prefer wine $i$ over wine $j$, even if no one in the survey directly compared that particular pair. |
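The scores $p_i$ can be estimated from observed comparison counts with the classical minorization-maximization (Zermelo) iteration; the 3-item win matrix below is invented, so the resulting numbers are purely illustrative:

```python
import numpy as np

# wins[i][j] = number of times item i beat item j (invented toy data).
wins = np.array([[0., 7., 9.],
                 [3., 0., 6.],
                 [1., 4., 0.]])

n = wins + wins.T        # total comparisons between each pair
W = wins.sum(axis=1)     # total wins per item
p = np.ones(3)           # initial scores

# Minorization-maximization updates for the maximum likelihood estimate.
for _ in range(100):
    denom = n / (p[:, None] + p[None, :])
    np.fill_diagonal(denom, 0.0)
    p = W / denom.sum(axis=1)
    p /= p.sum()         # scores are only identified up to a scale factor

print(p)                 # estimated strengths; Pr(i beats j) = p[i] / (p[i] + p[j])
```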
Category utility | Category utility is a measure of "category goodness" defined in Gluck & Corter (1985) and Corter & Gluck (1992). It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" (Reed 1972; Rosch & Mervis 1975) and "collocation index" (Jones 1983). It provides a normative information-theoretic measure of the predictive advantage gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does not possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, is provided in Witten & Frank (2005, pp. 260–262). |
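For reference, a commonly quoted probabilistic form of the measure is sketched below, following the Corter & Gluck style of presentation; the 1/K normalization varies between presentations, so treat the exact form here as one variant rather than the definitive statement:

```latex
CU(C) = \frac{1}{K} \sum_{k=1}^{K} P(C_k)
        \left[ \sum_i \sum_j P(a_i = v_{ij} \mid C_k)^2
             - \sum_i \sum_j P(a_i = v_{ij})^2 \right]
```

Here $C = \{C_1, \ldots, C_K\}$ is the set of categories, and the inner sums run over attributes $a_i$ and their possible values $v_{ij}$; the bracketed difference is the expected gain in correctly guessable attribute values given knowledge of the category.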
CIML community portal | The computational intelligence and machine learning (CIML) community portal is an international multi-university initiative. Its primary purpose is to help facilitate a virtual scientific community infrastructure for all those involved with, or interested in, computational intelligence and machine learning. This includes CIML research-, education-, and application-oriented resources residing at the portal and others that are linked from the CIML site. |
Claude (language model) | Claude is a family of large language models developed by Anthropic. The first model was released in March 2023. Claude 3, released in March 2024, can also analyze images. |
Model-based clustering | In statistics, cluster analysis is the algorithmic grouping of objects into homogeneous groups based on numerical measurements. Model-based clustering bases this on a statistical model for the data, usually a mixture model. This has several advantages, including a principled statistical basis for clustering, and ways to choose the number of clusters, to choose the best clustering model, to assess the uncertainty of the clustering, and to identify outliers that do not belong to any group.
Suppose that for each of $n$ observations we have data on $d$ variables, denoted by $y_i = (y_{i,1}, \ldots, y_{i,d})$ for observation $i$. Then model-based clustering expresses the probability density function of $y_i$ as a finite mixture, or weighted average of $G$ component probability density functions:

$$p(y_i) = \sum_{g=1}^{G} \tau_g f_g(y_i \mid \theta_g),$$

where $f_g$ is a probability density function with parameter $\theta_g$, and $\tau_g$ is the corresponding mixture probability, with $\sum_{g=1}^{G} \tau_g = 1$.
Then, in its simplest form, model-based clustering views each component of the mixture model as a cluster, estimates the model parameters, and assigns each observation to the cluster corresponding to its most likely mixture component.
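A minimal sketch of this procedure using scikit-learn's GaussianMixture (the two-cluster synthetic data are invented for the example; the mclust package in R is the more common choice in this literature):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic Gaussian clusters in two dimensions.
data = np.vstack([rng.normal(0, 1, (100, 2)),
                  rng.normal(4, 1, (100, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
labels = gmm.predict(data)        # hard assignment: most likely component
probs = gmm.predict_proba(data)   # soft assignments quantify uncertainty
print(gmm.weights_, gmm.means_)
```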
The most common model for continuous data is that $f_g$ is a multivariate normal distribution with mean vector $\mu_g$ and covariance matrix $\Sigma_g$, so that $\theta_g = (\mu_g, \Sigma_g)$.
This defines a Gaussian mixture model. The parameters of the model, $\tau_g$ and $\theta_g$ for $g = 1, \ldots, G$, are typically estimated by maximum likelihood estimation using the expectation-maximization (EM) algorithm; see also EM algorithm and GMM model.
Bayesian inference is also often used for inference about finite mixture models. The Bayesian approach also allows for the case where the number of components, $G$, is infinite, using a Dirichlet process prior, yielding a Dirichlet process mixture model for clustering.
An advantage of model-based clustering is that it provides statistically principled ways to choose the number of clusters. Each different choice of the number of groups $G$ corresponds to a different mixture model, and standard statistical model selection criteria such as the Bayesian information criterion (BIC) can then be used to choose $G$. The integrated completed likelihood (ICL) is a different criterion designed to choose the number of clusters rather than the number of mixture components in the model; these will often be different if highly non-Gaussian clusters are present.
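As a sketch of this model-selection step (synthetic three-group data invented for the example; note that scikit-learn's BIC is defined so that lower values are better, the opposite sign convention from the mclust literature):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
data = np.vstack([rng.normal((0, 0), 1, (150, 2)),
                  rng.normal((5, 0), 1, (150, 2)),
                  rng.normal((0, 6), 1, (150, 2))])  # three true groups

# Fit mixtures with G = 1..6 components and keep the G minimizing BIC.
bics = [GaussianMixture(n_components=g, random_state=0).fit(data).bic(data)
        for g in range(1, 7)]
print("chosen G:", 1 + int(np.argmin(bics)))
```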
For data of high dimension, $d$, using a full covariance matrix for each mixture component requires estimation of many parameters, which can result in a loss of precision, generalizability and interpretability. Thus it is common to use more parsimonious component covariance matrices that exploit their geometric interpretation. Gaussian clusters are ellipsoidal, with their volume, shape and orientation determined by the covariance matrix. Consider the eigendecomposition of a matrix

$$\Sigma_g = \lambda_g D_g A_g D_g^T,$$
where $D_g$ is the matrix of eigenvectors of $\Sigma_g$, $A_g = \mathrm{diag}\{A_{1,g}, \ldots, A_{d,g}\}$ is a diagonal matrix whose elements are proportional to the eigenvalues of $\Sigma_g$ in descending order, and $\lambda_g$ is the associated constant of proportionality. Then $\lambda_g$ controls the volume of the ellipsoid, $A_g$ its shape, and $D_g$ its orientation.
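The decomposition is easy to compute directly; the sketch below uses an invented 2x2 covariance matrix and the common normalization $\det(A_g) = 1$, so that $\lambda_g = |\Sigma_g|^{1/d}$:

```python
import numpy as np

# Invented component covariance matrix for illustration.
Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])
d = Sigma.shape[0]

eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]        # descending eigenvalues
eigvals, D = eigvals[order], eigvecs[:, order]

lam = np.prod(eigvals) ** (1 / d)        # volume factor: |Sigma|^(1/d)
A = np.diag(eigvals / lam)               # shape matrix, normalized so det(A) = 1
print("volume:", lam, "shape:", np.diag(A))
print("reconstructs Sigma:", np.allclose(Sigma, lam * D @ A @ D.T))
```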
Each of the volume, shape and orientation of the clusters can be constrained to be equal (E) or allowed to vary (V); the orientation can also be spherical, with identical eigenvalues (I). This yields 14 possible clustering models, identified by three-letter codes giving the volume, shape and orientation constraints in turn: EII, VII, EEI, VEI, EVI, VVI, EEE, VEE, EVE, VVE, EEV, VEV, EVV and VVV.
It can be seen that many of these models are more parsimonious, with far fewer parameters than the unconstrained model, which has 90 parameters when $G = 4$ and $d = 9$.
Several of these models correspond to well-known heuristic clustering methods. For example, k-means clustering is equivalent to estimation of the EII clustering model using the classification EM algorithm. The Bayesian information criterion (BIC) can be used to choose the best clustering model as well as the number of clusters. It can also be used as the basis for a method to choose the variables in the clustering model, eliminating variables that are not useful for clustering.
Different Gaussian model-based clustering methods have been developed with an eye to handling high-dimensional data. These include the pgmm method, which is based on the mixture of factor analyzers model, and the HDclassif method, based on the idea of subspace clustering. The mixture-of-experts framework extends model-based clustering to include covariates.
We illustrate the method with a dataset consisting of three measurements (glucose, insulin, sspg) on 145 subjects for the purpose of diagnosing diabetes and the type of diabetes present. The subjects were clinically classified into three groups: normal, chemical diabetes and overt diabetes, but we use this information only for evaluating clustering methods, not for classifying subjects.
The BIC plot shows the BIC values for each combination of the number of clusters, $G$, and the clustering model from the Table. Each curve corresponds to a different clustering model. The BIC favors 3 groups, which corresponds to the clinical assessment. It also favors the unconstrained covariance model, VVV. This fits the data well, because the normal patients have low values of both sspg and insulin, while the distributions of the chemical and overt diabetes groups are elongated, but in different directions. Thus the volumes, shapes and orientations of the three groups are clearly different, and so the unconstrained model is appropriate, as selected by the model-based clustering method.
The classification plot shows the classification of the subjects by model-based clustering. The classification was quite accurate, with a 12% error rate as defined by the clinical classification. Other well-known clustering methods performed worse, with higher error rates: single-linkage clustering with 46%, average-link clustering with 30%, complete-linkage clustering also with 30%, and k-means clustering with 28%.
An outlier in clustering is a data point that does not belong to any of the clusters. One way of modeling outliers in model-based clustering is to include an additional mixture component that is very dispersed, with for example a uniform distribution. Another approach is to replace the multivariate normal densities by $t$-distributions, with the idea that the long tails of the $t$-distribution would ensure robustness to outliers. However, this is not breakdown-robust. A third approach is the "tclust" or data trimming approach, which excludes observations identified as outliers when estimating the model parameters.
Sometimes one or more clusters deviate strongly from the Gaussian assumption. If a Gaussian mixture is fitted to such data, a strongly non-Gaussian cluster will often be represented by several mixture components rather than a single one. In that case, cluster merging can be used to find a better clustering. A different approach is to use mixtures of complex component densities to represent non-Gaussian clusters.
Clustering multivariate categorical data is most often done using the latent class model. This assumes that the data arise from a finite mixture model, where within each cluster the variables are independent.
Mixed data arise when variables are of different types, such as continuous, categorical or ordinal data. A latent class model for mixed data assumes local independence between the variables. The location model relaxes the local independence assumption. The clustMD approach assumes that the observed variables are manifestations of underlying continuous Gaussian latent variables.
The simplest model-based clustering approach for multivariate count data is based on finite mixtures with locally independent Poisson distributions, similar to the latent class model. More realistic approaches allow for dependence and overdispersion in the counts. These include methods based on the multivariate Poisson distribution, the multivariate Poisson-log normal distribution, the integer-valued autoregressive (INAR) model and the Gaussian Cox model.
Sequence data consist of sequences of categorical values from a finite set of possibilities, such as life course trajectories. Model-based clustering approaches include group-based trajectory and growth mixture models and a distance-based mixture model.
Rank data arise when individuals rank objects in order of preference. The data are then ordered lists of objects, arising in voting, education, marketing and other areas. Model-based clustering methods for rank data include mixtures of Plackett-Luce models, mixtures of Benter models, and mixtures of Mallows models.
Network data consist of the presence, absence or strength of connections between individuals or nodes, and are widespread in the social sciences and biology. The stochastic blockmodel carries out model-based clustering of the nodes in a network by assuming that there is a latent clustering and that connections are formed independently given the clustering. The latent position cluster model assumes that each node occupies a position in an unobserved latent space, that these positions arise from a mixture of Gaussian distributions, and that presence or absence of a connection is associated with distance in the latent space.
Much of the model-based clustering software is in the form of publicly and freely available R packages. Many of these are listed in the CRAN Task View on Cluster Analysis and Finite Mixture Models. The most used such package is mclust, which is used to cluster continuous data and has been downloaded over 8 million times. The poLCA package clusters categorical data using the latent class model. The clustMD package clusters mixed data, including continuous, binary, ordinal and nominal variables. The flexmix package does model-based clustering for a range of component distributions. The mixtools package can cluster different data types. Both flexmix and mixtools implement model-based clustering with covariates.
Model-based clustering was first invented in 1950 by Paul Lazarsfeld for clustering multivariate discrete data, in the form of the latent class model. In 1959, Lazarsfeld gave a lecture on latent structure analysis at the University of California-Berkeley, where John H. Wolfe was an M.A. student. This led Wolfe to think about how to do the same thing for continuous data, and in 1965 he did so, proposing the Gaussian mixture model for clustering. He also produced the first software for estimating it, called NORMIX. Day (1969), working independently, was the first to publish a journal article on the approach. However, Wolfe deserves credit as the inventor of model-based clustering for continuous data.
Murtagh and Raftery (1984) developed a model-based clustering method based on the eigenvalue decomposition of the component covariance matrices. McLachlan and Basford (1988) was the first book on the approach, advancing methodology and sparking interest. Banfield and Raftery (1993) coined the term "model-based clustering", introduced the family of parsimonious models, described an information criterion for choosing the number of clusters, proposed the uniform model for outliers, and introduced the mclust software. Celeux and Govaert (1995) showed how to perform maximum likelihood estimation for the models. Thus, by 1995 the core components of the methodology were in place, laying the groundwork for extensive development since then.
Scrucca, L.; Fraley, C.; Murphy, T.B.; Raftery, A.E. (2023). Model-Based Clustering, Classification and Density Estimation using mclust in R. Chapman and Hall/CRC Press. ISBN 9781032234953.
Bouveyron, C.; Celeux, G.; Murphy, T.B.; Raftery, A.E. (2019). Model-Based Clustering and Classification for Data Science: With Applications in R. Cambridge University Press. ISBN 9781108494205.
Free download: https://math.univ-cotedazur.fr/~cbouveyr/MBCbook/
Celeux, G; Fruhwirth-Schnatter, S.; Robert, C.P. (2018). Handbook of Mixture Analysis. Chapman and Hall/CRC Press. ISBN 9780367732066.
McNicholas, P.D. (2016). Mixture Model-Based Clustering. Chapman and Hall/CRC Press. ISBN 9780367736958.
Hennig, C.; Meila, M.; Murtagh, F.; Rocci, R. (2015). Handbook of Cluster Analysis. Chapman and Hall/CRC Press. ISBN 9781466551886.
Mengersen, K.L.; Robert, C.P.; Titterington, D.M. (2011). Mixtures: Estimation and Applications. Wiley. ISBN 9781119993896.
McLachlan, G.J.; Peel, D. (2000). Finite Mixture Models. Wiley-Interscience. ISBN 9780471006268. |
Cognitive robotics | Cognitive Robotics or Cognitive Technology is a subfield of robotics concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition, consisting of Robotic Process Automation, Artificial Intelligence, Machine Learning, Deep Learning, Optical Character Recognition, Image Processing, Process Mining, Analytics, Software Development and System Integration. |
Concept drift | In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in the fields that involve dynamically changing data and data models. |
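A deliberately simple sliding-window sketch of drift detection follows (all names and thresholds are invented for illustration; dedicated detectors such as DDM or ADWIN are typically used in practice):

```python
from collections import deque

class DriftMonitor:
    """Flag drift when recent accuracy drops well below a reference level."""

    def __init__(self, window=200, reference=0.90, tolerance=0.10):
        self.window = deque(maxlen=window)
        self.reference = reference
        self.tolerance = tolerance

    def update(self, correct: bool) -> bool:
        self.window.append(correct)
        if len(self.window) < self.window.maxlen:
            return False          # not enough evidence yet
        accuracy = sum(self.window) / len(self.window)
        return accuracy < self.reference - self.tolerance

# Usage: feed each prediction outcome as it arrives.
monitor = DriftMonitor()
for correct in [True] * 250 + [False] * 150:
    if monitor.update(correct):
        print("drift suspected")
        break
```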