sophcheng committed
Commit b5abc54 · verified · 1 Parent(s): a2e5eee

Update output_topic_details.txt

Files changed (1)
  1. output_topic_details.txt +8 -8
output_topic_details.txt CHANGED
@@ -32,11 +32,11 @@ Description:An adaptive neuro-fuzzy inference system or adaptive network-based f
 
 Topic: Affective Computing
 
- Description: Area of research in computer science aiming to understand the emotional state of usersAffective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While some core ideas in the field may be traced as far back as to early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing and her book Affective Computing published by MIT Press. One of the motivations for the research is the ability to give machines emotional intelligence, including to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.
+ Description: Area of research in computer science aiming to understand the emotional state of users. Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While some core ideas in the field may be traced as far back as to early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing and her book Affective Computing published by MIT Press. One of the motivations for the research is the ability to give machines emotional intelligence, including to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.
 
 Topic: AI-Complete
 
- Description: Term describing difficult problems in AIIn the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and for computer security to circumvent brute-force attacks.
+ Description: Term describing difficult problems in AI. In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and for computer security to circumvent brute-force attacks.
 
 Topic: Algorithmic Probability
 
@@ -52,15 +52,15 @@ Description: Ambient intelligence AmI is a term used in computing to refer to el
 
 Topic: Natural Language Processing
 
- Description: Field of linguistics and computer scienceFor other uses, see NLP.This is natural language processing done by computers. For the natural language processing done by the human brain, see Language processing in the brain.Natural language processing NLP is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic i.e. statistical and, most recently, neural network-based machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.Challenges in natural language processing frequently involve speech recognition, natural-language understanding, and natural-language generation.
+ Description: Field of linguistics and computer science. For other uses, see NLP.This is natural language processing done by computers. For the natural language processing done by the human brain, see Language processing in the brain.Natural language processing NLP is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic i.e. statistical and, most recently, neural network-based machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.Challenges in natural language processing frequently involve speech recognition, natural-language understanding, and natural-language generation.
 
 Topic: NLP
 
- Description: Field of linguistics and computer scienceFor other uses, see NLP.This is natural language processing done by computers. For the natural language processing done by the human brain, see Language processing in the brain.Natural language processing NLP is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic i.e. statistical and, most recently, neural network-based machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.Challenges in natural language processing frequently involve speech recognition, natural-language understanding, and natural-language generation.
+ Description: Field of linguistics and computer science. For other uses, see NLP.This is natural language processing done by computers. For the natural language processing done by the human brain, see Language processing in the brain.Natural language processing NLP is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic i.e. statistical and, most recently, neural network-based machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.Challenges in natural language processing frequently involve speech recognition, natural-language understanding, and natural-language generation.
 
 Topic: Recurrent Neural Network
 
- Description: Computational model used in machine learningNot to be confused with recursive neural network.A recurrent neural network RNN is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. In contrast to the uni-directional feedforward neural network, it is a bi-directional artificial neural network, meaning that it allows the output from some nodes to affect subsequent input to the same nodes. Their ability to use internal state memory to process arbitrary sequences of inputs makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. The term "recurrent neural network" is used to refer to the class of networks with an infinite impulse response, whereas "convolutional neural network" refers to the class of finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that can not be unrolled.Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. The storage can also be replaced by another network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated state or gated memory, and are part of long short-term memory networks LSTMs and gated recurrent units. This is also called Feedforward Neural Network FNN. Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.
+ Description: Computational model used in machine learning. Not to be confused with recursive neural network.A recurrent neural network RNN is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. In contrast to the uni-directional feedforward neural network, it is a bi-directional artificial neural network, meaning that it allows the output from some nodes to affect subsequent input to the same nodes. Their ability to use internal state memory to process arbitrary sequences of inputs makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. The term "recurrent neural network" is used to refer to the class of networks with an infinite impulse response, whereas "convolutional neural network" refers to the class of finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that can not be unrolled.Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. The storage can also be replaced by another network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated state or gated memory, and are part of long short-term memory networks LSTMs and gated recurrent units. This is also called Feedforward Neural Network FNN. Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.
 
 Topic: Python
 
@@ -68,11 +68,11 @@ Description: Python is a high-level, general-purpose programming language. Its d
 
 Topic: Convolutional Neural Network
 
- Description: Artificial neural networkFor other uses, see CNN disambiguation.Convolutional neural network CNN is a regularized type of feed-forward neural network that learns feature engineering by itself via filters or kernel optimization. Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, for each neuron in the fully-connected layer 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascaded convolution or cross-correlation kernels, only 25 neurons are required to process 5x5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.They have applications in: image and video recognition,recommender systems,image classification,image segmentation,medical image analysis,natural language processing,brain–computer interfaces, andfinancial time series.CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks SIANN, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input.Feed-forward neural networks are usually fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks make them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training such as weight decay or trimming connectivity skipped connections, dropout, etc. Robust datasets also increases the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set.Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters or kernels through automated learning, whereas in traditional algorithms these filters are hand-engineered. This independence from prior knowledge and human intervention in feature extraction is a major advantage.
+ Description: Artificial neural networkFor other uses, see CNN disambiguation. Convolutional neural network CNN is a regularized type of feed-forward neural network that learns feature engineering by itself via filters or kernel optimization. Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, for each neuron in the fully-connected layer 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascaded convolution or cross-correlation kernels, only 25 neurons are required to process 5x5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.They have applications in: image and video recognition,recommender systems,image classification,image segmentation,medical image analysis,natural language processing,brain–computer interfaces, andfinancial time series.CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks SIANN, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input.Feed-forward neural networks are usually fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks make them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training such as weight decay or trimming connectivity skipped connections, dropout, etc. Robust datasets also increases the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set.Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters or kernels through automated learning, whereas in traditional algorithms these filters are hand-engineered. This independence from prior knowledge and human intervention in feature extraction is a major advantage.
 
 Topic: Computer Science
 
- Description: Fundamental areas of computer scienceProgramming language theoryComputational complexity theoryArtificial intelligenceComputer architectureComputer science is the study of computation, information, and automation. Computer science spans theoretical disciplines such as algorithms, theory of computation, and information theory to applied disciplines including the design and implementation of hardware and software. Though more often considered an academic discipline, computer science is closely related to computer programming.Algorithms and data structures are central to computer science.The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
+ Description: Fundamental areas of computer science. Programming language theoryComputational complexity theoryArtificial intelligenceComputer architectureComputer science is the study of computation, information, and automation. Computer science spans theoretical disciplines such as algorithms, theory of computation, and information theory to applied disciplines including the design and implementation of hardware and software. Though more often considered an academic discipline, computer science is closely related to computer programming.Algorithms and data structures are central to computer science.The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
 
 Topic: Chatbot
 
@@ -92,7 +92,7 @@ Description: Software is a set of computer programs and associated documentation
 
 Topic : Bias
 
- Description: Bias in AI is when Data is given to models in a way that underepresents certain groups and can affect the performance of the model and increase discrimination.
+ Description: Bias in AI is when data is given to models in a way that underepresents certain groups and can affect the performance of the model and increase discrimination.
 
 Topic: Ethical Concerns
 
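The recurrent neural network description in the diff above explains that an RNN feeds the output of some nodes back in as input to the same nodes, giving the network an internal state memory over arbitrary-length input sequences. A minimal illustrative sketch of that feedback step, using NumPy, with toy dimensions chosen here purely for illustration (not taken from the file above):

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4                     # toy sizes, for illustration only
W_xh = 0.1 * rng.standard_normal((hidden_size, input_size))   # input-to-hidden weights
W_hh = 0.1 * rng.standard_normal((hidden_size, hidden_size))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The new hidden state depends on the current input AND the previous hidden
    # state; this feedback is the "recurrence" the description refers to.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)                          # internal state memory
for x_t in rng.standard_normal((5, input_size)):   # an arbitrary-length input sequence
    h = rnn_step(x_t, h)
print(h)                                           # final hidden state after the sequence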
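Similarly, the convolutional neural network description gives a concrete weight count: a single fully connected neuron reading a 100 × 100 pixel image needs 10,000 weights, whereas a shared 5x5 convolution kernel needs only 25. A small sketch of that arithmetic, with the numbers taken directly from the description:

image_h, image_w = 100, 100                      # image size from the description
kernel_h, kernel_w = 5, 5                        # convolution kernel size from the description

weights_per_fc_neuron = image_h * image_w        # one weight per input pixel -> 10000
weights_per_conv_kernel = kernel_h * kernel_w    # weights shared across all image positions -> 25

print(weights_per_fc_neuron, weights_per_conv_kernel)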