Terms

From aHuman Wiki
Latest revision as of 19:10, 28 November 2018

Artificial Intelligence Nouns



Common Entities

  • action
      1. continuous action
  • action selection strategy
      1. confidence-based exploration (Thrun, 1999)
      2. directed exploration
      3. epsilon-greedy selection
      4. error-based directed exploration
      5. frequency-based directed exploration
      6. optimism in the face of uncertainty
      7. recency-based directed exploration (Sutton, 1990)
      8. tabu search (Abramson and Wechsler, 2003)
  • activation function
      1. hyperbolic tangent activation function
      2. linear activation function
      3. logistic function
      4. monotonic activation function
      5. normal sigmoid function
      6. periodic activation function
      7. sigmoid function
      8. symmetric sigmoid function
      9. symmetric sine activation function
      10. threshold activation function
  • agent
      1. autonomous agent
  • artificial intelligence
  • back-propagation drawbacks
      1. local minima problem
      2. moving target problem
      3. step-size problem
  • belief nets
      1. directed belief nets
      2. sigmoid belief nets
  • binary codes
  • cause
  • cascade correlation architecture (Fahlman and Lebiere, 1990)
  • conditional random fields
  • connection
      1. autoregressive connections
      2. input connections
      3. lateral connection
      4. output connections
      5. short-cut connections
      6. symmetric connections
      7. temporal connections
      8. trainable connections
  • containment function
  • damping
  • dataset
      1. labeled data
      2. noise-free data
      3. sample
      4. sequential data
      5. test set
      6. training example
      7. training patterns
      8. training dataset
      9. unbiased example
      10. unlabeled data
      11. validation dataset
  • dimensionality reduction
      1. non-linear dimensionality reduction
  • discount rate
  • directed model
  • distributed representations
  • domain-specific kernel
  • dynamic programming
  • eligibility traces
      1. replacing eligibility traces
  • energy of joint configuration
  • environment
      1. stationary environment
  • epoch
  • error value
      1. mean square error (MSE)
  • experience value
      1. discounted future experience
      2. immediate experience value
  • experience value function
  • factorial distribution
  • feature
  • generative model
  • generalization
  • goal state
  • gradient
  • greedy strategy
  • inference
  • layer
      1. input layer
      2. hidden layer
      3. layer of features
      4. output layer
  • learning rate
  • likelihood
  • local optima (for neural network)
  • log likelihood
  • log probability
  • misclassification rate
  • neural networks
      1. artificial neural network (ANN)
      2. cascading neural networks
      3. convolutional multilayer neural networks
      4. counterpropagation network
      5. deep neural networks
      6. feedforward networks
      7. fully connected neural network
      8. functional-link neural networks
      9. general regression neural network
      10. higher order networks
      11. multilayer feedforward artificial neural networks
      12. multilayer neural networks
      13. probabilistic neural network
      14. real-time recurrent learning networks
      15. recurrent backpropagation networks
      16. recurrent neural networks
  • neuron
      1. bias neuron
      2. binary neurons
      3. candidate neuron
      4. hidden neuron
      5. mean-field logistic unit
      6. output neuron
  • node (in the network)
      1. leaf node (in the network)
      2. unit
  • noise (in the data)
  • objective function
  • online inference
  • output
      1. actual output
      2. desired output
  • over-fitting
  • partial derivative
  • policy
      1. deterministic policy function
      2. optimal policy
      3. optimal deterministic policy
      4. stochastic policy function
  • posterior distribution
      1. aggregated posterior distribution
  • precision-recall curves
  • prior
      1. complementary prior
  • probability
  • probability density models
  • profit function
  • reward
      1. cumulative reward
      2. discounted future reward
      3. future reward
      4. immediate reward
      5. long-term reward
      6. short-term reward
  • reward value function
  • root mean squared error
  • second order statistics
  • selective attention approach
  • sensory input
  • shallow models
  • slackness of the bound
  • sloppy top-down specification
  • softmax function
  • state
      1. after-state
      2. continuous state
  • state-action space
  • stop function
  • structure (in the data)
  • training curve
  • value function
      1. action-value function
      2. state-value function
  • variable (for neural network)
      1. circular variables
      2. stochastic variable
  • weights
      1. frozen weights
      2. initial weights
      3. lateral weight
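
Several of the terms above (action selection strategy, epsilon-greedy selection, action-value function) fit together in one small sketch. The following is an illustrative example only, assuming a list-based table of action values; the function name `epsilon_greedy` is not from the source.

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Epsilon-greedy action selection: with probability epsilon pick a
    uniformly random action index (exploration), otherwise pick the index
    of the highest estimated action value (exploitation)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the selection is purely greedy.
print(epsilon_greedy([0.1, 0.9, 0.4], epsilon=0.0))  # -> 1
```

Lowering epsilon over training shifts the agent from exploration toward exploitation of its learned value function.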

Named Entities

  • Adaline
  • ARTMAP Neural Networks
      1. Fuzzy ARTMAP
      2. Gaussian ARTMAP
  • Bellman Optimality Equation (Sutton and Barto, 1998)
  • Bernoulli Variables
  • Bidirectional Associative Memory (BAM)
  • Boltzmann Machine
      1. Conditional RBM model
      2. Restricted Boltzmann Machine (RBM)
      3. Semi-restricted Boltzmann Machines
      4. Temporal RBM
  • Boltzmann-Gibbs Selection
  • Deep Belief Nets
      1. Deep Autoencoders
  • Dynamic Bayes Nets
  • Elman Neural Networks
  • Finite Impulse Response (FIR) filter
  • Gaussian Processes
  • Gaussian Unit
  • Hebbian Theory
  • Hidden Markov Models (HMM)
  • Hopfield Net
  • Jordan Neural Network
  • Long Short-Term Memory (LSTM) Recurrent Network
  • Markov Decision Process (MDP)
  • Markov Environment
  • Markov Property
  • Markov State
  • Max-Boltzmann Selection
  • MNIST Test Set
  • MRF
      1. MRF-MBNN
  • Neocognitron
  • Perceptron
  • RBF Networks
  • Support Vector Machine (SVM)
  • T-step policy
  • T-step return
  • TF-IDF
  • Threshold Logic Units (TLU) Network
  • Time Delay Neural Network (TDNN)
  • UNI-SNE method
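
Boltzmann-Gibbs Selection above refers to softmax action selection over a value function. A minimal sketch, assuming list-based action values; the function name `boltzmann_gibbs_probs` and the temperature parameter default are illustrative, not from the source.

```python
import math

def boltzmann_gibbs_probs(q_values, temperature=1.0):
    """Boltzmann-Gibbs (softmax) selection probabilities: each action a
    gets probability proportional to exp(Q(a) / temperature).  A high
    temperature yields a near-uniform (exploratory) distribution; a low
    temperature yields a near-greedy one."""
    # Subtract the maximum value before exponentiating for numerical stability.
    m = max(q_values)
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]
```

Sampling an action from these probabilities gives a stochastic policy that still favors higher-valued actions, unlike the hard cutoff of epsilon-greedy selection.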