- Perceptron Discovery History
- Early view: The dominant model of cognition was associationism
- Later view: The brain is connectionist
- Alexander Bain 1873: Information is stored in the connections
- Neural Groups: Neurons excite and stimulate each other
- Making Memories: Predicts Hebbian learning
- David Ferrier 1876: The Functions of the Brain
- McCulloch and Pitts 1943: Modeled neurons as Boolean threshold units
- Excitatory synapse: stimulates the neuron
- Inhibitory synapse: suppresses it
- These two synapse types suffice to compose any Boolean function (see the sketch below)
- Problem: No learning rule
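A minimal sketch of the idea in Python (the gate names and thresholds are my own illustration, not from the 1943 paper):

```python
# A McCulloch-Pitts unit fires (outputs 1) when the weighted sum of its
# Boolean inputs reaches a threshold. Excitatory synapses get weight +1,
# inhibitory synapses weight -1.
def mp_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Basic gates as threshold units:
def AND(a, b): return mp_unit([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_unit([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_unit([a], [-1], threshold=0)

# Composing them yields any Boolean function, e.g. NAND:
def NAND(a, b): return NOT(AND(a, b))

assert [NAND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
```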
- Donald Hebb 1949: Neurons that fire together wire together
- Model: w ← w + η·x·y (strengthen a connection whenever its input and output are active together)
- Problem: Fundamentally unstable; weights can only grow, never shrink (see the sketch below)
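A quick sketch of the instability (the learning rate, input pattern, and iteration count are illustrative):

```python
import numpy as np

# Plain Hebbian rule on a linear neuron: w <- w + eta * x * y.
# Nothing ever shrinks the weights, so repeatedly presenting the same
# pattern makes the weight norm grow without bound.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)
x = np.array([1.0, 0.5, -0.5])      # a fixed input pattern
eta = 0.1

for step in range(1, 101):
    y = w @ x                       # neuron output
    w = w + eta * x * y             # Hebbian update: fire together, wire together
    if step % 25 == 0:
        print(step, np.linalg.norm(w))   # norm explodes: fundamentally unstable
```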
- Frank Rosenblatt 1958: Perceptron
- Model: output 1 if the weighted sum of inputs reaches a threshold (Σ w·x ≥ θ), else 0
- Learning Algorithm: w ← w + η·(d − y)·x (converges for linearly separable classes; see the sketch below)
- Problem: Not Universal
- Problem: Real-world inputs are not Boolean
- Model: extended to real-valued inputs and weights
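A minimal sketch of the rule (the toy AND dataset, 0/1 step activation, and learning rate are all chosen for illustration):

```python
import numpy as np

# Rosenblatt's update: w <- w + eta * (d - y) * x, where d is the desired
# 0/1 label and y the current prediction. Guaranteed to converge when the
# classes are linearly separable.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([0, 0, 0, 1])                 # Boolean AND: linearly separable
Xb = np.hstack([X, np.ones((4, 1))])       # append a constant bias input

w = np.zeros(3)
eta = 0.5
for epoch in range(20):
    errors = 0
    for x, target in zip(Xb, d):
        y = 1 if w @ x >= 0 else 0
        if y != target:
            w += eta * (target - y) * x    # update only on mistakes
            errors += 1
    if errors == 0:                        # a clean pass: converged
        break

print(w, [1 if w @ x >= 0 else 0 for x in Xb])   # predictions match d
```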
- Minsky and Papert 1969: A single perceptron has no solution for XOR
- Multi-layer perceptrons can do XOR; they can represent any Boolean function (see the sketch below)
- Terminology: Hidden layer.
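A hand-wired sketch (weights and thresholds chosen by hand, not learned) of a two-layer perceptron computing XOR:

```python
# XOR(a, b) = OR(a, b) AND NOT(AND(a, b)): each gate is one perceptron,
# so one hidden layer of two units plus an output unit suffices.
def step(z):
    return 1 if z >= 0 else 0

def xor(a, b):
    h1 = step(a + b - 0.5)      # hidden unit 1: OR(a, b)
    h2 = step(a + b - 1.5)      # hidden unit 2: AND(a, b)
    return step(h1 - h2 - 0.5)  # output unit: h1 AND NOT h2

assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```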
- Recall Alexander Bain 1873: the information is in the connections
- Connectionist Machines
- Von Neumann/Princeton machine: processing unit and memory are separate
- Connectionist machines: the connections are both the processing unit and the memory
- Current neural networks are connectionist machines
- The Perceptron over Reals
- A perceptron operates on real-valued vectors; it is a linear classifier.
- XOR admits no linear classifier, so a single perceptron cannot compute XOR.
- Connecting perceptrons together can separate polygonal regions (see the sketch below).
- One interpretation of neural networks: compositions of linear classifiers (perceptrons) that handle non-linearly-separable problems.
- Complex decision boundaries: classification problems amount to finding decision boundaries in high-dimensional space.
- Continuous-valued outputs: an MLP can also perform continuous-valued regression, summing many simple units the way an integral sums increments.
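A sketch of the polygon idea (the half-plane weights are picked by hand for illustration):

```python
import numpy as np

# Each hidden perceptron defines a half-plane; AND-ing their outputs
# carves out a convex polygon that no single linear classifier can
# describe. Here: the triangle with corners (0,0), (1,0), (0,1).
def step(z):
    return (z >= 0).astype(float)

W = np.array([[ 1.0,  0.0],     # half-plane x >= 0
              [ 0.0,  1.0],     # half-plane y >= 0
              [-1.0, -1.0]])    # half-plane x + y <= 1
b = np.array([0.0, 0.0, 1.0])

def inside(points):
    h = step(points @ W.T + b)        # one perceptron per half-plane
    return step(h.sum(axis=1) - 2.5)  # output perceptron: AND of all three

pts = np.array([[0.2, 0.2], [0.9, 0.9], [-0.1, 0.5]])
print(inside(pts))   # -> [1. 0. 0.]: only the first point is in the triangle
```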
- Summary:
- MLPs (Multi-Layer Perceptrons) are connectionist computation models
- Individual perceptrons are the computational equivalent of neurons
- MLP is a layered composition of many perceptrons.
- MLPs can model Boolean functions
- Perceptrons are Boolean gates; networks of perceptrons are Boolean functions.
- MLPs are Boolean machines
- They represent Boolean functions over linear boundaries, and can therefore represent arbitrary decision boundaries; this is what makes them useful for classifying data.
- MLPs can also model continuous-valued functions (regression; see the sketch below)
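One way to see the regression claim, as a sketch (the target function and unit count are illustrative): summing many small shifted steps approximates a curve the way a Riemann sum approximates an integral.

```python
import numpy as np

# Approximate sin(x) on [0, pi] with a sum of shifted step units.
def step(z):
    return (z >= 0).astype(float)

xs = np.linspace(0.0, np.pi, 200)
target = np.sin(xs)

knots = np.linspace(0.0, np.pi, 50)
# Each hidden unit switches on at its knot; its output weight is the
# increment of the target over the preceding interval.
increments = np.diff(np.sin(knots), prepend=0.0)
approx = sum(dw * step(xs - k) for dw, k in zip(increments, knots))

print(np.max(np.abs(approx - target)))   # small; shrinks as units are added
```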
- Neural Networks in AI
- The network is a function: given an input, it computes an output