  1. History of the Perceptron
    1. Early era: The dominant model of cognition was associationism
    2. Later On: The brain is connectionist
      1. Alexander Bain 1873: Information is stored in the connections
        1. Neural Groups: Neurons excite and stimulate each other
        2. Making Memories: Predicts Hebbian learning
      2. David Ferrier 1876: The Functions of the Brain
      3. McCulloch and Pitts model 1943: Neurons as Boolean threshold units
        1. Excitatory synapse: Stimulates the neuron
        2. Inhibitory synapse: Suppresses the neuron
        3. Networks of these two synapse types can compose any Boolean function (see the first sketch at the end of this section)
        4. Problem: No learning rule
      4. Donald Hebb 1949: Neurons that fire together wire together
        1. Model: w_i = w_i + \eta x_i y
        2. Problem: Fundamentally unstable; with no corrective signal, repeated co-activation grows the weights without bound
      5. Frank Rosenblatt 1958: Perceptron
        1. Model: y = 1 if \sum_i w_i x_i - T > 0, else 0
        2. Learning Algorithm: w = w + \eta(d(x) - y(x))x (converges for linearly separable classes; see the second sketch at the end of this section)
        3. Problem: A single perceptron is not universal
        4. Problem: Real-world input is not Boolean
      6. Minsky and Papert 1969: A single perceptron has no solution for XOR
        1. Multi-layer perceptrons can do XOR (the first sketch at the end of this section shows the construction); they can represent any Boolean function.
        2. Terminology: Hidden layer.
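
A minimal sketch of the ideas above in Python, assuming 0/1 inputs and the threshold convention y = 1 iff \sum_i w_i x_i - T > 0; the weights and thresholds are hand-picked for illustration, not learned:

```python
def unit(x, w, T):
    """Threshold unit: fires (returns 1) iff sum_i w_i * x_i - T > 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) - T > 0)

# Excitatory (positive) and inhibitory (negative) weights compose basic gates.
def AND(a, b): return unit((a, b), (1, 1), 1)
def OR(a, b):  return unit((a, b), (1, 1), 0)
def NOT(a):    return unit((a,), (-1,), -1)

def XOR(a, b):
    # No single threshold unit computes XOR, but one hidden layer suffices:
    # XOR(a, b) = OR(a, b) AND NOT(AND(a, b)).
    h1 = OR(a, b)   # hidden unit 1
    h2 = AND(a, b)  # hidden unit 2
    return unit((h1, h2), (1, -1), 0)  # output unit: h1 AND NOT h2

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {XOR(a, b)}")  # 0, 1, 1, 0
```

Since AND, OR, and NOT suffice for any Boolean function, layered threshold units are Boolean-complete; what a single unit cannot do (XOR), a hidden layer can.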
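
And a second sketch contrasting the two learning rules; the point clouds, learning rate, and seed are made-up assumptions for illustration:

```python
import random

random.seed(0)
eta = 0.1

# Illustrative, well-separated data: class 0 near (0, 0), class 1 near (1, 1).
data = ([((random.uniform(0.0, 0.4), random.uniform(0.0, 0.4)), 0) for _ in range(20)]
        + [((random.uniform(0.6, 1.0), random.uniform(0.6, 1.0)), 1) for _ in range(20)])

w, b = [0.0, 0.0], 0.0  # the bias b plays the role of -T

def predict(x):
    return int(w[0] * x[0] + w[1] * x[1] + b > 0)

# Perceptron rule: w = w + eta * (d(x) - y(x)) * x.
# Converges here because the two classes are linearly separable.
for epoch in range(1000):
    errors = 0
    for x, d in data:
        err = d - predict(x)
        if err:
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
            errors += 1
    if errors == 0:
        print(f"perceptron converged after epoch {epoch}: w={w}, b={b:.2f}")
        break

# Hebbian rule for contrast: w_i = w_i + eta * x_i * y. With no target
# signal, repeated co-activation only ever increases the weights.
w_hebb = [1.0, 1.0]
for _ in range(5):
    x, y = (1.0, 1.0), 1.0
    w_hebb = [wi + eta * xi * y for wi, xi in zip(w_hebb, x)]
print(f"Hebbian weights after 5 co-activations: {w_hebb}")  # keeps growing
```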
  2. Connectionist Machines
    1. Von Neumann/Princeton machine: the processing unit and the memory are separate
    2. Connectionist machines: the connections are both the processing unit and the memory
    3. Modern neural networks are connectionist machines
  3. Perceptrons on Real Inputs
    1. A perceptron operates on real-valued vectors; it is a linear classifier.
      XOR is not linearly separable, so a single perceptron cannot compute it.
    2. Connecting perceptrons together can separate polygonal regions (see the sketch at the end of this section).
      One interpretation of neural networks: compose linear classifiers (perceptrons) to handle non-linearly-separable problems.
    3. Complex decision boundaries
      Classification problems: finding decision boundaries in high-dimensional space.
    4. Continuous-valued outputs
      An MLP can also perform continuous-valued regression: a weighted sum of shifted units accumulates increments of the target function, much as a Riemann sum approximates an integral (sketched after the summary).
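
One way to make item 2 concrete: each perceptron fires on one side of a line, so a hidden layer of five of them, ANDed by an output perceptron, carves a convex pentagon out of the plane. A sketch, where the pentagon geometry is an arbitrary illustrative choice:

```python
import math

def perceptron(x, w, b):
    """Linear classifier on real-valued inputs: fires iff w . x + b > 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

# One hidden perceptron per edge of a regular pentagon centred on the origin:
# unit k fires when the point lies on the inner side of edge k (n_k . x < 1).
normals = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
           for k in range(5)]

def inside_pentagon(x):
    hidden = [perceptron(x, (-nx, -ny), 1.0) for nx, ny in normals]
    # The output perceptron is an AND gate: fire only if all five hidden units fire.
    return perceptron(hidden, (1, 1, 1, 1, 1), -4.5)

print(inside_pentagon((0.0, 0.0)))  # 1: the origin is inside
print(inside_pentagon((2.0, 0.0)))  # 0: outside the pentagon
```

A further layer that ORs several such convex regions yields arbitrary, non-convex decision boundaries, which is the "complex decision boundaries" point in item 3.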
  4. Summary:
    1. MLPs (Multi-Layer Perceptrons) are connectionist computational models
      1. Individual perceptrons are the computational equivalent of neurons
      2. MLP is a layered composition of many perceptrons.
    2. MLPs can model Boolean functions
      1. Perceptrons are Boolean gates. Networks of perceptrons are Boolean functions.
    3. MLPs are Boolean machines
      1. They represent Boolean functions over linear boundaries and can thus represent arbitrary decision boundaries, which lets them classify data.
    4. MLPs can also model continuous-valued functions
    5. Neural Networks in AI
      1. The network is a function: given an input, it computes layer by layer to produce an output
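
Finally, a sketch of the continuous-valued regression view (items 3.4 and 4.4), assuming simple step activations and an arbitrarily chosen target function: the MLP's output is a staircase that accumulates increments of the target, which is why the notes liken it to an integral.

```python
import math

def step(z):
    """Threshold activation: 1 once the input crosses 0."""
    return 1.0 if z > 0 else 0.0

def f(x):
    return math.sin(math.pi * x)  # illustrative target on [0, 1]

# One hidden unit per grid point x_k; the output weight on unit k is the
# increment f(x_k) - f(x_{k-1}), so the network sums increments of f,
# much as a Riemann sum approximates an integral.
K = 100
grid = [k / K for k in range(K + 1)]

def mlp(x):
    out = f(grid[0])
    for k in range(1, len(grid)):
        out += (f(grid[k]) - f(grid[k - 1])) * step(x - grid[k])
    return out

for x in (0.25, 0.5, 0.75):
    print(f"target f({x}) = {f(x):.3f}, mlp({x}) = {mlp(x):.3f}")
```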
