
Introduction To Neural Networks

May 19, 2003


Transcription of Introduction To Neural Networks

1 Introduction To Neural Networks. The development of Neural Networks dates back to the early 1940s. The field experienced an upsurge in popularity in the late 1980s, as a result of the discovery of new techniques and general advances in computer hardware technology. Some NNs are models of biological Neural Networks and some are not, but historically, much of the inspiration for the field of NNs came from the desire to produce artificial systems capable of sophisticated, perhaps "intelligent", computations similar to those that the human brain routinely performs, and thereby possibly to enhance our understanding of the human brain. Most NNs have some sort of "training" rule. In other words, NNs "learn" from examples (as children learn to recognize dogs from examples of dogs) and exhibit some capability for generalization beyond the training data. Conventional computing techniques: computers have to be explicitly programmed. Analyze the problem to be solved.

2 Write the code in a programming language. Neural Networks, by contrast, learn from examples: no explicit description of the problem is required, and no programmer is needed. The Neural computer adapts itself during a training period, based on examples of similar problems, even without a desired solution to each problem. After sufficient training the Neural computer is able to relate the problem data to the solutions (inputs to outputs), and it is then able to offer a viable solution to a brand new problem. It is able to generalize and to handle incomplete data. Neural Networks vs. Computers. Digital Computers: Deductive reasoning - we apply known rules to input data to produce output. Computation is centralized, synchronous, and serial. Memory is packetted, literally stored, and location addressable. Not fault tolerant: one transistor goes and it no longer works. Exact. Static connectivity. Applicable if well-defined rules with precise input data exist. Neural Networks: Inductive reasoning.

3 Given input and output data (training examples), we construct the rules. Computation is collective, asynchronous, and parallel. Memory is distributed, internalized, and content addressable. Fault tolerant, with redundancy and sharing of responsibilities. Inexact. Dynamic connectivity. Applicable if rules are unknown or complicated, or if data is noisy or partial. Evolution of Neural Networks: researchers realized that the brain could solve many problems much more easily than even the best computer - image recognition, speech recognition, pattern recognition - very easy for the brain but very difficult for a computer. They studied the brain: each neuron in the brain has a relatively simple function, but some 10 billion of them (with 60 trillion connections) act together to create an incredible processing unit. The brain is trained by its environment, learns by experience, and compensates for problems by massive parallelism. The Biological Inspiration: the brain has been extensively studied by scientists.

4 Vast complexity prevents all but rudimentary understanding; even the behaviour of an individual neuron is extremely complex. Engineers modified the Neural models to make them more useful - less like biology - but kept much of the terminology. The Structure of Neurons (figure: axon, cell body, synapse, nucleus, dendrites): a neuron has a cell body, a branching input structure (the dendrites) and a branching output structure (the axon). Axons connect to dendrites via synapses. Electro-chemical signals are propagated from the dendritic input, through the cell body, and down the axon to other neurons. A neuron only fires if its input signal exceeds a certain amount (the threshold) in a short time period. Synapses vary in strength: good connections allow a large signal, while slight connections allow only a weak signal. Synapses are either excitatory (stimulating) or inhibitory (restrictive). Biological Analogy: the brain's neuron corresponds to the artificial neuron (processing element), and the brain to a set of processing elements (PEs) and connections (weights) with adjustable strengths (figure: inputs X1..X5 weighted by w1..wn feeding f(net), arranged in an input layer, hidden layer, and output layer). Benefits of Neural Networks: pattern recognition, learning, classification, generalization and abstraction, and interpretation of incomplete and noisy inputs; they provide some human problem-solving characteristics; they are robust; they are fast, flexible and easy to maintain; and they enable powerful hybrid systems. (Artificial) Neural Networks (ANN): in the ANN architecture, neurons have one output but many inputs; the output is a weighted sum of the inputs; a threshold can be set; this gives a non-linear response. The Key Elements of Neural Networks: Neural computing requires a number of neurons to be connected together into a "Neural network".
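The weighted-sum-and-threshold behaviour described above can be sketched in a few lines. The weights and threshold below are illustrative values, not taken from the slides:

```python
# A minimal artificial neuron: it fires only if the weighted sum of its
# inputs exceeds its threshold. Weights and threshold are illustrative.

def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted input sum exceeds the threshold, else 0."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0

# Positive weights act like excitatory synapses; the negative weight acts
# like an inhibitory synapse that can keep the neuron from firing.
print(neuron([1, 1, 0], [0.6, 0.6, -0.9], 0.5))  # excitation alone: fires (1)
print(neuron([1, 1, 1], [0.6, 0.6, -0.9], 0.5))  # inhibition added: stays silent (0)
```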

5 Neurons are arranged in layers. Each neuron within the network is usually a simple processing unit which takes one or more inputs and produces an output. At each neuron, every input has an associated "weight" which modifies the strength of that input. The neuron simply adds together all the inputs and calculates an output to be passed on. What is an Artificial Neural Network? The Neural network is a model; it is nonlinear (the output is a nonlinear combination of the inputs); its input is numeric and its output is numeric; pre- and post-processing are completed separately from the model. The model is a mathematical transformation of numerical inputs to numerical outputs. Transfer functions: the threshold, or transfer, function is generally non-linear. Linear (straight-line) functions are limited because the output is simply proportional to the input, so they are not very useful. That was the problem in the earliest network models, as noted in Minsky and Papert's book Perceptrons.
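The contrast between a linear transfer function and a non-linear one can be shown directly. The sigmoid is used here as one common example of a non-linear transfer function; the slides do not name a specific one:

```python
import math

def linear(net):
    # Output simply proportional to input: the limitation noted above.
    return net

def sigmoid(net):
    # A smooth non-linear "squashing" function: output is bounded in (0, 1)
    # and saturates for large positive or negative net input.
    return 1.0 / (1.0 + math.exp(-net))

print(linear(10.0))   # grows without bound
print(sigmoid(10.0))  # saturates near 1
print(sigmoid(0.0))   # exactly 0.5 at zero net input
```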

6 What can you do with an NN, and what not? In principle, NNs can compute any computable function, i.e., they can do everything a normal digital computer can do. Almost any mapping between vector spaces can be approximated to arbitrary precision by feedforward NNs. In practice, NNs are especially useful for classification and function approximation problems, usually when rules such as those that might be used in an expert system cannot easily be applied. NNs are, at least today, difficult to apply successfully to problems that concern manipulation of symbols and memory. (Artificial) Neural Networks (ANN) - Training: initialize weights for all neurons; present the input layer with spectral reflectance; calculate outputs; compare outputs with biophysical parameters; update weights to attempt a match; repeat until all examples have been presented. Training methods: Supervised learning. In supervised training, both the inputs and the outputs are provided.
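The training cycle just listed (initialize weights, present inputs, calculate outputs, compare, update, repeat) can be sketched with the classic perceptron rule. The rule, learning rate, and the AND example are illustrative choices; the slides do not specify them:

```python
import random

def train_perceptron(examples, lr=0.1, epochs=50, seed=0):
    """Sketch of the training cycle: initialize weights, present each
    example, compute the output, compare with the target, update weights."""
    rng = random.Random(seed)
    n = len(examples[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n)]  # initialize weights
    b = 0.0
    for _ in range(epochs):                          # repeat over all examples
        for x, target in examples:                   # present an input
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out                       # compare with desired output
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # update weights
            b += lr * err
    return w, b

# Learn the (linearly separable) logical AND function.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(examples)
```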

7 The network then processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights which control the network. This process occurs over and over as the weights are continually tweaked. The set of data which enables the training is called the "training set". During the training of a network, the same set of data is processed many times as the connection weights are ever refined. Example architectures: multilayer perceptrons. Unsupervised learning: in unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaption. At the present time, unsupervised learning is not well understood. Example architectures: Kohonen, ART. Feedforward NNs: in the basic structure of a feedforward Neural Network, the learning rule modifies the weights according to the input patterns that it is presented with.
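Self-organization without desired outputs can be illustrated with a minimal winner-take-all update, a stripped-down Kohonen-style sketch (a real Kohonen map also updates the winner's neighbours on a grid, which is omitted here):

```python
def competitive_step(units, x, lr=0.5):
    """One winner-take-all update: the unit whose weight vector is closest
    to the input is moved toward that input. No target outputs are used."""
    def dist(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    winner = min(range(len(units)), key=lambda i: dist(units[i]))
    units[winner] = [wi + lr * (xi - wi) for wi, xi in zip(units[winner], x)]
    return winner

# Two units self-organize: each comes to represent one cluster of inputs,
# even though the network is never told which cluster is which.
units = [[0.0, 0.0], [1.0, 1.0]]
data = [[0.1, 0.0], [0.0, 0.1], [0.9, 1.0], [1.0, 0.9]]
for x in data * 10:
    competitive_step(units, x)
```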

8 In a sense, ANNs learn by example, as do their biological counterparts. When the desired outputs are known, we have supervised learning, or learning with a teacher. An overview: the set of examples for training the network is assembled. Each case consists of a problem statement (which represents the input to the network) and the corresponding solution (which represents the desired output from the network). The input data is entered into the network via the input layer. Each neuron in the network processes the input data, with the resultant values steadily "percolating" through the network, layer by layer, until a result is generated by the output layer. The actual output of the network is compared to the expected output for that particular input. This results in an error value which represents the discrepancy between the given input and the expected output. On the basis of this error value, all of the connection weights in the network are gradually adjusted, working backwards from the output layer, through the hidden layer, to the input layer, until the correct output is produced.
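The forward "percolation" and the backwards weight adjustment described above can be sketched as a minimal one-hidden-layer backpropagation network. The layer size, learning rate, epoch count, and the AND example are illustrative assumptions, not taken from the slides:

```python
import math
import random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_backprop(examples, hidden=3, lr=0.5, epochs=2000, seed=1):
    """Minimal backpropagation: forward pass through hidden and output
    layers, error at the output, adjustments working backwards."""
    rng = random.Random(seed)
    n_in = len(examples[0][0])
    w_h = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    for _ in range(epochs):
        for x, target in examples:
            xb = list(x) + [1.0]                      # input plus bias
            h = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in w_h]
            hb = h + [1.0]
            out = sigmoid(sum(w * v for w, v in zip(w_o, hb)))
            # Backward pass: output delta first, then hidden deltas,
            # using the pre-update output weights.
            d_out = (target - out) * out * (1 - out)
            d_h = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            w_o = [w + lr * d_out * v for w, v in zip(w_o, hb)]
            for j in range(hidden):
                w_h[j] = [w + lr * d_h[j] * v for w, v in zip(w_h[j], xb)]
    def predict(x):
        xb = list(x) + [1.0]
        hb = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in w_h] + [1.0]
        return sigmoid(sum(w * v for w, v in zip(w_o, hb)))
    return predict

# Learn the AND function as a small worked example.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
predict = train_backprop(examples)
```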

9 Fine-tuning the weights in this way has the effect of teaching the network how to produce the correct output for a particular input - the network learns. Backpropagation Network - The Learning Rule: the delta rule is often utilized by the most common class of ANNs, called backpropagational Neural Networks. When a Neural network is initially presented with a pattern, it makes a random 'guess' as to what it might be. It then sees how far its answer was from the actual one and makes an appropriate adjustment to its connection weights. The Insides of the Delta Rule: backpropagation performs a gradient descent within the solution's vector space towards a global minimum. The error surface itself is a hyperparaboloid but is seldom 'smooth'. Indeed, in most problems the solution space is quite irregular, with numerous 'pits' and 'hills' which may cause the network to settle down in a local minimum which is not the best overall solution. Recurrent Neural Networks: a recurrent Neural network is one in which the outputs from the output layer are fed back to a set of input units.
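For a single linear unit, the delta rule's gradient descent can be seen directly: each update steps down the slope of the squared error, and on this smooth paraboloid surface repeated steps reach the minimum. The input, target, and learning rate below are illustrative:

```python
def delta_rule_step(w, x, target, lr=0.1):
    """One delta-rule update for a linear unit: move each weight a small
    step down the gradient of the squared error (target - output)**2."""
    out = sum(wi * xi for wi, xi in zip(w, x))
    err = target - out
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

# Descend the (here smooth, paraboloid) error surface by repeated steps.
w = [0.0, 0.0]
for _ in range(100):
    w = delta_rule_step(w, [1.0, 2.0], 1.0)
# The weighted sum now matches the target to high precision.
```

With a single training pair the error surface really is a smooth paraboloid; the 'pits' and 'hills' described above only appear in multi-layer, multi-pattern problems.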

10 This is in contrast to feed-forward Networks, where the outputs are connected only to the inputs of units in subsequent layers. Neural Networks of this kind are able to store information about time, and therefore they are particularly suitable for forecasting applications: they have been used with considerable success for predicting several types of time series. Auto-associative NNs: the auto-associative Neural network is a special kind of MLP - in fact, it normally consists of two MLP Networks connected "back to back". The other distinguishing feature of auto-associative Networks is that they are trained with a target data set that is identical to the input data set. In training, the network weights are adjusted until the outputs match the inputs, and the values assigned to the weights reflect the relationships between the various input data elements. This property is useful in, for example, data validation: when invalid data is presented to the trained Neural network, the learned relationships no longer hold and it is unable to reproduce the correct output.
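The data-validation property can be illustrated with a linear stand-in for the auto-associative network: inputs are reconstructed by projection onto the subspace spanned by the training patterns, which plays the role of the bottleneck between the back-to-back MLPs. A real auto-associative network would learn this with backpropagation; the patterns here are illustrative:

```python
def fit_autoassociator(patterns):
    """Linear sketch of auto-association: build an orthonormal basis of the
    training patterns (Gram-Schmidt) and reconstruct inputs by projection."""
    basis = []
    for p in patterns:
        v = list(p)
        for b in basis:
            dot = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - dot * bi for vi, bi in zip(v, b)]
        norm = sum(vi * vi for vi in v) ** 0.5
        if norm > 1e-9:
            basis.append([vi / norm for vi in v])
    def reconstruct(x):
        out = [0.0] * len(x)
        for b in basis:
            dot = sum(xi * bi for xi, bi in zip(x, b))
            out = [oi + dot * bi for oi, bi in zip(out, b)]
        return out
    return reconstruct

# Train on data obeying the relationship x2 = 2 * x1.
recon = fit_autoassociator([[1.0, 2.0]])
good = recon([2.0, 4.0])   # obeys the learned relationship: reproduced
bad = recon([1.0, -1.0])   # violates it: cannot be reproduced correctly
```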

