
Network Objects, Data, and Training Styles

Introduction
The work flow for the neural network design process has seven primary steps:

1. Collect data
2. Create the network
3. Configure the network
4. Initialize the weights and biases
5. Train the network
6. Validate the network
7. Use the network

This chapter shows how to format the data for presentation to the network. It also explains network configuration and the two forms of network training: incremental training and batch training.

Introduction
There are four different levels at which the Neural Network Toolbox software can be used.

The first level is represented by the GUIs and can be launched by using nnstart. These provide a quick way to access the power of the toolbox for many problems of function fitting, pattern recognition, clustering, and time series analysis.

The second level of toolbox use is through basic command-line operations. The command-line functions use simple argument lists with intelligent default settings for function parameters.

The third level of toolbox use is customization of the toolbox. This advanced capability allows you to create your own custom neural networks while still having access to the full functionality of the toolbox.

The fourth level of toolbox usage is the ability to modify any of the M-files contained in the toolbox.

MODEL

Simple Neuron
The fundamental building block for neural networks is the single-input neuron. Three processes take place in such a neuron: the weight function, the net input function, and the transfer function.
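The three processes above can be sketched directly. The toolbox itself is MATLAB-based; the following is an illustrative numpy sketch, and the function name neuron is hypothetical, not a toolbox API.

```python
import numpy as np

def neuron(p, w, b, transfer):
    """Single-input neuron built from the three processes named above."""
    z = w * p           # weight function: scale the input by the weight
    n = z + b           # net input function: add the bias
    return transfer(n)  # transfer function: squash or pass the net input

# With a linear (identity) transfer function, like the toolbox's purelin:
a = neuron(2.0, w=0.5, b=1.0, transfer=lambda n: n)
print(a)  # 2.0
```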

Transfer Functions

Many transfer functions are included in the Neural Network Toolbox software. Two of the most commonly used functions are shown below.

Linear Transfer Function
Neurons of this type are used in the final layer of multilayer networks that are used as function approximators.
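As a sketch, numpy equivalents of two transfer functions can be written as follows. purelin matches the linear transfer function described above; logsig (log-sigmoid) is shown as an illustrative second common choice for hidden layers.

```python
import numpy as np

def purelin(n):
    """Linear transfer function: a = n (output equals net input)."""
    return n

def logsig(n):
    """Log-sigmoid transfer function: squashes any net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

print(purelin(0.5))  # 0.5
print(logsig(0.0))   # 0.5
```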

[Neural Network Design, Chapter 2 demo: One-Input Neuron]
Linear Neuron: a = purelin(w*p+b). Alter the weight, bias, and input by dragging the triangular indicators. Pick the transfer function with the F menu. Watch the change to the neuron function and its output.

Neuron with Vector Input

The simple neuron can be extended to handle inputs that are vectors. A neuron with a single R-element input vector is shown below. Here the individual input elements are multiplied by weights and the weighted values are fed to the summing junction.

This expression can, of course, be written in MATLAB code as

n = W*p + b
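The same vector-input neuron can be sketched in numpy. The sizes and values here are illustrative only; W is a 1-by-R weight matrix, so the matrix product reproduces the summing junction described above.

```python
import numpy as np

p = np.array([1.0, -2.0, 0.5])    # R-element input vector (R = 3 here)
W = np.array([[0.2, 0.4, -0.1]])  # 1-by-R weight matrix, one row per neuron
b = np.array([0.3])               # bias

n = W @ p + b  # summing junction: weighted inputs plus bias
a = n          # linear (purelin) transfer function
print(a)       # [-0.35]
```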

Abbreviated Notation

Network Architectures
One Layer of Neurons

Multiple Layers of Neurons


To describe networks having multiple layers, the notation must be extended. Specifically, it needs to make a distinction between weight matrices that are connected to inputs and weight matrices that are connected between layers. It also needs to identify the source and destination for the weight matrices. We will call weight matrices connected to inputs input weights; we will call weight matrices connected to layer outputs layer weights. Further, superscripts are used to identify the source (second index) and the destination (first index) for the various weights and other elements of the network.

The weight matrix connected to the input vector p is labeled an input weight matrix (IW1,1), having a source 1 (second index) and a destination 1 (first index). Elements of layer 1, such as its bias, net input, and output, have a superscript 1 to indicate that they are associated with the first layer.

The layers of a multilayer network play different roles. A layer that produces the network output is called an output layer. All other layers are called hidden layers. The three-layer network shown earlier has one output layer (layer 3) and two hidden layers (layer 1 and layer 2). Some authors refer to the inputs as a fourth layer. This toolbox does not use that designation.
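The input-weight/layer-weight naming can be made concrete with a forward pass through a three-layer network. This is a numpy sketch with hypothetical sizes and random values: IW11 plays the role of the input weight IW1,1, and LW21, LW32 play the roles of the layer weights LW2,1 and LW3,2.

```python
import numpy as np

rng = np.random.default_rng(0)

R, S1, S2, S3 = 4, 3, 2, 1  # input size and layer sizes (illustrative)

IW11 = rng.standard_normal((S1, R))   # input weight: source input 1, destination layer 1
LW21 = rng.standard_normal((S2, S1))  # layer weight: source layer 1, destination layer 2
LW32 = rng.standard_normal((S3, S2))  # layer weight: source layer 2, destination layer 3
b1 = rng.standard_normal(S1)
b2 = rng.standard_normal(S2)
b3 = rng.standard_normal(S3)

p = rng.standard_normal(R)
a1 = np.tanh(IW11 @ p + b1)   # hidden layer 1
a2 = np.tanh(LW21 @ a1 + b2)  # hidden layer 2
a3 = LW32 @ a2 + b3           # output layer, linear transfer
print(a3.shape)               # (1,)
```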

Input and Output Processing Functions


Input Processing Functions

Network inputs might have associated processing functions. Processing functions transform user input data to a form that is easier or more efficient for a network.
mapminmax transforms input data so that all values fall into the interval [-1, 1]. This can speed up learning for many networks.

removeconstantrows removes the rows of the input vector that correspond to input elements that always have the same value, because these input elements are not providing any useful information to the network.

fixunknowns recodes unknown data (represented in the user's data with NaN values) into a numerical form for the network. fixunknowns preserves information about which values are known and which are unknown.
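The core idea of mapminmax can be sketched in a few lines of numpy. This is a simplified illustration: the real toolbox function also returns settings for reversing the mapping and handles constant rows, which this sketch does not.

```python
import numpy as np

def map_minmax(x, ymin=-1.0, ymax=1.0):
    """Rescale each row of x linearly so its values span [ymin, ymax]."""
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

x = np.array([[0.0, 5.0, 10.0],
              [2.0, 4.0, 6.0]])
print(map_minmax(x))  # each row becomes [-1. 0. 1.]
```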

Output Processing Functions


Output processing functions are used to transform user-provided target vectors for network use. Then, network outputs are reverse-processed using the same functions to produce output data with the same characteristics as the original user-provided targets.
Both mapminmax and removeconstantrows are often associated with network outputs. However, fixunknowns is not. Unknown values in targets (represented by NaN values) do not need to be altered for network use.
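The reverse-processing idea can be sketched as a pair of inverse mappings: targets are scaled into the network's working range before training, and network outputs are mapped back into the original target units afterward. The function names here are hypothetical; in the toolbox the mapping settings are stored and applied automatically.

```python
import numpy as np

def map_minmax(x, xmin, xmax, ymin=-1.0, ymax=1.0):
    """Forward mapping: original target units -> [ymin, ymax]."""
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

def reverse_minmax(y, xmin, xmax, ymin=-1.0, ymax=1.0):
    """Inverse mapping: [ymin, ymax] -> original target units."""
    return (y - ymin) * (xmax - xmin) / (ymax - ymin) + xmin

t = np.array([3.0, 7.0, 11.0])        # user-provided targets
xmin, xmax = t.min(), t.max()
t_scaled = map_minmax(t, xmin, xmax)  # applied to targets before training
net_out = t_scaled                    # pretend these are network outputs
print(reverse_minmax(net_out, xmin, xmax))  # back in target units: [ 3.  7. 11.]
```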
