
NEURAL NETWORKS & FUZZY LOGIC

INTRODUCTION: For many centuries, one of the goals of mankind has been to develop machines. We envisioned these machines as performing cumbersome and tedious tasks so that we might enjoy a more fruitful life. The era of machine making began with the discovery of simple machines such as the lever, the wheel and the pulley. Many equally ingenious inventions followed thereafter. Nowadays engineers and scientists are trying to develop intelligent machines. Artificial neural systems are present-day examples of such machines that have great potential to further improve the quality of our life. People and animals are much better and faster at recognizing images than the most advanced computers. Although computers outperform both biological and artificial neural systems on tasks based on precise and fast arithmetic operations, artificial neural systems represent a promising new generation of information processing networks. Advances have been made in applying such systems to problems found intractable or difficult for traditional computation. Neural networks can supplement the enormous processing power of von Neumann digital computers with the ability to make sensible decisions and to learn from ordinary experience.

EVOLUTION: In 1943, Warren McCulloch and Walter Pitts proposed a model of computing elements called the McCulloch-Pitts neuron, which performs a weighted sum of its inputs followed by a threshold logic operation. The main drawback of this model is that the weights are fixed, and hence the model cannot learn from examples. In 1949 Donald Hebb proposed a learning scheme; this law became a fundamental learning rule in the neural network literature. In 1958 Rosenblatt proposed the perceptron model, which has weights adjustable by the perceptron learning law. In 1969, Minsky and Papert demonstrated the limitations of the perceptron model through several illustrative examples. The lack of a suitable learning law for multilayer perceptron networks put the brakes on the development of neural network models

for pattern recognition tasks till 1984. The two key developments of the 1980s were the energy analysis of feedback neural networks and a law for adjusting the weights of multilayer feedforward neural networks (the back propagation law). Besides these key developments, many other significant contributions have been made to the field during the past 30 years.
[Figure: the McCulloch-Pitts neuron model. Inputs A1, ..., Am with fixed weights w1, ..., wm converge on a summing point, giving the activation value x = sum of wi*Ai for i = 1 to m; the output signal is s = f(x), where f is the threshold function.]
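The threshold-logic operation of the McCulloch-Pitts model described above can be sketched in a few lines; the AND-gate weights and threshold in the example are illustrative choices, not values from the text.

```python
# Minimal sketch of a McCulloch-Pitts neuron: fixed weights, threshold logic.
def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of inputs reaches the threshold."""
    activation = sum(w * a for w, a in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Example: a two-input AND gate, both weights fixed at 1, threshold 2.
print(mp_neuron([1, 1], [1, 1], 2))  # 1
print(mp_neuron([1, 0], [1, 1], 2))  # 0
```

Because the weights are fixed by hand, this element computes but cannot learn, which is exactly the drawback noted above.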

Concept of biological neurons: Features: Some attractive features of the biological neural network that make it superior to even the most sophisticated AI computer system for pattern recognition tasks are the following.

A) ROBUSTNESS AND FAULT TOLERANCE: The decay of nerve cells does not seem to affect the performance significantly.

B) FLEXIBILITY: The network automatically adjusts to a new environment without using any preprogrammed instructions.

C) ABILITY TO DEAL WITH A VARIETY OF DATA SITUATIONS: The network can deal with information that is fuzzy, probabilistic, noisy and inconsistent.

D) COLLECTIVE COMPUTATION: The network routinely performs many operations in parallel, and also performs a given task in a distributed manner.

BIOLOGICAL NEURAL NETWORKS: The features of biological neural networks are attributed to their structure and function. The fundamental unit of the network is called a neuron or nerve cell. It consists of a cell body, or soma, where the cell nucleus is located.

Tree-like nerve fibers called dendrites receive signals from other neurons. Extending from the cell body is a single long fiber called the axon, which eventually branches into strands and substrands connecting to many other neurons at synaptic junctions, or synapses. The receiving ends of these junctions on other cells can be found both on the dendrites and on the cell bodies themselves. The axon of a typical neuron leads to a few thousand synapses associated with other neurons. The cell fires when the potential reaches a threshold: electrical activity in the form of a short pulse is generated and sent down the axon. The electrical activity is confined to the interior of the neuron, whereas a chemical mechanism operates at the synapses. The dendrites serve as receptors for signals from other neurons; the purpose of the axon is transmission of the generated neural activity to other nerve cells. The size of the cell body of a typical neuron is approximately in the range 10-80 μm, and the dendrites and axons have diameters of the order of a few μm. The gap at the synaptic junction is about 200 nm wide. The total length of a neuron varies from 0.01 mm for internal neurons in the human brain up to 1 m for neurons in the limbs.

In the state of inactivity the interior of the neuron, the protoplasm, is negatively charged against the surrounding nervous liquid, which contains positive sodium ions. The resulting resting potential of about -70 mV is supported by the action of the cell membrane, which is impenetrable to the positive sodium ions. This causes a deficiency of positive ions in the protoplasm. Signals arriving from the synaptic connections may result in a temporary depolarization of the resting potential. When the potential is increased to a level above -60 mV, the membrane suddenly loses its impenetrability to sodium ions, which enter the protoplasm and reduce the potential difference. This sudden change in the membrane potential causes the neuron to discharge; the neuron is then said to have fired.
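As a rough caricature only, the firing behaviour described above can be put in code, using the -70 mV resting potential and -60 mV threshold from the text; the leak factor and stimulus sizes are arbitrary assumptions, not physiological values.

```python
# Toy sketch of the firing mechanism: the membrane sits at a resting
# potential of -70 mV, arriving signals depolarize it, and the neuron
# fires and resets once the potential rises above -60 mV.
REST_MV, THRESHOLD_MV = -70.0, -60.0

def simulate(stimuli_mv, leak=0.5):
    potential = REST_MV
    fired_at = []
    for t, stimulus in enumerate(stimuli_mv):
        potential += stimulus                          # synaptic depolarization
        if potential >= THRESHOLD_MV:                  # threshold crossed
            fired_at.append(t)
            potential = REST_MV                        # discharge and reset
        else:
            potential -= leak * (potential - REST_MV)  # drift back toward rest
    return fired_at

print(simulate([6, 6, 6]))  # [2]: three 6 mV signals accumulate, firing at t=2
print(simulate([1, 1, 1]))  # []: weak signals leak away, no firing
```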

Comparison of conventional computers and BNN: Speed: Neural networks are slow in processing information. For the most advanced computers the cycle time corresponding to the execution of one step of a program in the CPU is in the range of a few nanoseconds. The cycle time corresponding to a neural event prompted by an external stimulus is in the millisecond range. Thus, the computer processes information nearly a million times faster.

Processing: Neural networks can perform massively parallel operations. In a conventional computer the instructions are executed in sequential mode; the brain, on the other hand, operates with massively parallel operations. This explains the superior performance of human information processing for certain tasks, despite being several orders of magnitude slower compared to computer processing of information.

Size & complexity: Neural networks have a large number of computing elements, and the computation is not restricted to within the neurons. The number of neurons in a brain is of the order of ten to the power of 11, and the total number of interconnections of the order of ten to the power of 15. It is this size and complexity of connections that may be giving the brain the power of performing complex pattern recognition tasks.

Storage: In a computer, information is stored in memory, which is addressed by its location; any new information in the same location destroys the old information. In contrast, in a neural network new information is added by adjusting the interconnection strengths, without destroying the old information. Information in the brain is adaptable; in the computer it is replaceable.

Control mechanisms: There is no central control for processing information in the brain. In a computer there is a control unit which monitors all the activities of computing. A set of processing units, when assembled in a closely interconnected network, offers a surprisingly rich structure exhibiting some features of the BNN. Such a structure is called an artificial neural network (ANN). The motivation to explore new computing models based on ANNs is to solve pattern recognition tasks, which may involve complex optical and acoustical patterns. It is impossible to derive logical rules for such problems for applying the well-known AI methods. It is also difficult to divide a pattern recognition task into subtasks so that each of them could be handled by a separate procedure. Thus the inadequacies of logic-based AI and the limitations of sequential computing have led to the concept of parallel and distributed processing through ANNs. Information is stored in the connections and is distributed throughout the network, which can therefore function as a memory. The information may be recalled by providing a partial or erroneous input pattern, since, as in the brain, information is stored through its associations with other stored data. Thus an ANN can perform the task of associative memory. That is why ANNs are somewhat fault tolerant.

Learning techniques: We know that the ANN is derived from the BNN. The brain acquires knowledge through learning; similarly, an ANN also acquires knowledge through learning, that is, it must be trained with examples. There are two types of learning techniques: 1) supervised learning and 2) unsupervised learning. In supervised learning, at each instant of time when an input is applied, the desired response of the system is known. The distance between the actual and the desired response serves as an error measure and is used to correct the network parameters externally. This error can be used to modify the weights so that the error decreases. This mode of learning is very pervasive; it is used in many situations of natural learning.
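A minimal sketch of this supervised scheme is the perceptron learning law mentioned in the Evolution section, in which the gap between the desired and the actual response corrects the weights; the learning rate and the OR-function example are illustrative choices, not taken from the text.

```python
# Sketch of supervised learning: the error (desired minus actual response)
# drives the weight corrections, as in Rosenblatt's perceptron rule.
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, desired in samples:
            error = desired - predict(w, b, x)   # error measure
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Example: learning the logical OR function from labelled examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```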

In unsupervised learning the desired response is not known, so explicit error information cannot be used to improve network behaviour. Since no information is available as to the correctness or incorrectness of responses, learning must somehow be accomplished based on observations of responses to inputs about which there is marginal or no knowledge. Unsupervised learning uses mostly local information, consisting of the signals or activation values of the units and of the connection for which the weight update is being made. Fuzzy logic has also been employed, for example in a neural network reinforcement algorithm with linguistic information in a control application, to speed up the training of an ANN.

Applications: In applications two different situations exist: (i) where the known neural network concepts and models are directly applicable, and (ii) where there appears to be potential for using the neural network ideas, but it is not yet clear how to formulate the real-world problem to evolve a suitable neural network architecture. Directly applicable problems include pattern classification, associative memories, optimization, vector quantization and control applications. Applications in speech: NETtalk, the phonetic typewriter, vowel classification, recognition of consonant-vowel segments. Applications in image processing: recognition of handwritten digits, image segmentation, texture classification and segmentation. Applications in decision-making: programming models based on artificial neural networks (ANNs) have seen increased usage in recent years. ANNs are used in various fields (industry, medicine, finance, and new technologies, among others). There is a wide range of possible power system applications of neural networks in operation and control processing, including stability assessment, security monitoring, load forecasting, state estimation, load flow analysis, contingency analysis, emergency control actions, HVDC system design, etc.
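The unsupervised case can be sketched with a simple competitive-learning update, in which each weight change uses only local information (the input and the winning unit's own weights); the two-unit setup and learning rate are illustrative assumptions, not details from the text.

```python
# Sketch of unsupervised (competitive) learning: no desired response is
# given; the winning unit simply moves its weights toward the input.
def competitive_step(weights, x, lr=0.5):
    # Winner: the unit whose weight vector lies closest to the input.
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))
    # Local update: only the winner's weights change.
    weights[winner] = [wi + lr * (xi - wi) for wi, xi in zip(weights[winner], x)]
    return winner

units = [[0.2, 0.8], [0.9, 0.1]]           # two competing units
for x in [[0.0, 1.0], [1.0, 0.0]] * 10:    # two input clusters, alternating
    competitive_step(units, x)
# units[0] has drifted to about [0, 1] and units[1] to about [1, 0]:
# each unit has specialised on one cluster without any teacher signal.
```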

This article features an automatic system that selects the most adequate ANN structure to solve any type of problem. The ANN automatic selection system (SARENEUR) was implemented in a specific case in order to obtain a neural network structure that shows better results in fault location within a two-terminal transmission line. The fault location is obtained according to the values of steady-state voltages and currents measured at one end.

Fault location in transmission lines


Electric power systems suffer unexpected failures in transmission lines due to various random causes. These failures hinder the proper operation of the electrical system. However, service must be urgently restored, as the transmission line in which a fault occurred cannot be kept indefinitely isolated. Since designing a totally reliable system is not possible, for both technical and economic reasons, developing a number of technological tools aimed at locating faults in transmission lines and making the network operate correctly has been necessary. Because of this, an increasing number of algorithms designed to locate faults have been developed in the past decades, with the double purpose of improving the protection of the electric power system and devising upgraded supervision and maintenance procedures. Fault-location algorithms may be developed according to the following basic procedures: implementation on distance relays, used to protect transmission lines in order to assure the reliability of the system at all times, and implementation on fault location systems, either local or centralized, used to determine accurately the point at which a fault occurred so as to be able to perform the required repair or maintenance operations. In the first case, fault-location algorithms implemented on distance relays determine whether the fault occurred within a particular protection area. In this type of algorithm, the fault point must be quickly determined, resulting in reduced accuracy. In fault location systems, the target is to determine the exact point on a transmission line at which a fault occurred. With permanent faults, this facilitates performance of all the repair operations required to restore power; with temporary faults, this facilitates location of the weak points in the system.

In most cases, existing fault location methods are based on the following basic methodologies. Traveling wave theory: the return time of reflected waves traveling from the fault point to the line ends is used as a function of the distance at which the fault occurred. Assessment of electric magnitudes at fundamental frequency: in these methodologies, voltage and/or current signals at the ends of the lines are measured, and the components at fundamental frequency are obtained for both fault and prefault conditions; proper use of these fundamental components will determine the fault distance. ANN-based techniques that use elements of artificial intelligence. The system featured in this article obtains the fault distance using ANN-based techniques. The optimum ANN structure was selected by means of the SARENEUR software tool.

ANN Structures
The first neural networks were presented in the 1950s: Hebb proposed his learning rule in 1949, and the perceptron was designed in 1958. The improvement of sequential computers has allowed their implementation on conventional computers, making their design and simulation easier. ANNs simulate the behavior of natural systems by means of the interconnection of basic processing units called neurons. Neurons are highly related to each other by means of links. A neuron can receive external signals or signals coming from other neurons, each affected by a factor called a weight. The output of the neuron is the result of applying a specific function, known as a transfer function, to the sum of its inputs plus a threshold value called a bias. With these general characteristics, it is possible to develop different network structures. One of the features that make ANNs so interesting is the ability to learn by means of examples. Once the network has learned how to solve a problem, it is able to properly resolve new situations that are different from those presented in the learning process. There are several learning rules, which can be classified in two main categories: supervised learning algorithms, in which the system situations and the expected responses are presented to the network, and unsupervised learning algorithms, in which only the system situations are presented. Therefore, an ANN is defined by the selection of:

ANN topology, which means the number of layers, the number of neurons in each layer, and the interconnections among them. Transfer function of each neuron. Initial weights and biases. Learning rules. Very often, the topology of the ANN is related directly to the learning rule, defining a neural network model. Among the important ones, we highlight the following: the multilayer perceptron (MLP) with the back propagation learning rule, competitive neural networks or Kohonen neural networks, and Hopfield neural networks. This wide range of possibilities brings about one problem: how to choose the ANN

structure that responds best, or at least in an adequate way, to a specific problem. Taking into account what type of problem we are dealing with and its specific conditions, the most adequate network structure will differ, since there are different types of neural networks. Within the same type of neural network, there are several configurations: number of layers, number of neurons in each layer, transfer functions, and learning rules.
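The defining elements above (topology, transfer function of each neuron, weights, and a bias per neuron) can be illustrated with a tiny multilayer perceptron; the weights and biases below are arbitrary placeholders, not trained values.

```python
import math

# A neuron's output: transfer(weighted sum of its inputs + bias).
def layer(inputs, weights, biases, transfer):
    return [transfer(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Topology 2-2-1: two inputs, two tanh hidden neurons, one linear output.
def mlp(x):
    hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.5], math.tanh)
    output = layer(hidden, [[1.0, 1.0]], [0.0], lambda v: v)  # linear output
    return output[0]

print(mlp([0.3, 0.7]))  # -tanh(0.4), about -0.38
```

Changing any one of the four defining choices (topology, transfer functions, weights/biases, or the rule used to adjust them) yields a different network, which is exactly the selection problem the article addresses.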

This article presents an automatic selection system that allows us to solve a problem or to select those ANNs that solve the problem in compliance with several conditions.

ANN Automatic Selection System


The automatic selection system was developed using the MATLAB package, which includes a specific toolbox for the design and simulation of neural networks. The system is structured in three modules, which are linked as shown in the figure: the input interface, which aims at obtaining, processing, and preparing the necessary data for network training and checking; verification and selection, which generates the various networks and checks their operation; and the results display, which shows the results provided by the verification-and-selection module.

SARENEUR is a software package of general application, which allows us to determine the ANN structure for any type of problem that accepts the application of neural networks in its resolution.

Input Interface

The input interface prepares the data necessary for network training and checking. This input interface is made up of several modules that, depending on the type of network to be studied, obtain the input and output data matrices that will be used in the training and checking processes of the different ANN structures. The larger the number of examples presented to the network, the better the results. This is true, but only to some extent, because if the amount of information given in the learning process is very large, it may cause memorization, so that the ANN learns the examples and loses its ability to solve a general situation. Moreover, increasing the number of examples means more learning time and requires more powerful systems. SARENEUR allows the selection of the percentage of data to use in the training process.
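The percentage-based split described above might look roughly as follows; the 80% training fraction is an illustrative choice, not a figure from the article.

```python
import random

# Split the available examples into a training set and a checking set,
# holding data back so memorization can be detected, as described above.
def split_examples(examples, train_fraction=0.8, seed=0):
    shuffled = examples[:]                 # leave the caller's list intact
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]  # (training set, checking set)

train, check = split_examples(list(range(100)))
print(len(train), len(check))  # 80 20
```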

Verification and selection


This module trains and automatically checks the networks using the data provided by the interface. It works in two ways. Verifying a specific ANN: in this case, the network will be trained and its operation will be checked, showing the obtained results. The quality of these results is measured taking into account some parameters (highest error, average error, learning time, number of iterations, etc.). Selecting those ANNs that are in compliance with some conditions: in this mode, different network structures will be trained and checked, selecting those which are in compliance with the restrictions imposed by the user (network structure, training type, transfer functions, etc.). It follows a simple procedure: beginning with a basic network structure, it is expanded until obtaining those network structures that fulfill certain quality criteria. The network quality is fixed according to some parameters selected by the user (highest error, average error, learning time, number of iterations during the training, etc.). Due to the modular features of the developed application, the available structures could be expanded to other problems that may develop in the future. The available neural network structures within the developed system are:

Back propagation. LVQ networks. Radial basis functions. Self-organized networks. Competitive learning networks. In all cases, the user can choose the characteristic parameters. The learning process ends when all conditions established by the user (maximum error, maximum training time, etc.) are fulfilled.
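A learning loop with user-established stopping conditions of this kind can be sketched as follows; the one-weight "network" fitting y = 2x is a stand-in for a real ANN, used only to show the stopping logic.

```python
# Sketch of a training loop that stops when any user-set condition is met:
# an error goal (maximum error) or an iteration limit.
def train_until(max_error=1e-3, max_iters=1000, lr=0.1):
    samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target: y = 2x
    w = 0.0                                          # single stand-in weight
    for iteration in range(1, max_iters + 1):
        error = max(abs(y - w * x) for x, y in samples)
        if error <= max_error:            # user-set error goal reached
            return w, iteration, error
        for x, y in samples:
            w += lr * (y - w * x) * x     # gradient-style weight update
    return w, max_iters, error            # stopped by the iteration limit

w, iters, err = train_until()
print(round(w, 3), iters)  # w converges to about 2.0 within the error goal
```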

Result Display
This module shows the results provided by the selection and verification module. First the parameters that define the problem to solve are presented. These parameters are the ANN structure (type of structure, topology, transfer functions, etc.), the learning algorithms, the learning parameters, and the examples required in the training and verification phases. Furthermore, this module shows the behavior of the selected ANN. The results shown are expressed in terms of average error, training time, or structure of the optimum network.

Results of Application to Fault Location


The ANN structure selection system has been applied to the specific case of obtaining the most proper structure to solve the problem of fault location in two-terminal transmission lines. Therefore, we want to obtain the network structure that, taking into account the magnitudes of the 50/60 Hz components at the reference end, provides the most accurate fault distance. Thus the inputs to the neural network will be the voltage and current magnitudes in both fault and prefault situations. Due to the characteristics of the analyzed problem, using a supervised learning algorithm is recommended. So, together with the variables that define the inputs, the fault location p, in p.u. referred to the total line length, has to be given as output. The selected ANN structure should be able to determine the fault distance for any fault resistance that is not higher than an established value. A fault simulator developed with MATLAB gives the voltage and current magnitudes needed in the training phase and in the test. The SARENEUR input interface takes the values of voltage and current magnitudes and the values of the fault distance given by the simulator. Afterwards, the input interface prepares the data for training, putting the remaining data aside to use in the test process.

Once the data is selected, the network selection and verification module obtains the network that best performs in the determination of the fault distance. As the figure shows, the automatic system requires definition of the network type, rule, and learning parameters needed to accept a network structure. The parameters can be modified manually or automatically. Once these parameters have been defined, the system initiates the training process for the ANN structures that fulfill the given requirements. When this process is complete, the results display module shows the network structures that fulfill the training and validation parameters. Figure 5 presents the results, where the characteristics of the ANN structure are shown: learning rule, training time, and errors obtained in the training phase. It also shows the error goal and the number of iterations needed to achieve it. In the same way, the network behavior in the checking phase is displayed. The search process checks each type of ANN with different training options, in order to select the ANN structure that best operates in solving the presented problem. This example shows that the ANN structure that operates best in the determination of fault distance in a two-terminal overhead transmission line is an MLP with the following characteristics: three layers, with six neurons in the input layer, six neurons in the hidden layer, and one neuron in the output layer; the transfer function of the hidden layer is tansig, whereas in the output layer the preferred function is linear (tansig is the name used by MATLAB to refer to the tan-sigmoid transfer function); and the learning algorithm is back propagation, based on the Levenberg-Marquardt optimization technique. The results obtained by applying the automatic selection system have been tested on numerous transmission lines belonging to the Spanish electric power system, fed from both ends. Table 1 lists several of these lines, with different lengths and voltage levels.
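The selected structure (six inputs, six tansig hidden neurons, one linear output giving the fault distance) can be sketched as a forward pass; the weights below are random placeholders, whereas in the article they would be obtained by Levenberg-Marquardt training.

```python
import math
import random

rng = random.Random(1)                     # placeholder, untrained weights

def tansig(v):
    return math.tanh(v)                    # MATLAB's tan-sigmoid transfer

# 6-6-1 topology: six inputs, six hidden neurons, one output neuron.
W_hidden = [[rng.uniform(-1, 1) for _ in range(6)] for _ in range(6)]
b_hidden = [rng.uniform(-1, 1) for _ in range(6)]
W_out = [rng.uniform(-1, 1) for _ in range(6)]
b_out = rng.uniform(-1, 1)

def fault_distance(inputs):
    # inputs: six voltage/current magnitudes at the reference end
    hidden = [tansig(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(W_hidden, b_hidden)]
    return sum(w * h for w, h in zip(W_out, hidden)) + b_out  # linear output
```

With trained weights, the single linear output would be read as the fault location p in p.u. of the line length.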
The fault current and voltage modules at the reference end were entered as input data in the training process. For the sake of proper network training, these magnitudes were considered as containing errors of 3% in faulty phases and 1% in sound phases. All of these magnitudes, both with and without errors, are used to form the input data for training. Figure 6 shows the average and maximum errors in fault distance determination for single-phase faults on the different lines, considering each reading used in training.

Once the proper operation of the network was verified by testing with the data used in training, the method was checked with new data from a simulation carried out for fault distance increments of 1% and fault resistance increments of 1 Ω, from 0 Ω to 40 Ω. The error levels considered on the input data were also 3% in faulty phases and 1% in sound phases. Figure 7 shows the average and maximum errors for single-phase faults on different lines, considering the indicated inputs (with and without errors). The average error in fault distance determination was found to be less than 0.3%, and the maximum error in some situations was no more than 2.5%. The results presented refer to single-phase faults, which account for most of the faults in transmission lines. Similar results have been obtained for two-phase, two-phase-to-earth, and symmetrical three-phase faults.
