
Design of MAC Unit in Artificial Neural Network

Architecture using Verilog HDL


L. Ranganath (P.G. Scholar), D. Jay Kumar (Associate Professor), P. Siva Nagendra Reddy (Assistant Professor)
Department of ECE, Kuppam Engineering College, Kuppam, Chittoor, A.P., India
meetranganath@gmail.com, djayakumar303@gmail.com, snreddy715@gmail.com

ABSTRACT:

An artificial neural network (ANN) is an information-processing structure consisting of processing units. The processing unit decides whether the network is efficient or not, so an efficient processing unit that also provides better performance needs to be designed. The processing unit consists of a MAC (multiplication and accumulation) unit and an activation unit. In the existing system, the MAC unit was designed with a Booth multiplier and a carry look-ahead adder; this processing unit introduces delay and consumes more area and power. To overcome these drawbacks, a new processing unit was designed using a Vedic multiplier with a square root carry select adder (SQRT-CSLA). The proposed design overcomes the drawbacks of the existing system and also provides better performance for the entire network. The activation function unit was designed with the sigmoid neuron. The entire processing unit was implemented and verified using the Verilog HDL language.

Keywords: Artificial Neural Networks (ANN), MAC, Vedic multiplier, SQRT-CSLA, Booth multiplier, Verilog HDL.

I. INTRODUCTION

A neural network is a set of interconnected processing elements. The processing elements of the network store the interconnection strengths, or weights. Neural networks are widely used as statistical analysis and data modeling techniques; examples include image and speech recognition, character recognition, financial prediction, and geological survey. The input of a neural network is a high-dimensional, discrete or real-valued function, and in the same way the output is a discrete or real vector-valued function.

The artificial neuron computational model is similar to the natural neuron. A natural neuron receives signals through synapses located on the membrane of the neuron; when the received signals are strong enough, the neuron is activated and emits a signal through the axon, which travels to other synapses and might activate other neurons. In the artificial neuron model, the inputs (synapses) are multiplied by weights and then passed through a mathematical function that determines the activation of the neuron and computes its output. ANNs combine artificial neurons in order to process information, using a parallel distributed representation of the information stored in the network. Normally the neural network model takes input samples and produces output samples; the relationship between the input and the output is determined by the network.

An ANN is characterized by a large number of simple, neuron-like processing elements, and a network model is specified by its processing units and its topology. The processing units are generally the MAC and activation units, and the performance of the network depends on the processing unit. Two topologies exist in ANNs: feed-forward networks and feedback (recurrent) networks. Feed-forward networks consist of single-layer networks, multilayer perceptrons, and radial basis function networks, while feedback networks consist of competitive networks, Hopfield networks, and ART models.

In a feed-forward network, the input is fed directly to the processing unit and, after processing is complete, forwarded to the output unit; the output of a feed-forward network depends purely on the present input, not on previous ones. A feedback network differs from a feed-forward one in that its output also depends on past outputs: the output of the previous stage is taken as feedback and given to the input unit. Feed-forward networks are applied to develop nonlinear models used for pattern recognition and classification.

The rest of the paper is organized as follows. Section II introduces related work on neural networks and the processing unit. Section III presents the architecture of artificial neural networks. Section IV deals with the processing unit of the neural network. Simulation results are given in Section V. Section VI contains the performance evaluation and comparison, and Section VII deals with the conclusion and future scope.

II. RELATED WORKS

Saman Razavi and Bryan A. Tolson [1] noted that the feed-forward neural network is one of the most
commonly used approximation functions; the technique is applied to a variety of problems from various fields. A feed-forward neural network, also known as a multilayer perceptron with one hidden layer, is adequate to approximate any given function, while a network with more than one hidden layer may require fewer hidden neurons to approximate the same function. The reformulated neural network (ReNN) is equivalent to the common feed-forward neural network but has a less complex error, and the ReNN approach can reduce the complexity of the network error.

Richard L. Welch et al. [2] compared multilayer perceptron (MLP) neural networks, Elman feedback neural networks, and simultaneous feedback neural networks, each trained and tested using meteorological data. The MLP network is a member of the feed-forward family and contains three layers: input, hidden, and output. The input layer, with a linear activation function, is fed the input values, which are multiplied by an input weight matrix; the result is forwarded to the hidden layer, multiplied by an output weight matrix, and finally fed to the output layer.

Premananda B.S. et al. [3] proposed an 8-bit multiplier that uses Vedic mathematics to generate the partial products, with the partial-product addition in the Vedic multiplier performed by a carry skip adder. The 8-bit Vedic multiplication is realized using 4-bit multipliers and a ripple carry adder. Speed is the main constraint in any multiplication process, and it can be increased by reducing the number of steps in the computation; since the speed of the multiplier determines the efficiency of the system, the multiplier is the critical component of the MAC processing unit.

Partial-product addition is one of the most important steps in multiplication, so an efficient adder is needed in the multiplier to achieve accuracy. Damarla Paradhasaradhi and K. Anusudha [4] proposed the square root carry select adder (SQRT CSLA), one of the fastest adders used in many data-processing processors to perform arithmetic functions. Their SQRT CSLA uses a Binary to Excess-1 Converter (BEC) instead of an RCA. The CSLA achieves lower delay at the cost of a slight increase in area; as the speed of the adder increases, the performance of the MAC unit increases automatically.

III. ARTIFICIAL NEURAL NETWORKS (ANNS)

A. NEURAL NETWORK ARCHITECTURE

An artificial neural network is a parallel information-processing structure consisting of processing units. The term "neural network" was changed to "artificial neural network" because the model does not deal with biological neural networks. ANNs map onto a general computing architecture known as Multiple Instruction Multiple Data (MIMD) parallel processing.

Figure 1 shows the architecture of a simple neural network. It contains three layers: input, hidden, and output. Input data is fed forward to the output via the hidden layer, and in between the input and output, processing is performed with the help of the processing unit.

[Figure 1. Simple Neural Network: input layer, hidden layer, and output layer]

B. FRAMEWORK FOR ANN MODEL

There are different ANN models, but each model can be specified by the following aspects:
• A set of processing units
• A state of activation for each unit
• An output for each unit
• Topology of the network
• An activation rule to update the activities of each unit
• An external environment that provides information to the network
• A learning rule to modify the structure of connectivity using information provided by the external environment

After the processing of information, the output function uses the activation value to calculate the output of the unit. A simple artificial neuron shows how processing is done in the MAC unit; the MAC operation is crucial to obtaining accurate results from the neural network.

Figure 2 shows a simple artificial neuron in the artificial neural network. It contains a multiplication and an accumulation unit. Each input is multiplied by its weight individually, the multiplier outputs are summed by the addition unit, and the sum is forwarded to the activation unit, which produces the output 0 or 1 based on the threshold.

[Figure 2. Simple Artificial Neuron: weighted inputs, summation, threshold, output 0 or 1]
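The multiply-accumulate-threshold flow of the simple artificial neuron can be sketched behaviorally. The following is an illustrative Python model of the data flow just described, not the paper's Verilog implementation:

```python
def artificial_neuron(inputs, weights, threshold):
    """Behavioral model of the simple artificial neuron:
    multiply each input by its weight, accumulate, then threshold."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w          # MAC: multiply and accumulate
    return 1 if acc >= threshold else 0

# Example: two weighted inputs against a threshold of 10
print(artificial_neuron([3, 4], [2, 1], 10))  # 3*2 + 4*1 = 10 -> 1
```

In hardware, the multiply and accumulate steps map onto the MAC unit and the final comparison onto the activation unit.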
IV. PROCESSING UNIT (MAC)

The multiplication and accumulation (MAC) unit is one of the processing units in the neural network, and the accuracy of the network is obtained only on the basis of the MAC's performance. Here the MAC operation is performed using a Vedic multiplier with a SQRT-CSLA adder.

a. Vedic multiplier

Multiplication is one of the most important arithmetic operations in signal-processing applications, and speed and accuracy are the main constraints in the multiplication process [10]. Speed can be gained by reducing the number of computation steps, and the Vedic multiplier is an efficient multiplication technique in this respect.

Figure 3 shows the architecture of the 8-bit Vedic multiplier. It is designed with four 4 x 4 Vedic multipliers, each performing its operation separately [11], together with square root carry select adders (SQRT-CSLA). Each 8-bit input sequence is divided into two 4-bit numbers, so the inputs to the 4-bit multipliers are a[7:4] & b[7:4], a[3:0] & b[7:4], a[7:4] & b[3:0], and a[3:0] & b[3:0]. The intermediate partial products are added using three modified 8-bit SQRT-CSLA adders, finally giving the 16-bit multiplication output.
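The 4-bit decomposition above can be checked with a short behavioral model. This Python sketch mirrors the Figure 3 data flow (four 4 x 4 products combined with shifted additions); it is illustrative only and does not model the gate-level Verilog:

```python
def vedic_8x8(a, b):
    """Behavioral model of the 8-bit Vedic multiplier:
    split each operand into 4-bit halves, form four 4 x 4 partial
    products, and combine them with shifted additions (the role the
    three 8-bit SQRT-CSLA adders play in hardware)."""
    a_lo, a_hi = a & 0xF, (a >> 4) & 0xF
    b_lo, b_hi = b & 0xF, (b >> 4) & 0xF
    p0 = a_lo * b_lo          # a[3:0] * b[3:0]
    p1 = a_hi * b_lo          # a[7:4] * b[3:0]
    p2 = a_lo * b_hi          # a[3:0] * b[7:4]
    p3 = a_hi * b_hi          # a[7:4] * b[7:4]
    return p0 + ((p1 + p2) << 4) + (p3 << 8)   # 16-bit product

# The decomposition agrees with ordinary multiplication for all inputs:
assert all(vedic_8x8(a, b) == a * b for a in range(256) for b in range(256))
```

The exhaustive check at the end confirms that the split-and-shift combination is exactly equivalent to an ordinary 8 x 8 multiplication.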

b. SQRT-CSLA Adder

Low carry-propagation delay and low complexity are recognized as the key requirements of every addition circuit [10]. To achieve an efficient output, the proposed SQRT-CSLA structure has been designed. SQRT-CSLA adder circuits are classified into two types based on how the carry-selected sums are generated: a) dual-RCA-based SQRT CSLA and b) BEC-based SQRT CSLA.

In the dual-RCA (ripple carry adder) based SQRT CSLA circuit, each group has a pair of RCAs providing the carry-select signals. This circuit is disadvantageous because of its increasing propagation delay. To overcome the problem, a Binary to Excess-1 Converter (BEC) circuit has been adopted in the SQRT-CSLA adder.

Figure 4 shows the architecture of the BEC-based SQRT CSLA; it contains BECs, RCAs, and multiplexers. Half adders, full adders, and multiplexers provide the partial addition results. The BEC circuits provide the same function as an RCA but with a different architecture and a lower gate count.
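One carry-select group of the BEC scheme can be modeled behaviorally as follows. This Python sketch works under the usual assumption behind the BEC scheme, namely that the cin = 1 sum equals the cin = 0 RCA result plus one; it is an illustration, not the paper's circuit:

```python
def bec(value, width):
    """Binary to Excess-1 Converter: adds 1 modulo 2**width, with the
    overflow bit returned as a carry (replaces the second, cin=1 RCA)."""
    total = value + 1
    return total & ((1 << width) - 1), (total >> width) & 1

def csla_group(a, b, cin, width):
    """One group of the BEC-based SQRT-CSLA: the RCA computes the
    cin=0 sum, the BEC derives the cin=1 sum from it, and the mux
    picks one of the two based on the incoming carry."""
    rca = a + b                                   # cin = 0 ripple sum
    sum0, c0 = rca & ((1 << width) - 1), rca >> width
    sum1, c1_ovf = bec(sum0, width)               # cin = 1 sum via BEC
    c1 = c0 | c1_ovf                              # carry-out for cin = 1
    return (sum1, c1) if cin else (sum0, c0)      # the selection mux

# 4-bit group example: 9 + 7 with incoming carry 1 -> sum 1, carry-out 1
print(csla_group(9, 7, 1, 4))
```

Because the BEC only increments the already-computed RCA sum, the duplicated ripple adder of the dual-RCA scheme is avoided, which is where the gate-count saving comes from.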

c. Activation function unit

The activation function of a single neuron in the artificial neural network determines the output of that neuron. Binary threshold neurons, sigmoid neurons, and rectified linear neurons are the different activation functions used in neural networks, and the output function varies with the activation function chosen. The sigmoid neuron is described by

y = 1 / (1 + e^(-z))

where y denotes the output function and z denotes the impulse response (the accumulated input) of the neuron. The sigmoid neuron is used as the activation function of the network because this function is efficient compared to the other activation functions.
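The sigmoid activation y = 1/(1 + e^(-z)) can be sketched directly. This is a plain Python model for illustration; a hardware activation unit would typically approximate the curve with a lookup table or a piecewise-linear circuit:

```python
import math

def sigmoid(z):
    """Sigmoid activation: maps the accumulated MAC result z to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# The curve of Figure 5: 0.5 at z = 0, saturating toward 0 and 1
print(sigmoid(0))    # 0.5
```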
[Figure 3. Block Diagram of the 8 x 8 Vedic Multiplier: four 4 x 4 Vedic multipliers take the operand halves a[7:4]/a[3:0] and b[7:4]/b[3:0]; three 8-bit SQRT-CSLA adders combine the partial products into the product P[15:0]]

[Figure 4. Architecture of the BEC-based SQRT CSLA: operand groups A/B[15:11], [10:7], [6:4], [3:2], and [1:0] feed 5-, 4-, 3-, 2-, and 2-bit RCAs; 6-, 5-, 4-, and 3-bit BECs with 12:6, 10:5, 8:4, and 6:3 muxes select Sum[15:11] down to Sum[1:0] and Cout]


[Figure 5. Sigmoid Neuron: output curve rising from 0 through 0.5 at the origin and saturating at 1]

V. SIMULATION RESULTS
Simulation was carried out using the ModelSim XE III 6.3c simulator, and parameters such as area, delay, and power were analyzed using Xilinx ISE 10.1. The output of the Vedic multiplier is the same as that of other multipliers, but its speed and accuracy are higher. The results shown in Figure 6 contain different combinations of inputs and the corresponding outputs produced for them.

Figure. 6. Simulation Output

VI. PERFORMANCE EVALUATION AND COMPARISON

Table 1. Comparison of area and delay between the existing and proposed systems

Parameters    Existing Booth Multiplier    Proposed Vedic Multiplier
LUTs          764                          717
Slices        402                          397
Delay (ns)    19.114                       19.1


Figure 7 shows the difference between the existing Booth multiplier and the proposed Vedic multiplier in terms of LUTs, slices, and delay.

[Figure 7. Graphical representation of area (LUTs, slices) and delay (ns) for the Booth and Vedic multipliers]

VII. CONCLUSION AND FUTURE SCOPE

Artificial neural networks are used in many applications. The MAC unit is one of the processing units in the artificial neural network, and it decides whether the output function is efficient or not. A new MAC unit was therefore designed with the help of a Vedic multiplier and SQRT-CSLA. It produces accurate and efficient output compared to the existing Booth multiplier with carry look-ahead adder. The proposed MAC increases the speed of the neural network, and since the MAC operation performs well, the performance of the entire network also increases.

In the future, the multiplier circuit can be designed using reversible logic gates, which consume less power than ordinary logic gates. Applying this technique to the neural network should give better results.

REFERENCES
[1]. Saman Razavi and Bryan A. Tolson, "A new formulation for feedforward neural networks", IEEE Transactions on Neural Networks, vol. 22, October 2011.
[2]. Richard L. Welch et al., "Comparison of feedforward and feedback neural network architectures for short term wind speed prediction", International Joint Conference on Neural Networks, June 2009.
[3]. Premananda B.S. et al., "Design and implementation of 8-bit Vedic multiplier", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 2, Issue 12, December 2013.
[4]. Damarla Paradhasaradhi and K. Anusudha, "An area efficient SQRT carry select adder", International Journal of Engineering Research and Applications, vol. 3, Issue 6, December 2013.
[5]. Hariprasath S. and T.N. Prabakar, "FPGA implementation of multilayer feed forward neural network architecture using VHDL".
[6]. S. Coric, I. Latinovic and A. Pavasovic, "A neural network FPGA implementation", IEEE, 2000.
[7]. G. Ganesh Kumar and V. Charishma, "Design of high speed Vedic multiplier using Vedic mathematics techniques", International Journal of Scientific and Research Publications, vol. 2, Issue 3, March 2012.
[8]. K. Saranya, "Low power and area efficient carry select adder", International Journal of Soft Computing and Engineering, vol. 2, Issue 6, January 2013.
[9]. B. Ramkumar and Harish M. Kittur, "Low power and area efficient carry select adder", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 20.
[10]. P. Siva Nagendra Reddy and A.G. Murali Krishna, "Implementation of RISC processor for convolution applications", International Journal of Computer Trends and Technology (IJCTT), vol. 4, Issue 6, 2013, ISSN: 2231-2803.
[11]. R. Naresh Naik, P. Siva Nagendra Reddy and K. Madan Mohan, "Design of Vedic multiplier for digital signal processing applications", International Journal of Engineering Trends and Technology (IJETT), vol. 4, Issue 7, 2013, ISSN: 2231-5381.