
AC MOTOR SPEED CONTROL

USING PWM TECHNIQUE

CONTENTS

CHAPTER NO    DESCRIPTION

1             ABSTRACT
2             INTRODUCTION
3             BLOCK DIAGRAM
4             BLOCK DIAGRAM DESCRIPTION
              4.1 MICROCONTROLLER
              4.2 DAC
              4.3 ZERO CROSSING DETECTOR
              4.4 RAMP GENERATOR
              4.5 OPTOCOUPLER
              4.6 TRIAC
5             COMPONENTS DETAILED EXPLANATION
              5.1 MICROCONTROLLER
              5.2 POWER SUPPLIES
              5.3 RAMP GENERATOR
              5.4 ZERO CROSSING DETECTOR
              5.5 DAC
              5.6 OPTOCOUPLER
              5.7 TRIAC
6             OVERALL CIRCUIT DIAGRAM
7             OVERALL CIRCUIT DIAGRAM DESCRIPTION
8             PCB DESIGN
              8.1 INTRODUCTION
              8.2 MANUFACTURING
              8.3 SOFTWARE
              8.4 PANELISATION
              8.5 DRILLING
              8.6 PLATING
              8.7 ETCHING
              8.8 SOLDER MASK
              8.9 HOT AIR LEVELING
9             SOFTWARE TOOLS
              9.1 KEIL
              9.2 ASSEMBLING & RUNNING AN 8051 PROGRAM
10            ADVANTAGES
11            APPLICATIONS
12            CONCLUSION
13            BIBLIOGRAPHY

1. Abstract
AC MOTOR SPEED CONTROL USING PWM METHOD
This AC motor speed controller can handle most universal type (brushed) AC
motors and other loads up to about 250W. It works in much the same way as a light
dimmer circuit: by chopping part of the AC waveform off to effectively control the
voltage. Because of this functionality, the circuit will work for a wide variety of loads
including incandescent light bulbs, heating elements, brushed AC motors and some
transformers. The circuit tries to maintain a constant motor speed regardless of load, so
it is also ideal for power tools. Note that the circuit can only control brushed AC motors;
induction motors require a variable frequency drive.
We have employed an embedded microcontroller for controlling the speed of the AC
motor. This project uses the TRIAC to change the firing angle, so that we can vary the
speed of the AC motor. During operation the speed can be varied instantly by changing
the speed-set switches, and the response is controlled based on pulse width
modulation.

2. Introduction:
This application note describes a traditional design solution for controlling a single-phase
motor or any AC load based on phase-angle adjustment with a TRIAC or AC switch and a
microcontroller as a driver. The electronic driver with a TRIAC and microcontroller
presented in this document is cost effective and easy for designers to implement.
These electronic devices are typically used in home appliances or in industrial
applications for various purposes, such as motor regulation in washing machines, vacuum
cleaner control, light dimming in lamps, heating in coffee machines, or motor
regulation in ventilators. Analog solutions are being progressively replaced by
microcontroller designs even in low-cost applications.
Advantages include fewer external components, easy adaptation through simple software
modifications, the ability to implement the control interface with either a potentiometer
or a keypad, easy feedback implementation, and flexibility. An analog IC's functionality
is tied to its application and the designer is limited to fixed device functions.

3. Block diagram:

Opto Coupler

TRIAC

Op - Amp

Microcontroller

Zero crossing
Detector

Speed set Switches

Ramp
Generator

DAC

4. Block Diagram Explanation:


4.1 Microcontroller:
The microcontroller starts by reading the dimmer set value decoded from the touchpad.
The control value is typically an 8-bit number, where 0 means the motor is off and 255
means the motor is fully on.
The microcontroller can easily generate the necessary trigger signal using the following
algorithm:
1. Convert the motor value to a software loop count number.
2. Wait for a zero crossing.
3. Run a software loop which waits the necessary time until it is time to trigger the
triac.
4. Send a pulse to the triac circuit to trigger the triac into conduction.
A software loop is a quite simple and useful method if the time needed to execute each
microcontroller command is well defined. Another possibility is to utilize the
microcontroller timers (a minimal code sketch of this interrupt-driven approach follows
below):
Generate an interrupt at every zero crossing and at every timer overflow.
At every zero crossing the microcontroller loads the delay value into the timer
and starts counting.
When the counted time has elapsed the timer generates an interrupt. The timer interrupt
routine sends a trigger pulse to the triac circuit.
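The following is a minimal 8051 assembly sketch of that interrupt-driven scheme, not the
project's actual firmware: it assumes the zero-crossing pulse drives INT0 (P3.2), the
opto-coupler/TRIAC gate is driven from P1.0, and the firing delay computed from the
speed-set switches is held in R7/R6. All pin and register choices here are illustrative.

        ORG  0000h
        LJMP MAIN
        ORG  0003h           ;External Interrupt 0 vector: mains zero crossing
        LJMP ZC_ISR
        ORG  000Bh           ;Timer 0 overflow vector: firing instant reached
        LJMP T0_ISR

ZC_ISR: CLR  TR0
        MOV  TH0,R7          ;reload firing delay (65,536 minus the delay count)
        MOV  TL0,R6
        SETB TR0             ;Timer 0 now counts up towards the firing instant
        RETI

T0_ISR: CLR  TR0
        SETB P1.0            ;gate pulse to the opto-coupler / TRIAC driver
        MOV  R5,#20
GATE:   DJNZ R5,GATE         ;hold the pulse for a few tens of microseconds
        CLR  P1.0
        RETI

MAIN:   MOV  TMOD,#01h       ;Timer 0 in 16-bit mode
        SETB IT0             ;INT0 triggered on the falling edge of the zero-cross pulse
        MOV  IE,#83h         ;EA + ET0 + EX0: enable the two interrupts used above
        SJMP $               ;main loop: update R7/R6 from the speed-set switches here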

4.2 DAC:
In electronics, a digital-to-analog converter (DAC or D-to-A) is a device that converts a
digital (usually binary) code to an analog signal (current, voltage, or electric charge). An
analog-to-digital converter (ADC) performs the reverse operation. Signals are easily
stored and transmitted in digital form, but a DAC is needed for the signal to be
recognized by human senses or other non-digital systems.
4.3 Zero crossing detector:
A zero crossing detector detects the transition of the mains waveform through zero volts,
ideally providing a narrow pulse that coincides exactly with the zero voltage condition.
4.4 Ramp generator:
A ramp generator is a function generator whose output voltage increases linearly up to a
specific value; the resulting waveform is called a ramp.
4.5 Opto-coupler:
There are many situations where signals and data need to be transferred from one
subsystem to another within a piece of electronic equipment, but without making a direct
ohmic electrical connection.
Often this is because the source and destination are at very different voltage levels, like a
microcontroller which is operating from 5V DC but being used to control a TRIAC which
is switching 240V AC.
In such situations the link between the two must be an isolated one, to protect the
microcontroller from overvoltage damage.
A relay can of course provide this kind of isolation, but it is only capable of low-speed
operation. Where small size, higher speed and greater reliability are needed, the usual
solution is an opto-coupler.
4.6 TRIAC:
TRIAC, from Triode for Alternating Current, is a generalized trade name for an
electronic component that can conduct current in either direction when it is triggered
(turned on), and is formally called a bidirectional triode thyristor or bilateral triode
thyristor.
TRIACs are used in many applications such as light dimmers, speed controls for electric
fans and other electric motors, and in the modern computerized control circuits of many
household small and major appliances.

5. COMPONENTS DETAILED EXPLANATION:


5.1 MICROCONTROLLER
A microcontroller (also MCU or µC) is a functional computer system-on-a-chip.
It contains a processor core, memory, and programmable input/output peripherals.
Microcontrollers include an integrated CPU, memory (a small amount of RAM, program
memory, or both) and peripherals capable of input and output. Microcontrollers are used
in automatically controlled products and devices.

BASICS:
A designer will use a Microcontroller to
Gather input from various sensors
Process this input into a set of actions
Use the output mechanisms on the Microcontroller to do something useful.
MEMORY TYPES:
RAM:
Random access memory.
RAM is a volatile memory; its contents are lost when power is removed.
It is general purpose memory that can store data or programs.

ROM:
Read only memory.
ROM is a non-volatile memory.
It is typically programmed at the factory to hold certain values that cannot be changed.
Ex: CD-ROM.

ARCHITECTURE OF AT89S52

8051 Architecture:
8051 Architecture contains the following:
CPU
ALU
I/O ports
RAM
ROM
2 Timers/Counters
General Purpose registers
Special Function registers
Crystal Oscillators
Serial ports
Interrupts
PSW
Program Counter
Stack pointer

8051 Addressing Modes


An "addressing mode" refers to how you are addressing a given memory location. In
summary, the addressing modes are as follows, with an example of each:
Immediate Addressing    MOV A,#20h
Direct Addressing       MOV A,30h
Indirect Addressing     MOV A,@R0
External Direct         MOVX A,@DPTR
Code Indirect           MOVC A,@A+DPTR
Each of these addressing modes provides important flexibility.
Immediate Addressing
Immediate addressing is so-named because the value to be stored in memory immediately
follows the operation code in memory. That is to say, the instruction itself dictates what
value will be stored in memory.

For example, the instruction:


MOV A,#20h
This instruction uses Immediate Addressing because the Accumulator will be loaded with
the value that immediately follows; in this case 20 (hexadecimal).
Immediate addressing is very fast since the value to be loaded is included in the
instruction. However, since the value to be loaded is fixed at compile-time it is not very
flexible.
Direct Addressing
Direct addressing is so-named because the value to be stored in memory is obtained by
directly retrieving it from another memory location. For example:
MOV A,30h
This instruction will read the data out of Internal RAM address 30 (hexadecimal) and
store it in the Accumulator.
Direct addressing is generally fast since, although the value to be loaded isn't included in
the instruction, it is quickly accessible since it is stored in the 8051's Internal RAM. It is
also much more flexible than Immediate Addressing since the value to be loaded is
whatever is found at the given address--which may be variable.
Also, it is important to note that when using direct addressing any instruction which
refers to an address between 00h and 7Fh is referring to Internal Memory. Any
instruction which refers to an address between 80h and FFh is referring to the SFR
control registers that control the 8051 microcontroller itself.
The obvious question that may arise is, "If direct addressing an address from 80h through
FFh refers to SFRs, how can I access the upper 128 bytes of Internal RAM that are
available on the 8052?" The answer is: you can't access them using direct addressing. As
stated, if you directly refer to an address of 80h through FFh you will be referring to an
SFR. However, you may access the 8052's upper 128 bytes of RAM by using the next
addressing mode, "indirect addressing."
Indirect Addressing
Indirect addressing is a very powerful addressing mode which in many cases provides an
exceptional level of flexibility. Indirect addressing is also the only way to access the extra
128 bytes of Internal RAM found on an 8052.

Indirect addressing appears as follows:


MOV A,@R0
This instruction causes the 8051 to analyze the value of the R0 register. The 8051 will
then load the accumulator with the value from Internal RAM which is found at the
address indicated by R0.
For example, let's say R0 holds the value 40h and Internal RAM address 40h holds the
value 67h. When the above instruction is executed the 8051 will check the value of R0.
Since R0 holds 40h the 8051 will get the value out of Internal RAM address 40h (which
holds 67h) and store it in the Accumulator. Thus, the Accumulator ends up holding 67h.
Indirect addressing always refers to Internal RAM; it never refers to an SFR. Thus, in a
prior example we mentioned that SFR 99h can be used to write a value to the serial port.
Thus one may think that the following would be a valid solution to write the value 1 to
the serial port:
MOV R0,#99h     ;Load the address of the serial port
MOV @R0,#01h    ;Send 01 to the serial port -- WRONG!!
This is not valid. Since indirect addressing always refers to Internal RAM these two
instructions would write the value 01h to Internal RAM address 99h on an 8052. On an
8051 these two instructions would produce an undefined result since the 8051 only has
128 bytes of Internal RAM.
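For comparison, a one-line sketch (not from the original text) of a write that does reach
the serial port, using the direct addressing mode described earlier; SBUF is the standard
name for the SFR at address 99h:

MOV 99h,#01h    ;direct addressing of 80h-FFh selects the SFR space, so this writes 01h to the serial port (SBUF)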
External Direct
External Memory is accessed using a suite of instructions which use what I call "External
Direct" addressing. I call it this because it appears to be direct addressing, but it is used to
access external memory rather than internal memory.
There are only two commands that use External Direct addressing mode:
MOVXA,@DPTR
MOVX @DPTR,A
As you can see, both commands utilize DPTR. In these instructions, DPTR must first be
loaded with the address of external memory that you wish to read or write. Once DPTR
holds the correct external memory address, the first command will move the contents of
that external memory address into the Accumulator. The second command will do the
opposite: it will allow you to write the value of the Accumulator to the external memory
address pointed to by DPTR.

External Indirect
External memory can also be accessed using a form of indirect addressing which I call
External Indirect addressing. This form of addressing is usually only used in relatively
small projects that have a very small amount of external RAM. An example of this
addressing mode is:
MOVX @R0,A
Once again, the value of R0 is first read and the value of the Accumulator is written to
that address in External RAM. Since the value of @R0 can only be 00h through FFh the
project would effectively be limited to 256 bytes of External RAM. There are relatively
simple hardware/software tricks that can be implemented to access more than 256 bytes
of memory using External Indirect addressing; however, it is usually easier to use
External Direct addressing if your project has more than 256 bytes of External RAM.
8051 Program Flow
When an 8051 is first initialized, it resets the PC to 0000h. The 8051 then begins to
execute instructions sequentially in memory unless a program instruction causes the PC
to be otherwise altered. There are various instructions that can modify the value of the
PC; specifically, conditional branching instructions, direct jumps and calls, and "returns"
from subroutines. Additionally, interrupts, when enabled, can cause the program flow to
deviate from its otherwise sequential scheme.
Conditional Branching
The 8051 contains a suite of instructions which, as a group, are referred to as "conditional
branching" instructions. These instructions cause program execution to follow a non-sequential path if a certain condition is true.
Take, for example, the JB instruction. This instruction means "Jump if Bit Set." An
example of the JB instruction might be:
JB 45h,HELLO
NOP
HELLO: ....
In this case, the 8051 will analyze the contents of bit 45h. If the bit is set program
execution will jump immediately to the label HELLO, skipping the NOP instruction. If
the bit is not set the conditional branch fails and program execution continues, as usual,
with the NOP instruction which follows.


Conditional branching is really the fundamental building block of program logic since all
"decisions" are accomplished by using conditional branching. Conditional branching can
be thought of as the "IF...THEN" structure in 8051 assembly language.
An important note worth mentioning about conditional branching is that the program may
only branch to instructions located within 128 bytes prior to or 127 bytes following the
address which follows the conditional branch instruction. This means that in the above
example the label HELLO must be within +/- 128 bytes of the memory address which
contains the conditional branching instruction.
Direct Jumps
While conditional branching is extremely important, it is often necessary to make a direct
branch to a given memory location without basing it on a given logical decision. This is
equivalent to saying "Goto" in BASIC. In this case you want the program flow to
continue at a given memory address without considering any conditions.
This is accomplished in the 8051 using "Direct Jump and Call" instructions. As illustrated
in the last paragraph, this suite of instructions causes program flow to change
unconditionally.
Consider the example:
LJMP NEW_ADDRESS
.
.
.
NEW_ADDRESS: ....
The LJMP instruction in this example means "Long Jump." When the 8051 executes this
instruction the PC is loaded with the address of NEW_ADDRESS and program execution
continues sequentially from there.
The obvious difference between the Direct Jump and Call instructions and the conditional
branching is that with Direct Jumps and Calls program flow always changes. With
conditional branching program flow only changes if a certain condition is true.
There are two other instructions which cause a direct jump to occur: the SJMP and AJMP
commands. Functionally, these two commands perform the exact same function as the
LJMP command--that is to say, they always cause program flow to continue at the
address indicated by the command.
However, SJMP and AJMP differ in the following ways:

The SJMP command, like the conditional branching instructions, can only jump
to an address within +/- 128 bytes of the SJMP command.
The AJMP command can only jump to an address that is in the same 2k block of
memory as the AJMP command. That is to say, if the AJMP command is at code
memory location 650h, it can only do a jump to addresses 0000h through 07FFh
(0 through 2047, decimal).

You may be asking yourself, "Why would I want to use the SJMP or AJMP command
which have restrictions as to how far they can jump if they do the same thing as the
LJMP command which can jump anywhere in memory?" The answer is simple: The
LJMP command requires three bytes of code memory whereas both the SJMP and AJMP
commands require only two. Thus, if you are developing an application that has memory
restrictions you can often save quite a bit of memory using the 2-byte AJMP/SJMP
instructions instead of the 3-byte instruction.
Recently, I wrote a program that required 2100 bytes of memory but I had a memory
restriction of 2k (2048 bytes). I did a search/replace changing all LJMPs to AJMPs and
the program shrank down to 1950 bytes. Thus, without changing any logic whatsoever in
my program I saved 150 bytes and was able to meet my 2048 byte memory restriction.
NOTE: Some quality assemblers will actually do the above conversion for you
automatically. That is, they'll automatically change your LJMPs to SJMPs whenever
possible. This is a nifty and very powerful capability that you may want to look for in an
assembler if you plan to develop many projects that have relatively tight memory
restrictions.
Direct Calls
Another operation that will be familiar to seasoned programmers is the LCALL
instruction. This is similar to a "Gosub" command in Basic.
When the 8051 executes an LCALL instruction it immediately pushes the current
Program Counter onto the stack and then continues executing code at the address
indicated by the LCALL instruction.
Returns from Routines
Another structure that can cause program flow to change is the "Return from Subroutine"
instruction, known as RET in 8051 Assembly Language.


The RET instruction, when executed, returns to the address following the instruction that
called the given subroutine. More accurately, it returns to the address that is stored on the
stack.
The RET command is direct in the sense that it always changes program flow without
basing it on a condition, but is variable in the sense that where program flow continues
can be different each time the RET instruction is executed depending on from where the
subroutine was called originally.
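A minimal sketch tying LCALL and RET together (the label DELAY and the loop count
are made up for illustration):

        LCALL DELAY          ;push the return address on the stack and jump to DELAY
        SJMP  $              ;execution resumes here after the RET
DELAY:  MOV   R3,#200
WAIT:   DJNZ  R3,WAIT        ;simple software delay loop
        RET                  ;pop the return address and continue after the LCALL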
Interrupts
An interrupt is a special feature which allows the 8051 to provide the illusion of "multi-tasking," although in reality the 8051 is only doing one thing at a time. The word
"interrupt" can often be substituted with the word "event."
An interrupt is triggered whenever a corresponding event occurs. When the event occurs,
the 8051 temporarily puts "on hold" the normal execution of the program and executes a
special section of code referred to as an interrupt handler. The interrupt handler performs
whatever special functions are required to handle the event and then returns control to the
8051 at which point program execution continues as if it had never been interrupted.
The topic of interrupts is somewhat tricky and very important. For that reason, an entire
chapter will be dedicated to the topic. For now, suffice it to say that Interrupts can cause
program flow to change.
8051 Tutorial: Instruction Set, Timing, and Low-Level Info
In order to understand--and better make use of--the 8051, it is necessary to understand
some underlying information concerning timing.
The 8051 operates based on an external crystal. This is an electrical device which, when
energy is applied, emits pulses at a fixed frequency. One can find crystals of virtually any
frequency depending on the application requirements. When using an 8051, the most
common crystal frequencies are 12 megahertz and 11.059 megahertz--with 11.059 being
much more common. Why would anyone pick such an odd-ball frequency? There's a real
reason for it--it has to do with generating baud rates and we'll talk more about it in the
Serial Communication chapter. For the remainder of this discussion we'll assume that
we're using an 11.059 MHz crystal.
Microcontrollers (and many other electrical systems) use crystals to synchronize
operations. The 8051 uses the crystal for precisely that: to synchronize its operation.

Effectively, the 8051 operates using what are called "machine cycles." A single machine
cycle is the minimum amount of time in which a single 8051 instruction can be executed,
although many instructions take multiple cycles.
A cycle is, in reality, 12 pulses of the crystal. That is to say, if an instruction takes one
machine cycle to execute, it will take 12 pulses of the crystal to execute. Since we know
the crystal is pulsing 11,059,000 times per second and that one machine cycle is 12
pulses, we can calculate how many instruction cycles the 8051 can execute per second:
11,059,000 / 12 = 921,583
This means that the 8051 can execute 921,583 single-cycle instructions per second. Since
a large number of 8051 instructions are single-cycle instructions it is often considered
that the 8051 can execute roughly 1 million instructions per second, although in reality it
is less--and, depending on the instructions being used, an estimate of about 600,000
instructions per second is more realistic.
For example, if you are using exclusively 2-cycle instructions you would find that the
8051 would execute 460,791 instructions per second. The 8051 also has two really slow
instructions that require a full 4 cycles to execute--if you were to execute nothing but
those instructions you'd find performance to be about 230,395 instructions per second.
It is again important to emphasize that not all instructions execute in the same amount of
time. The fastest instructions require one machine cycle (12 crystal pulses), many others
require two machine cycles (24 crystal pulses), and the two very slow math operations
require four machine cycles (48 crystal pulses).
NOTE: Many 8051 derivative chips change instruction timing. For example, many
optimized versions of the 8051 execute instructions in 4 oscillator cycles instead of 12;
such a chip would be effectively 3 times faster than the 8051 when used with the same
11.059 MHz crystal.
Since all the instructions require different amounts of time to execute a very obvious
question comes to mind: How can one keep track of time in a time-critical application if
we have no reference to time in the outside world?
Luckily, the 8051 includes timers which allow us to time events with high precision, which is the topic of the next chapter.

8051 Timers


The 8051 comes equipped with two timers, both of which may be controlled, set, read,
and configured individually. The 8051 timers have three general functions: 1) Keeping
time and/or calculating the amount of time between events, 2) Counting the events
themselves, or 3) Generating baud rates for the serial port.
The three timer uses are distinct so we will talk about each of them separately. The first
two uses will be discussed in this chapter while the use of timers for baud rate generation
will be discussed in the chapter relating to serial ports.
How does a timer count?
How does a timer count? The answer to this question is very simple: a timer always
counts up. It doesn't matter whether the timer is being used as a timer, a counter, or a
baud rate generator: a timer is always incremented by the microcontroller.
Programming Tip: Some derivative chips actually allow the program to
configure whether the timers count up or down. However, since this option only
exists on some derivatives it is beyond the scope of this tutorial, which is aimed at
the standard 8051. It is only mentioned here so that, in the event you absolutely need
a timer that counts backwards, you will know that you may be able to find an
8051-compatible microcontroller that does it.
USING TIMERS TO MEASURE TIME
Obviously, one of the primary uses of timers is to measure time. We will discuss this use
of timers first and will subsequently discuss the use of timers to count events. When a
timer is used to measure time it is also called an "interval timer" since it is measuring the
time of the interval between two events.
How long does a timer take to count?
First, it's worth mentioning that when a timer is in interval timer mode (as opposed to
event counter mode) and correctly configured, it will increment by 1 every machine
cycle. As you will recall from the previous chapter, a single machine cycle consists of 12
crystal pulses. Thus a running timer will be incremented:
11,059,000 / 12 = 921,583
921,583 times per second. Unlike instructions--some of which require 1 machine cycle,
others 2, and others 4--the timers are consistent: They will always be incremented once
per machine cycle.
Thus if a timer has counted from 0 to 50,000 you may calculate:

50,000 / 921,583 = .0542

.0542 seconds have passed. In plain English, that is about half of a tenth of a second, or
one-twentieth of a second.
Obviously it's not very useful to know that .0542 seconds have passed. If you want to
execute an event once per second you'd have to wait for the timer to count from 0 to
50,000 about 18.45 times. How can you wait "half of a time?" You can't. So we come to
another important calculation.
Let's say we want to know how many times the timer will be incremented in .05 seconds.
We can do simple multiplication:
.05 * 921,583 = 46,079.15
This tells us that it will take .05 seconds (1/20th of a second) to count from 0 to 46,079.
Actually, it will take .049999837 seconds--so we're off by .000000163 seconds--however,
that's close enough for government work. Consider that if you were building a
watch based on the 8051 and made the above assumption your watch would only gain
about one second every 2 months. Again, I think that's accurate enough for most
applications--I wish my watch only gained one second every two months!
Obviously, this is a little more useful. If you know it takes 1/20th of a second to count
from 0 to 46,079 and you want to execute some event every second you simply wait for
the timer to count from 0 to 46,079 twenty times; then you execute your event, reset the
timers, and wait for the timer to count up another 20 times. In this manner you will
effectively execute your event once per second, accurate to within thousandths of a
second.
Thus, we now have a system with which to measure time. All we need to review is how
to control the timers and initialize them to provide us with the information we need.
Timer SFRs
As mentioned before, the 8051 has two timers which each function essentially the same
way. One timer is TIMER0 and the other is TIMER1. The two timers share two SFRs
(TMOD and TCON) which control the timers, and each timer also has two SFRs
dedicated solely to itself (TH0/TL0 and TH1/TL1).
We've given SFRs names to make it easier to refer to them, but in reality an SFR has a
numeric address. It is often useful to know the numeric address that corresponds to an
SFR name. The SFRs relating to timers are:


SFR Name    Description          SFR Address
TH0         Timer 0 High Byte    8Ch
TL0         Timer 0 Low Byte     8Ah
TH1         Timer 1 High Byte    8Dh
TL1         Timer 1 Low Byte     8Bh
TCON        Timer Control        88h
TMOD        Timer Mode           89h

When you enter the name of an SFR into an assembler, it internally converts it to a
number. For example, the command:
MOV TH0,#25h
moves the value 25h into the TH0 SFR. However, since TH0 is the same as SFR address
8Ch this command is equivalent to:
MOV 8Ch,#25h
Now, back to the timers. First, let's talk about Timer 0.
Timer 0 has two SFRs dedicated exclusively to itself: TH0 and TL0. Without making
things too complicated to start off with, you may just think of these as the high and low
byte of the timer. That is to say, when Timer 0 has a value of 0, both TH0 and TL0 will
contain 0. When Timer 0 has the value 1000, TH0 will hold the high byte of the value (3
decimal) and TL0 will contain the low byte of the value (232 decimal). Reviewing
low/high byte notation, recall that you must multiply the high byte by 256 and add the
low byte to calculate the final value. That is to say:
TH0 * 256 + TL0 = 1000
3 * 256 + 232 = 1000
Timer 1 works the exact same way, but its SFRs are TH1 and TL1.
Since there are only two bytes devoted to the value of each timer it is apparent that the
maximum value a timer may have is 65,535. If a timer contains the value 65,535 and is
subsequently incremented, it will reset--or overflow--back to 0.
The TMOD SFR
Let's first talk about our first control SFR: TMOD (Timer Mode). The TMOD SFR is
used to control the mode of operation of both timers. Each bit of the SFR gives the
microcontroller specific information concerning how to run a timer. The high four bits
(bits 4 through 7) relate to Timer 1 whereas the low four bits (bits 0 through 3) perform
the exact same functions, but for timer 0.

The individual bits of TMOD have the following functions:


TMOD (89h) SFR

Bit  Name   Explanation of Function                                          Timer
7    GATE1  When this bit is set the timer will only run when INT1 (P3.3)    1
            is high. When this bit is clear the timer will run regardless
            of the state of INT1.
6    C/T1   When this bit is set the timer will count events on T1 (P3.5).   1
            When this bit is clear the timer will be incremented every
            machine cycle.
5    T1M1   Timer mode bit (see below)                                       1
4    T1M0   Timer mode bit (see below)                                       1
3    GATE0  When this bit is set the timer will only run when INT0 (P3.2)    0
            is high. When this bit is clear the timer will run regardless
            of the state of INT0.
2    C/T0   When this bit is set the timer will count events on T0 (P3.4).   0
            When this bit is clear the timer will be incremented every
            machine cycle.
1    T0M1   Timer mode bit (see below)                                       0
0    T0M0   Timer mode bit (see below)                                       0
As you can see in the above chart, four bits (two for each timer) are used to specify a
mode of operation. The modes of operation are:
TxM1  TxM0  Timer Mode  Description of Mode
0     0     0           13-bit Timer
0     1     1           16-bit Timer
1     0     2           8-bit auto-reload
1     1     3           Split timer mode

13-bit Time Mode (mode 0)


Timer mode "0" is a 13-bit timer. This is a relic that was kept around in the 8051 to
maintain compatibility with its predecessor, the 8048. Generally the 13-bit timer mode is
not used in new development.


When the timer is in 13-bit mode, TLx will count from 0 to 31. When TLx is incremented
from 31, it will "reset" to 0 and increment THx. Thus, effectively, only 13 bits of the two
timer bytes are being used: bits 0-4 of TLx and bits 0-7 of THx. This also means, in
essence, the timer can only contain 8192 values. If you set a 13-bit timer to 0, it will
overflow back to zero 8192 machine cycles later.
Again, there is very little reason to use this mode and it is only mentioned so you won't
be surprised if you ever end up analyzing archaic code which has been passed down
through the generations (a generation in a programming shop is often on the order of
about 3 or 4 months).
16-bit Time Mode (mode 1)
Timer mode "1" is a 16-bit timer. This is a very commonly used mode. It functions just
like 13-bit mode except that all 16 bits are used.
TLx is incremented from 0 to 255. When TLx is incremented from 255, it resets to 0 and
causes THx to be incremented by 1. Since this is a full 16-bit timer, the timer may
contain up to 65536 distinct values. If you set a 16-bit timer to 0, it will overflow back to
0 after 65,536 machine cycles.
8-bit Time Mode (mode 2)
Timer mode "2" is an 8-bit auto-reload mode. What is that, you may ask? Simple. When a
timer is in mode 2, THx holds the "reload value" and TLx is the timer itself. Thus, TLx
starts counting up. When TLx reaches 255 and is subsequently incremented, instead of
resetting to 0 (as in the case of modes 0 and 1), it will be reset to the value stored in THx.
For example, let's say TH0 holds the value FDh and TL0 holds the value FEh. If we were
to watch the values of TH0 and TL0 for a few machine cycles this is what we'd see:
Machine Cycle   TH0 Value   TL0 Value
1               FDh         FEh
2               FDh         FFh
3               FDh         FDh
4               FDh         FEh
5               FDh         FFh
6               FDh         FDh
7               FDh         FEh


As you can see, the value of TH0 never changed. In fact, when you use mode 2 you
almost always set THx to a known value and TLx is the SFR that is constantly
incremented.
What's the benefit of auto-reload mode? Perhaps you want the timer to always have a
value from 200 to 255. If you use mode 0 or 1, you'd have to check in code to see if the
timer had overflowed and, if so, reset the timer to 200. This takes precious instructions of
execution time to check the value and/or to reload it. When you use mode 2 the
microcontroller takes care of this for you. Once you've configured a timer in mode 2 you
don't have to worry about checking to see if the timer has overflowed, nor do you have to
worry about resetting the value--the microcontroller hardware will do it all for you.
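A minimal sketch of the 200-to-255 example from the paragraph above, assuming Timer 0
is the timer being used:

MOV TMOD,#02h   ;Timer 0 in mode 2 (8-bit auto-reload)
MOV TH0,#200    ;reload value: after every overflow TL0 restarts at 200
MOV TL0,#200    ;start the first count at 200 as well
SETB TR0        ;TL0 now counts 200..255, overflows, and reloads to 200 automatically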
The auto-reload mode is very commonly used for establishing a baud rate which we will
talk more about in the Serial Communications chapter.
Split Timer Mode (mode 3)
Timer mode "3" is a split-timer mode. When Timer 0 is placed in mode 3, it essentially
becomes two separate 8-bit timers. That is to say, Timer 0 is TL0 and Timer 1 is TH0.
Both timers count from 0 to 255 and overflow back to 0. All the bits that are related to
Timer 1 will now be tied to TH0.
While Timer 0 is in split mode, the real Timer 1 (i.e. TH1 and TL1) can be put into
modes 0, 1 or 2 normally--however, you may not start or stop the real timer 1 since the
bits that do that are now linked to TH0. The real timer 1, in this case, will be incremented
every machine cycle no matter what.
The only real use I can see of using split timer mode is if you need to have two separate
timers and, additionally, a baud rate generator. In such case you can use the real Timer 1
as a baud rate generator and use TH0/TL0 as two separate timers.
The TCON SFR
Finally, theres one more SFR that controls the two timers and provides valuable
information about them.


The TCON SFR has the following structure:

TCON (88h) SFR

Bit  Name  Bit Address  Explanation of Function                                   Timer
7    TF1   8Fh          Timer 1 Overflow. This bit is set by the microcontroller  1
                        when Timer 1 overflows.
6    TR1   8Eh          Timer 1 Run. When this bit is set Timer 1 is turned on.   1
                        When this bit is clear Timer 1 is off.
5    TF0   8Dh          Timer 0 Overflow. This bit is set by the microcontroller  0
                        when Timer 0 overflows.
4    TR0   8Ch          Timer 0 Run. When this bit is set Timer 0 is turned on.   0
                        When this bit is clear Timer 0 is off.

As you may notice, we've only defined 4 of the 8 bits. That's because the other 4 bits of
the SFR don't have anything to do with timers--they have to do with Interrupts and they
will be discussed in the chapter that addresses interrupts.

A new piece of information in this chart is the column "Bit Address." This is because this
SFR is "bit-addressable." What does this mean? It means that if you want to set the bit
TF1--which is the highest bit of TCON--you could execute the command:
MOV TCON,#80h
... or, since the SFR is bit-addressable, you could just execute the command:
SETB TF1
This has the benefit of setting the high bit of TCON without changing the value of any of
the other bits of the SFR. Usually when you start or stop a timer you don't want to
modify the other values in TCON, so you take advantage of the fact that the SFR is
bit-addressable.
Initializing a Timer
Now that we've discussed the timer-related SFRs we are ready to write code that will
initialize the timer and start it running.
As you'll recall, we first must decide what mode we want the timer to be in. In this case
we want a 16-bit timer that runs continuously; that is to say, it is not dependent on any
external pins.
We must first initialize the TMOD SFR. Since we are working with Timer 0 we will be
using the lowest 4 bits of TMOD. The first two bits, GATE0 and C/T0, are both 0 since
we want the timer to be independent of the external pins.

16-bit mode is timer mode 1, so we must clear T0M1 and set T0M0. Effectively, the only
bit we want to turn on is bit 0 of TMOD. Thus to initialize the timer we execute the
instruction:
MOV TMOD,#01h
Timer 0 is now in 16-bit timer mode. However, the timer is not running. To start the
timer running we must set the TR0 bit. We can do that by executing the instruction:
SETB TR0
Upon executing these two instructions timer 0 will immediately begin counting, being
incremented once every machine cycle (every 12 crystal pulses).
Reading the Timer
There are two common ways of reading the value of a 16-bit timer; which you use
depends on your specific application. You may either read the actual value of the timer as
a 16-bit number, or you may simply detect when the timer has overflowed.
Reading the value of a Timer
If your timer is in an 8-bit mode--that is, either 8-bit auto-reload mode or split timer
mode--then reading the value of the timer is simple. You simply read the 1-byte value of
the timer and you're done.
However, if you're dealing with a 13-bit or 16-bit timer the chore is a little more
complicated. Consider what would happen if you read the low byte of the timer as 255,
then read the high byte of the timer as 15. In this case, what actually happened was that
the timer value was 14/255 (high byte 14, low byte 255) but you read 15/255. Why?
Because you read the low byte as 255. But when you executed the next instruction a
small amount of time passed--enough for the timer to increment again, at which time
the value rolled over from 14/255 to 15/0. But in the process you've read the timer as
being 15/255. Obviously there's a problem there.
The solution is to read the high byte of the timer, then read the low byte, then read the
high byte again. If the high byte read the second time is not the same as the high byte
read the first time you repeat the cycle. In code, this would appear as:
REPEAT: MOV A,TH0
MOV R0,TL0
CJNE A,TH0,REPEAT
In this case, we load the accumulator with the high byte of Timer 0. We then load R0
with the low byte of Timer 0. Finally, we check to see if the high byte we read out of

Timer 0--which is now stored in the Accumulator--is the same as the current Timer 0
high byte. If it isn't, it means we've just "rolled over" and must reread the timer's value--which
we do by going back to REPEAT. When the loop exits we will have the low byte
of the timer in R0 and the high byte in the Accumulator.
Another much simpler alternative is to simply turn off the timer run bit (i.e. CLR TR0),
read the timer value, and then turn on the timer run bit (i.e. SETB TR0). In that case, the
timer isn't running so no special tricks are necessary. Of course, this implies that your
timer will be stopped for a few machine cycles. Whether or not this is tolerable depends
on your specific application.
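As a sketch, the stop-read-restart approach just described looks like this:

CLR  TR0        ;stop Timer 0 so the two bytes cannot change between reads
MOV  A,TH0      ;read the high byte
MOV  R0,TL0     ;read the low byte
SETB TR0        ;restart Timer 0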
Detecting Timer Overflow
Often it is necessary to just know that the timer has reset to 0. That is to say, you are not
particularly interested in the value of the timer but rather you are interested in knowing
when the timer has overflowed back to 0.
Whenever a timer overflows from its highest value back to 0, the microcontroller
automatically sets the TFx bit in the TCON register. This is useful since rather than
checking the exact value of the timer you can just check if the TFx bit is set. If TF0 is set
it means that timer 0 has overflowed; if TF1 is set it means that timer 1 has overflowed.
We can use this approach to cause the program to execute a fixed delay. As you'll recall,
we calculated earlier that it takes the 8051 1/20th of a second to count from 0 to 46,079.
However, the TFx flag is set when the timer overflows back to 0. Thus, if we want to use
the TFx flag to indicate when 1/20th of a second has passed we must set the timer
initially to 65,536 less 46,079, or 19,457. If we set the timer to 19,457, then 1/20th of a
second later the timer will overflow. Thus we come up with the following code to execute
a pause of 1/20th of a second:
MOV TH0,#76     ;High byte of 19,457 (76 * 256 = 19,456)
MOV TL0,#01     ;Low byte of 19,457 (19,456 + 1 = 19,457)
MOV TMOD,#01    ;Put Timer 0 in 16-bit mode
SETB TR0        ;Make Timer 0 start counting
JNB TF0,$       ;If TF0 is not set, jump back to this same instruction
In the above code the first two lines initialize the Timer 0 starting value to 19,457. The
next two instructions configure timer 0 and turn it on. Finally, the last instruction JNB
TF0,$, reads "Jump, if TF0 is not set, back to this same instruction."
The "$" operand means, in most assemblers, the address of the current instruction. Thus
as long as the timer has not overflowed and the TF0 bit has not been set the program will


keep executing this same instruction. After 1/20th of a second timer 0 will overflow, set
the TF0 bit, and program execution will then break out of the loop.
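Building on that 1/20th-second delay, a minimal sketch (assuming the same 11.059 MHz
crystal) that repeats it 20 times to pause for roughly one second:

        MOV  TMOD,#01h       ;Timer 0 in 16-bit mode
        MOV  R2,#20          ;repeat the 1/20th-second delay 20 times
AGAIN:  MOV  TH0,#76         ;reload 19,457 (65,536 - 46,079), high byte
        MOV  TL0,#01         ;low byte
        CLR  TF0             ;clear any previous overflow flag
        SETB TR0
        JNB  TF0,$           ;wait about 1/20th of a second for the overflow
        CLR  TR0
        DJNZ R2,AGAIN        ;not yet 20 overflows? reload and wait again
                             ;about one second has now passed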
Timing the length of events
The 8051 provides another cool toy that can be used to time the length of events.
For example, let's say we're trying to save electricity in the office and we're interested in
how long a light is turned on each day. When the light is turned on, we want to measure
time. When the light is turned off we don't. One option would be to connect the
lightswitch to one of the pins, constantly read the pin, and turn the timer on or off based
on the state of that pin. While this would work fine, the 8051 provides us with an easier
method of accomplishing this.
Looking again at the TMOD SFR, there is a bit called GATE0. So far we've always
cleared this bit because we wanted the timer to run regardless of the state of the external
pins. However, now it would be nice if an external pin could control whether the timer
was running or not. It can. All we need to do is connect the lightswitch to pin INT0
(P3.2) on the 8051 and set the bit GATE0. When GATE0 is set Timer 0 will only run if
P3.2 is high. When P3.2 is low (i.e., the lightswitch is off) the timer will automatically be
stopped.
Thus, with no control code whatsoever, the external pin P3.2 can control whether or not
our timer is running.
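A minimal configuration sketch of that gating arrangement (GATE0 is bit 3 of TMOD,
and 16-bit mode 1 is assumed):

MOV  TMOD,#09h  ;Timer 0: 16-bit mode with GATE0 set, so it runs only while INT0 (P3.2) is high
SETB TR0        ;arm the timer; actual counting is now gated by the pin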
USING TIMERS AS EVENT COUNTERS
We've discussed how a timer can be used for the obvious purpose of keeping track of
time. However, the 8051 also allows us to use the timers to count events.
How can this be useful? Let's say you had a sensor placed across a road that would send a
pulse every time a car passed over it. This could be used to determine the volume of
traffic on the road. We could attach this sensor to one of the 8051's I/O lines and
constantly monitor it, detecting when it pulsed high and then incrementing our counter
when it went back to a low state. This is not terribly difficult, but requires some code.

Let's say we hooked the sensor to P1.0; the code to count cars passing would look
something like this:

JNB P1.0,$      ;If a car hasn't raised the signal, keep waiting
JB P1.0,$       ;The line is high which means the car is on the sensor right now
INC COUNTER     ;The car has passed completely, so we count it
As you can see, it's only three lines of code. But what if you need to be doing other
processing at the same time? You can't be stuck in the JNB P1.0,$ loop waiting for a car
to pass if you need to be doing other things. Of course, there are ways to get around even
this limitation but the code quickly becomes big, complex, and ugly.
Luckily, since the 8051 provides us with a way to use the timers to count events we don't
have to bother with it. It is actually painfully easy. We only have to configure one
additional bit.
Let's say we want to use Timer 0 to count the number of cars that pass. If you look back
to the bit table for the TMOD SFR you will see there is a bit called "C/T0"--it's bit 2
(TMOD.2). Reviewing the explanation of the bit we see that if the bit is clear then timer 0
will be incremented every machine cycle. This is what we've already used to measure
time. However, if we set C/T0, timer 0 will monitor the P3.4 line. Instead of being
incremented every machine cycle, timer 0 will count events on the P3.4 line. So in our
case we simply connect our sensor to P3.4 and let the 8051 do the work. Then, when we
want to know how many cars have passed, we just read the value of timer 0--the value of
timer 0 will be the number of cars that have passed.
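A minimal sketch of that configuration, assuming the sensor is wired to T0 (P3.4) as
described:

MOV TMOD,#05h   ;Timer 0: counter mode (C/T0 set), 16-bit mode 1
MOV TH0,#00h    ;clear the count
MOV TL0,#00h
SETB TR0        ;from now on every 1-0 transition on P3.4 increments Timer 0
                ;(the rest of the program runs here)
MOV A,TL0       ;later: low byte of the number of cars counted
MOV R1,TH0      ;high byte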
So what exactly is an event? What does timer 0 actually "count?" Speaking at the
electrical level, the 8051 counts 1-0 transitions on the P3.4 line. This means that when a
car first runs over our sensor it will raise the input to a high ("1") condition. At that point
the 8051 will not count anything since this is a 0-1 transition. However, when the car has
passed the sensor will fall back to a low ("0") state. This is a 1-0 transition and at that
instant the counter will be incremented by 1.
It is important to note that the 8051 checks the P3.4 line each instruction cycle (12 clock
cycles). This means that if P3.4 is low, goes high, and goes back low in 6 clock cycles it
will probably not be detected by the 8051. This also means the 8051 event counter is only
capable of counting events that occur at a maximum of 1/24th the rate of the crystal
frequency. That is to say, if the crystal frequency is 12.000 MHz it can count a maximum
of 500,000 events per second (12.000 MHz * 1/24 = 500,000). If the event being counted
occurs more than 500,000 times per second it will not be able to be accurately counted by
the 8051.
DESCRIPTION OF AT89S52:


The AT89S52 is a low-power, high-performance CMOS 8-bit microcomputer
with 8K bytes of in-system programmable Flash memory. The device is manufactured
using Atmel's high-density nonvolatile memory technology and is compatible with the
industry-standard MCS-51 instruction set and pinout. The on-chip Flash allows the
program memory to be reprogrammed in-system or by a conventional nonvolatile memory
programmer. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the
Atmel AT89S52 is a powerful microcomputer which provides a highly flexible and
cost-effective solution to many embedded control applications.
The AT89S52 provides the following standard features: 8K bytes of Flash, 256
bytes of RAM, 32 I/O lines, three 16-bit timer/counters, a six-vector two-level interrupt
architecture, a full duplex serial port, and on-chip oscillator and clock circuitry. In
addition, the AT89S52 is designed with static logic for operation down to zero frequency
and supports two software-selectable power saving modes. The Idle Mode stops the CPU
while allowing the RAM, timer/counters, serial port and interrupt system to continue
functioning. The Power-down Mode saves the RAM contents but freezes the oscillator,
disabling all other chip functions until the next hardware reset.

OSCILLATOR CHARACTERISTICS:
XTAL1 and XTAL2 are the input and output, respectively, of an inverting
amplifier which can be configured for use as an on-chip oscillator. Either a quartz crystal
or a ceramic resonator may be used. To drive the device from an external clock source,
XTAL2 should be left unconnected while XTAL1 is driven. There are no requirements
on the duty cycle of the external clock signal, since the input to the internal clocking
circuitry is through a divide-by-two flip-flop, but minimum and maximum voltage high
and low time specifications must be observed.
IDLE MODE:
In idle mode, the CPU puts itself to sleep while all the on-chip peripherals remain
active. The mode is invoked by software. The contents of the on-chip RAM and all the
special function registers remain unchanged during this mode. The idle mode can be
terminated by any enabled interrupt or by a hardware reset. It should be noted that when
idle is terminated by a hardware reset, the device normally resumes program execution
from where it left off, up to two machine cycles before the internal reset algorithm takes
control. On-chip hardware inhibits access to internal RAM in this event, but access to the
port pins is not inhibited. To eliminate the possibility of an unexpected write to a port pin
when Idle is terminated by Reset, the instruction following the one that invokes Idle
should not be one that writes to a port pin or to external memory.
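A one-line sketch of invoking Idle mode from software (PCON is not bit-addressable, so
the IDL bit is set with a byte-wide OR):

ORL PCON,#01h   ;set IDL; the CPU stops here until an enabled interrupt or a hardware reset occurs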
PIN DIAGRAM OF AT89S52

PIN DESCRIPTION
VCC:
Supply voltage.
GND:
Ground.
Port 0:
Port 0 is an 8-bit open-drain bi-directional I/O port. As an output port, each pin
can sink eight TTL inputs. When 1s are written to port 0 pins, the pins can be used as
high impedance inputs. Port 0 may also be configured to be the multiplexed low order
address/data bus during accesses to external program and data memory. In this mode P0
has internal pull-ups. Port 0 also receives the code bytes during Flash programming, and
outputs the code bytes during program verification. External pull-ups are required during
program verification.
Port 1:
Port 1 is an 8-bit bi-directional I/O port with internal pull-ups. The Port 1 output
buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins they are

pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 1 pins that
are externally being pulled low will source current (IIL) because of the internal pull-ups.
Port 1 also receives the low-order address bytes during Flash programming and
verification.
Port 2:
Port 2 is an 8-bit bi-directional I/O port with internal pull-ups. The Port 2 output
buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins they are
pulled high by the internal pull-ups and can be used as inputs.
Port 2 pins that are externally being pulled low will source current (IIL) because of the
internal pull-ups.
RST:
Reset input. A high on this pin for two machine cycles while the oscillator is
running resets the device.
ALE/PROG:
Address Latch Enable output pulse for latching the low byte of the address during
accesses to external memory. This pin is also the program pulse input (PROG) during
Flash programming. In normal operation ALE is emitted at a constant rate of 1/6 the
oscillator frequency, and may be used for external timing or clocking purposes. Note,
however, that one ALE pulse is skipped during each access to external Data Memory.
PSEN:
Program Store Enable is the read strobe to external program memory. When the
AT89S52 is executing code from external program memory, PSEN is activated twice
each machine cycle, except that two PSEN activations are skipped during each access to
external data memory.
EA/VPP:
External Access Enable. EA must be strapped to GND in order to enable the
device to fetch code from external program memory locations starting at 0000H up to
FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on
reset. EA should be strapped to VCC for internal program executions. This pin also
receives the 12-volt programming enable voltage (VPP) during Flash programming, for
parts that require 12-volt VPP.
XTAL1:
Input to the inverting oscillator amplifier and input to the internal clock operating
circuit.

XTAL2:
Output from the inverting oscillator amplifier.
Port Pin   Alternate Function
P3.0       RXD (serial input port)
P3.1       TXD (serial output port)
P3.2       INT0 (external interrupt 0)
P3.3       INT1 (external interrupt 1)
P3.4       T0 (timer 0 external input)
P3.5       T1 (timer 1 external input)
P3.6       WR (external data memory write strobe)
P3.7       RD (external data memory read strobe)

5.2 POWER SUPPLIES


The present chapter introduces the operation of power supply circuits built using
filters, rectifiers and voltage regulators. Starting with an AC voltage, a steady DC
voltage is obtained by rectifying the AC voltage and then filtering it to a DC level.
Regulation is usually obtained from an IC voltage regulator unit, which takes a DC
voltage and provides a somewhat lower DC voltage which remains the same even if the
input DC voltage varies or the output load connected to the DC voltage changes.

BLOCK DIAGRAM:
Transformer -> Rectifier -> Filter -> Regulator

The AC voltage, typically 230V, is connected to a transformer, which steps the AC
voltage down to the level required for the desired DC output. A diode rectifier provides a
full-wave rectified voltage that is initially filtered by a simple capacitive filter to produce
a DC voltage.
This resulting DC voltage usually has some ripple or AC voltage variation. A
regulator circuit can use this DC input to provide a regulated output that not only has much
less ripple voltage but also remains at the same DC value even if the input DC voltage
changes. This voltage regulation is usually obtained using one of a number of popular
voltage regulator ICs.

TRANSFORMER:
A transformer is a static device by which electric power in one circuit is
transformed into electric power of the same frequency in another circuit. It can raise or
lower the voltage in a circuit, but with a corresponding decrease or increase in current. It
works on the principle of mutual induction. In our project we are using a step-down
transformer to provide the necessary supply for the electronic circuits.

RECTIFIER:
The full-wave rectifier conducts during both the positive and negative half cycles of
the AC input; two diodes are used in this circuit. The AC voltage is applied through a
suitable power transformer with the proper turns ratio. For the proper operation of the
circuit, a center tap on the secondary winding of the transformer is essential.
During the positive half cycle of the AC input voltage, the diode D1 will be forward
biased and hence will conduct, while diode D2 will be reverse biased and will act as an
open circuit and will not conduct.
In the next half cycle of the AC voltage, the polarity reverses and the diode D2 conducts,
being forward biased, while D1 does not, being reverse biased. Hence the load current
flows in both half cycles of the AC voltage and in the same direction. The diode we are
using here for the purpose of rectification is the 1N4001.
FILTER:
The filter circuit used here is the capacitor filter circuit, where a capacitor is
connected at the rectifier output and a DC voltage is obtained across it. The filtered
waveform is essentially a DC voltage with negligible ripple, which is ultimately fed to the load.
REGULATOR:
The output voltage from the capacitor filter is further smoothed and finally regulated.
The voltage regulator is a device which maintains the output voltage constant
irrespective of changes in the supply, the load and the temperature. Hence the IC 7805,
a +5V regulator, is used.
CIRCUIT DIAGRAM OF POWER SUPPLIES:


Since all electronic circuits work only with low DC voltages, a power supply
unit is needed to provide the appropriate voltage supply. This unit consists of a transformer,
rectifier, filter and regulator. An AC voltage, typically 230V, is connected to the transformer,
which steps the AC voltage down to the desired level. A diode rectifier
then provides a full-wave rectified voltage that is initially filtered by a simple capacitive
filter to produce a DC voltage. This resulting DC voltage usually has some ripple or AC
voltage variation.

5.3 Ramp generator


This circuit is a real core of the dimmer system. This circuit generates ramp 100 Hz
signal which is synchronized to the incoming mains voltage. The ramp signal which is
generated will start form 10V and go linearly down to 0V in 10 milliseconds. At the next
mains voltage zero crossing the ramp signal will again immediately start from 10V and
go down to 0V. This same ramp signal is fed to microcontroller
Trimmer R5 is used in controlling the ramp signal. If you have an oscilloscope, then it is
best to use it to look at the situation so that the signal send by the circuit is what is
described earlier. A good approximation is so start at position that R5 is set to it's center
position.
Besides generating the ramp signal, the ramp generator circuit also works as a power
supply for the comparator part of the dimmer. The 13.5V unstabilized output is used to
power the comparator section (that output can be loaded up to 100 mA). The 10V
stabilized voltage is used only internally in the ramp generator (that stabilized 10V can
also be used for some extra low power circuitry which needs 10V, for example local
dimmer controls if such things are needed).
The ramp generator uses a normal mains transformer which can output at least 200 mA
of current, because it powers both the ramp generator itself and the voltage comparator
circuits. I selected a transformer which has internal overload protection (an overheating
protection fuse), so I did not need to add any extra fuses for this transformer. If you use
another kind of transformer, select a suitable fuse to protect it. In any case a 200 mA fuse
on the secondary would be a good idea, and probably also a primary fuse.
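As a minimal sketch of how firmware could use this ramp, the routine below busy-waits until
the sampled ramp falls below a threshold derived from the 8-bit dimmer setting and then pulses
the triac gate. The functions read_ramp_adc() and pulse_triac_gate() are hypothetical hardware
hooks, not routines from this project's actual source; the mapping of the setting to a threshold
is likewise only an assumption for illustration.

    #include <stdint.h>

    /* Hypothetical hardware hooks -- stand-ins, not functions from this
     * project's actual source. */
    extern uint8_t read_ramp_adc(void);      /* 0..255 ~ 0V..10V ramp      */
    extern void    pulse_triac_gate(void);   /* issue a short gate pulse   */

    /* Fire the triac once per half cycle. dim_setting is the 8-bit value
     * described earlier (0 = off, 255 = fully on). Because the ramp falls
     * from 10V to 0V over the 10 ms half cycle, a high setting maps to a
     * high threshold and therefore an early firing point. */
    void service_half_cycle(uint8_t dim_setting)
    {
        if (dim_setting == 0)
            return;                          /* load off: never trigger    */

        while (read_ramp_adc() > dim_setting)
            ;                                /* wait for the ramp to fall  */

        pulse_triac_gate();                  /* trigger at this phase angle */
    }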

The picture below shows the signals that the microcontroller outputs to the triac at
different dimmer settings:

5.4 Zero crossing detector:


Zero crossing detectors as a group are not a well-understood application, although they
are essential elements in a wide range of products. It has probably escaped the notice of
readers who have looked at the lighting controller and the Linkwitz Cosine Burst
Generator, but both of these rely on a zero crossing detector for their operation.
A zero crossing detector literally detects the transition of a signal waveform from positive
to negative, ideally providing a narrow pulse that coincides exactly with the zero
voltage condition. At first glance, this would appear to be an easy enough task, but in fact
it is quite complex, especially where high frequencies are involved. In this instance, even
1kHz starts to present a real challenge if extreme accuracy is needed.
The not so humble comparator plays a vital role - without it, most precision zero crossing
detectors would not work, and we'd be without digital audio, PWM and a multitude of
other applications taken for granted.

Basic Low Frequency Circuit


Figure 1 shows the zero crossing detector as used for the dimmer ramp generator in
Project 62. This circuit has been around (almost) forever, and it does work reasonably
well. Although it has almost zero phase inaccuracy, that is largely because the pulse is so
broad that any inaccuracy is completely swamped. The comparator function is handled by
transistor Q1 - very basic, but adequate for the job.


The circuit is also sensitive to level, and for acceptable performance the AC waveform
needs to be of reasonably high amplitude. 12-15V AC is typical. If the voltage is too low,
the pulse width will increase. The arrangement shown actually gives better performance
than the version shown in Project 62 and elsewhere on the Net. In case you were
wondering, R1 is there to ensure that the voltage falls to zero - stray capacitance is
sufficient to stop the circuit from working without it.

Figure 1 - Basic 50/60Hz Zero Crossing Detector


The pulse width of this circuit (at 50Hz) is typically around 600us (0.6ms) which sounds
fast enough. The problem is that at 50Hz each half cycle takes only 10ms (8.33ms at
60Hz), so the pulse width is over 5% of the total period. This is why most dimmers can
only claim a range of 10%-90% - the zero crossing pulse lasts too long to allow more
range.
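A quick check of those figures (a worked sketch only; the 600 us pulse width and the 10 ms half
cycle are the values quoted above):

    #include <stdio.h>

    int main(void)
    {
        const double pulse_width_ms = 0.6;   /* ~600 us zero-cross pulse      */
        const double half_cycle_ms  = 10.0;  /* 50 Hz -> 10 ms per half cycle */

        /* Fraction of each half cycle masked out by the zero-cross pulse;
         * this is why practical dimmers only claim roughly a 10%-90% range. */
        double lost = pulse_width_ms / half_cycle_ms;
        printf("zero-cross pulse occupies %.0f%% of each half cycle\n",
               lost * 100.0);
        return 0;
    }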
While this is not a problem with the average dimmer, it is not acceptable for precision
applications. For a tone burst generator (either the cosine burst or a 'conventional' tone
burst generator), any inaccuracy will cause the switched waveform to contain glitches.
The seriousness of this depends on the application.

Precision zero crossing detectors come in a fairly wide range of topologies, some
interesting, others not. One of the most common is shown in Project 58, and is commonly
used for this application. The exclusive OR (or XOR) gate makes an excellent edge
detector, as shown in Figure 2.


Figure 2 - Exclusive OR Gate Edge Detector


There is no doubt that the circuit shown above is more than capable of excellent results
up to quite respectable frequencies. The upper frequency is limited only by the speed of
the device used, and with a 74HC86 it has a propagation delay of only 11ns [1], so
operation at 100kHz or above is achievable.
The XOR gate is a special case in logic. It will output a 1 only when the inputs are
different (i.e. one input must be at logic high (1) and the other at logic low (0)). The
resistor and cap form a delay, so that when an edge is presented (either rising or falling),
the delayed input holds its previous value for a short time. In the example shown, the
pulse width is 50ns. The signal is delayed by the propagation time of the device itself
(around 11ns), so a small phase error has been introduced. The rise and fall time of the
squarewave signal applied was 50ns, and this adds some more phase shift.
There is a pattern emerging in this article - the biggest limitation is speed, even for
relatively slow signals. While digital logic can operate at very high speeds, we have now
reached the point where the signals can no longer be referred to as '1' and '0' - digital
signals are back into the analogue domain, specifically RF technology.
The next challenge we face is converting the input waveform (we will assume a
sinewave) into sharply defined edges so the XOR can work its magic. Another terribly
under-rated building block is the comparator.
While opamps can be used for low speed operation (and depending on the application),
extreme speed is needed for accurate digitisation of an analogue signal. It may not appear
so at first glance, but a zero crossing detector is a special purpose analogue to digital
converter (ADC).


Comparators
The comparator used for a high speed zero crossing detector, a PWM converter or
conventional ADC is critical. Low propagation delay and extremely fast operation are not
only desirable, they are essential.
Comparators may be the most underrated and under utilised monolithic linear component.
This is unfortunate because comparators are one of the most flexible and universally
applicable components available. In large measure the lack of recognition is due to the IC
opamp, whose versatility allows it to dominate the analog design world. Comparators are
frequently perceived as devices that crudely express analog signals in digital form - a 1-bit A/D converter. Strictly speaking, this viewpoint is correct. It is also wastefully
constrictive in its outlook. Comparators don't "just compare" in the same way that
opamps don't "just amplify". [2]
The above quote was so perfect that I just had to include it. Comparators are indeed
underrated as a building block, and they have two chief requirements ... low input offset
and speed. For the application at hand (a zero crossing detector), both of these factors
will determine the final accuracy of the circuit. The XOR has been demonstrated to give a
precise and repeatable pulse, but its accuracy depends upon the exact time it 'sees' the
transition of the AC waveform across zero. This task belongs to the comparator.

Figure 3 - Comparator Zero Crossing Detector


In Figure 3 we see a typical comparator used for this application. The output is a square
wave, which is then sent to a circuit such as that in Figure 2. This will create a single
pulse for each squarewave transition, and this equates to the zero crossings of the input
signal. It is assumed for this application that the input waveform is referenced to zero
volts, so swings equally above and below zero.

Figure 4 - Comparator Timing Error


Figure 4 shows how the comparator can mess with our signal, causing the transition to be
displaced in time, thereby causing an error. The significance of the error depends entirely
on our expectations - there is no point trying to get an error of less than 10ns for a
dimmer, for example.
The LM339 comparator that was used for the simulation is a very basic type indeed, and
with a quoted response time of 300ns it is much too slow to be usable in this application.
This is made a great deal worse by the propagation delay, which (as simulated) is 1.5us.
In general, the lower the power dissipation of a comparator, the slower it will be,
although modern IC techniques have overcome this to some extent.
You can see that the zero crossing of the sinewave (shown in green) occurs well before
the output (red) transition - the cursor positions are set for the exact zero crossing of each
signal. The output transition starts as the input passes through zero, but because of device
delays, the output transition is almost 5us later. Most of this delay is caused by the rather
leisurely pace at which the output changes - in this case, about 5us for the total 7V peak
to peak swing. That gives us a slew rate of 1.4V/us which is useless for anything above
100Hz or so.
One of the critical factors with the comparator is its supply voltage. Ideally, this should
be as low as possible, typically no more than 5V.
The higher the supply voltage, the further the output voltage has to swing to get from
maximum negative to maximum positive and vice versa. While a slew rate of 100V/us
may seem high, that is much too slow for an accurate ADC, pulse width modulator or
zero crossing detector.


At 100V/us and a total supply voltage of 10V (±5V), it will take 0.1us (100ns) for the
output to swing from one extreme to the other. To get that into the realm of what we
need, the slew rate would need to be 1kV/us, giving a 10ns transition time. Working from
Figure 3, you can see that even then there is an additional timing error of 5ns - not large,
and in reality probably as good as we can expect.
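The arithmetic behind those two figures, as a small worked sketch (the 10 V swing and the two
slew rates are the values used in the text):

    #include <stdio.h>

    /* Time for a comparator output to traverse the full supply swing at a
     * given slew rate: 10 V at 100 V/us gives 100 ns, while roughly
     * 1 kV/us is needed to bring the transition down to 10 ns. */
    static double transition_ns(double swing_volts, double slew_v_per_us)
    {
        return swing_volts / slew_v_per_us * 1000.0;   /* us -> ns */
    }

    int main(void)
    {
        printf("10 V at  100 V/us : %.0f ns\n", transition_ns(10.0, 100.0));
        printf("10 V at 1000 V/us : %.0f ns\n", transition_ns(10.0, 1000.0));
        return 0;
    }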
The problem is that the output doesn't even start to change until the input voltage passes
through the reference point (usually ground). If there is any delay caused by slew rate
limiting, by the time the output voltage passes through zero volts, it is already many
nanoseconds late. Extremely high slew rates are possible, and Reference 2 has details of a
comparator that is faster than a TTL inverter! Very careful board layout and attention to
bypassing is essential at such speeds, or the performance will be worse than woeful.
Using A Differential Line Receiver
This version is contributed by John Rowland [3] and is a very clever use of an existing IC
for a completely new purpose. The DS3486 is a quad RS-422/ RS-423 differential line
receiver. Although it only operates from a single 5V supply, the IC can accept an input
signal of up to 25V without damage. It is also fairly fast, with a typical quoted
propagation time of 19ns and internal hysteresis of 140mV.

Figure 5 - Basic Zero Crossing Detector Using DS3486

The general scheme is shown in Figure 5. Two of the comparators in the IC are used - one
detects when the input voltage is positive and the other when it is negative (with respect
to earth/ground). The NOR gate can only produce an output during the brief period when
both comparator outputs are low (i.e. close to earth potential). However, tests show that
the two differential receiver channels do not switch at exactly 0.00V. With a typical

DS3486 device, the positive detector switches at about 0.015V and the negative detector
switches at approximately -0.010V. This results in an asymmetrical dead band of 25mV
around 0V. Adding resistors as shown in Figure 6 allows the dead band to be made
smaller, and (perhaps more importantly for some applications), it can be made to be
symmetrical.

Figure 6 - Modified Zero Crossing Detector To Obtain True 0V Detection


Although fixed resistors are shown, it will generally be necessary to use pots. This allows
for the variations between individual comparators - even within the same package. This is
necessary because the DS3486 is only specified to switch with voltages no greater than
±200mV. The typical voltage is specified to be ±70mV (exactly half the hysteresis
voltage), but this is not a guaranteed parameter.
Indeed, John Rowland (the original designer of the circuit) told me that only the National
Semiconductor devices actually worked in the circuit - supposedly identical ICs from
other manufacturers refused to function. I quote ...
We did some testing with "equivalent" parts made by other manufacturers, and found
very different behavior in the near-zero region. Some parts have lots of hysteresis, some
have none, detection thresholds vary from device to device, and in fact even in a quad
part like the DS3486 they are different from channel to channel within the same package.
Eventually we settled on the National DS3486 with some added resistors on its input pins
as shown in Figure 6.
The most recent version of the circuit uses trimpots, 100 ohm on the positive detector and
200 ohm on the negative detector. These values allow us to trim almost every DS3486 to
balance the noise threshold in the +/-5mV to +/-15mV range. Occasionally we do get a
DS3486 which will not detect in this range. Sometimes, we find that both the positive and


negative detectors are tripping on the same side (polarity) of zero, if so we pull that chip
and replace it.
The additional resistors allow the detection thresholds to be adjusted to balance the
detection region around 0V. The resistor from pin 1 to earth makes the positive detector
threshold more positive. The resistor from the input to pin 7 forces the negative detector
threshold to become more negative. Typical values are shown for 25mV detection using
National's DS3486 parts. In reality, trimpots are essential to provide in-circuit
adjustment.
5.5 Digital-to-analog converter
Basics Of DAC:
In electronics, a digital-to-analog converter (DAC or D-to-A) is a device that converts a
digital (usually binary) code to an analog signal (current, voltage, or electric charge). An
analog-to-digital converter (ADC) performs the reverse operation. Signals are easily
stored and transmitted in digital form, but a DAC is needed for the signal to be
recognized by human senses or other non-digital systems.
A common use of digital-to-analog converters is generation of audio signals from digital
information in music players. Digital video signals are converted to analog in televisions
and cell phones to display colors and shades. Digital-to-analog conversion can degrade a
signal, so conversion details are normally chosen so that the errors are negligible.
Due to cost and the need for matched components, DACs are almost exclusively
manufactured on integrated circuits (ICs). There are many DAC architectures which have
different advantages and disadvantages. The suitability of a particular DAC for an
application is determined by a variety of measurements including speed and resolution.
Overview

Ideally sampled signal.


A DAC converts an abstract finite-precision number (usually a fixed-point binary
number) into a physical quantity (e.g., a voltage or a pressure). In particular, DACs are

often used to convert finite-precision time series data to a continually varying physical
signal.
A typical DAC converts the abstract numbers into a concrete sequence of impulses that
are then processed by a reconstruction filter using some form of interpolation to fill in
data between the impulses. Other DAC methods (e.g., methods based on Delta-sigma
modulation) produce a pulse-density modulated signal that can then be filtered in a
similar way to produce a smoothly varying signal.
As per the Nyquist-Shannon sampling theorem, a DAC can reconstruct the original
signal from the sampled data provided that its bandwidth meets certain requirements
(e.g., a baseband signal with bandwidth less than the Nyquist frequency). Digital
sampling introduces quantization error that manifests as low-level noise added to the
reconstructed signal.
Practical operation

Piecewise constant output of a conventional practical DAC.


Instead of impulses, usually the sequence of numbers updates the analogue voltage at
uniform sampling intervals.
These numbers are written to the DAC, typically with a clock signal that causes each
number to be latched in sequence, at which time the DAC output voltage changes rapidly
from the previous value to the value represented by the currently latched number. The
effect of this is that the output voltage is held in time at the current value until the next
input number is latched resulting in a piecewise constant or 'staircase' shaped output. This
is equivalent to a zero-order hold operation and has an effect on the frequency response
of the reconstructed signal.
The fact that DACs output a sequence of piecewise constant values (known as zero-order
hold in sample data textbooks) or rectangular pulses causes multiple harmonics above the
Nyquist frequency. Usually, these are removed with a low pass filter acting as a
reconstruction filter in applications that require it.
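A minimal illustration of that zero-order-hold behaviour: each latched value is simply held at the
output until the next one arrives, giving the 'staircase' waveform. The sample values and the
update interval below are arbitrary, chosen only for the printout.

    #include <stdio.h>
    #include <stddef.h>

    /* Print a zero-order-hold reconstruction: each sample is held for one
     * full update interval until the next value is latched. */
    int main(void)
    {
        const double samples[] = { 0.0, 0.5, 0.9, 1.0, 0.7, 0.2 };
        const double t_update  = 1.0;              /* arbitrary interval */
        const size_t n = sizeof samples / sizeof samples[0];

        for (size_t i = 0; i < n; ++i)
            printf("t = [%4.1f, %4.1f): output held at %.2f\n",
                   i * t_update, (i + 1) * t_update, samples[i]);
        return 0;
    }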


6-bit DACs with I2C-bus:


FEATURES
Eight DACs with 6-bit resolution
Adjustable common output swing
Push-pull outputs
Outputs short-circuit protected
Three programmable slave address bits
Large supply voltage range
Low temperature coefficient
GENERAL DESCRIPTION:
The interface circuit is a bipolar IC in a DIP16, SO16, or SO20 package made in an
I2L-compatible 18 V process. The TDA8444 contains eight programmable 6-bit DAC
outputs, an I2C-bus slave receiver with three (two for SO16) programmable address bits
and one input (VMAX) to set the maximum output voltage. Each DAC can be
programmed separately by a 6-bit word to 64 values, but VMAX determines the
maximum output voltage for all DACs. The resolution will be approximately
1/64 of VMAX. At power-on all DACs are set to their lowest value.

The circuit will not react to other combinations of the 4 instruction bits I3 to I0 than 0 or
F, but will still generate an acknowledge. The difference between instruction 0 and F is
only important when more than one data byte is sent within one transmission. Instruction
0 causes the data bytes to be written into the DAC-latches with consecutive sub addresses
starting with the sub address given in the instruction byte (auto-increment of sub

address), while instruction F will cause a consecutive writing of the data bytes into the
same DAC-latch whose sub address was given in the instruction byte. In case of only one
data byte the DAC-latch with the sub address equal to the sub address in the instruction
byte will receive
the data. Valid sub addresses are: 0H to 7H. The sub addresses correspond to DAC0 to
DAC7.
The Auto-Increment (AI) function of instruction 0, however, works on all possible sub
addresses 0 to F in such a way that next to sub address F, sub address 0 will follow, and
so on. The data will be latched into the DAC-latch on the positive-going edge of the
acknowledge related clock pulse.
The specification of the SCL and SDA I/O meets the I2C-bus specification. For
protection against positive voltage pulses on pins 3 and 4, zener diodes are connected
between these pins and VEE. This means that normal bus line voltage should not exceed
5.5 V. The address inputs A0, A1 and A2 can be easily programmed by either a
connection to VEE (An = 0) or VCC (An = 1). If the inputs are left floating the result will
be An = 1.
VMAX
The VMAX input gives a means of compressing the DAC output voltage swing. The
maximum DAC output voltage will be equal to VMAX + VDAC(min), while the 6-bit
resolution is maintained. This enables a higher voltage resolution for smaller output
swings.
DACs
The DACs consist of a 6-bit data-latch, current switches and an op amp. The current
sources connected to the switches have values with binary weights 2^0 to 2^5. The sum of the
switched on currents is converted by the opamp into a voltage between approximately 0.5
and 10.5 V if VMAX = VCC = 12 V. The DAC outputs are short-circuit protected
against VCC and VEE. Capacitive load on the DAC outputs should not exceed 2 nF in
order to prevent possible oscillations at certain levels. The temperature coefficient for
each of the outputs remains in all possible conditions well below 0.1 LSB per Kelvin.
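As a rough illustration of the numbers above, the sketch below spreads the 64 codes linearly
across an assumed output span. This is not the datasheet transfer function; the 0.5 V to 10.5 V
figures are simply the example values quoted earlier for VMAX = VCC = 12 V.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative model only: 64 codes spread linearly between an assumed
     * minimum and maximum DAC output voltage. */
    static double dac_output_volts(uint8_t code, double v_min, double v_max)
    {
        if (code > 63)
            code = 63;                       /* only 6 bits are used */
        return v_min + (v_max - v_min) * code / 63.0;
    }

    int main(void)
    {
        printf("code  0 -> %.2f V\n", dac_output_volts(0,  0.5, 10.5));
        printf("code 32 -> %.2f V\n", dac_output_volts(32, 0.5, 10.5));
        printf("code 63 -> %.2f V\n", dac_output_volts(63, 0.5, 10.5));
        return 0;
    }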
5.6 Optocoupler:
Basics of optocoupler:
There are many situations where signals and data need to be transferred from one
subsystem to another within a piece of electronics equipment, or from one piece of

equipment to another, without making a direct ohmic electrical connection. Often this is
because the source and destination are (or may be at times) at very different voltage
levels, like a microprocessor which is operating from 5V DC but being used to control a
triac which is switching 240V AC. In such situations the link between the two must be an
isolated one, to protect the microprocessor from overvoltage damage.
Relays can of course provide this kind of isolation, but even small relays tend to be fairly
bulky compared with ICs and many of today's other miniature circuit components.
Because they're electro-mechanical, relays are also not as reliable, and are only capable of
relatively low speed operation. Where small size, higher speed and greater reliability are
important, a much better alternative is to use an optocoupler. These use a beam of light to
transmit the signals or data across an electrical barrier, and achieve excellent isolation.
Optocouplers typically come in a small 6-pin or 8-pin IC package, but are essentially a
combination of two distinct devices: an optical transmitter, typically a gallium arsenide
LED (light-emitting diode), and an optical receiver such as a phototransistor or light-triggered diac. The two are separated by a transparent barrier which blocks any electrical
current flow between the two, but does allow the passage of light. The basic idea is
shown in Fig.1, along with the usual circuit symbol for an optocoupler.
Usually the electrical connections to the LED section are brought out to the pins on one
side of the package and those for the phototransistor or diac to the other side, to
physically separate them as much as possible. This usually allows optocouplers to
withstand voltages of anywhere between 500V and 7500V between input and output.
Optocouplers are essentially digital or switching devices, so they're best for transferring
either on-off control signals or digital data. Analog signals can be transferred by means of
frequency or pulse-width modulation.
Key Parameters:
The most important parameter for most optocouplers is their transfer efficiency, usually
measured in terms of their current transfer ratio or CTR. This is simply the ratio between
a current change in the output transistor and the current change in the input LED which
produced it. Typical values for CTR range from 10% to 50% for devices with an output
phototransistor and up to 2000% or so for those with a Darlington transistor pair in the
output.
Note, however, that in most devices CTR tends to vary with the absolute current level.
Typically it peaks at a LED current level of about 10mA, and falls away at both higher
and lower current levels. Other optocoupler parameters include the output transistor's
maximum collector-emitter voltage rating VCE(max), which limits the supply voltage in
the output circuit; the input LED's maximum current rating IF(max), which is used to
calculate the minimum value for its series resistor; and the optocoupler's bandwidth,

which determines the highest signal frequency that can be transferred through it -
determined mainly by internal device construction and the performance of the output
phototransistor. Typical opto-couplers with a single output phototransistor may have a
bandwidth of 200 - 300kHz, while those with a Darlington pair are usually about 10 times
lower, at around 20 - 30kHz.
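A small worked sketch of the two calculations these parameters feed into: the LED series
resistor from the drive voltage and the chosen LED current, and the expected output-side
current from the CTR. The 5 V drive, 1.2 V LED forward drop, 10 mA LED current and 30% CTR
are assumed example values, not figures taken from a specific datasheet.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed example values -- adjust for the actual optocoupler. */
        const double v_drive = 5.0;    /* logic-side supply, V            */
        const double v_led   = 1.2;    /* LED forward drop, V (assumed)   */
        const double i_led   = 0.010;  /* chosen LED current, 10 mA       */
        const double ctr     = 0.30;   /* 30% CTR, inside the 10-50% band */

        double r_series = (v_drive - v_led) / i_led;    /* ohms           */
        double i_out    = ctr * i_led;                  /* output current */

        printf("series resistor   : %.0f ohm\n", r_series);     /* 380    */
        printf("output transistor : %.1f mA\n", i_out * 1000.0); /* 3.0 mA */
        return 0;
    }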
How they are used:
Basically the simplest way to visualize an optocoupler is in terms of its two main
components: the input LED and the output transistor or diac. As the two are electrically
isolated, this gives a fair amount of flexibility when it comes to connecting them into
circuit. All we really have to do is work out a convenient way of turning the input LED
on and off, and using the resulting switching of the photo-transistor/diac to generate an
output waveform or logic signal that is compatible with our output circuitry.
For example, just like a discrete LED, you can drive an optocoupler's input LED from a
transistor or logic gate/buffer. All that's needed is a series resistor to set the current level
when the LED is turned on. And regardless of whether you use a transistor or logic buffer
to drive the LED, you still have the option of driving it in 'pull-down' or 'pull-up' mode.
This means you can arrange for the LED, and hence the optocoupler, to be either on or
off for a logic high (or low) in the driving circuitry.
In some circuits, there may be a chance that at times the driving voltage fed to the input
LED could have reversed polarity (due to a swapped cable connection, for example). This
can cause damage to the device, because optocoupler LEDs tend to have quite a low
reverse voltage rating: typically only 3 - 5V.
On the output side, there are again a number of possible connections even with a typical
optocoupler of the type having a single phototransistor receiver (such as the 4N25 or
4N28). In most cases the transistor is simply connected as a light-operated switch, in
series with a load resistor RL. The base of the transistor is left unconnected, and the
choice is between having the transistor at the top of the load resistor or at the bottom i.e.,
in either pull-up or pull-down mode. This again gives plenty of flexibility for driving
either logic gates or transistors.

If a higher bandwidth is needed, this can be achieved by using only the collector and base
connections, and using the transistor as a photodiode. This lowers the optocoupler's CTR
and transfer gain considerably, but can increase the bandwidth to 30MHz or so. An
alternative approach is still to use the output device as a photo-transistor, but tie the base
down to ground (or the emitter) via a resistor Rb, to assist in removal of stored charge.
This can extend the opto's bandwidth usefully (although not dramatically), without
lowering the CTR and transfer gain any more than is necessary. Typically you'd start
with a resistor value of 1M ohm, and reduce it gradually down to about 47k ohm to see if
the desired bandwidth can be reached.

MOC3021(400 Volts Peak)


The MOC3020 Series consists of gallium arsenide infrared emitting diodes,
optically coupled to a silicon bilateral switch.
They are designed for applications requiring isolated triac triggering.
Recommended for 115/240 Vac(rms) Applications:
Solenoid/Valve Controls
Static AC Power Switches
Lamp Ballasts
Solid State Relays
Interfacing Microprocessors to 115 Vac Peripherals
Incandescent Lamp Dimmers
Motor Controls


5.7 Basics of TRIAC:


TRIAC, from Triode for Alternating Current, is a generalized trade name for an
electronic component that can conduct current in either direction when it is triggered
(turned on), and is formally called a bidirectional triode thyristor or bilateral triode
thyristor.
TRIACs belong to the thyristor family and are closely related to Silicon-controlled
rectifiers (SCR). However, unlike SCRs, which are unidirectional devices (i.e. can
conduct current only in one direction), TRIACs are bidirectional and so current can flow
through them in either direction. Another difference from SCRs is that TRIACs can be
triggered by either a positive or a negative current applied to its gate electrode, whereas
SCRs can be triggered only by currents going into the gate. In order to create a triggering
current, a positive or negative voltage has to be applied to the gate with respect to the A1
terminal (otherwise known as MT1).
Once triggered, the device continues to conduct until the current drops below a certain
threshold, called holding current.
The bidirectionality makes TRIACs very convenient switches for AC circuits, also
allowing them to control very large power flows with milliampere-scale gate currents. In
addition, applying a trigger pulse at a controlled phase angle in an AC cycle allows one to
control the percentage of current that flows through the TRIAC to the load (phase
control), which is commonly used, for example, in controlling the speed of low-power
induction motors, in dimming lamps and in controlling AC heating resistors.


Gate threshold current, latching current and holding current


A TRIAC starts conducting when a current flowing into or out of its gate is sufficient to
turn on the relevant junctions in the quadrant of operation. The minimum current able to
do this is called gate threshold current and is generally indicated by IGT. In a typical
TRIAC, the gate threshold current is generally a few milliamperes, but one also has to take
into account that:

IGT depends on the temperature: indeed, the higher the temperature, the higher the
reverse currents in the blocked junctions. This implies the presence of more free
carriers in the gate region, which lowers the gate current needed.
IGT depends on the quadrant of operation, since a different quadrant implies a
different way of triggering, as explained in the section "Physics of the device". As a
rule, the first quadrant is the most sensitive (i.e. requires the least current to turn on),
whereas the fourth quadrant is the least sensitive.
When turning on from an off-state, IGT depends on the voltage applied on the two
main terminals MT1 and MT2. A higher voltage between MT1 and MT2 causes greater
reverse currents in the blocked junctions, requiring less gate current, similar to high-
temperature operation. Generally, in datasheets, IGT is given for a specified voltage
between MT1 and MT2.

When the gate current is discontinued, if the current flowing between the two main
terminals is more than the so-called latching current the device keeps conducting,
otherwise the device might turn off. Latching current is the minimum that can make up
for the missing gate current in order to keep the device internal structure latched. The
value of this parameter varies with:

gate current pulse (amplitude, shape and width)


temperature
control circuit (resistors or capacitors between the gate and MT1 increase the
latching current because they steal some current from the gate before it can help the
complete turn-on of the device)
quadrant of operation

In particular, if the pulse width of the gate current is sufficiently large (generally some
tens of microseconds), the TRIAC has completed the triggering process when the gate
signal is discontinued and the latching current reaches a minimum level called holding

current. The holding current is the minimum current flowing between the two main
terminals that is required to keep the device on after it has achieved commutation in every
part of its internal structure.
In datasheets, the latching current is indicated as IL, while the holding current is indicated
as IH. They are typically in the order of a few milliamperes.
Static dv/dt
A high dv/dt between A2/MT2 and A1/MT1 may turn on the TRIAC when it is off.
Typical values of critical static dv/dt are in the tens of volts per microsecond.
The turn-on is due to a parasitic capacitive coupling of the gate terminal with the
A2/MT2 terminal, which lets currents flow into the gate in response to a large rate of
voltage change at A2/MT2. One way to cope with this limitation is to design a suitable
RC or RCL snubber network. In many cases this is sufficient to lower the impedance of
the gate towards A1/MT1. By putting a resistor or a small capacitor (or both in parallel)
between these two terminals, the capacitive current generated during the transient, flows
out of the device without activating it.
A careful reading of the application notes provided by the manufacturer and testing of the
particular device model to design the correct network is in order. Typical values for
capacitors and resistors between the gate and A1/MT1 may be up to 100nF and up to
1kΩ.

In datasheets, the static dv/dt is usually indicated as (dv/dt)s and, as mentioned before, is
in relation to the tendency of a TRIAC to turn on from the off state after a large voltage
rate of rise even without applying any current in the gate.
Critical di/dt
A high rate of rise of the current flowing between A1/MT1 and A2/MT2 (in either
direction) when the device is turning on can damage or destroy the TRIAC even if the
pulse duration is very short. The reason is that during the commutation, the power
dissipation is not uniformly distributed across the device. When switching on, the device
starts to conduct current before the conduction finishes to spread across the entire
junction. The device typically starts to conduct the current imposed by the external
circuitry after some nanoseconds or microseconds but the complete switch on of the
whole junction takes a much longer time, so too swift a current rise may cause local hot
spots that can permanently damage the TRIAC.

In datasheets, this parameter is usually indicated as di/dt and is typically in the order of
tens of amperes per microsecond.

Commutating dv/dt and di/dt


The commutating dv/dt rating applies when a TRIAC has been conducting and attempts
to turn off with a partially reactive load, such as an inductor. The current and voltage are
out of phase, so when the current decreases below the holding value, the triac attempts to
turn off, but because of the phase shift between current and voltage, a sudden voltage step
takes place between the two main terminals, which turns the device on again.

In datasheets, this parameter is usually indicated as (dv/dt)c and is generally in the order
of up to a few volts per microsecond.

The reason why commutating dv/dt is less than static dv/dt is that, shortly before the
device tries to turn off, there is still some excess minority charge in its internal layers as a
result of the previous conduction. When the TRIAC starts to turn off, these charges alter
the internal potential of the region near the gate and A1/MT1, so it is easier for the
capacitive current due to dv/dt to turn on the device again.
Another important factor during a commutation from on-state to off-state is the di/dt of
the current from A1/MT1 to A2/MT2. This is similar to the recovery in standard diodes:
the higher the di/dt, the greater the reverse current. Because in the TRIAC there are
parasitic resistances, a high reverse current in the p-n junctions inside it can provoke a
voltage drop between the gate region and the A1/MT1 region which may make the
TRIAC stay turned on.

In a datasheet, the commutating di/dt is usually indicated as (di/dt)c and is generally in
the order of a few amperes per microsecond.

The commutating dv/dt is very important when the TRIAC is used to drive a load with a
phase shift between current and voltage, such as an inductive load. Suppose one wants to
turn the inductor off: when the current goes to zero, if the gate is not fed, the TRIAC
attempts to turn off, but this causes a step in the voltage across it due to the
aforementioned phase shift. If the commutating dv/dt rating is exceeded, the device will not
turn off.
Application:
Low power TRIACs are used in many applications such as light dimmers, speed controls
for electric fans and other electric motors, and in the modern computerized control
circuits of many household small and major appliances.
However, when used with inductive loads such as electric fans, care must be taken to
assure that the TRIAC will turn off correctly at the end of each half-cycle of the AC
power. Indeed, TRIACs can be very sensitive to high values of dv/dt between A1/MT1
and A2/MT2, so a phase shift between current and voltage (as in the case of an inductive
load) leads to a sudden voltage step that can make the device turn on in an unwanted
manner.
Unwanted turn-ons can be avoided by using a snubber circuit (usually of the RC or RCL
type) between A1/MT1 and A2/MT2. Snubber circuits are also used to prevent premature
triggering, caused for example by voltage spikes in the mains supply.
Because turn-ons are caused by internal capacitive currents flowing into the gate as a
consequence of a high voltage dv/dt, a gate resistor or capacitor (or both in parallel) may
be connected between the gate and A1/MT1 to provide a low-impedance path to A1/MT1
and further prevent false triggering. This, however, increases the required trigger current
or adds latency due to capacitor charging. On the other hand, a resistor between the gate
and A1/MT1 helps draw leakage currents out of the device, thus improving the
performance of the TRIAC at high temperature, where the maximum allowed dv/dt is
lower. Values of resistors less than 1kΩ and capacitors of 100nF are generally suitable
for this purpose, although the fine-tuning should be done on the particular device model.
For higher-powered, more-demanding loads, two SCRs in inverse parallel may be used
instead of one TRIAC. Because each SCR will have an entire half-cycle of reverse
polarity voltage applied to it, turn-off of the SCRs is assured, no matter what the
character of the load. However, due to the separate gates, proper triggering of the SCRs is
more complex than triggering a TRIAC.
In addition to commutation, a TRIAC may also not turn on reliably with non-resistive
loads if the phase shift of the current prevents achieving holding current at trigger time.
To overcome that, pulse trains may be used to repeatedly try to trigger the TRIAC until it
finally turns on. The advantage is that the gate current does not need to be maintained


throughout the entire conduction angle, which can be beneficial when there is only
limited drive capability available.
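A minimal sketch of that pulse-train idea: the gate is pulsed repeatedly until the triac is seen to
conduct, instead of holding gate current over the whole conduction angle. The functions
pulse_gate_us() and triac_is_conducting() are hypothetical hardware hooks, and the pulse width
and retry count are illustrative only.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical hardware hooks -- stand-ins, not functions from this
     * project's actual source. */
    extern void pulse_gate_us(uint16_t width_us);   /* one short gate pulse  */
    extern bool triac_is_conducting(void);          /* load current detected */

    /* Keep retrying the trigger until the triac latches or the attempt
     * budget is used up. */
    bool trigger_with_pulse_train(void)
    {
        for (uint8_t attempt = 0; attempt < 20; ++attempt) {
            pulse_gate_us(50);
            if (triac_is_conducting())
                return true;                        /* latched: gate can rest */
        }
        return false;                               /* load never latched     */
    }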
MAIN FEATURES:

DESCRIPTION
Available either in through-hole or surface-mount packages, the BTA/BTB12 and T12
triac series are suitable for general purpose AC switching. They can be used as an ON/OFF
function in applications such as static relays, heating regulation, induction motor starting
circuits... or for phase control operation in light dimmers, motor speed controllers,...
The snubberless versions (BTA/BTB...W and T12 series) are specially recommended for
use on inductive loads, thanks to their high commutation performances. By using an
internal ceramic pad, the BTA series provides a voltage-insulated tab (rated at 2500V
RMS) complying with UL standards (File ref.: E81734).


6. PCB DESIGN
Design and Fabrication of Printed circuit boards
6.1 INTRODUCTION:
Printed circuit boards, or PCBs, form the core of electronic equipment, both domestic and
industrial. Some of the areas where PCBs are intensively used are computers, process
control, telecommunications and instrumentation.

6.2 MANUFACTURING:
The manufacturing process consists of two methods; print and etch, and print, plate and
etch. The single sided PCBs are usually made using the print and etch method. The
double sided plate through hole (PTH) boards are made by the print plate and etch
method.
The production of multilayer boards uses both methods. The inner layers are printed
and etched, while the outer layers are produced by print, plate and etch after pressing the
inner layers.
6.3 SOFTWARE:
The software used in our project to obtain the schematic layout is MICROSIM.
6.4 PANELISATION:
Here the schematic is transformed into the working positive/negative films. The circuit is
repeated conveniently to accommodate economically as many circuits as possible in a
panel, which can be operated on in every sequence of subsequent steps in the PCB process.
This is called panelisation. For PTH boards, the next operation is drilling.
6.5 DRILLING:
PCB drilling is a state-of-the-art operation. Very small holes are drilled with high speed
CNC drilling machines, giving a wall finish with little or no smear or epoxy, as required for
void-free through-hole plating.
6.6 PLATING:
This is the heart of the PCB manufacturing process. The holes drilled in the board are treated
both mechanically and chemically before depositing the copper by the electroless copper
plating process.

6.7 ETCHING:
Once a multilayer board is drilled and electroless copper deposited, the image available
in the form of a film is transferred onto the outside by photo printing using a dry film
printing process. The boards are then electrolytically plated with copper and tin over the
circuit pattern. The tin-plated deposit serves as an etch resist when the copper in the
unwanted areas is removed by conveyorised spray etching machines with chemical
etchants. The etching machines are attached to automatic dosing equipment, which
analyses and controls the etchant concentration.
6.8 SOLDER MASK:
Since a PCB design may call for very close spacing between conductors, a solder mask
has to be applied on both sides of the circuitry to avoid bridging between conductors.
The solder mask ink is applied by screening. The ink is dried, exposed to UV, developed
in a mild alkaline solution and finally cured by both UV and thermal energy.
6.9 HOT AIR LEVELLING:
After applying the solder mask, the circuit pads are soldered using the hot air leveling
process. The bare boards are fluxed and dipped into a molten solder bath. While removing
the board from the solder bath, hot air is blown on both sides of the board through air
knives in the machines, leaving the board soldered and levelled. This is one of the
common finishes given to the boards. Thus the double-sided plated-through-hole printed
circuit board is manufactured and is now ready for the components to be soldered.

7 SOFTWARE TOOLS
7.1 KEIL Assembler:
Keil development tools for the 8051 Microcontroller Architecture support every level of
software developer from the professional applications engineer to the student just
learning about embedded software development.
The industry-standard Keil C Compilers, Macro Assemblers, Debuggers, Real-time
Kernels, Single-board Computers, and Emulators support all 8051 derivatives and help
you get your projects completed on schedule.
The Keil 8051 Development Tools are designed to solve the complex problems facing
embedded software developers.
When starting a new project, simply select the microcontroller you use from the Device
Database and the µVision IDE sets all compiler, assembler, linker, and memory options
for you.
Numerous example programs are included to help you get started with the most popular
embedded 8051 devices.


The Keil µVision Debugger accurately simulates on-chip peripherals (I2C, CAN, UART,
SPI, Interrupts, I/O Ports, A/D Converter, D/A Converter, and PWM Modules) of your
8051 device.
Simulation helps you understand hardware configurations and avoids time wasted on
setup problems. Additionally, with simulation, you can write and test applications before
target hardware is available.
When you are ready to begin testing your software application with target hardware, use
the MON51, MON390, MONADI, or FlashMON51 Target Monitors, the ISD51 In-System Debugger, or the ULINK USB-JTAG Adapter to download and test program
code on your target system.
It's been suggested that there are now as many embedded systems in everyday use as
there are people on planet Earth. Domestic appliances from washing machines to TVs,
video recorders and mobile phones, now include at least one embedded processor. They
are also vital components in a huge variety of automotive, medical, aerospace and
military systems. As a result, there is strong demand for programmers with 'embedded'
skills, and many desktop developers are moving into this area.
We look at the inside of the 8051. We demonstrate some of the widely used registers of the
8051 with simple instructions such as MOV and ADD.
We discuss assembly language and machine language programming and define terms
such as mnemonic, opcode and operand.
The process of assembling and creating a ready-to-run program for the 8051.
Step-by-step execution of an 8051 program and the role of the program counter.
Then we look at some widely used assembly language directives, pseudo-code and
data types related to the 8051.
We discuss flag bits and how they are affected by arithmetic instructions.
Inside 8051:
Registers:
D7 D6 D5 D4 D3 D2 D1 D0
In the CPU, registers are used to store information temporarily. That information could be a
byte of data to be processed, or an address pointing to the data to be fetched.
The majority of 8051 registers are 8-bit registers, with the bits running from the
MSB (Most Significant Bit, D7) down to the
LSB (Least Significant Bit, D0).
With an 8-bit data type, any data longer than 8 bits must be broken into 8-bit chunks before it
is processed.
The most widely used registers of the 8051 are A (Accumulator),
B, R0, R1, R2, R3, R4, R5, R6, R7, DPTR (Data Pointer) and PC (Program Counter).
All of the above registers are 8 bits wide except DPTR and PC, which are 16 bits.

MOV (Instruction):
The MOV instruction copies data from one location to another. It has the
following format.
MOV destination, source ; copy source to destination

Example:
MOV A, #55H; Load value 55H into register A
MOV R0, A; copy contents of A into R0
MOV R1, A; copy contents of A into R1
1. A value can be loaded directly into any of the registers A, B or R0-R7. However, to
indicate that it is an immediate value it must be preceded with a pound sign (#).
MOV A, #23H
MOV R0, #12H
MOV R5, #0F9H
MOV R5, #F9H will cause an error.
A 0 is used between # and F to indicate that F is a hex number and not a letter.
2. If the values 0 to F are moved into an 8-bit register, the rest of the bits are assumed to be
zero. For example, in MOV A, #5 the result will be A = 05H; that is, A = 0000 0101.
3. Moving a value that is too large into a register will cause an error.
MOV A, #7F2H
7F2H > 8 bits (FFH)
4. A value to be loaded into a register must be preceded with a pound sign (#); otherwise
it will be loaded from a memory location.
For example, MOV A, 17H
means to move into A the value held in memory location 17H, which could have any value.
In order to load the value 17H into the accumulator we must write MOV A, #17H.
Notice that the absence of the # sign will not cause an error from the assembler, since it is a
valid instruction. However, the result would not be what the programmer intended.
ADD Instruction:
ADD A, source; add the source operand to the accumulator
MOV A, #25H  ; load 25H into A
MOV R2, #34H ; load 34H into R2
ADD A, R2    ; add R2 to the accumulator (A = 25H + 34H = 59H)
INTRODUCTION TO 8051 ASSEMBLY PROGRAMMING
o While the CPU can work only in binary, it can do so at a very high speed.
o A program consisting of 0s and 1s is called machine language.

o In the earlier days of the computer, programmers coded programs in machine
language.
o Eventually, assembly languages were developed that provided mnemonics for the
machine code instructions, along with other features that made programming faster and
less error-prone.
o Assembly language programs must be translated into machine code by a program
called Assembler.
o Assembly language is referred to as a low-level language because it deals directly
with the internal structure of the CPU.
o An assembler is used to translate an assembly language program into machine code.
o Today one can use many different programming languages such as BASIC,
PASCAL, C, C++, JAVA etc.; these languages are called high-level languages.
o The high-level languages are translated into machine code by a program called a
compiler.
7.2 Assembling and Running an 8051 Program:
1. First we use an editor to type in the program. Many excellent editors are available
that can be used to create and edit the program. The assembly language source file is
given the extension .asm.
2. The .asm source file containing the program code created in step 1 is fed to the 8051
assembler. The assembler converts the instructions into machine code and
produces an object file and a list file. The extension for the object file
is .obj, while the extension for the list file is .lst.
3. Assemblers require a third step called linking. The linker takes one or more
object files and produces an absolute file with the extension .abs.
4. Next the .abs file is fed into a program called OH (object-to-hex converter),
which creates a file with the extension .hex that is ready to be burned into ROM.
8051 Data types and directives
DB (Define Byte)


The DB directive is the most widely used data directive in the assembler.
It is used to define 8 bit data.
When DB is used to define data, the numbers can be in decimal, binary,
hex or ASCII formats.
To indicate ASCII, simply place the characters in quotation marks (like
this).
The assembler will assign the code for the numbers or characters
automatically.

ORG 00H
DATA1: DB 28        ; decimal (1C in hex)
DATA2: DB 00110101B ; binary (35 in hex)
DATA3: DB 39H       ; hex

Assembler directives
ORG (Origin):
The ORG directive is used to indicate the beginning of the address.
EQU (Equate):
o This is used to define a constant without occupying a memory
location.
o The EQU directive does not set aside storage for a data item but
associate a constant value with a data label so that when the label
appears in the program; its constant value will be substituted for the
label.
Example:
COUNT EQU 25
------------------
MOV R3, COUNT
When executing the instruction MOV R3, COUNT, the register R3 will be
loaded with the value 25.
Assume that there is a constant (fixed value) used in many different places in the
program, and the programmer wants to change its value throughout. By the use of
EQU, the programmer can change it once and the assembler will change all of its
occurrences, rather than search the entire program trying to find every occurrence.
END directive:
This indicates to the assembler the end of the source (.asm) file.

10. Advantages:

Constant speed control


Speed regulation irrespective of load conditions

11. Applications:

Used in a wide range of industrial applications

12. Conclusion:

Within this evolution, the microcontrollers (MCU) progressively replace analog


controllers and discrete solutions even in low cost applications. They are more flexible,
often need fewer components and provide faster time to market. With an analog IC, the
designer is limited to a fixed function frozen inside the device. With a DIAC control,
features like sensor feedback or enhanced motor drive cannot be easily implemented.
With the MCU proposed in this note (the ST6210), the designer can include his own
ideas and test them directly using EPROM or One Time Programmable (OTP) versions.
The triac is the least expensive power switch to operate directly on the 110/240V mains.
Thus it is the optimal switch for most low-cost power applications operating on-line. The LOGIC LEVEL or SNUBBERLESS triacs are a complement to the ST6210
MCU for such appliances. These triacs can operate with low gate current and can be
directly triggered by the MCU, while still maintaining a high switching capability. This
application note describes three different MCU based applications: a universal motor
drive, an AC switch and a light dimmer. They all operate with the same user interfaces
and almost the same software and hardware.


REFERENCES:
1. www.lsicsi.com/pdfs/Data_Sheets/LS7634_LS7635.pdf

2. www.opensourcepartners.nl/~costar/leddimmer/

3. www.datasheets.org.uk/BTA08%20ST%20light%20dimmer%20sche.

4. www.freescale.com/files/microcontrollers/doc/app.../AN2839.pdf

5. www.discovercircuits.com/L/lite-dimmer2.htm

6. www.allaboutcircuits.com ... THYRISTORS

7. www.electronicsteacher.com/list-of-schematics/.../Dimmer_Circuits.

8. educypedia.karadimov.info/electronics/lightdimmer.htm

9. www.kitsrus.com/pdf/k19.pdf

10. www.datasheetarchive.com/230v%20light%20dimmer-d...

11. www.ee.teihal.gr/labs/electronics/web/.../Light_dimmer_circuits.pdf

12. www.datasheets4u.com/Triac+BTB12+600+B-datasheet.html


