
NCRAM - 2014 (The First National Conference) ISBN: 978-93-83635-01-6

Application of MEMS technology in Enhancing Automobile Accelerometers

S. Naveen 1, Dept. of Mechatronics, BIT, A1, Ravi Murugan Padmanaba Apts, Kavundampalayam, Kovai 30. naveen18101994@gmail.com
L. Parthasarathy 2, Dept. of Mechatronics, BIT, 17/1A, Thaneer Panthal, Gnaniyam Palayam Post, V. Vellode, Erode-12. parthaerode@gmail.com
C. V. Kirupakaran 3, Dept. of Mechatronics, BIT, 2/54, Raj Residency, Periyar Nagar, Vadavelli, Kovai 41. Kirupakaran173@gmail.com
K. Aravindh 4, Dept. of Mechanical Engg., BIT, 17/1, New Teachers Colony, Erode-11. aravindhmail123@gmail.com
P. Kavin 5, Dept. of Mechatronics, BIT. kavinmce@gmail.com

ABSTRACT - For years, the trusty seat belt provided the sole form of passive restraint in our cars. Some years later came the concept of the airbag - a soft pillow to land against in a crash. This invention made a great impact by saving many lives. The airbag system involves the use of various sensors, of which the accelerometer is an important one. It helps in measuring the proper acceleration of the vehicle.
At the same time, MEMS began to gain importance in industrial areas. As it deals with structures at the micro-scale level, it helps in reducing the size of devices to a large extent. Thus, Grayson (2007) says that the result was the MEMS accelerometer, which is small in size and more efficient than the then-existing accelerometers. In this case, these accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a collision has occurred and the severity of the collision. These tiny micro-scale accelerometers revolutionized car airbag deployment systems in the mid-1990s. Silicon-based materials (e.g., single crystal silicon, polycrystalline silicon, silicon dioxide, and silicon nitride) are the primary materials used for constructing MEMS devices, and manufacturing approaches are derived from microfabrication processes developed for integrated circuits (ICs) [2]. The microfabrication process as used for silicon-based MEMS (especially for prototyping) is time-consuming and requires access to cleanroom equipment. Both the materials and the use of a cleanroom are expensive, and although the performance of silicon-based MEMS can be excellent, their relatively high cost has limited the applications they can address. Here, we suggest an alternate method of producing the MEMS accelerometer by using paper. These can actually replace the existing silicon accelerometers, thus saving space, weight and cost too. They can be manufactured easily as compared to silicon accelerometers. Xinyu Liu (2011) feels that paper is readily available, lightweight, and easy to manufacture; the paper substrate makes integration of electrical signal-processing circuits onto the paper-based MEMS devices straightforward.

Keywords - MEMS, airbag, accelerometer, paper

INTRODUCTION

As we all know, airbags are a type of automobile safety restraint like seatbelts. Inventors.about.com describes them as gas-inflated cushions built into the steering wheel, dashboard, door, roof, or seat of your car that use a crash sensor to trigger a rapid expansion to protect you from the impact of an accident. Although Breed invented a "sensor and safety system" - the world's first electromechanical automotive airbag system - in 1968, these airbags became more popular because of their durability and flexibility. They became an integral part of the system because of the growth of MEMS technology.

I. MEMS TECHNOLOGY:

As Wikipedia says, it is the technology of very small devices; it merges at the nano-scale into nanoelectromechanical systems (NEMS) and nanotechnology. They are miniaturized mechanical and electro-mechanical elements (i.e., devices and structures) that are made using the techniques of microfabrication, feels Matej Andrejai (2008). Although they have their presence in many products like the inkjet printer head, a video projector DLP system and a disposable bio-analysis chip, the technology came to the spotlight after MEMS was blended with the then-existing airbag system and made the airbag system more reliable, says H. Baltes (2002).

This was made possible by making the accelerometers using MEMS technology, producing a much smaller and more reliable device which can be used instead of the existing ones, which are far less reliable than those made with MEMS.

II. MEMS ACCELEROMETER:

An accelerometer is a device which measures all possible acceleration forces, both static and dynamic. It senses the acceleration associated with the phenomenon of weight experienced by any test mass at rest in the frame of reference of the accelerometer device. Hence, it is used in the airbag system

Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.

of an automobile, so that it can act as a sensor which signals a sudden change of force (in case of a crash). They are used not only in the airbag system but also in other parts of automobiles.
One of the most commonly used accelerometers is the piezoelectric accelerometer. But, since they are bulky and cannot be used for all operations, a smaller and highly functional device like the MEMS accelerometer was developed. Though the first of its kind was developed 25 years ago, it was not widely accepted until lately, when there was a need for large-volume industrial applications. Due to their small size and robust sensing feature, they have been further developed to obtain multi-axis sensing. MEMS are in fact used in many parts of automobiles other than the accelerometer, as shown in figure 3.

III. PAPER ACCELEROMETER:

Although silicon-based MEMS accelerometers function quite excellently, they too have defects because of their scaling properties. Their disposal also poses a problem; although they are very small, when considering them in large numbers, it will be a problem. Hence came the idea of making these sensors out of paper. The new paper-made device emulates the current piezoresistive silicon MEMS sensors that are at the heart of many modern accelerometers. Paper accelerometers are very cheap ($0.1 per m2 for printing paper) and they can be disposed of very easily. They can be manufactured very easily, even in a normal laboratory with basic tools, and the materials are easily available. The paper substrate makes integration of electrical signal-processing circuits onto the paper-based MEMS devices straightforward.
Figure 4A shows the paper-based force sensor (accelerometer) in a cantilever shape. Here, the carbon resistor is placed where the maximum surface strain occurs during deflection. Xinyu Liu (2011) feels that when a force is applied to the beam structure, the resistor experiences a mechanical strain/stress, which then induces a change in the resistance of the resistor. Measuring the change in resistance allows quantification of the applied force.

IV. FABRICATING PAPER ACCELEROMETER:

The cantilever can be fabricated using papers of small thickness. In this case, we took chromatography paper, which should then be cut using laser cutting. The required dimensions should be noted correctly and applied to the cantilever. This process hardly takes an hour (Figures 4B and 4C).
But we must consider one main point: paper is water-loving (hydrophilic). If it starts absorbing the available water, the mechanical and electrical properties of the sensor may get damaged. So we must render a hydrophobic character to it by treating the surface hydroxyl groups of the paper. This can lower the impact of the environment on it. After fabrication, the product is tested under some conditions.

MECHANICAL PROPERTIES OF PAPER ACCELEROMETER:

We have to check its stiffness so as to compare it with other materials used in constructing MEMS. So we are in need of its Young's modulus. For this, various force-deflection tests were taken, and at last the Young's modulus of the paper was found to be much lower (80 times) than that of silicon.

ELECTRICAL & THERMAL PROPERTIES:

The current-voltage characteristics of the carbon resistor placed in the cantilever were measured. As shown in figure 5A, we obtain a linear V-I curve. So, this must not pose a problem and it is not the major deciding factor for the accelerometer. We also tested its temperature coefficient. The effect of temperature on the output of the sensor could be made negligible by laying out another carbon resistor on the cantilever for temperature compensation and integrating it into the circuit for signal readout, like a Wheatstone-bridge-like circuit, as Xinyu Liu (2011) says.
Another great advantage of the paper accelerometer over the silicon-based one is that it can be folded, and its stiffness thereby increases automatically.

V. INTEGRATION OF ELECTRONIC DEVICE ONTO THE PAPER ACCELEROMETER:

H. Baltes (2002) feels that the resistance obtained from the carbon resistor must be converted to a voltage in order to send it as a signal. There are some ways to do this. Here, we are going to see a monolithic approach where we microfabricate the MEMS and a conventional IC onto a single chip, as expressed by A. C. Siegel (2003).
As shown in figure 6A, a Wheatstone approach is given in microfabricating it with the MEMS. The respective calibration curve is obtained (figure 6C) and the force resolution results in 120 mN.

CONCLUSION:

Here, we saw many options, like the feasibility of MEMS using paper as its construction material, and finally developed a paper-based accelerometer using MEMS. The paper's hydrophilic nature was removed and a hydrophobic nature was induced in it. Then we tested its mechanical, electrical and thermal characteristics and made appropriate conclusions.
These types of paper-based MEMS have many advantages: papers are low-cost and can result in low-cost MEMS devices; they are easy to handle and can be folded, increasing their stiffness; they can be made using low-cost tools and require only a normal environment; and we can modify the surface of paper due to its great chemical properties, while its high ratio of surface area to weight offers opportunities for surface tailoring to generate new types of sensitivities.
Thus, paper-based accelerometers are of great use in the near future and they can increase the lifetime of the MEMS devices


and cut the cost of the automobile to some extent. Not only in automobiles, they can also be used in various other fields (navigation, gyrometers, astrophysical applications, etc.).
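The readout chain described in Section V - a strain-induced change in the carbon resistor converted to a voltage by a Wheatstone bridge - can be sketched numerically. The Python fragment below is an illustration only: the supply voltage, nominal resistance and 1% strain-induced change are assumed values for demonstration, not measurements from the paper device.

```python
# Illustrative sketch: converting the carbon piezoresistor's change in
# resistance into a Wheatstone bridge output voltage.
# All component values below are assumptions, not data from the paper.

def wheatstone_output(v_supply, r_sense, r_ref):
    """Output of a quarter-bridge: one sensing resistor against three
    fixed reference resistors of nominal value r_ref."""
    # Midpoint voltage of the sense arm minus that of the reference arm.
    return v_supply * (r_sense / (r_sense + r_ref) - 0.5)

R_NOMINAL = 10_000.0   # ohms, assumed nominal carbon-resistor value
V_SUPPLY = 5.0         # volts, assumed bridge excitation

# With no strain the bridge is balanced and the output is zero.
print(wheatstone_output(V_SUPPLY, R_NOMINAL, R_NOMINAL))        # 0.0

# A 1% resistance increase from bending strain unbalances the bridge.
v_out = wheatstone_output(V_SUPPLY, R_NOMINAL * 1.01, R_NOMINAL)
print(round(v_out * 1000, 3))  # bridge output in millivolts
```

In practice the second, temperature-compensating carbon resistor mentioned above would occupy an adjacent bridge arm, so that a uniform temperature change shifts both arms equally and cancels at the output.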

VI. REFERENCES

[1] A. C. R. Grayson, R. S. Shawgo, A. M. Johnson, N. T. Flynn, L. I. Yawen, M. J. Cima and R. Langer, Proc. IEEE, 2004, 92, 6-21; C. Liu, Adv. Mater., 2007, 19, 3783-3790.

[2] Xinyu Liu, Martin Mwangi, XiuJun Li, Michael O'Brien and George M. Whitesides, Paper-based piezoresistive MEMS sensors, Lab Chip, 2011 Jul 7; 11(13): 2189-96. doi: 10.1039/c1lc20161a. Epub 2011 May 12.

[3] http://inventors.about.com/od/astartinventions/a/air_bags.htm

[4] http://en.wikipedia.org/wiki/Microelectromechanical_systems

[5] Matej Andrejai, seminar on MEMS Accelerometers, 2008.

[6] http://www.instrumentationtoday.com/mems-accelerometer/2011/08

[7] H. Baltes, O. Brand, A. Hierlemann, D. Lange and C. Hagleitner, in Proc. IEEE Conf. Micro Electro Mechanical Systems, 2002, pp. 459-466.

[8] A. C. Siegel, S. T. Phillips, M. D. Dickey, N. Lu, Z. Suo and G. M. Whitesides, Adv. Funct. Mater., 2010, 20, 28-35; J. L. Tan, J. Tien, D. M. Pirone, D. S. Gray, K. Bhadriraju and C. S. Chen, Proc. Natl. Acad. Sci. U. S. A., 2003, 100, 1484-1489.

FIGURES:

Fig. 1 Airbag in an Automobile
Fig. 2 MEMS
Fig. 3 MEMS and Accelerometers in Automobiles
Fig. 4 Paper based force sensor


Fig. 5 Electrical and Thermal properties of paper accelerometer
Fig. 6 Monolithic integration of MEMS with a Wheatstone bridge circuit


Development of Automated, Vision-based Component Sorting System


1. R. Ramesh, 2. N. Lavanya, 3. K. Sumalohitha
Department of Mechanical Engineering, MVGR College of Engineering, Vizianagaram
dr_r_ramesh@yahoo.co.uk

Abstract - In this work, an attempt has been made to propose an automated inspection system to facilitate selection and sorting of components for assembly operations. The proposed system has three modules, namely the image capturing module, the image processing module, and an interface module. The system uses a web camera for image capturing, an image processing routine for image analysis and a dialog box for interface created using Visual C++. The system enables the operator to control inspection without his direct intervention in the process being carried out. The operator is provided with a dialog box for interface. With a click on the icons of the dialog box, the image is captured automatically, loaded into the system, enhanced, processed and finally the output is generated. Based on the output, the subsequent sorting and selection processes are carried out.

Index Terms - Automation, image processing.

I. INTRODUCTION

As the demands of the consumer economy grow exponentially, the ever-increasing requirements of the consumers in terms of both quality and quantity are to be satisfied. This places very significant challenges on the manufacturing industry, which is charged with the onerous task of meeting the demand and at the same time maintaining the quality of the product being manufactured. Automation in manufacturing improves productivity significantly without compromise in the quality level of the product. Flexible manufacturing systems constitute a number of automated workstations, their functioning depending upon the product to be manufactured, with all the workstations working in integration with each other and controlled by a central computer. This increases the productivity of the system. Inspection is an important phase of a manufacturing process, and for a FMS setup to function effectively, this phase, whether performed as on-line inspection or off-line inspection, also needs to be automated along with the other processes. Several different techniques such as machine vision, range sensing, inspection based on surface reflections, image processing etc. have been adopted into the fold of automation approaches to inspection, one of the most widely used being image processing. In this work, an attempt has been made to develop a system that automates the process of inspection using a vision and image processing approach. The objective of automating inspection is to facilitate sorting and selection of components for subsequent assembly operations that would be carried out in a FMS. The presented system automatically captures the image of a component placed at a specified location, loads it, carries out processing and analysis of the loaded image, and conveys the nomenclature of the component supplied to the system so as to facilitate sorting and selection.

II. REVIEW OF LITERATURE

Extensive research and analysis have been carried out by many researchers over the last several years in developing automatic inspection systems. Many of these systems have been employed in manufacturing industries, food processing industries and so on. An automated visual inspection station was developed by Lui et al. (2005). This station uses cameras and computers to replace human vision in the evaluation and inspection of bearing diameters to ensure that defective products may not be selected. To achieve the above requirements, the developed automated visual inspection system was equipped with an industrial PC, a CCD camera to capture images, an image processing system and a mechanical move table. The setup allows accurate non-contact measurement while the industrial products are moving on a conveyor belt continuously. Yan and Cui (2006) proposed an on-line crack inspection system for glass bottles that inspects cracks with the help of an intelligent automated system. In order to strengthen the system's ability to learn and adapt itself, a back propagation neural network was introduced in the system and a model for crack inspection of glass bottles was built.
During the past few years there has been a growing demand for improving the overall quality and reducing the production cost of finished products. The recent availability of higher speed, more versatile, lower cost digital computer hardware, coupled with advances in areas such as computer aided design and manufacturing, robotics, computer vision, AI etc., has spurred research and development efforts in the use of these new technologies for the automation of industrial production lines. The research carried out by Pai et al. (1983) is concerned with the visual inspection of holes drilled in aircraft engine combustor assemblies. Using a digital image processor, combustor assemblies were examined and scanned images were processed for analysis of parameters such as hole areas and diameters. Mitchell and Sarhadi (1992) developed a system for the automated visual inspection of carbon fabric workpieces (called plies) in a robotic assembly cell. The main task of the system is to inspect the positional accuracy of each ply as it is laid up. The texture boundary is used as a first estimate of the position of the real ply edge. Birch wood veneer boards have a variety of uses including furniture, flooring and vehicle sides. To increase the grading accuracy of the veneer boards, attempts were made to replace manual inspection by Automated Visual Inspection (AVI), which employs a camera and image processing routines. Pham and Alcock (1997) developed an AVI system wherein the image of the sheet is first acquired using appropriate cameras and lighting. The image is then segmented into two areas namely clear wood and defects such as coloured streaks,


hard rot, holes, pin knots, rotten knots, sound knots, splits, streaks and worm holes. Each region was classified into a defect type using the extracted features and a classifier. Fernandez et al. (2002) developed an automated visual inspection system (AVIS) for quality control of preserved orange segments, widely applicable to production processes of preserved fruits and vegetables. Yalcin and Bozma (1998) developed an automated visual inspection system that uses the concept of selective fixation as a basis for all of its visual processing including inspection. A computer vision system was developed for locating and inspecting integrated circuit chips for their automatic alignment during various manufacturing processes at the General Motors Delco Electronics Division (1978). This system, called SIGHT-I, is currently in production use.
It is evident from the above review that a lot of work has been done to automate inspection so as to apply it to various manufacturing processes, thereby improving their productivity. Focus has been on replacing human vision with machine vision so that it leads to increased flexibility, lower cost, increased reliability and accuracy, and greater speed and serviceability. With the increase in the complexity of and demand for industrial components, the necessity of automating inspection has become inevitable. Based on the study of the work carried out in this area, an automated inspection station using an image processing technique is proposed in the sections below.

III. SYSTEM CONFIGURATION

The configuration of the proposed intelligent system for automatic inspection is as shown in Fig 3.1. The objective of the automated inspection system is to provide the operator with a facility to carry out selection and sorting of the components of a FMS without direct intervention in the process being carried out. The supplied input is then processed by the system to report the component nomenclature. The processing of the input image is done through the Image Processing Tool Box that is built into Matlab. VC++ handles the interface between the operator, camera and Matlab. The Image Processing Tool Box facilitates the processing of the input image and gives the desired output. The development of this system was taken up in different modules, namely:
a. Image Capturing Module
b. Image Processing Module
c. Interface Module

The system uses the Microsoft LifeCam VX-1000 for image capturing. The LifeCam VX-1000's small, ring-shaped base is stable and works for a variety of mounting situations. The head swivels 360 degrees and tilts 45 degrees down and 90 degrees up. The camera comes with drivers and software for taking pictures and video clips. 2-D image processing can be efficiently done using the Image Processing Tool Box built into Matlab, which is a collection of functions that extend the capability of the Matlab numeric computing environment.
As seen from the above, to automate the process of inspection, all the modules of the system need to have a direct user interface in order to enable an unskilled operator (having no acquaintance with the applications used) to control the process of inspection. The work presented herein attempts to link the powerful features of Matlab and the LifeCam. In addition, the VC++ project handling this can be converted into an executable (.exe) file, the execution of which could be automatically timed and invoked from the Windows platform.
On the above basis, a dialog box was created in VC++ to accept the inputs from the operator and save these to different files. This facilitates the invoking and execution of any Matlab program directly from the VC++ executable file without the operator having to start Matlab separately. This facility offers a good opportunity to automate the whole process, right from taking the images of the components, saving them and executing the Matlab programme, to finally giving the result. The dialog box for the operator to work with is as shown in Figure 3.2.

Fig. 3.2: Dialog box for operator interface

Fig. 3.1 System Configuration

IV. CONSTRUCTION OF SUPPORT STRUCTURE

It is obvious that the result or output of the analyzing system depends upon the captured image. During development of the proposed system it was noticed that the output of image processing and analysis depends on certain parameters such as image resolution, position of the component, light intensity, uniformity and illumination of the object, distance between lens and object, and noise intensity. Keeping the above parameters in mind, an attempt has been made to minimize the variation and


accordingly a support structure was designed: a wooden base supporting a horizontal beam made of polymer material, which has a vertical cantilever-type attachment at its top.

Fig. 4.1 Automated Inspection Station

To ensure free working of the system it is necessary to make sure that image capturing is proper. In view of all the parameters that affect image quality, images of various components were acquired under proper conditions to obtain perfect images.

Fig. 4.2 Image of round component as captured by camera
Fig. 4.3 Image of square component as captured by camera

V. IMAGE PROCESSING AND ANALYSIS

As discussed earlier, the proposed system's objective is to select and sort out components for assembly. This is achieved by means of image processing and analysis using the Image Processing Tool Box of Matlab. Selection and sorting of components can be done based on various factors which in turn depend on the variety of components being manufactured. This is a very vast area of work, but focus has been only on the following ways of differentiating the components:
Differentiation Based On Shape
Differentiation Based On Size
Differentiation Based On Special Features Specific For A Given FMS

A. DIFFERENTIATION BASED ON SHAPE

The proposed system is capable of differentiating among round, square and rectangular shaped objects. The following algorithm explains the technique of differentiation:

main {
    Read image;
    Convert RGB to logical;
    Remove noise;
    Identify boundary;
    Compute properties;
    Identify component;
    Print result;
}

Fig. 5.1 Original image as captured by camera
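The shape-identification step can be sketched in Python as a rough stand-in for the Matlab routine; the fill-ratio thresholds below are assumptions, not values from the paper. The idea is that a disc fills about pi/4 (roughly 0.785) of its bounding box, while a square fills nearly all of it, and a rectangle is distinguished from a square by its aspect ratio.

```python
# Simplified stand-in for the Matlab shape-identification routine:
# classify a binary ("logical") image as round, square or rectangular
# from its bounding box and fill ratio. Thresholds are assumptions.

def classify_shape(pixels):
    """pixels: list of rows of 0/1 values (the logical image)."""
    coords = [(r, c) for r, row in enumerate(pixels)
              for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    fill = len(coords) / (height * width)   # area / bounding-box area
    aspect = max(height, width) / min(height, width)
    if fill > 0.95:                          # nearly fills its box
        return "square" if aspect < 1.1 else "rectangle"
    return "round" if fill > 0.7 else "unknown"  # disc fills ~pi/4

# A 4x4 solid block is classified as square.
square = [[1, 1, 1, 1]] * 4
print(classify_shape(square))   # square

# A 3x6 solid block is classified as rectangle.
rect = [[1, 1, 1, 1, 1, 1]] * 3
print(classify_shape(rect))     # rectangle
```

A production version would also need the noise-removal and boundary-identification steps of the algorithm, which Matlab's toolbox provides ready-made.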


Fig. 5.2 RGB image converted to grayscale image
Fig. 5.3 Image converted to logical type, i.e., black and white
Fig. 5.4 Final output window specifying that the object is round

On similar lines, square components were also used for analysis of the system.

Fig. 5.5 Original image as captured by the camera
Fig. 5.6 RGB image converted to logical image
Fig. 5.7 Final output window specifying that the object is square
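The conversion pipeline illustrated by these figures (RGB image to grayscale, then to a logical black-and-white image) can be sketched as follows. The luma weights are the standard ITU-R BT.601 coefficients; the threshold of 128 is an assumption, since the text does not state the value used.

```python
# Sketch of the RGB -> grayscale -> logical conversion pipeline.
# The threshold value of 128 is an assumed midpoint, not the paper's.

def rgb_to_gray(pixel):
    """Standard luma weighting of an (R, G, B) tuple on a 0-255 scale."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_logical(image, threshold=128):
    """image: rows of (R, G, B) tuples -> rows of 0/1 values."""
    return [[1 if rgb_to_gray(p) >= threshold else 0 for p in row]
            for row in image]

# White and light gray map to 1; near-black maps to 0.
row = [(255, 255, 255), (180, 180, 180), (10, 10, 10)]
print(to_logical([row]))   # [[1, 1, 0]]
```

In Matlab the same two steps are single toolbox calls; the sketch above only makes explicit what those calls compute.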


B. DIFFERENTIATION BASED ON SIZE

For differentiation of components based on size, the dimensional analysis of the component is the basis. A complete report of the component nomenclature is the required output. The following algorithm explains the technique of differentiation:

main {
    Read image;
    Convert RGB to logical;
    Remove noise;
    Identify boundary;
    Compute dimensions;
    Find relative factor;
    Open text file;
    Print result;
}

C. DIFFERENTIATION BASED ON SPECIAL FEATURES

The proposed system carries out the sorting of plates on the basis of the number of through holes on their surfaces. In addition to counting the number of holes, the system also gives the dimensions of the plate and the holes, and specifies the pitch of the holes. The following algorithm explains the technique of differentiation:

main {
    Read image;
    Convert RGB to logical;
    Remove noise;
    Identify boundary;
    Determine number of holes;
    Find out region;
    Compute;
    Print result;
}

Fig. 5.8 Image with features converted to black and white
Fig. 5.9 Matlab output specifying number of holes

VI. SYSTEM OF AUTOMATION

As discussed previously, the main objective of the system is the automation of inspection. Visual C++ has been used to handle the interface between all the remaining components of the system. A dialog box has been created for the interface. The dialog box has icons or buttons on it and is shown in Figure 6.1.

Fig. 6.1 Dialog Box for User Interface

Clicking on the button 'LifeCam' opens a window that enables the operator to capture the image of the component placed under the camera. The image is stored at a certain location and is made ready to be accessed by the next component of the system.
Clicking on the button 'Analyze' activates the Matlab software, runs the Matlab program for image analysis, and finally generates the output.
Clicking on the button 'Result' opens a text file displaying the final result of the image analysis.

Thus the system simplifies the process of inspection, enabling the operator to control it without direct intervention and supervision. The powerful features of different applications such as the LifeCam and Matlab are linked together using a common interface without the need to run each of them separately; even an unskilled operator having no knowledge of these applications can be employed for this.
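The hole-counting step of the special-features algorithm can be sketched as a connected-component search over the logical image: a background region that never touches the image border is a through hole. The fragment below is a simplified, assumed implementation of that one step, not the authors' Matlab code.

```python
# Minimal sketch (assumed logic) of the hole-counting step: count
# connected background regions that do not touch the image border.

from collections import deque

def count_holes(img):
    """img: rows of 0/1; 1 = plate material, 0 = background or hole."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    holes = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 0 and not seen[r][c]:
                # Flood-fill this background region, tracking whether
                # it touches the border (then it is not a hole).
                queue, touches_border = deque([(r, c)]), False
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    if y in (0, rows - 1) or x in (0, cols - 1):
                        touches_border = True
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] == 0 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                holes += not touches_border
    return holes

# A 5x7 plate with two one-pixel through holes.
plate = [[1, 1, 1, 1, 1, 1, 1],
         [1, 0, 1, 1, 1, 0, 1],
         [1, 1, 1, 1, 1, 1, 1],
         [1, 1, 1, 1, 1, 1, 1],
         [1, 1, 1, 1, 1, 1, 1]]
print(count_holes(plate))   # 2
```

The hole dimensions and pitch reported by the system would follow from the coordinates collected during the same flood fill (region area, centroid, and centroid-to-centroid distance).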


VII. CONCLUSION

Productivity and performance of a given manufacturing system can be significantly enhanced by automation and minimizing human intervention to the extent possible. Extensive research has been undertaken in the field of automation of inspection by several researchers in order to develop a system that simplifies the process. In this work, an attempt has been made to develop an automated inspection system for selection and sorting of components for further assembly operations carried out in a FMS. The heart of the model is the Image Processing Module that incorporates functions from Matlab for image enhancement, processing and analysis. During the development of the system several problems were encountered during image capturing. To overcome them, a suitable support structure for the image capturing unit was constructed. Using Visual C++, a dialog box was built and all the components of the system were linked to a common platform so as to enable the user to automate the inspection process and thus bring down lead time.

VIII. REFERENCES

[1] Z. Liang-yu, Z. Xiao-jun and P. Ming-quing, Automated visual inspection for measurement of workpiece, IMTC-2005, vol. 2, pp. 872-877, May 2005.

[2] C. Fernandez and J. Suardiaz, Automated inspection system for classification of preserved vegetables, ISIE-2002, vol. 1, pp. 265-269.

[3] A. L. Pai, K. Lee, K. L. Palmer and D. G. Selvidge, Automated visual inspection of aircraft engine combustor analysis, International Conference on Robotics and Automation, IEEE-1983, vol. 6, pp. 563-572.

[4] H. Yalcin, H. Bozma, Automated inspection system with biologically inspired vision, Intelligent Systems Laboratory, Intelligent Robots and Systems, vol. 3, pp. 1808-1813, Oct 1998.

[5] Yan Tai-Shan, Cui Du-Wu, The method of intelligent inspection of product quality based on computer vision, Computer Aided Industrial Design and Conceptual Design, pp. 1-6, Nov 2006.

[6] T. A. Mitchell, M. Sarhadi, A machine vision system using fast texture analysis for automated visual inspection, IEEE International Conference, pp. 399-402, 1992.

[7] Michael L. Baird, A Computer Vision System for Automated IC Chip Manufacture, IEEE-1978, vol. 8, pp. 133-139.


Invasion of Robotics in Medical Science, Future Trends and Its Applications

P. M. Menghal, Neha Dimri, Vishal Dahiya


Faculty of Degree Engineering,
Military College of Electronics & Mechanical Engineering, Secunderabad, 500 015
Andhra Pradesh, India.
prashant_menghal@yahoo.co.in | nehadimri29@gmail.com | vishu_dahiya499@yahoo.com

Abstract— With the advent of robotics in surgery, human beings have sparked off another revolution. After the White and Green Revolutions in our country, the time has now come for a robotic revolution, especially in the medical field. Though it may not be possible to predict for sure when this will happen, the idea is rapidly catching fire amongst scientists and the medical community. But even robotic surgery has its set of pros and cons, and it is the patient who has to bear the brunt of the complications or benefits of this new technique. Anesthesiologists, as always, must do their part to be the patient's 'best man' in the preoperative period. In this article we have tried to bring out the introduction of surgical robots into the operation theater, their development and future trends. Also, an endeavor has been made to keep it simple for the layman to understand.

Keywords— Robotic surgery, Non-invasive, da Vinci

I. INTRODUCTION

Surgery has traditionally been a specialty within the medical profession that has revolved around invasive procedures to treat various maladies. Initially, trauma induced by the therapeutic procedure was necessary and reasonable to provide benefit to the patient. But now, through the innovation of digital imaging technology, combined with optical engineering and improved video displays, surgeons can operate inside body cavities for therapeutic intervention without the large incisions previously necessary to allow a surgeon's hands access to the necessary organs. Rather than creating large incisions several inches long to gain access to underlying tissues, minimally invasive surgical techniques typically rely on small half-inch incisions encircling the surgical field in order to insert small scopes and instruments. Minimally invasive surgery has caused a change in the route of access and has significantly and irrevocably changed the surgical treatment of most disease processes. Patients still undergo interventions to treat disease, but minimally invasive surgery makes possible a reduction or complete elimination of the 'collateral damage' required to gain access to the organ requiring surgery. While the benefits of this approach were numerous for the patient, early technology limited the application of minimally invasive surgery to some procedures. Specifically, surgeons using standard minimally invasive techniques lost the value of a natural three-dimensional image, depth perception, and articulated movements. Magnification of small structures was often difficult, and instruments were rigid and without joints. Robotic surgery has provided the technology to address these limitations and allow the application of minimally invasive surgery to a broader spectrum of patients and their diseases. Surgical robots relieve some of these limitations by providing fine motor control, magnified three-dimensional imaging and articulated instruments. The use of robotics in surgery is now broad-based across multiple surgical specialties and will undoubtedly expand over the next decades as new technical innovations and techniques increase the applicability of its use.
Today's medical robotic systems were the brainchild of the United States Department of Defense's desire to decrease war casualties through the development of remote surgery. What prompted this initiative were the wounded soldiers of the Vietnam War: one third of the total deaths were due to exsanguinating hemorrhage in casualties that had the potential to survive. The first documented use of robotic surgery came in 1985, when the PUMA 560 robotic surgical arm was used to take a neurosurgical biopsy. Robots are amazing models of a virtual workforce that function with only electricity and software. Robotic surgeons are an amazing advantage to have in the medical field. However, they are still in their infancy and have miles to go before they replace humans.

Fig. 1 An Operation Theatre Robotic Surgery.

(a) Advantages

The revolution in robotic surgery has brought benefits to patients like:
- Robots allow unprecedented control and precision of surgical instruments with minimally invasive technique and microsurgery.
- No tremors of the surgeon's hand, and precise movement of instruments.
- Present-day robotic surgical systems have 7 degrees of freedom, just like the human forelimb, in contrast to the laparoscopic arm, which provides only 4 degrees of movement.
- Robotic surgeons are free from fatigue and can work for extended hours.

This surgery also entails:
- A short stay at hospital.
- Less patient morbidity.
- Shorter convalescence for patients.

(b) Limitations

Although rapidly developing, robotic surgical technology has not achieved its full potential owing to a few limitations. Cost-effectiveness is a major issue; two recent studies comparing robotic procedures with conventional operations showed that although the absolute cost of robotic operations was higher, the major part of the increased cost was attributed to the initial cost of purchasing the robot (estimated at $1,200,000) and yearly maintenance ($100,000). Both factors are expected to decrease as robotic systems gain more widespread acceptance. However, it is conceivable that further technical advances may at first drive prices even higher. Decreasing operative time and hospital stay will also contribute to the cost-effectiveness of robotic surgery. Other drawbacks include the bulkiness of the robotic equipment currently in use. Lack of tactile and force feedback to the surgeon is another major problem, for which haptics (i.e., systems that recreate the feel of tissues through force feedback) offers a promising, although as yet unrealized, solution. Further limitations:
- A robot is a machine that requires monitoring by human beings.
- In the event of a crash or malfunction, the patient's safety is a matter of concern.
- Robotic surgery prolongs the surgery time.
- It requires a larger operating space and is bulky.
- The staff must be adequately trained to operate in conjunction with the machine.
- Limited dexterity and hand-eye coordination.

II. WHAT IS ROBOSURGERY

Due to their precision, robots are being increasingly used for certain types of microsurgery. This lets the surgeon perform very delicate procedures that would otherwise be too fine for the human hand. The surgeon can perform surgery from a remote location whilst the patient is being attended to by a robot. Assisted by tactile/feedback sensors, the surgeon can feel the tissue underneath the robot's instrument. Around a decade ago, surgeons at the Presbyterian Hospital in New York performed a robotic heart surgery using the 'da Vinci' surgical system. This particular method entails making four puncture wounds, each about 2.5 cm in diameter, through which surgeons operate on the heart using pencil-size instruments. The surgeons can comfortably view the progress on a monitor while sitting at the console a few meters away. The doctors who performed the surgery were quite satisfied with the results, and the patient also recovered well within the required time frame. The 'da Vinci' surgical system is gaining worldwide popularity and has been installed in hundreds of health centers across the world.
In today's operating rooms, you'll find two or three surgeons, an anesthesiologist and several nurses, all needed for even the simplest of surgeries; most surgeries require nearly a dozen people in the room. As with all automation, surgical robots will eventually eliminate the need for some personnel. Taking a glimpse into the future, surgery may require only one surgeon, an anesthesiologist and one or two nurses. In this nearly empty operating room, the doctor sits at a computer console, either in or outside the operating room, using the surgical robot to accomplish what once took a crowd of people to perform. The use of a computer console to perform operations from a distance opens up the idea of telesurgery, in which a doctor performs delicate surgery miles away from the patient. If the doctor doesn't have to stand over the patient to perform the surgery, and can control the robotic arms from a computer station just a few feet away, the next step would be performing surgery from locations even farther away. If it were possible to use the computer console to move the robotic arms in real time, then a doctor in California could operate on a patient in New York. A major obstacle in telesurgery has been latency -- the time delay between the doctor moving his or her hands and the robotic arms responding to those movements. Currently, the doctor must be in the room with the patient for robotic systems to react instantly to the doctor's hand movements. Having fewer personnel in the operating room and allowing doctors to operate on patients long-distance could lower the cost of health care in the long term. In addition to cost efficiency, robotic surgery has several other advantages over conventional surgery, including enhanced precision and reduced trauma to the patient.
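The latency obstacle discussed above can be given a rough scale with a back-of-the-envelope calculation. The sketch below is illustrative only (the distance figure and overhead term are assumptions, not data from this paper): even at the speed of light in optical fiber, a surgeon's command and the video feedback must make a round trip, and network processing adds further delay on top of propagation.

```python
# Illustrative telesurgery latency estimate (assumed figures, not from
# the paper). Signals in optical fiber travel at roughly 200,000 km/s;
# the delay the surgeon perceives is the round trip plus any
# switching/processing overhead in the network.

def round_trip_delay_ms(distance_km, fiber_speed_km_s=200_000.0,
                        overhead_ms=0.0):
    """Round-trip signal delay in milliseconds over optical fiber."""
    one_way_ms = distance_km / fiber_speed_km_s * 1000.0
    return 2 * one_way_ms + overhead_ms

# For an assumed New York-to-California distance of ~4,000 km,
# propagation alone contributes about 40 ms round trip; real networks
# add switching and processing overhead on top of that.
```

This is why the control loop, not the distance itself, is the hard constraint: every extra millisecond sits between the surgeon's hand and the robot's response.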


For instance, traditional heart bypass surgery requires that the patient's chest be "cracked" open by way of a 1-foot (30.48 cm) long incision. With the da Vinci system, however, it is possible to operate on the heart by making three or four small incisions in the chest, each only about 1 centimeter in length. Because the surgeon makes these smaller incisions instead of one long one down the length of the chest, the patient experiences less pain, trauma and bleeding, which means a faster recovery. Robotic assistants can also decrease the fatigue that doctors experience during surgeries that can last several hours. Surgeons can become exhausted during those long surgeries and can experience hand tremors as a result. Even the steadiest of human hands cannot match those of a surgical robot. Engineers program robotic surgery systems to compensate for tremors, so if the doctor's hand shakes, the computer ignores it and keeps the mechanical arm steady.

III. WORKING OF ROBOTIC SYSTEM

Today's robotic devices typically have a computer software component that controls the movement of the mechanical parts of the device as it acts on something in its environment. The software is "command central" for the device's operation. The surgeon sits at the console of the surgical system several feet from the patient. He looks through the vision system -- like a pair of binoculars -- and gets a huge, 3-D view of the inside of the patient's body and the area of the operation. While watching through the vision system, the surgeon moves the handles on the console in the directions he wants to move the surgical instruments. The handles make it easier for the surgeon to make precise movements and to operate for long periods of time without getting tired. The robotic system translates and transmits these precise hand and wrist movements to tiny instruments that have been inserted into the patient through small access incisions. This combination of increased view and tireless dexterity helps overcome some of the limitations of other types of less invasive surgery, and it allows minimally invasive surgery to be used for more complex operations.

(a) Surgeon Console: The surgeon is situated at this console several feet away from the patient operating table. The surgeon has his head tilted forward and his hands inside the system's master interface. The surgeon sits viewing a magnified three-dimensional image of the surgical field with a real-time progression of the instruments as he operates. The instrument controls enable the surgeon to move within a one-cubic-foot area of workspace.

(b) Patient-side Cart: This component of the system contains the robotic arms that directly contact the patient. It consists of two or three instrument arms and one endoscope arm. In 2003, Intuitive launched a fourth arm, costing $175,000, as part of a new system installation or as an upgrade to an existing unit. It provides the advantage of being able to manipulate another instrument for complex procedures, and it removes the need for one operating room nurse.

(c) Detachable Instruments: The EndoWrist detachable instruments allow the robotic arms to maneuver in ways that simulate fine human movements. Each instrument has its own function, from suturing to clamping, and is switched from one to the other using quick-release levers on each robotic arm. The device memorizes the position of the robotic arm before the instrument is replaced, so that the second instrument can be reset to the exact same position as the first. The instruments' ability to rotate in full circles provides an advantage over non-robotic arms. The seven degrees of freedom (meaning the number of independent movements the robot can perform) offer considerable choice in rotation and pivoting. Moreover, the surgeon is able to control the amount of force applied, which varies from a fraction of an ounce to several pounds. The Intuitive Masters technology also has the ability to filter out hand tremors and scale movements; as a result, the surgeon's large hand movements can be translated into smaller ones by the robotic device. Carbon dioxide is usually pumped into the body cavity to make more room for the robotic arms to maneuver.

(d) 3-D Vision System: The camera unit or endoscope arm provides enhanced three-dimensional images. This high-resolution, real-time magnification showing the inside of the patient gives the surgeon a considerable advantage over regular surgery. The system provides over a thousand frames of the instrument position per second and filters each image through a video processor that eliminates background noise. The endoscope is programmed to regulate the temperature of the endoscope tip automatically to prevent fogging during the operation. Using the Navigator control, it also enables the surgeon to quickly switch views through the use of a simple foot pedal.

Fig.2 Set Up For Robotic Surgery.
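The motion scaling and tremor filtering attributed above to the Intuitive Masters technology can be sketched as a low-pass filter followed by a scale factor. The sketch below is a minimal illustration with assumed parameter values, not Intuitive's actual algorithm: an exponential moving average attenuates high-frequency tremor, and a scale factor maps large hand movements to smaller instrument movements.

```python
# Illustrative sketch (assumed parameters, not Intuitive's algorithm):
# motion scaling plus tremor filtering. Hand positions are smoothed with
# an exponential moving average (a simple low-pass filter that damps
# high-frequency tremor) and then scaled down, e.g. 5 mm of hand travel
# becomes 1 mm of instrument travel at scale=0.2.

def filter_and_scale(hand_positions, scale=0.2, smoothing=0.3):
    """Map raw hand positions (mm) to instrument positions (mm)."""
    instrument = []
    smoothed = hand_positions[0]
    for x in hand_positions:
        smoothed = smoothing * x + (1 - smoothing) * smoothed
        instrument.append(scale * smoothed)
    return instrument
```

A steady hand held at 10 mm maps to a steady 2 mm instrument position, while a rapid back-and-forth tremor is both smoothed and shrunk before it reaches the instrument tip.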


Table 1: Comparison of Conventional Laparoscopic Surgery versus Robot-Assisted Surgery.

IV. CATEGORISATION OF ROBOTIC SURGERY

This categorization is based on the involvement of the surgeon with the robot during the procedure:

(i) Supervisory controlled: A supervisory-controlled robotic surgical system offers the highest level of automation: the procedure is executed solely by the robot, which acts as per the program fed by the surgeon, though intervention by the surgeon is still required. It does require a significant amount of preparation to set up each surgery. A specific set of commands, unique to both the patient and the procedure being performed, is first entered into the system by the surgeon and support team. This is accomplished with extensive mapping of the body using three-dimensional medical imaging. Just prior to the surgery, the system is registered to match the patient's body to the mapping in the surgical system. The surgeon tests the robot's motions and places the robot in the appropriate start position. Once the surgery is under way, the robotic system automatically executes the procedure; the surgeon observes intently and intervenes only if necessary (see Fig.3). The most famous prototype is the ROBODOC system developed by Integrated Surgical Systems, which is commonly used in orthopedic surgeries.

Fig.3 Supervisory Controlled Surgery

(ii) Telesurgical: With this system, a surgeon directly controls the actions of the robotic arms; in effect, the robot becomes an extension of the surgeon. The system consists of surgical arms within a surgical suite and a separate viewing and control console (see Fig.4). The surgeon views the surgical field on a three-dimensional viewing screen and manipulates the robotic arms from hand-held controls within the console. Another surgeon in the surgical suite changes out tools on the robotic arms as needed during the procedure. Small incisions are made in the body and the tools are then inserted; once this step is complete, the surgeon can operate. This technology allows surgeons to make quicker, more controlled and more accurate movements by using the robot arm with its wider range of motion, and hence provides a means to perform minimally invasive surgical procedures. The most common variety, the da Vinci Robotic Surgical System, enhances surgery by providing 3-D visualization deep within hard-to-reach places like the heart, as well as enhanced wrist dexterity and control of tiny instruments. These systems also allow more surgeons to perform these procedures, since many of the techniques performed by robot assistants are highly skilled and extremely difficult for humans to master. Now more procedures (like artery repair and valve repair) can be done without long recovery times or bodily injury. Here the surgeon directs the motion of the robot: using real-time image feedback, the surgeon is able to operate from a remote location using sensor data from the robot.
Telesurgical systems include:
(i) The da Vinci Surgical System (approved by the FDA in July 2000)
(ii) Zeus Robotic Surgical System
(iii) AESOP Robotic Surgical System (the first robot cleared by the FDA for assisting surgery in the OR)

Fig.4 Telesurgery in progress.
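The supervisory-controlled workflow described above -- a pre-registered plan executed step by step while the surgeon observes and may intervene -- can be sketched as a simple control loop. The function and callback names below are hypothetical, chosen only for illustration; they are not part of any actual surgical system's API.

```python
# Illustrative sketch (hypothetical names, not a real surgical API):
# a supervisory-controlled procedure executes a pre-programmed plan
# autonomously, while the supervising surgeon can halt it at any step.

def run_supervisory_procedure(plan, execute_step, surgeon_approves):
    """Execute a pre-registered surgical plan step by step.

    plan             -- ordered list of step descriptions
    execute_step     -- callback that performs one step on the robot
    surgeon_approves -- callback; returning False aborts the procedure
    """
    completed = []
    for step in plan:
        if not surgeon_approves(step):   # surgeon intervenes
            return completed, "aborted"
        execute_step(step)
        completed.append(step)
    return completed, "finished"
```

The surgeon's role is reduced to the `surgeon_approves` check: the robot does the work, but a human veto remains in the loop at every step.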


The da Vinci System:
- External arms with a remote centre of motion: the movement is mechanically constrained around a pivot point.
- 3 DOFs moved by the external arm (2 orientations and 1 translation, considering the roll as an internal DOF).
- 3 internal DOFs, actuated by a cable-driven system.

Fig.5 The Da-Vinci System

Fig.6 Schematic Diagram of Robo Surgery

(iii) Shared Control System: In this system, the human surgeon does the bulk of the work, actually operating the surgical tools by hand, while the robot assists in performing surgical procedures when needed. The system monitors the activities of the surgeon, providing stability and support to the surgeon's movements using a technique called active constraint. In this technique, the surgeons program the robot to recognize areas of the surgical field as forbidden, boundary, close or safe. Safe regions are the main focus of the surgery; close regions border easily damaged soft tissue; and the boundary is where soft tissue begins. As the surgeon nears these dangerous areas, the robot pushes back against the surgeon, or in some cases, when the forbidden zone is reached, the robotic system actually locks up to prevent any further injury. Therefore, through force feedback in the surgical tools, the system limits their use within the appropriate area. As with supervisory-controlled systems, there is some setup required by the surgeon prior to the procedure to define the regions of the surgical field. The robot aids the surgeon during the surgery, but humans do most of the work; this enables both entities to jointly perform the task.

(iv) Robotic Radiosurgery Systems: Robots are also used in delivering radiation for the treatment of tumors. These systems use robotics to control highly focused beams of ionizing radiation to precise locations within the body. Medical imaging first locates the tumor, and a map of the area to be treated is created. A series of commands is then entered by the physician into the system to instruct it how to deliver the treatment. The patient is then registered with the system for proper positioning of the body, and the treatment is begun. The robot then follows the commands to precisely deliver a series of doses to the tumor. This reduces the risk of damage to surrounding tissues.

V. PRACTICAL USE OF ROBOTIC SURGEONS TODAY

In today's competitive healthcare market, many organizations are interested in making themselves cutting-edge institutions with the most advanced technological equipment and the very newest treatment and testing modalities. Doing so allows them to capture more of the healthcare market. Acquiring a surgical robot is in essence the entry fee into marketing an institution's surgical specialties as the most advanced. It is not uncommon, for example, to see a photo of a surgical robot on the cover of a hospital's marketing brochure and yet see no word mentioning robotic surgery inside. As far as ideas and science go, surgical robotics is deep, fertile soil. It may come to pass that robotic systems are used very little, but the technology they are generating and the advances in ancillary products will continue. Already, the development of robotics is spurring interest in new tissue anastomosis techniques, improved laparoscopic instruments, and the digital integration of already existing technologies. As mentioned previously, applications of robotic surgery are expanding rapidly into many different surgical disciplines. The cost of procuring one of these systems remains high, however, making it unlikely that an institution will acquire more than one or two. This low number of machines and the low number of surgeons trained to use them make the incorporation of robotics in routine surgeries rare. Whether this changes with the passing of time remains to be seen.

VI. THE FUTURE OF ROBOTIC SURGERY

Robotic surgery is in its infancy. Many obstacles and disadvantages will be resolved in time, and no doubt many other questions will arise. Many questions have yet to be asked -- questions such as malpractice liability, credentialing, training requirements, and interstate licensing for tele-surgeons, to name just a few. Many of the current advantages of robot-assisted surgery ensure its continued development and expansion. For example, the sophistication of the controls and the multiple degrees of freedom afforded by the Zeus and da Vinci systems allow increased mobility and no tremor without compromising the visual field, making micro-anastomosis possible. Many have made the observation that robotic systems are information systems, and as such they have the ability to interface with and integrate many of the technologies being developed for and currently used in the operating room. One exciting possibility is expanding the use of preoperative (computed tomography or

magnetic resonance) and intraoperative video image fusion to better guide the surgeon in dissection and in identifying pathology. These data may also be used to rehearse complex procedures before they are undertaken. The nature of robotic systems also makes possible long-distance intraoperative consultation or guidance, and it may provide new opportunities for teaching and assessment of new surgeons through mentoring and simulation. Computer Motion, the makers of the Zeus robotic surgical system, is already marketing a device called SOCRATES that allows surgeons at remote sites to connect to an operating room and share video and audio, to use a telestrator to highlight anatomy, and to control the AESOP endoscopic camera. Technically, much remains to be done before robotic surgery's full potential can be realized. Although these systems have greatly improved dexterity, they have yet to develop their full potential in instrumentation or to incorporate the full range of sensory input. More standard mechanical tools and more energy-directed tools need to be developed. Some authors also believe that robotic surgery can be extended into the realm of advanced diagnostic testing with the development and use of ultrasonography, near-infrared, and confocal microscopy equipment. Much like the robots in popular culture, the future of robotics in surgery is limited only by imagination. Many future advancements are already being researched. Some laboratories, including the authors' laboratory, are currently working on systems to relay touch sensation from robotic instruments back to the surgeon. Other laboratories are working on improving current methods and developing new devices for suture-less anastomoses. When most people think about robotics, they think about automation. The possibility of automating some tasks is both exciting and controversial. Future systems might include the ability for a surgeon to program the surgery and merely supervise as the robot performs most of the tasks. The possibilities for improvement and advancement are limited only by imagination and cost.

VII. THE EMERGING TRENDS FOR ROBOTIC SURGERY IN INDIA

Surgical robotics is a new technology that holds significant promise. Robotic surgery is often heralded as the new revolution, and it is one of the most talked-about subjects in surgery today. Up to this point in time, however, the drive to develop and obtain robotic devices has been largely driven by the market. There is no doubt that they will become an important tool in the surgical armamentarium, but the extent of their use is still evolving. Robotic surgery is restricted to a few centers in our country. The huge cost of the machine and the set-up, along with the spiraling cost of operation, are the major dissuading factors. Escorts Heart Institute and Research Center was the first in India to acquire a surgical robot (the da Vinci Surgical System). In India, the first robotic surgery was performed in April 2005, and the first robotic thoracic surgery in 2008. Care Foundation is working in collaboration with the Indian Institute of Information Technology (IIIT) Hyderabad to develop an indigenous robotic surgical system.

VIII. CONCLUSION

Robotic surgery will replace the conventional laparoscopic procedure in times to come, thus revolutionizing the field of growing medical feats. The future of robotic surgery is as promising as the will to initiate better ways to perform medical procedures. The separation of the patient from human contact during surgery may herald an era of 'no infection, no antibiotic'. Robotic surgery is an emerging technology in the medical field. It gives us even greater vision, dexterity and precision than is possible with standard minimally invasive surgery, so minimally invasive techniques can now be used for a wider range of procedures. But its main drawback is high cost. Besides cost, the robotic system still has many obstacles to overcome before it can be fully integrated into the existing healthcare system. More improvements in size, tactile sensation and cost are expected in the future.

Fig.7 Growth Of Robotic Surgery Over The Years




Implementation of PIC Microcontroller based Vehicle Monitoring System using Controller Area Network (CAN) Protocol

Dhiraj Sunehra, V. Rajanesh and V. Sai Krishna


JNTUH College of Engineering
Nachupally (Kondagattu), Jagtial, Karimnagar-505501
Email: dhirajsunehra@yahoo.co.in

Abstract— With the advent of the 21st century, the inclusion of electronic systems in almost all aspects of life, including vehicular technologies, is evident. Modern-day cars represent a symbiosis of several electronic subsystems that collaboratively give a safe and sound driving experience. Various electronic driver-assist systems have been employed for ensuring the ease and safety of the users. These vehicles incorporate many electronic circuits for efficient vehicle control and monitoring, such as circuits for monitoring speed, fuel level, battery voltage, engine temperature, fire or spark detection in the engine, and detection of different combustible gases. The complexity of the functions implemented in these systems necessitates an exchange of data between them. With conventional systems, data is exchanged by means of dedicated signal lines, but this is becoming increasingly difficult and expensive as control functions become ever more complex. Controller Area Network (CAN) is a high-performance and reliable advanced serial communication protocol which effectively supports distributed real-time control. This paper presents implementation details of a prototype vehicle monitoring system using the CAN protocol. The main feature of the system is the monitoring of various vehicle parameters such as temperature, the CO level in the exhaust, battery voltage, and light due to spark or fire. It uses a PIC based data acquisition system with an ADC to bring all control data from analog to digital format, which is visualized through an LCD display. The communication module used in this project is embedded networking by CAN, which provides efficient data transfer. It also takes feedback of various vehicle conditions and is controlled by the main controller. The schematic is prepared using OrCAD; the hardware is implemented and software porting is done.

Keywords— Controller Area Network protocol, battery voltage, CO level, light due to spark or fire, OrCAD, PIC microcontroller.

I. INTRODUCTION

CAN is a serial bus communications protocol developed by Robert Bosch in the early 1980s. The CAN bus was traditionally used in automotive applications (Robert Bosch GmbH, 1991). CAN became an international standard (ISO 11898) in 1994, specially developed for fast serial data exchange between electronic controllers in motor vehicles. By networking the electronics in vehicles with CAN, they could be controlled from a central point, the Engine Control Unit (ECU), thus increasing
nodes in real-time applications, and is known for its simplicity, reliability, and high performance. The priority-based message scheduling used in CAN has a number of advantages, some of the most important being efficient bandwidth utilization, flexibility, simple implementation and small overhead (Steve Corrigan, 2008).
CAN was designed and improved upon for systems that need to transmit and receive relatively small amounts of information (as compared to Ethernet or USB, which are designed to move much larger blocks of data) reliably to any or all other nodes on the network. Applications of CAN in intelligent vehicular systems are evident. The basis of an intelligent vehicle is the ability of the vehicle to navigate and maneuver in a rapidly changing environment without compromising the safety of the commuters. For this, the vehicle not only has to communicate with an intelligent guiding architecture but also has to provide reliable and fast communication within its internal modules, which work in sync to provide corrective navigation (Renjun Li et al, 2008). Now the best protocol available for this intra-vehicular communication is CAN. CAN thus can be considered as the internal architecture which continuously interacts with the external guiding architecture in order to perform the required functions for efficient navigation.
The CAN protocol is based on a bus topology, and only two wires are needed for communication over a CAN bus (Karl Henrik Johansson et al, 2005). The bus has a multi-master structure where each device on the bus can send or receive data. Only one device can send data at any time while all the others listen. If two or more devices attempt to send data at the same time, the one with the highest priority is allowed to send its data while the others return to receive mode. The four different message types, or frames, that can be transmitted on a CAN bus are the data frame, the remote frame, the error frame, and the overload frame (Robert Bosch GmbH, 1991).
The robustness of CAN may be attributed in part to its abundant error-checking procedures. The CAN protocol incorporates five methods of error checking: three at the message level and two at the bit level. Error checking at the message level is enforced by the CRC and the ACK slots. The 16-bit CRC field contains the checksum of the preceding application data for error detection, with a 15-bit checksum and a 1-bit delimiter. The ACK field is two bits long and consists of the acknowledge bit and an acknowledge delimiter bit. At the bit level, each bit transmitted
functionality, adding modularity, and making diagnostic is monitored by the transmitter of the message. If a data bit (not
processes more efficient. CAN offer an efficient communication arbitration bit) is written onto the bus and its opposite is read, an
protocol between sensors, actuators, controllers, and other

Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.
error is generated. The final method of error detection is the bit-stuffing rule: after five consecutive bits of the same logic level, if the next bit is not a complement, an error is generated.

This paper presents implementation details of a vehicle monitoring system using the CAN protocol that monitors various vehicle parameters such as temperature, the CO percentage in the exhaust, battery voltage, and light due to spark, sensed using an LDR.

II. HARDWARE IMPLEMENTATION

The block diagram of the vehicle monitoring system is shown in Fig.1. Brief details of the various modules used in the system, including the PIC18F458 microcontroller, are presented below.

Fig. 1 Block diagram of Vehicle Monitoring System

2.1 Salient Features of PIC18F458 microcontroller
The Peripheral Interface Controller PIC18F458 is a high-performance, enhanced flash microcontroller with a CAN module. Introduced by Microchip Technology Inc., it possesses an array of features that make it attractive for a wide range of applications. It is packaged in a 40-pin DIP package or a 44-pin surface-mount package. The PIC18 has a RISC architecture with standard features such as linear program memory addressing up to 2 Mbytes and data memory addressing up to 4 Kbytes. Developing a microcontroller-based system with the PIC18F requires a ROM burner that supports flash memory; however, a ROM eraser is not needed, because flash is an EEPROM: the entire contents of the ROM are erased by the ROM programmer itself before it is programmed again (Mazidi et al, 2008). When operated at its maximum clock rate, a PIC executes most of its instructions in 0.2 µs, i.e. 5 million instructions per second. A watchdog timer resets the PIC if the chip ever malfunctions and deviates from its normal operation. Three versatile timers can characterize inputs, control outputs and provide internal timings for program execution. Up to 12 independent interrupt sources can control when the CPU will deal with each source (Peatman, 2009). All the members of the PIC18 family come with an ADC (a 10-bit, up to 8-channel analog-to-digital converter for the PIC18FXX8), a USART, and I/O ports. The CAN bus module supports message bit rates up to 1 Mbps and conforms to the CAN 2.0B active specification (Microchip Technology Inc., 2006).

2.2 Operation of MCP 2551 CAN Transceiver and its Interfacing Details
The MCP2551 is a high-speed CAN, fault-tolerant device that serves as the interface between a CAN protocol controller and the physical bus. The MCP2551 provides differential transmit and receive capability for the CAN protocol controller and is fully compatible with the ISO-11898 standard, including 24 V requirements. It operates at speeds of up to 1 Mb/s. Typically, each node in a CAN system must have a device to convert the digital signals generated by a CAN controller to signals suitable for transmission over the bus cabling (differential output). It also provides a buffer between the CAN controller and the high-voltage spikes that can be generated on the CAN bus by outside sources. The pin diagram of the MCP2551 is shown in Fig.2 (Microchip Technology Inc., 2007).

The CAN bus has two states: dominant and recessive. A dominant state occurs when the differential voltage between CANH and CANL is greater than a defined voltage (e.g., 1.2 V). A recessive state occurs when the differential voltage is less than a defined voltage (typically 0 V). The dominant and recessive states correspond to the low and high states of the TXD input pin, respectively. The MCP2551 CAN outputs will drive a minimum load of 45 Ω, allowing a maximum of 112 nodes to be connected. The RXD output pin reflects the differential bus voltage between CANH and CANL. The low and high states of the RXD output pin correspond to the dominant and recessive states of the CAN bus, respectively. The RS pin allows three modes of operation to be selected: 1. High-Speed, 2. Slope-Control, 3. Standby.

Fig. 2 Pin diagram of MCP 2551 CAN Transceiver

The PIC18F458 is connected to the CAN transceiver at the engine side as shown in Fig.3. The temperature sensor and battery voltage circuit are connected to ports RA0 and RA1. The gas sensor and LDR are connected to ports RC0 and RC1. To drive the LCD, the data and control lines are connected to Port D. CANH and CANL of the engine node are connected to CANH and CANL of the dashboard side.
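The bit-stuffing rule quoted earlier (five consecutive bits of one level must be followed by a complement; a sixth equal bit is an error) can be checked with a short routine. This receive-side sketch is illustrative only and is not part of the paper's firmware:

```c
/* Sketch of the receive-side stuffing check: in the stuffed region of
 * a CAN frame, at most five consecutive bits may share the same
 * level; a sixth equal bit is flagged as a stuff error. */
int stuff_error(const int bits[], int n)
{
    int run = 1;                    /* length of the current equal-bit run */
    for (int i = 1; i < n; i++) {
        run = (bits[i] == bits[i - 1]) ? run + 1 : 1;
        if (run == 6)
            return 1;               /* six equal bits in a row: stuff error */
    }
    return 0;                       /* sequence is legally stuffed */
}
```

For example, the sequence 0,0,0,0,0,1 is legal (the 1 is the stuff bit), while 0,0,0,0,0,0 would be reported as a stuff error.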

The second PIC18F458 is connected to the CAN transceiver at the dashboard side as shown in Fig.4. An external key is connected to RB7. By pressing this key, the user can request data from the engine node; the received data is shown on the LCD.

Fig. 3 Schematic of PIC 18F458 connected to MCP2551 at engine side

Fig. 4 Schematic of PIC 18F458 connected to MCP2551 at dashboard side

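The key-press exchange of Figs. 3 and 4 (dashboard sends a request identifier, engine answers with a data frame) can be sketched in C. The identifier values and payload layout below are illustrative assumptions, not values taken from the paper:

```c
/* Hypothetical sketch of the dashboard/engine exchange: the dashboard
 * node sends a request carrying an agreed identifier, and the engine
 * node answers with a data frame holding its latest readings.  The
 * identifier 0x120 and the 4-byte payload layout are assumptions made
 * for illustration only. */
typedef struct {
    unsigned short id;       /* 11-bit CAN identifier */
    unsigned char  len;      /* 0..8 data bytes */
    unsigned char  data[8];
} can_frame;

#define REQ_SENSOR_DATA 0x120    /* assumed request identifier */

/* Engine side: answer a matching request with the current readings. */
int engine_handle(const can_frame *req, can_frame *reply,
                  unsigned char temp_c, unsigned char batt_dv,
                  unsigned char gas, unsigned char light)
{
    if (req->id != REQ_SENSOR_DATA)
        return 0;                        /* not our identifier: stay silent */
    reply->id  = REQ_SENSOR_DATA + 1;    /* assumed reply identifier */
    reply->len = 4;
    reply->data[0] = temp_c;             /* engine temperature, degrees C   */
    reply->data[1] = batt_dv;            /* battery voltage, 0.1 V units    */
    reply->data[2] = gas;                /* gas comparator output (0 or 1)  */
    reply->data[3] = light;              /* LDR comparator output (0 or 1)  */
    return 1;
}
```

On the real hardware the frame would be loaded into the PIC18F458's CAN transmit buffer rather than returned through a struct; the sketch only shows the identifier filtering and payload packing that the dashboard relies on.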
2.3 LM 35 Temperature Sensor

The LM35 is a precision integrated-circuit temperature sensor with an output voltage linearly proportional to the centigrade temperature (scale factor: 10 mV/°C). It measures temperature more accurately than a thermistor. It generates a higher output voltage than thermocouples and may not require the output voltage to be amplified. The LM35 does not require any external calibration or trimming to provide typical accuracies of ±0.4 °C at room temperature and ±0.8 °C over a range of 0 °C to +100 °C. It can operate from 4 to 30 volts. As it draws only 60 µA from its supply, it has very low self-heating, less than 0.1 °C in still air. The LM35 is rated to operate over a −55 °C to +150 °C temperature range (Website 1).

2.4 LDR circuit for detecting spark or fire

A Light Dependent Resistor (LDR), or photoresistor, is a passive electronic component: a resistor whose resistance varies inversely with light intensity. An LDR is made of a high-resistance semiconductor that absorbs photons; depending on the quantity and frequency of the absorbed photons, the semiconductor material gives bound electrons enough energy to jump into the conduction band. The resulting free electrons conduct electricity, lowering the resistance of the LDR. The number of freed electrons depends on the photon frequency (Website 2).

The LDR is connected to an op-amp of IC LM358 as shown in Fig.5. The LM358 consists of two independent, high-gain, frequency-compensated operational amplifiers designed to operate from a single supply over a wide range of voltages. Operation from split supplies is also possible if the difference between the two supplies is 3 V to 32 V (Texas Instruments, 2013). The pin diagram of the LM358 is shown in Fig.6. The LDR is connected to one input of the LM358 and a reference voltage is fed to the other input pin. When light falls on the LDR, its resistance decreases; if the inverting-terminal voltage drops below the reference voltage, the LED turns ON. This circuit can be used to detect fire or spark in the engine.

Fig. 6 Pin diagram of LM 358

2.5 Gas Sensor Circuit
The circuit uses a gas sensor (MQ-6) which can detect the presence of LPG, propane, methane and other combustible materials (Website 4). The sensitive material of the MQ-6 gas sensor is SnO2, which has low conductivity in clean air. When the target combustible gas is present, the sensor's conductivity rises with the gas concentration. The gas sensor output voltage is connected to the LM358: if V+ > V−, the output voltage is high and the LED connected to the output is ON; if V+ < V−, the output voltage is low and the LED is OFF. The schematic of the gas sensor circuit used to detect the CO level is shown in Fig.7. The operating voltage range of the PIC18F458 is 2.0 V to 5.5 V, so a 5.1 V Zener diode is connected to obtain a regulated output voltage.
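On the firmware side, the LM35's 10 mV/°C scale factor (Section 2.3) reduces the ADC-count-to-temperature conversion to a single line. The 5.00 V ADC reference assumed below is illustrative; the paper does not state the reference actually used:

```c
/* Sketch: converting the PIC's 10-bit ADC count for the LM35 channel
 * to tenths of a degree Celsius, assuming a 5.00 V ADC reference
 * (an assumption made here for illustration).  Because the LM35
 * outputs 10 mV per degree C, one millivolt equals 0.1 degree C, so
 * the millivolt value IS the temperature in deci-degrees. */
int lm35_deci_celsius(unsigned int counts)        /* counts: 0..1023 */
{
    unsigned long mv = (unsigned long)counts * 5000UL / 1023UL;  /* millivolts */
    return (int)mv;                               /* deci-degrees Celsius */
}
```

For example, a count of 205 corresponds to about 1002 mV, i.e. roughly 100 °C, which is the top of the LM35's ±0.8 °C accuracy band quoted above.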
Fig. 5 Schematic of LDR circuit for detecting spark or fire in the engine (Website 3)

Fig. 7 Schematic of Gas sensor circuit used to detect CO level

2.6 LCD display

Most cars use mechanical dashboards; an electronic implementation of the same has enough flexibility to cater to varying customers' choices without too much additional cost. Liquid Crystal Displays (LCDs) are a good choice for electronic dashboards as they operate at low voltages, consume very little power and are also economical (Knoll and Kosmowski, 2002). In fact, high-end cars have LCD panels in their dashboards. A simple 8-bit 16x2 LCD is used to interface with the PIC18F458. There are three control signals: Register Select (RS), Enable (E) and Read/Write (R/W). There are two important registers in the LCD, viz. the command register and the data register. If an 8-bit data bus is used, the LCD requires a total of 11 data lines (3 control lines plus 8 lines for the data bus). When RS is low (0), the data is treated as a command or a special instruction (such as clear screen, position cursor, etc.). When RS is high (1), the data being sent is text data which should be displayed on the screen.
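The RS/R-W/Enable handshake just described can be sketched off-target by logging the control-line states instead of writing the ports; the lcd_pins structure below stands in for the actual Port B/Port C register writes and is an assumption made for illustration:

```c
/* Sketch of the LCD write handshake: RS selects command (0) or data
 * (1), R/W is held low for a write, and the byte on the bus is
 * latched by the high-to-low edge on E.  Port writes are replaced by
 * a small log (capacity 8 writes) so the sequence can be inspected
 * off-target; on the real PIC these would be register accesses. */
typedef struct { int rs, rw, e; unsigned char bus; } lcd_pins;

static lcd_pins log_[16];
static int nlog;

static void write_byte(int rs, unsigned char b)
{
    log_[nlog++] = (lcd_pins){ rs, 0, 1, b };   /* E high, byte on bus */
    log_[nlog++] = (lcd_pins){ rs, 0, 0, b };   /* E low: LCD latches  */
}

void lcd_command(unsigned char c) { write_byte(0, c); }   /* RS = 0 */
void lcd_data(unsigned char d)    { write_byte(1, d); }   /* RS = 1 */
```

A call such as lcd_command(0x01) (clear screen) followed by lcd_data('A') produces two E-pulses: one with RS low for the command register and one with RS high for the data register, exactly the distinction the LCD itself cannot infer from the bus contents.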

The R/W input allows the user to write information to the LCD or read information from it. When R/W is low (0), the information on the data bus is written to the LCD; when R/W is high (1), the program is effectively querying (or reading) the LCD. The enable pin is used by the LCD to latch information at its data pins: when data is supplied to the data pins, a high-to-low pulse must be applied to this pin for the LCD to latch the data present there. The LCD must be prepared properly before a character is displayed; for this, a number of commands have to be provided to the LCD before inputting the required data. The LCD does not know whether the content supplied to its data bus is data or commands; it is the user who has to specify this. Here two ports of the PIC18F458 are used: Port B provides the control signals and Port C provides the data signals.

2.7 Universal Synchronous Asynchronous Receiver Transmitter (USART)
The USART module is one of the three serial I/O modules incorporated into PIC18FXX8 devices. The USART can be configured in the following modes: Asynchronous (full-duplex), Synchronous Master (half-duplex), and Synchronous Slave (half-duplex). The MAX232 IC is used to convert the TTL/CMOS logic levels to RS232 logic levels during serial communication between microcontrollers and a PC. The controller operates at TTL logic levels (0-5 V) whereas serial communication on a PC works to the RS232 standard (-25 V to +25 V). This makes it difficult to establish a direct link between them; the intermediate link is provided by the MAX232.

III. DESIGN METHODOLOGY

3.1 At Engine side
CAN 2.0B is a network protocol that was specially developed for connecting the sensors, actuators and ECUs of a vehicle. CAN 2.0B supports data rates from 5 Kbps to 1 Mbps, which allows the CAN network to be used to share status information and real-time control. It can transfer up to 8 data bytes within a single message. In this investigation two nodes are used for monitoring parameters. The various sensors used are the temperature, battery voltage, LDR and CO sensors. Temperature and battery voltage are connected to the ADC; the LDR and CO sensors are connected to digital ports. Values are transferred from the ADC and digital ports to the microcontroller in the dashboard via the CAN protocol at an interval of 10 seconds and are displayed on the LCD. The messages can also be seen on a computer via the UART. If any data is required by the dashboard, the CAN controller at the engine side checks the identifier transmitted by it. After 10 seconds, the engine-side controller sends the required data to the dashboard.

3.2 At Dashboard side
The CAN controller checks whether any data is available in the receive buffers. If there is, that value is displayed on the LCD. By pressing the key placed on the dashboard, data is requested by sending an identifier. After 10 seconds, the message is obtained from the engine section via the CAN protocol.

IV. CONCLUSIONS

In this paper, the implementation details of a vehicle monitoring system that monitors various parameters such as temperature, battery voltage, light due to spark or fire and the CO level in the exhaust are presented. For monitoring these parameters, an LM35 sensor, a 9 V battery, an LDR and an MQ-6 sensor are used. The hardware schematics are drawn using OrCAD. For implementing the system, the programming of the LED, ADC and LCD interfacing with the microcontroller is done using Embedded C, and the microcontroller interfacing using the CAN protocol is carried out. The various measurements, including the temperature of the engine, the battery voltage, the presence of light and the CO level in the exhaust, are transferred from the engine to the dashboard via the CAN protocol when a key at the dashboard is pressed, and these readings are displayed on an LCD on the dashboard.

V. REFERENCES

[1] Karl Henrik Johansson, Martin Törngren, and Lars Nielsen, 2005. Vehicle Applications of Controller Area Network, Handbook of Networked and Embedded Control Systems, Control Engineering, Springer, pp. 741-765.

[2] Knoll, P.M. and Kosmowski, B.B., 2002. Liquid crystal display unit for reconfigurable instrument for automotive applications, Opto-Electronics Review, Vol. 10, No. 1, pp. 75.

[3] Mazidi, M.A., McKinlay, R.D., and Danny Causey, 2008. PIC Microcontroller and Embedded Systems, Pearson Education Inc.

[4] Microchip Technology Inc., 2006. PIC18FXX8 Data Sheet, DS41159E.

[5] Microchip Technology Inc., 2007. MCP2551 High-Speed CAN Transceiver Datasheet, DS21667E.

[6] Peatman, John B., 2009. Designing with PIC Microcontrollers, Pearson Education.

[7] Renjun Li, Chu Liu and Feng Luo, 2008. A Design for Automotive CAN Bus Monitoring System, IEEE Vehicle Power and Propulsion Conference (VPPC), September 3-5, Harbin, China.

[8] Robert Bosch GmbH, 1991. CAN Specification, Ver. 2.0, Germany.

[9] Steve Corrigan, 2008. Introduction to the Controller Area Network, Application Report SLOA101A, Texas Instruments, Dallas.

[10] Texas Instruments, 2013. Dual Operational Amplifiers, SLOS068S (www.ti.com).

[11] Website 1: http://www.ti.com/lit/ds/symlink/lm35.pdf

[12] Website 2: www.electroschematics.com.

[13] Website 3: http://electronicswork.blogspot.in/2011/02/ldr555-dark-sensor.html.

[14] Website 4: www.hwsensor.com.

Telerobotics and Its Applications

H.K. Saini, R.C. Bhatt


Faculty of Degree Engineering,
Military College of Electronics & Mechanical Engineering, Secunderabad, 500 015, Andhra Pradesh
hemantsaini125@gmail.com

Abstract - Telerobotics is a combination of two major subfields: teleoperation and telepresence. Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. This paper gives information regarding telerobotics, its major components and its applications in major fields. It will also help in understanding the main requirements for telerobotics.
Keywords - Telepresence, Teleoperation, Telemanipulator, Visual and control system

I. INTRODUCTION

Telerobotics can be explained as the area of robotics in which we control robots from a distance, chiefly using wireless connections, "tethered" connections, or the Internet. Robotics was born as a science to develop machines, destined for factories, capable of accomplishing boring, repetitive and simple operations as human substitutes, to produce goods in a cheaper way. In the last two decades, a lot of research and development has been done on robots that work in hazardous and unreachable environments instead of humans. In these cases, it is very useful to teleoperate the robot from a remote site, even from the other side of the world, connecting via the Internet. From this motivation a new branch of robotics is gaining importance: a branch required to perform operations in unknown or hazardous environments, where the robot replaces a human being to reduce operation cost and time and to avoid loss of life. In such a system we control a robot to support operation at a distance. The robot may be in another room or country, or at a different scale to the operator. The goal of teleoperation is to allow an operator to interact with and operate a telerobot in a remote environment via a communication channel. The human operator is responsible for high-level control such as planning, perception and decision making at the operator's site, while the robot performs the low-level instructions such as navigation and localization at the remote site.

II. WHAT IS TELEROBOTICS?

There are two essential components of a telerobotic system: the visual and control systems. The visual system of the robot presents visual feedback to the operator about the environment through a human-system interface (HSI), and the control system facilitates the operator in navigating the robot over an internet or Wi-Fi connection at the remote site. The most common critical problem of a telerobotic stereo-vision system is communication delay. Because of this delay, a teleoperator sometimes has to adopt a move-and-wait strategy, which leads to teleoperation instability. Various techniques such as predictive feedback, supervisory control and Augmented Reality (AR) have been proposed to solve these problems.

Telerobotics can be further divided into two major subfields: teleoperation and telepresence. Teleoperation is the operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. Teleoperation is a standard term in use both in research and technical communities, and is by far the most standard term for referring to operation at a distance. This is opposed to "telepresence", which is a less standard term and might refer to a whole range of existence or interaction that includes a remote connotation. A telemanipulator (or teleoperator) is a device that is controlled remotely by a human operator. If such a device has the ability to perform autonomous work, it is called a telerobot. If the device is completely autonomous, it is called a robot.

One of the key factors responsible for enhancing the performance of a telerobotic system during teleoperation is telepresence. Telepresence implies a feeling of presence at the remote site, achieved by displaying information about the remote

environment to the operator in a natural manner. A good degree of telepresence assures the feasibility of the required manipulation task. Visual and control applications are the two major components of telerobotics and telepresence. A remote camera provides a visual representation of the view from the robot. Placing the robotic camera in a perspective that allows intuitive control is a recent technique that had not been fruitful earlier, as speed, resolution and bandwidth have only recently become adequate to the task of controlling the robot camera in a meaningful way. Using a head-mounted display, the control of the camera can be facilitated by tracking the head. This only works if the user feels comfortable with the latency of the system, the lag in the response to movements, and the visual representation. Issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of movement and response, and optical distortion due to the camera lens and head-mounted display lenses can cause 'simulator sickness', which is exacerbated by the lack of vestibular stimulation accompanying the visual representation of motion. Mismatches with the user's motions, such as registration errors, lag in movement response due to over-filtering, inadequate resolution for small movements, and slow speed, can contribute to these problems. The same technology can control the robot, but then the eye-hand coordination issues become even more pervasive through the system, and user tension or frustration can make the system difficult to use. Ironically, the tendency in building robots has been to minimize the degrees of freedom, because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions.

A telerobotic interface can be as simple as a common MMK (monitor-mouse-keyboard) interface. While this is not immersive, it is inexpensive. Telerobots driven by internet connections are often of this type. A valuable modification to MMK is a joystick, which provides a more intuitive navigation scheme for planar robot movement. Dedicated telepresence setups utilize a head-mounted display with either a single or dual eye display, and an ergonomically matched interface with a joystick and related button, slider and trigger controls. Future interfaces will merge fully immersive virtual reality interfaces and port real-time video instead of computer-generated images. Another example would be an omnidirectional treadmill with an immersive display system, so that the robot is driven by the person walking or running. Additional modifications may include merged data displays such as infrared thermal imaging, real-time threat assessment, or device schematics.

III. APPLICATIONS OF TELEROBOTICS

(a) Industrial Robots
As mentioned earlier, telerobots can help human beings operate in environments where it is not possible for a human to work, including space, volcano and underwater applications. Such telerobotic systems give scientists or users the advantage of working from safe places while monitoring the robot through a user interface, through which they can see the environment and make further decisions. ROBOVOLC is made for volcano exploration, capable of approaching an active volcano and performing various kinds of operations while keeping human operators at a safe distance. It was mainly built to minimize risks to volcanologists and to enhance the chances of understanding and studying conditions during a volcanic eruption. Two separate PCs are used to assist operation: the first has a user interface through which volcanologists can drive and control the robot remotely in the volcano from the base station, the control of the robot being accomplished by two joysticks and a touch screen/mouse. The second PC controls a manipulator that collects rock samples using a three-finger gripper, which has a force sensor to measure and control grasping strength, and collects gases from the volcano during eruption for scientific measurements. The communication between the operator and the remote environment is accomplished by a high-power wireless LAN.

Figure 1 showing the working of ROBOVOLC

(b) Medical Robots
Medical robots assist surgeons in various surgical specialities. The research and development of medical robotic systems has grown predominantly within telerobotics during the last two decades. The ZEUS and Da Vinci surgical systems are the best examples of medical robots. As precision plays a key role in surgeries, the design requirements for the teleoperation controllers of such a robot are significantly different from other telerobotic applications. Medical robots are equipped with computer-integrated technology and are comprised of programming languages, advanced sensors and controllers for teleoperation. A surgeon can see and work inside the body through a tiny hole made in the patient, by giving instructions to the robot from the surgeon's console, in which 3D images from the stereo cameras can be seen. Medical robots need further development because of limitations such as poor judgement, limited dexterity and hand-eye coordination, expense, technology in flux, difficulty to construct and debug, and restriction to relatively simple procedures.
Figure 2 Da Vinci surgical system: the surgeon sits at a viewfinder and remotely operates robot arms during surgery

The Da Vinci robotic surgical system from Intuitive Surgical Inc performs surgical tasks and provides visualization during endoscopic surgery. It consists of an ergonomically designed surgeon's console, a patient-side cart with four interactive robotic arms (one to control the camera and three to manipulate instruments) operated by the surgeon, and a high-performance 3D vision system to provide a true stereoscopic picture to the surgeon's console. It is used to perform minimally invasive heart bypass, surgery for prostate cancer, hysterectomy and mitral valve repair, and is used in more than 800 hospitals in the Americas and Europe. Remote surgery is essentially advanced telecommuting for surgeons, where the physical distance between the surgeon and the patient doesn't matter. The use of Wi-Fi technology in medical robots allows a surgeon to communicate with and examine a patient visually from anywhere in the world.

(c) Undersea Robots
It is dangerous for human beings to work at deep levels of the sea because of the high pressures and the possibility of harmful chemicals. Undersea robots have gained much importance, forming a growing body of telerobotic devices during the last two decades. Remotely operated vehicles (ROVs) allow scientists, oceanologists and companies to monitor water quality in lakes and reservoirs using robotic fish, to measure the Greenland ice sheet using drones, to learn about the world and life of creatures under water, to explore undersea volcanoes, and to activate the Deepwater Horizon blowout preventer (BOP) in the Gulf of Mexico. In the next century, undersea robotics will play an essential role in the exploration and production of the mineral resources that lie in the seas. Operators can communicate with AUVs (Automated Undersea Vehicles) in several different ways, including low-frequency acoustics for long distances and high-frequency coded acoustics for medium distances. Romeo was designed as an operational test-bed. Its aim is to perform research on intelligent vehicles in the real subsea environment and to develop advanced methodologies for marine science. Romeo is intended as a prototype demonstrator for robotics, biological, and geological research.

(d) Telerobotics for Space
With the exception of the Apollo program, most space exploration has been conducted with telerobotic space probes. Most space-based astronomy, for example, has been conducted with telerobotic telescopes. The Russian Lunokhod-1 mission, for example, put a remotely-driven rover on the moon, which was driven in real time (with a 2.5-second lightspeed time delay) by human operators on the ground. Robotic planetary exploration programs use spacecraft that are programmed by humans at ground stations, essentially achieving a long-time-delay form of telerobotic operation. Recent noteworthy examples include the Mars Exploration Rovers (MER) and the Curiosity rover. In the case of the MER mission, the spacecraft and the rovers operated on stored programs, with the rover drivers on the ground programming each day's operation. The International Space Station (ISS) uses a two-armed telemanipulator called Dextre. More recently, the humanoid robot Robonaut has been added to the space station for telerobotic experiments.

Figure 3. Soviet telerobotic vehicle Lunokhod-1

NASA has proposed the use of highly capable telerobotic systems for future planetary exploration, with humans exploring from orbit. In a concept for Mars exploration proposed by Landis, a precursor mission to Mars could be flown in which the human vehicle brings a crew to Mars but remains in orbit rather than landing on the surface, while a highly capable remote robot is operated in real time on the surface. Such a system would go beyond simple long-time-delay robotics and move to a regime of virtual telepresence on the planet. One study of this concept, the Human Exploration using Real-time Robotic Operations (HERRO) concept, suggested that such a mission could be used to explore a wide variety of planetary destinations. Space exploration may have a new direction. In the 1960s, humans did the exploring, but since the last moon landing in 1972, NASA's only explorers beyond low Earth orbit have been semi-autonomous robots. Now the agency is pondering a third approach: sending astronauts who would remain in orbit around alien worlds and explore via robotic rovers. On Earth, human-controlled robots are used for tasks ranging from delicate surgery to exploration of the deep sea. But in space, robotic "telepresence" could be even more promising. Telerobotics would be orders of magnitude more productive for exploration than semi-autonomous robots like the Mars rovers Spirit and

Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.
NCRAM - 2014 (The First National Conference) ISBN: 978-93-83635-01-6

Opportunity. "Nothing beats having human cognition and dexterity in the field." But there is a hitch in trying to control from Earth a robot that is exploring another planet: the huge time lag as the signals travel back and forth. Real-time reactions are needed for this to work. For example, surgeons can perform operations well as long as the robot responds to their actions within about half a second; greater latencies cause problems. Latency on Earth is no more than a few hundred milliseconds, but latency between the Earth and the moon is about 3 seconds, and that delay is enough to slow telerobotics dramatically. "You could use telepresence to tie a knot in 30 seconds on Earth, but it would take 10 minutes to tie it with 3-second latency." Latency for signals to Mars is much longer - from 8 to 40 minutes depending on the planets' positions - so real-time control from Earth is impossible. The most plausible way to have robotic telepresence on Mars would be to station astronauts in orbit around the planet. The first step towards this might be testing out robotic telepresence on Earth with simulated latencies. Rovers on the moon controlled from lunar orbit might come next. Rovers could also be controlled in real time to explore the far side of the moon, not visited by the Apollo missions. To do this, NASA would have to station astronauts at the lunar Lagrangian point L2, a gravitationally neutral region of space which lies about 60,000 kilometres beyond the moon, in line with Earth. Mars is a bigger challenge, of course, as is Venus, which is usually considered beyond the scope of human exploration because of its boiling, corrosive atmosphere. A Venus mission could be shorter, as Venus is closer to us than Mars; however, any robots would require extensive modification to survive in Venus's hostile environment and, even then, they would not last the years that a Mars rover might. Nevertheless, human telepresence would make exploration much more productive than if autonomous robots had to await commands from Earth. Telepresence opens up massive opportunities for exploration, says Lester. "Once you go to Venus, you can go to a lot more places," he says. "You could go scuba diving in the methane lakes on Titan."

(e) Telepresence/Videoconferencing
The prevalence of high-quality video conferencing using mobile devices, tablets and portable computers has enabled dramatic growth in telepresence robots, which give a better sense of remote physical presence for communication and collaboration in the office, home, school, etc. when one cannot be there in person. The robot avatar can move or look around at the command of the remote person. There have been two primary approaches, both of which utilize videoconferencing on a display: 1) desktop telepresence robots, which typically mount a phone or tablet on a motorized desktop stand to enable the remote person to look around a remote environment by panning and tilting the display; or 2) drivable telepresence robots, which typically contain a display (integrated or a separate phone or tablet) mounted on a roaming base. Some examples of desktop telepresence robots include Kubi by Revolve Robotics, Galileo by Motrr, and Swivl. Some examples of roaming telepresence robots include Beam by Suitable Technologies, Double by Double Robotics, RP-Vita by iRobot, Anybots, Vgo, TeleMe by Mantarobot, and Romo by Romotive. For over 20 years, telepresence robots, also sometimes referred to as remote-presence devices, have been a vision of the tech industry. Until recently, engineers did not have the processors, the miniature microphones, cameras and sensors, or the cheap, fast broadband necessary to support them. But in the last five years, a number of companies have introduced functional devices. As the value of skilled labor rises, these companies are beginning to see a way to eliminate the barrier of geography between offices. Traditional videoconferencing systems and telepresence rooms generally offer pan/tilt/zoom cameras with far-end control. The ability of the remote user to turn the device's head and look around naturally during a meeting is often seen as the strongest feature of a telepresence robot. For this reason, developers have created the new category of desktop telepresence robots, which concentrate on this strongest feature to deliver a much lower-cost robot. Desktop telepresence robots, also called head-and-neck robots, allow users to look around during a meeting and are small enough to be carried from location to location, eliminating the need for remote navigation.

(f) Other Telerobotic Applications
Remote manipulators are used to handle radioactive materials. Telerobotics has also been used in installation art pieces; the Telegarden is an example of a project in which a robot was operated by users through the Web.

IV. CONCLUSION

Telerobotics is one of the most intriguing fields of robotics. It allows us to break down the barriers of scale and distance, enabling human beings to manipulate objects situated in remote locations and/or to interact with micro/nano-scale environments that are otherwise inaccessible. Research in telerobotics has been very active in recent decades, and substantial theoretical and technological results have been obtained. For space applications, there are now systems that allow users to remotely perform maintenance operations on the International Space Station. In minimally invasive robotic surgery, teleoperation systems allow surgeons to operate in previously unreachable areas. In industrial applications, engineers can remotely drive mobile robots to maintain machinery located in places too dangerous for humans, or to inspect parts of plants. Despite these significant results, many challenges still need to be addressed to reach the ultimate goal of telerobotics: telepresence. The best teleoperation system should be designed and controlled in such a way that the user feels as if directly interacting with the environment he or she is manipulating, despite the differences of scale, the distance, and the presence of the robots. To achieve telepresence, several issues have to be addressed: the very nature of human interaction has to be further explored, new devices allowing multimodal interaction have to be developed, new control strategies for coupling action and perception in the best way possible are required, and new ways of fusing the streams of information arriving to the user have to be sought.




Robotic Soldier: A Necessity in Future Battles


H. S. Gill, Vaibhaw Kala, P. M. Menghal
Faculty of Degree Engineering,
Military College of Electronics & Mechanical Engineering, Secunderabad, 500 015, Andhra Pradesh
71lines@gmail.com | vabykala@gmail.com | prashant_menghal@yahoo.co.in

Abstract - R&D funding in robotics is increasingly directed toward military and security applications. As predicted by many military stalwarts, robots will be the main and key fighters in future warfare, which will be technologically driven. The number of semi-autonomous robots being deployed by armed forces and police is increasing. The future decision-making capability of these robots will be dictated by the choices they make based on calculations, which could reduce the human soldier to a puppet. This trend raises questions about the direction of both robotics as a field and humanity as a whole. Events throughout the world show that robots are proving extremely valuable, even irreplaceable, in dealing with natural disasters, industrial mishaps and other situations that are dangerous to humans. They can help prevent problems or monitor risks, carry out tasks in hostile or polluted environments, and support search and rescue operations. This paper focuses on robotic soldiers, which will be a necessity and an asset on the future battlefield. It also aims to inform readers about what must be considered in responsibly introducing advanced technologies into the battlefield and, eventually, into society. With history as a guide, we know that foresight is critical both to mitigate undesirable effects and to best promote or leverage the benefits of technology.
Keywords - National Safety, Security, Military Robotics, Robotic soldier.

I. INTRODUCTION

Augmented and virtual reality will be ubiquitous and will support almost every facet of future war, including communications, system control, and training. Soldiers will be able to move seamlessly among real, augmented, and virtual environments without any difficulty. Virtual reality (VR) systems and new gaming technologies will be the prime mode of delivery for the selection and training of personnel. Training will be augmented with intelligent software agents and modeling and simulation tools resident on every soldier, giving them analytic and decision-making capabilities that will dwarf the currently available major command posts and the rearward C4ISR centers. These will be embedded and available anytime, anywhere and under any conditions. Augmenting this capability would be mental and physical readiness assessments that monitor a soldier's status in real time using a suite of behavioral, neural, and physiological sensors embedded within all aspects of the soldier's ensemble. The data would then be captured and used to drive command decisions regarding unit tasking, soldier assignments, and medical/psychological intervention. A powered exoskeleton would be available and integrated into the soldier system; a good fictional example is the suit from the movie Iron Man, which can perform maneuvers beyond the capability of any human. VR capability would enable soldiers to interact with robotics, software systems, and hardware platforms via an array of third-generation interfaces relying on natural language commands, gestures, and virtual display/control systems. Consumer demand and scientific exploration will yield an explosion in cognitive and physical enhancers, including neural prosthetics and permanent physical prosthetics. These could yield dramatic enhancements in soldier performance and provide a tremendous edge in combat [1-2].

II. UNDERSTANDING HUMANS

Military robotics will be a key domain for understanding this process: on one hand, programmers and operators can move out of the range of danger; on the other, they are distanced from experiencing the consequences of conflict. An international study suggests that remotely piloted drone strikes may trade reduced targeting accuracy for greater pilot safety. In many cases those pilots are thousands of miles from any danger and return home to their families in the evening. Military and security robotics have opened new questions about mediated experience and the line, if there is one, between video games and real-world human struggles [3-4].

Fig.1 Pilot Drone

III. MILITARY ROBOTICS

Military robotics will revolutionize warfare, today and in the future, through the use of advanced technologies that help the military on the battlefield and create a better, more flexible and cost-efficient force. Military robots can help in defusing bombs, and unmanned aerial vehicles can provide a "bird's-eye view" of territory for military troops. Military robots are not necessarily shaped like humans; their form depends on the nature of the jobs they are built to carry out. Robots that have to


negotiate difficult terrain use tank treads. Flying robots look like small airplanes. Some robots are the size of trucks, and they look like trucks or bulldozers. Other, smaller robots have a very low profile to allow for great maneuverability. The plan to field robotic soldiers is a comprehensive strategy to upgrade the nation's military systems across all branches of the armed forces. The plan calls for an integrated battle system: a fleet of different vehicles that will share up to 80 percent of the same parts, new unattended sensors designed to collect intelligence in the field, and unmanned launch systems that can fire missiles at enemies beyond the line of sight.

IV. TYPES OF MILITARY ROBOTS

These robots are divided into four categories based on the nature of their employability:
(a) Unmanned Aerial Vehicles (UAV): for surveillance and reconnaissance missions.
(b) Small Unmanned Ground Vehicles (UGV): can enter hazardous areas and gather information without soldiers coming into contact.
(c) Multifunctional Utility/Logistics and Equipment (MULE): designed to provide combat support in conflict situations.
(d) Armed Robotic Vehicles (ARV): can carry either powerful weapons platforms or sophisticated surveillance equipment.
The above types of robots may or may not include humanoids. The most common robots currently in use by the military are small, flat robots mounted on miniature tank treads. These robots are tough and can tackle almost any terrain. They usually have a variety of sensors built in, including audio and video surveillance and chemical detection [5-6].

V. AUGMENTING HUMANS

The resilience of any military force depends on the resilience of individual soldiers. Traditionally, this has involved physical and vocational training regimens. However, a new generation of robotic systems is interfacing with soldiers to increase their reach, strength, and effectiveness. These range from remote-controlled robotic systems to body suits that literally amplify individual soldiers' strength and endurance. A few examples are listed below:

(a) The HULC exo-skeletal suit: Lockheed-Martin has designed and prototyped an exo-skeletal suit which is intended to increase the load-carrying capability of soldiers in the field or on operations.

Fig 2: The HULC exo-skeletal suit

(b) TALON military robots: TALON military robots from QinetiQ are designed for multiple military applications, including reconnaissance and combat. The U.S. military has deployed more than 3,000 TALONs.

Fig 3: The TALON

VI. AUTOMATING TASKS

Initial robotic deployments are likely to be one-sided engagements. But what happens when robots are deployed to fight other robots? As increasingly sophisticated systems take on more capabilities, questions are raised about the nature of war. Will war become a matter of detached robot chess, or will martial combat be replaced with new ways of expressing conflict?

(a) Automated missile defence: The U.S. Army has developed a counter rocket, artillery, and mortar (C-RAM) capability that uses radar to detect incoming rockets and mortar rounds and automatically directs fire against them. This kind of active defence system is nowadays used in many weapons. A classic example is from the first Gulf War, when Patriot missiles were used to counter the Scud missiles fired by the Iraqis; this was the first time such a technology was demonstrated.

Fig 4: The C-RAM
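The automated detect-and-engage cycle described above can be sketched in a few lines. This is purely an illustrative toy model, not any real C-RAM software: the track representation, the time-to-impact rule and the 10-second engagement threshold below are all hypothetical assumptions chosen for the example.

```python
# Toy sketch of an automated sense-decide-act loop: radar tracks come in,
# the system predicts time to impact, and engages only imminent threats.
from dataclasses import dataclass

@dataclass
class Track:
    """A simplified radar track: range to the defended site and closing speed."""
    range_m: float       # distance from the defended site, metres
    closing_mps: float   # positive = the round is approaching

def time_to_impact(track: Track) -> float:
    """Seconds until the tracked round reaches the site (inf if receding)."""
    if track.closing_mps <= 0:
        return float("inf")
    return track.range_m / track.closing_mps

def engage_decision(track: Track, max_engage_s: float = 10.0) -> bool:
    """Automatically direct fire only at rounds that will arrive very soon."""
    return time_to_impact(track) < max_engage_s

# A mortar round 900 m out closing at 150 m/s is engaged; a distant round
# and a receding object are not.
tracks = [Track(4000, 300), Track(5000, -20), Track(900, 150)]
threats = [t for t in tracks if engage_decision(t)]
```

The point of the sketch is only the structure: detection, prediction and the fire decision happen in a loop with no human in it, which is exactly what distinguishes this class of active defence system.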


(b) Military reconnaissance robot fuelled by biomass: The DARPA-funded Energetically Autonomous Tactical Robot (EATR) is a prototype military reconnaissance robot designed to use a waste-heat engine to continually fuel itself on plants and other biomass, including dead bodies.

Fig 5: The EATR

VII. MILITARY ROBOTS ACROSS THE GLOBE

Military robots have been in use around the world since the second half of the 20th century, when the first unmanned aircraft was developed. Early military robots also appeared during the Second World War, when Germany employed small remotely controlled vehicles known as tracked mines. These tracked mines were some of the first UGVs to appear, though they suffered from weaknesses such as easily destroyed control cables. The Soviet Union also used radio-controlled tank UGVs around this time. At present, the following military robots are being used and developed in various countries [7-10]:

(a) PACKBOT: Another small robot that operates on treads, it is even smaller and lighter than the TALON, weighing about 18 kg. It is man-portable and is designed to fit into the U.S. Army's new standard pack, the Modular Lightweight Load Carrying Equipment (MOLLE). It is controlled by a Pentium processor designed specially to withstand rough treatment. Its chassis has a GPS system, an electronic compass and temperature sensors built in, with eight modular payload ports.

Fig. 6: PACKBOT

(b) MATILDA: Mesa Associates' Tactical Integrated Light-Force Deployment Assembly, made by Mesa Robotics, is similar to other small robot designs but has a higher profile due to its triangular tread shape. It weighs 28 kg with the batteries, can be carried by one or two people and fits in the trunk of a car. It can be equipped with a mechanical arm or a variety of cameras and sensors, and it can even tow a small trailer. The robot has a top speed of 3 feet per second and a single-charge run time of four to six hours. In the event of tread damage, the quick-change tracks can be swapped in about five minutes.

Fig 7: MATILDA

(c) ACE: About the size of a small bulldozer, it can handle many heavy-duty tasks, such as clearing out explosives with a mechanical arm, clearing and cutting down obstacles with a plow blade or a giant cutter, pulling disabled vehicles, hauling cargo in a trailer and serving as a weapons platform. This robot can roll along with a mine-sweeper attached to the front, clearing a field of anti-personnel mines before any humans have to walk there. One of its most innovative uses is as a firefighting/decontamination platform: equipped with a pan-and-tilt nozzle, it can pull its own supply of foam retardant or decontaminant in a 1,325-litre tank, and a nozzle can also be mounted on a mechanical arm for very precise aiming. This heavy-duty robot has a maximum speed of 10 kmph and runs on a diesel engine.

Fig 8: ACER
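As a quick back-of-the-envelope check, MATILDA's quoted top speed and run time bound how far it could travel on one charge; continuous motion at top speed is an assumption made only for this estimate.

```python
# Single-charge travel distance implied by MATILDA's quoted figures:
# top speed 3 ft/s, run time 4-6 hours (continuous motion assumed).
FT_TO_M = 0.3048
top_speed_mps = 3 * FT_TO_M                       # about 0.91 m/s
range_km = [top_speed_mps * h * 3600 / 1000 for h in (4, 6)]
# i.e. roughly 13 to 20 km per charge at top speed
```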


(d) ARTS: The All-Purpose Remote Transport System (ARTS) was developed by the U.S. Air Force for disposing of dangerous explosives in Operation Iraqi Freedom. It is basically a bulldozer, but instead of a bulldozer's blade it has mine-clearing devices, a mechanical arm and a water-cutting tool attached. It can be remotely operated from a distance of up to 5 km with line of sight, and it can also set charges to detonate explosives from a distance.

Fig 9: ARTS

(e) RAAS & ARV: The Robotic Armored Assault System (RAAS) and the Armed Robotic Vehicle (ARV) are both in development by the U.S. military. These are large-scale robots weighing about 6 tons and capable of carrying up to 1 ton of payload. Potential weapons to be mounted on these tank-size robots include the 30mm Mk 44 chain gun or a turret system capable of firing Hellfire missiles. They have been designed so that they can be carried and deployed by the military's primary cargo-carrying aircraft, the C-130.

Fig 10: The RAAS

(f) FLYING BOTS-GLOBAL HAWK AND POINTER: The military uses different flying robots, mainly for reconnaissance. These range from aircraft that can be held by a person and launched with a good throw, like the FQM-151 Pointer, to full-size airplanes that operate by remote control, like the RQ-4A Global Hawk.

Fig 11: Northrop Grumman RQ-4A Global Hawk

(g) FLYING BOTS-PREDATOR: This reconnaissance UAV plays a key role in military planning by helping military commanders keep track of their own troops and spot enemies that might be waiting to ambush soldiers. Flying robots like the Predator provide constant real-time data on troop movements, enemy locations and weather. Predators can also be fitted with Hellfire missiles.

Fig 12: PREDATOR

(h) PETMAN: The Protection Ensemble Test Mannequin is being developed by Boston Dynamics for the US Army for testing chemical protection clothing and is the first of its kind. The anthropomorphic robot will be able to balance itself while walking, crawling, doing calisthenics, and generally moving freely like a human while being exposed to chemical warfare agents. It has a hydraulic actuation system and articulated legs with shock-absorbing elements. The robot is under the control of an on-board computer and an array of sensors and internal monitoring systems. The prototype robot walks heel-to-toe just like a human, and remains balanced even when pushed.

Fig 13: PETMAN


(i) BIGDOG: This is a prototype of an all-terrain robot. The quadruped robot is equipped with a computer featuring sensors that aid its movements over harsh terrain. The robot is powered by a gasoline engine that drives the hydraulic system, and a built-in computer controls locomotion. The sensors provide stereo vision, joint force, joint position and ground contact, which aid continuous movement. Most importantly, the robot is equipped with a laser gyroscope that aids balance under extreme conditions.

Fig 14: Big dog

(j) Daksh is one of the most recent military robots of India. It is an electrically powered and remote-controlled robot which is used to locate, handle and destroy risky objects safely. The main role of this military robot is to recover improvised explosive devices. This robot can even climb stairs; moreover, it can also scan objects using its portable X-ray device.

Fig. 15: Daksh Military Robot

Robots would be developed with a very high level of intelligence to enable them to differentiate between an enemy and a friend. These could then be deployed in difficult warfare zones, like the Line of Control (LOC), a step that would help avert the loss of human lives. In the initial phase, the robotic soldier would be trained by the human soldier to identify an enemy or a combatant. The robotic soldier would be at the front end, with the human soldier assisting from far away.

VIII. CONCLUSION

While simple automated systems were deployed as early as World War II and many military roboticists insist that humans must remain in the loop, a new generation of war-fighting technologies promises to redefine the nature and scope of that loop. In time, it will become relatively rare for a state's combat air missions to be manned, and a majority of on-the-ground military personnel will be trained to interact with robots on a daily basis. Military robotics does not fit cleanly into any of our current protocols around the rules of engagement. Within the next decade we will need to assess not only the ethics of automated armed combat but also the allocation of accountability: for example, who will be responsible for system malfunctions or oversights that have human costs? In the next ten years we will see increasing levels of autonomous military activity. Suits that multiply physical strength and endurance will be seen more often and will be used in future battles to enhance the capability of the soldier. Robotic systems will increasingly be used not only to insulate humans from potential harm but also to flag potentially suspicious activities and individuals. Therefore, the day is not far when the robotic soldier will provide an edge over adversaries on the battlefield, and hence will be a necessity and an asset.

REFERENCE

[1] M. K. Mishra, P. Jananidurga, S. Siva, U. Aarthi, S. Komal, "Application of Robotics in Disaster Management in Land Slides," International Journal of Scientific and Research Publications, Vol. 3, Issue 3, March 2013, pp. 1-4.

[2] Saradindu Naskar, Soumik Das, Abhik Kumar Seth, "Application of Radio Frequency Controlled Intelligent Military Robot in Defense," IEEE International Conference on Communication Systems and Network Technologies, 2011, pp. 396-401.

[3] Albert Ko and Henry Y. K. Lau, "Robot Assisted Emergency Search and Rescue System," International Journal of Advanced Science and Technology, Vol. 3, February 2009.

[4] Priyadarshi Bhattacharya and Marina L. Gavrilova, "Roadmap-Based Path Planning," IEEE Robotics & Automation Magazine, Vol. 15, No. 2, June 2008, pp. 58-66.

[5] Garcia, E., Jimenez, M. A., De Santos, P. G., Armada, M., "The evolution of robotics research," IEEE Robotics & Automation Magazine, Vol. 14, Issue 1, 2007, pp. 90-103.

[6] Dario, P., Guglielmelli, E., Allotta, B., "Robotics in medicine," IEEE International Conference on Intelligent Robots and Systems (IROS '94), pp. 739-752.


[7] Seul Jung, "Experiences in Developing an Experimental Robotics," IEEE Transactions on Education, Vol. 56, 2013, pp. 129-136.

[8] Hirai, S., "Robotics as a social technology," IEEE International Conference on Mechatronics and Automation, 2009, pp. xl-xli.

[9] Tennyson Samuel John, "Advancements in Robotics & Its Future Uses," International Journal of Scientific & Engineering Research, Vol. 2, Issue 8, Aug 2011, pp. 1-6.

[10] Satish Kumar K N, Sudeep C S, "Robots for Precision Agriculture," 13th National Conference on Mechanisms and Machines (NaCoMM07), IISc, Bangalore, India, December 12-13, 2007, pp. 1-4.


Autonomous Underwater Vehicles, a Platform for Future Use in Marine Research: A Review

K Sai Kiran Reddy1, P Sai Teja1, J R Ravi Kiran Kopelli2

1 B.Tech Student, Department of Mechanical Engineering, VBIT, Hyderabad, India
2 Assistant Professor, Department of Mechanical Engineering, Vignana Bharathi Institute of Technology, Hyderabad, India

Abstract - This paper addresses the importance of underwater vehicles in the field of engineering and research. Ocean exploration is becoming increasingly important and, with it, the need for sophisticated Autonomous Underwater Vehicles. Until recently, knowledge about the ocean was very limited; one of the reasons is the unstructured, hazardous undersea environment, which makes exploration difficult. Underwater robotics can help us better understand marine and other environmental issues. The autonomous underwater vehicle (AUV) is one type of underwater robot which has attracted much research interest in recent years. This paper mainly focuses on the classification of underwater vehicles, their advantages and applications, and finally recent trends and transformations.
Keywords: Autonomous Underwater vehicle Fig.1.1 Newport's Auto-Mobile Fish Torpedo (1871)

I. INTRODUCTION The torpedo had a two-cylinder reciprocating engine,


operated by compressed air, which drove a 1-footdiameter, four-
One of the safest ways to explore the underwater world is to use small unmanned vehicles to carry out missions and measurements without risking human life. With the advent of underwater vehicles, researchers' capability to investigate deep waters has improved enormously. Today underwater vehicles are becoming more popular, especially for environmental monitoring and for defense purposes.

An Autonomous Underwater Vehicle (AUV) is a robot which travels underwater without requiring input from an operator. AUVs constitute part of a larger group of undersea systems known as unmanned underwater vehicles, a classification that also includes non-autonomous remotely operated underwater vehicles (ROVs), which are controlled and powered from the surface by an operator/pilot via an umbilical or by remote control. In military applications AUVs are more often referred to simply as unmanned undersea vehicles (UUVs).

The origin of AUVs should probably be linked to the Whitehead Automobile Fish Torpedo. Robert Whitehead is credited with designing, building and demonstrating the first torpedo in Austria in 1866. Torpedoes are named after the torpedo fish, an electric ray capable of delivering a stunning shock to its prey. Whitehead's first torpedo achieved a speed of over 3.0 m/s and ran for 700 m; the vehicle was driven by compressed air acting on a bladed propeller and carried an explosive charge, and a hydrostatic depth control mechanism was also used. The first torpedo trial was in 1871. The torpedo did run, but difficulty was encountered in obtaining a water-tight hull and an air-tight air flask. If one ignores the fact that it carried an explosive charge, it might be considered the first AUV.

The first AUV was developed at the Applied Physics Laboratory of the University of Washington as early as 1957 by Stan Murphy, Bob Francois and, later on, Terry Ewart. The "Special Purpose Underwater Research Vehicle", or SPURV, was used to study diffusion, acoustic transmission, and submarine wakes. Other early AUVs were developed at the Massachusetts Institute of Technology in the 1970s; one of these is on display in the Hart Nautical Gallery at MIT. At the same time, AUVs were also developed in the Soviet Union, although this was not commonly known until much later.

1.1 State of the Art

An AUV is a submerged platform, or vehicle, able to carry different instruments in order to perform different tasks untethered from any surface support vessel: it follows a preplanned survey route, gathers information, stores it (for post-mission processing) and, in some cases, sends it to the surface in real time for data quality assessment. AUVs are not instruments by themselves. (Pablo Rodriguez, Jaume Piera, autumn 2005).

II. TYPES OF AUVS

For our purposes we can consider the following classification, according to their capabilities, weight, size and payload. (Pablo Rodriguez, Jaume Piera, autumn 2005).
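The weight-and-size classification used in this paper can be sketched as a simple rule-based function. This is an illustrative sketch only: the function name is our own, and the thresholds are the ones quoted for the micro, mini, and sea-going classes in the subsections that follow.

```python
# Rule-based AUV classification sketch, following the weight/size
# criteria of this paper (thresholds from Sections 2.1-2.3).
def classify_auv(mass_kg: float, length_m: float) -> str:
    """Return the AUV class for a given mass and length."""
    if mass_kg < 5:
        return "micro-AUV"       # tiny, single-sensor vehicles
    if mass_kg < 100 and length_m < 2:
        return "mini-AUV"        # deployable from a rubber boat
    return "sea-going AUV"       # multi-tonne, support-ship class

print(classify_auv(4, 0.8))      # micro-AUV
print(classify_auv(60, 1.7))     # mini-AUV
print(classify_auv(5000, 4.0))   # sea-going AUV
```

Real classification is fuzzier than this (payload and endurance matter too), but the sketch captures the criteria the survey uses.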
Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.
2.1 Micro-AUV

Tiny vehicles that weigh less than 5 kg, developed to deploy one specific sensor at a time. Collaboration algorithms are being developed to have these vehicles working in a cooperative mode.

2.2 Mini-AUV

These vehicles are less than 2 m long and weigh less than 100 kg, so they can easily be deployed from a rubber boat by two persons. Instrument manufacturers have started to develop specific instrument suites for this class of vehicles; these demand less power and use higher frequencies, which makes this class of vehicles more suitable for local, high-resolution studies. This class of vehicles is becoming very popular on the market, mainly because:

- They require reduced operation and maintenance costs compared with the sea-going AUV class.

- The interest of the military sector in these vehicles has favoured R&D on this class and is pushing the market toward them.

2.3 Sea-going AUV

Fig.2.1 Remus 6000

These vehicles are longer than 2 m and can weigh up to several tonnes. The payload can exceed 100 kg, and they are rated from 600 m (Maridian 600) down to 3,000 m (HUGIN 3000) and even 6,000 m (REMUS 6000). They are big vehicles, with complicated logistics and with maintenance and control systems installed in separate containers; the technical crew is usually more than 4 technicians. Autonomy can be greater than 48 h, and they need medium-size support ships. Given their size and payload capabilities, these vehicles can carry high-performance instrumentation on board. Typical configurations include MF (200-300 kHz) multibeams and side-scan sonar, seismic profilers (parametric or 3.5 kHz), gravimeters, CTD, TV cameras, navigation sensors, etc. (Pablo Rodriguez, Jaume Piera, autumn 2005).

2.4 HAUV (Hovering AUV)

Fig.2.2 Hovering AUV

Vehicles of relatively big size (3-4 m) with a unique propulsion configuration that permits them to hover at a given location like an ROV. This ability makes them an ideal tool for inspection and intervention tasks that conventional AUVs cannot achieve. There are few models on the market, and they are currently in their latest operational trials. They can carry a heavy payload (up to 200 kg) but have the same operational constraints as the big AUVs. (Pablo Rodriguez, Jaume Piera, autumn 2005).

2.5 SAUV (Surface AUV)

Fig.2.3 Bluefin-12 AUV

These vehicles, usually of big capacity and autonomy, sail semi-submerged with a surfacing mast containing communications and beacons. They carry a limited payload, usually very specific, and have some operational security problems related to their sailing mode. (Pablo Rodriguez, Jaume Piera, autumn 2005).
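The depth ratings quoted for the sea-going class (600 m down to 6,000 m) can be put in perspective with the hydrostatic pressure law p = ρgh. A minimal sketch, assuming a nominal seawater density of 1025 kg/m³:

```python
# Hydrostatic gauge pressure at the depth ratings quoted above
# (p = rho * g * h; seawater density taken as a nominal value).
RHO_SEAWATER = 1025.0   # kg/m^3, nominal
G = 9.81                # m/s^2

def pressure_mpa(depth_m: float) -> float:
    """Gauge pressure in MPa at a given depth."""
    return RHO_SEAWATER * G * depth_m / 1e6

for depth in (600, 3000, 6000):
    print(f"{depth} m -> {pressure_mpa(depth):.1f} MPa")
# 600 m is ~6 MPa; 6,000 m is ~60 MPa (roughly 600 atmospheres),
# which is why hulls and containers for this class are so demanding.
```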
2.6 Gliders

Fig.2.4 Slocum Glider

Pablo Rodriguez and Jaume Piera (2005) observe that although gliders have limited navigational and payload capacities, they can also be considered UUVs, as they sail extremely long ranges over long periods of time gathering oceanographic data. They only need external support for deployment and recovery. Payload and operational requirements seriously limit their field of application. (Pablo Rodriguez, Jaume Piera, autumn 2005).

III. MODERN-DAY USES OF UNDERWATER VEHICLES AND TECHNOLOGY

1. National Defense
2. Resource Extraction
3. Science
4. Telecommunications
5. Construction, Inspection, and Maintenance
6. Search and Recovery
7. Archaeology
8. Recreation and Entertainment
9. Education

IV. APPLICATIONS

Until recently, AUVs were used for a limited number of tasks dictated by the technology available. With the development of more advanced capabilities and high-yield power supplies, AUVs are now being used for more and more tasks, with roles and missions constantly evolving. With the development of AUV technology, its application areas have been expanding gradually. Its main applications include the following fields:

4.1 Science: Seafloor mapping, geological sampling, oceanographic monitoring.

Fig.4.1 AUV Model With Sonar Scanning

Sonar (originally an acronym for SOund Navigation And Ranging) is a technique that uses sound propagation (usually underwater, as in submarine navigation) to navigate, communicate with or detect other vessels. Two types of technology share the name sonar: passive sonar is essentially listening for the sound made by vessels; active sonar is emitting pulses of sound and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of targets in the water.

AUVs carry sensors to navigate autonomously and map features of the ocean. Typical sensors include compasses, depth sensors, side-scan and other sonars, magnetometers, thermistors and conductivity probes. A demonstration at Monterey Bay in California in September 2006 showed that a 21-inch (530 mm) diameter AUV can tow a 300-foot (91 m) long hydrophone array while maintaining a 3-knot (5.6 km/h) cruising speed.

4.2 Environment: Environmental remediation; inspection of underwater structures, including pipelines, dams, etc.; long-term monitoring (e.g., radiation, leakage, pollution).

4.3 Oil and gas industry: Ocean survey and resource assessment; construction and maintenance of undersea structures. The Gavia is a commercial AUV available globally, known for its performance and adaptability, and well suited to survey work as well as oil-rig maintenance. The oil and gas industry uses AUVs to make detailed maps of the seafloor before starting to build subsea infrastructure; pipelines and subsea completions can thus be installed in the most cost-effective manner with minimum disruption to the environment. The AUV allows survey companies to conduct precise surveys of areas where traditional bathymetric surveys would be less effective or too costly. Also, post-lay pipe surveys are now possible.
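The active-sonar principle described above reduces to a two-way travel-time calculation: a pulse goes out, an echo comes back, and range is half the round-trip distance. The Monterey Bay towing figure can be cross-checked the same way. A sketch, assuming the standard nominal 1500 m/s sound speed in seawater:

```python
# Active-sonar range estimate: the pulse travels to the target and
# back, so range = (sound speed * echo delay) / 2.
SOUND_SPEED = 1500.0   # m/s, nominal for seawater

def range_from_echo(delay_s: float) -> float:
    """Target range in metres from a two-way echo delay."""
    return SOUND_SPEED * delay_s / 2.0

def knots_to_kmh(knots: float) -> float:
    """Convert speed in knots to km/h (1 kn = 1.852 km/h)."""
    return knots * 1.852

print(range_from_echo(0.4))   # 300.0 m for a 0.4 s echo
print(knots_to_kmh(3))        # ~5.6 km/h, matching the figure above
```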

Fig.4.2 GAVIA Commercial AUV for Oil Rigs

4.4 Military: Shallow-water mine search and disposal; submarine off-board sensors.

A typical military mission for an AUV is to map an area to determine if there are any mines, or to monitor a protected area (such as a harbor) for new unidentified objects. AUVs are also employed in anti-submarine warfare, to aid in the detection of manned submarines. On the military side, AUVs have been under development for decades, and they are now reaching operational status. Their initial fleet application will be for mine hunting, which was also the case for the fleet introduction of ROVs. However, in the case of AUVs, they will operate from a submarine and not a surface ship.

Fig.4.3 Blue Star 2, U.S. Navy

The U.S. Navy's submarine-launched AUV is the Long Term Mine Reconnaissance System (LMRS), which was scheduled for initial operation in 2003.

4.5 Hobby:

Many robotics enthusiasts construct AUVs as a hobby. Several competitions exist which allow these homemade AUVs to compete against each other while accomplishing objectives. Like their commercial brethren, these AUVs can be fitted with cameras, lights and sonar. As a consequence of limited resources and inexperience, hobbyist AUVs can rarely compete with commercial models in operational depth, durability, or sophistication. Finally, these hobby AUVs are usually not oceangoing, being operated most of the time in pools or on lakebeds.

4.6 Research:

Fig.4.4 Sea Duane 2 AUV from Flinders University, Adelaide, Australia

There is a great deal of development in the research sector of AUVs; among the latest is the Sea Duane 2 (SD2) of Flinders University, Australia. The SD2 is used for underwater surface scanning and life assessment of deep-sea organisms. Scientists use AUVs to study lakes, the ocean, and the ocean floor. A variety of sensors can be affixed to AUVs to measure the concentration of various elements or compounds, the absorption or reflection of light, and the presence of microscopic life.

V. RECENT TRENDS

The SAUV II (Fig.5.1) is a solar-powered AUV designed for long-endurance missions such as monitoring, surveillance, or station keeping, where real-time bi-directional communications to shore are critical. The SAUV II operates continuously for several months using solar energy to recharge its lithium-ion batteries during daylight hours. The SAUV II was fitted with a fast-response galvanic oxygen micro-sensor (AMT Analysenmesstechnik GmbH). This sensor provides rapid in situ profiling of dissolved oxygen at depths of up to 100 m with a response time of a few hundred milliseconds. (Thomas B. Curtin, Denise M. Crimmins, Joseph Curcio, Michael Benjamin, Christopher Roper, fall 2005).

Fig.5.1 Solar Powered Autonomous Underwater Vehicle (SAUV II)
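The months-long solar-recharge regime described for the SAUV II comes down to a daily energy balance: energy harvested during daylight must exceed energy consumed over the full 24 hours. A back-of-the-envelope sketch; every figure here is an illustrative assumption, not a SAUV II specification:

```python
# Daily energy budget sketch for a solar-recharged AUV.
# All numbers are hypothetical, chosen only to show the balance.
PANEL_POWER_W = 85.0   # assumed panel output in full sun
SUN_HOURS = 6.0        # assumed equivalent full-sun hours per day
AVG_LOAD_W = 20.0      # assumed average vehicle load

harvest_wh = PANEL_POWER_W * SUN_HOURS   # energy in per day (Wh)
consumed_wh = AVG_LOAD_W * 24.0          # energy out per day (Wh)
print(f"harvested {harvest_wh:.0f} Wh, consumed {consumed_wh:.0f} Wh")
# A sustained positive margin (510 vs 480 Wh with these numbers) is
# what makes indefinite station keeping possible in principle.
```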
VI. FUTURE AUTONOMOUS UNDERWATER VEHICLES

6.1 Next Generation Power System

Technology for a new type of fuel-cell battery has been developed to allow long-range navigation in the deep sea without a fuel supply. In 2010, prototype tests were run on a small, high-efficiency three-less (HETL) fuel cell. (JAMSTEC, 2011).

Fig.6.1 Prototype of a Small, High-Efficiency Three-Less (HETL) Fuel Cell

6.2 Underwater Acoustic Communication System

An underwater acoustic communication system has been developed to realize the transmission of picture signals and control signals between a vehicle and its support vessel. In 2010, tests of long-range transmission using Time Reversal acoustic communication and of short-range, high-capacity transmission at sea were conducted. (JAMSTEC, 2011).

Fig.6.2 Long-Range Transmission using Time Reversal Acoustic Communication

6.3 Detailed Investigation Systems

Acoustic sonar systems to survey the topography and geology of the ocean floor have been developed. In 2010, the Goseikaiko sonar system was installed on the URASHIMA, and performance improvement tests were conducted at sea. (JAMSTEC, 2011).

Fig.6.3 GOSEIKAIKO Sonar installed in URASHIMA

6.4 Control System

This control system is composed of multiple CPUs, and the system load can be distributed because each CPU is used separately according to the functions of the underwater vehicle. In 2010, this control system was installed on the MR-X1, and sea trials were conducted to verify its utility. (JAMSTEC, 2011).

Fig.6.4 Multi-CPU Control System

6.5 High-performance Navigation System

An inertial navigation system has been developed to allow autonomous navigation in the sea, where GPS does not work. In 2010, performance tests of a compact-size inertial navigation system were conducted in the sea. (JAMSTEC, 2011).
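The core idea behind the inertial navigation system above is dead reckoning: starting from a known fix, measured acceleration is integrated once to get velocity and again to get position. A minimal 1-D sketch of the principle; a real INS does this in three dimensions with gyroscopes and drift correction, which this deliberately omits:

```python
# Minimal 1-D dead-reckoning sketch of inertial navigation:
# integrate acceleration to velocity, then velocity to position.
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Integrate accelerations (m/s^2) sampled every dt seconds."""
    v, x = v0, x0
    track = []
    for a in accels:
        v += a * dt          # velocity update
        x += v * dt          # position update
        track.append(x)
    return track

# Constant 0.5 m/s^2 thrust for 4 s, sampled at 1 s steps:
print(dead_reckon([0.5, 0.5, 0.5, 0.5], dt=1.0))  # [0.5, 1.5, 3.0, 5.0]
```

Because each step compounds the previous one, small accelerometer errors grow over time, which is why the sea trials of compact INS units focus on long-duration accuracy.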
Fig.6.5 Prototype of Compact-Size Inertial Navigation System

Fig.6.6 Future Autonomous Underwater Vehicle

Fig.6.7 Next Generation Marine System Network

VIII. REFERENCES

1. Stefan Ericson et al., Autonomous Underwater Vehicle, Bluefin Robotics, Cambridge, USA.

2. Pablo Rodriguez, Jaume Piera, autumn 2005, Mini AUV, a Platform for Future Use in Marine Research, Spanish Research Council, pp. 14-15.

3. Thomas B. Curtin, Denise M. Crimmins, Joseph Curcio, Michael Benjamin, Christopher Roper, Autonomous Underwater Vehicles: Trends and Transformations, Marine Technology Society Journal, fall 2005, Volume 39, Number 3, pp. 65-66.

4. JAMSTEC, 2011, Future AUV System, Technological Development, Marine Technology and Engineering Center.