S. Naveen¹, L. Parthasarathy²
¹Dept. of Mechatronics, BIT, A1, Ravi Murugan Padmanaba Apts, Kavundampalayam, Kovai 30. naveen18101994@gmail.com
²Dept. of Mechatronics, BIT, 17/1A, Thaneer Panthal, Gnaniyam Palayam Post, V.Vellode, Erode-12. parthaerode@gmail.com
ABSTRACT - For years, the trusty seat belt provided the sole form of passive restraint in our cars. Later came the concept of the airbag - a soft pillow to land against in a crash. This invention made a great impact by saving many lives. The airbag system involves the use of various sensors, of which the accelerometer is an important one; it measures the proper acceleration of the vehicle.
At the same time, MEMS began to gain importance in industry. As the technology deals with structures at the micro scale, it helps in reducing the size of devices to a large extent. Grayson (2007) says that the result was the MEMS accelerometer, which is small in size and more efficient than the then-existing accelerometers. In this application, these accelerometers are used to detect the rapid negative acceleration of the vehicle, to determine when a collision has occurred and the severity of the collision. These tiny micro-scale accelerometers revolutionized car airbag deployment systems in the mid-1990s. Silicon-based materials (e.g., single crystal silicon, polycrystalline silicon, silicon dioxide, and silicon nitride) are the primary materials used for constructing MEMS devices, and manufacturing approaches are derived from microfabrication processes developed for integrated circuits (ICs) [2]. The microfabrication process used for silicon-based MEMS (especially for prototyping) is time-consuming and requires access to cleanroom equipment. Both the materials and the use of a cleanroom are expensive; although the performance of silicon-based MEMS can be excellent, their relatively high cost has limited the applications they can address. Here, we suggest an alternate method of producing the MEMS accelerometer by using paper. Paper devices can actually replace the existing silicon accelerometers, saving space, weight and cost, and they can be manufactured more easily than silicon accelerometers. Xinyu Liu (2011) feels that paper is readily available, lightweight, and easy to manufacture; the paper substrate makes integration of electrical signal-processing circuits onto the paper-based MEMS devices straightforward.

Keywords - MEMS, airbag, accelerometer, paper

INTRODUCTION

As we all know, airbags are a type of automobile safety restraint, like seatbelts. Inventors.about.com describes them as gas-inflated cushions built into the steering wheel, dashboard, door, roof, or seat of your car that use a crash sensor to trigger a rapid expansion to protect you from the impact of an accident. Although Breed invented a "sensor and safety system", the world's first electromechanical automotive airbag system, in 1968, these airbags became popular later because of their durability and flexibility. They became an integral part of the system because of the growth of MEMS technology.

I. MEMS TECHNOLOGY:

As Wikipedia says, MEMS is the technology of very small devices; it merges at the nano-scale into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are miniaturized mechanical and electro-mechanical elements (i.e., devices and structures) that are made using the techniques of microfabrication, feels Matej Andrejai (2008). Although they are present in many products, like inkjet printer heads, video projector DLP systems and disposable bio-analysis chips, the technology came to the spotlight after MEMS was blended with the then-existing airbag system and made the airbag system more reliable, says H. Baltes (2002).
This was made possible by making the accelerometers using MEMS technology, producing a much smaller and more reliable device which can be used in place of the existing ones, which are far less reliable than those made with MEMS.

II. MEMS ACCELEROMETER:

An accelerometer is a device which measures all possible acceleration forces, both static and dynamic. It senses the acceleration associated with the phenomenon of weight experienced by any test mass at rest in the frame of reference of the accelerometer device. Hence, it is used in the airbag system
Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.
NCRAM - 2014 (The First National Conference) ISBN: 978-93-83635-01-6
of an automobile, where it acts as a sensor which signals a sudden change of force (in case of a crash). Accelerometers are used not only in the airbag system but also in other parts of automobiles.
One of the most commonly used accelerometers is the piezoelectric accelerometer. But, since piezoelectric accelerometers are bulky and cannot be used for all operations, a smaller and highly functional device, the MEMS accelerometer, was developed. Though the first of its kind was developed 25 years ago, it was not widely accepted until lately, when a need arose for large-volume industrial applications. Due to its small size and robust sensing, it has been further developed to provide multi-axis sensing. MEMS devices are in fact used in many parts of automobiles other than the accelerometer, as shown in figure 3.

III. PAPER ACCELEROMETER:

Although silicon-based MEMS accelerometers function quite well, they too have defects because of their scaling properties. Their disposal also poses a problem; although each device is very small, disposal becomes an issue when large quantities are considered. Hence came the idea of making these sensors out of paper. The new paper-made device emulates the piezoresistive silicon MEMS sensors that are at the heart of many modern accelerometers. Paper accelerometers are very cheap ($0.1 per m² for printing paper) and can be disposed of very easily. They can be manufactured easily, even in a normal laboratory with basic tools, and the raw material is readily available. The paper substrate also makes integration of electrical signal-processing circuits onto the paper-based MEMS devices straightforward.
Figure 4A shows the paper-based force sensor (accelerometer) in a cantilever shape. Here, the carbon resistor is placed where the maximum surface strain occurs during deflection. Xinyu Liu (2011) notes that when a force is applied to the beam structure, the resistor experiences a mechanical strain/stress, which induces a change in the resistance of the resistor. Measuring the change in resistance allows quantification of the applied force.

IV. FABRICATING THE PAPER ACCELEROMETER:

The cantilever can be fabricated from thin paper; in this case, chromatography paper was used. It is then cut to the required dimensions by laser cutting. This process hardly takes an hour (Figures 4B and 4C).
But we must keep one main point in mind: paper is water-loving (hydrophilic). If it starts absorbing the available moisture, the mechanical and electrical properties of the sensor may be degraded. So we must render it hydrophobic by treating the surface hydroxyl groups of the paper, which lowers the impact of the environment on the device. After fabrication, the product is tested under various conditions.

MECHANICAL PROPERTIES OF THE PAPER ACCELEROMETER:

To compare paper with other materials for constructing MEMS, we need its stiffness, and hence its Young's modulus. For this, various force-deflection tests were taken, and at last the Young's modulus of paper was found to be much lower (about 80 times) than that of silicon.

ELECTRICAL & THERMAL PROPERTIES:

The current-voltage characteristics of the carbon resistor placed on the cantilever were measured. As shown in figure 5A, we obtain a linear V-I curve, so the resistor itself poses no trouble and is not the major limiting factor for the accelerometer. We also tested its temperature coefficient. The effect of temperature on the output of the sensor can be made negligible by laying out another carbon resistor on the cantilever for temperature compensation and integrating it into a Wheatstone-bridge-like circuit for signal readout, as Xinyu Liu (2011) says.
Another great advantage of the paper accelerometer over silicon-based ones is that the paper can be folded, which automatically increases its stiffness.

V. INTEGRATION OF ELECTRONIC DEVICES ONTO THE PAPER ACCELEROMETER:

H. Baltes (2002) notes that the resistance obtained from the carbon resistor must be converted to a voltage in order to send it as a signal. There are several ways to do this. Here, we consider a monolithic approach, in which the MEMS device and a conventional IC are microfabricated onto a single chip, as expressed by A. C. Siegel (2003).
As shown in figure 6A, a Wheatstone bridge approach is used in microfabricating it with the MEMS. The corresponding calibration curve is obtained (figure 6C), and the resulting force resolution is 120 mN.

CONCLUSION:

Here, we examined the feasibility of MEMS using paper as the construction material and finally developed a paper-based accelerometer using MEMS. The paper's hydrophilic nature was removed and a hydrophobic character was induced in it. We then tested its mechanical, electrical and thermal characteristics and drew appropriate conclusions.
These paper-based MEMS have many advantages: paper is low-cost and can result in low-cost MEMS devices; it is easy to handle and can be folded, increasing its stiffness; the devices can be made using low-cost tools and require only a normal environment; and the surface of paper can be modified thanks to its rich surface chemistry, whose high ratio of surface area to weight offers opportunities for surface tailoring to generate new types of sensitivities.
Thus, paper-based accelerometers will be of great use in the near future; they can increase the lifetime of MEMS devices and cut the cost of the automobile to some extent. Beyond automobiles, they can also be used in various other fields (navigation, gyrometers, astrophysical applications, etc.).
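The resistance-to-voltage conversion described in section V can be made concrete with a small numerical sketch. This is an illustrative quarter-bridge Wheatstone calculation, not the circuit of figure 6A; the supply voltage, nominal resistance and the 1% strain-induced change are assumed values, not figures from the paper.

```python
# Illustrative quarter-bridge Wheatstone readout for the cantilever's
# carbon resistor. V_SUPPLY, R_NOMINAL and the 1% change are assumed.

V_SUPPLY = 5.0       # bridge excitation voltage in volts (assumed)
R_NOMINAL = 10e3     # nominal carbon-resistor value in ohms (assumed)

def bridge_output(delta_r: float) -> float:
    """Differential bridge voltage for a resistance change delta_r."""
    r_active = R_NOMINAL + delta_r
    # Active divider (fixed R over strained R) minus reference divider.
    v_active = V_SUPPLY * r_active / (R_NOMINAL + r_active)
    v_reference = V_SUPPLY / 2.0
    return v_active - v_reference

# A 1% resistance change caused by cantilever strain:
dv = bridge_output(0.01 * R_NOMINAL)
# For small changes this approximates V * dR / (4R) = 12.5 mV.
print(f"bridge output: {dv * 1e3:.2f} mV")
```

Inverting the measured bridge voltage through a calibration curve such as the one in figure 6C then yields the applied force.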
VI. REFERENCES
[4] http://en.wikipedia.org/wiki/Microelectromechanical_systems
[6] http://www.instrumentationtoday.com/mems-accelerometer/2011/08
[7] H. Baltes, O. Brand, A. Hierlemann, D. Lange and C. Hagleitner, in Proc. IEEE Conf. Micro Electro Mechanical Systems, 2002, pp. 459-466.

FIGURES:
Fig. 1 Airbag in an Automobile
Fig. 3 MEMS and Accelerometers in Automobiles.
Fig. 4 Paper based force sensor.
Fig. 5 Electrical and Thermal properties of paper accelerometer.
Fig. 6 Monolithic integration of MEMS with a Wheatstone bridge circuit.
Abstract - In this work, an attempt has been made to propose an automated inspection system to facilitate selection and sorting of components for assembly operations. The proposed system has three modules, namely the image capturing module, the image processing module, and an interface module. The system uses a web camera for image capturing, an image processing routine for image analysis, and a dialog box for the interface created using Visual C++. The system enables the operator to control inspection without direct intervention in the process being carried out. The operator is provided with a dialog box as an interface. With a click on the icons of the dialog box, the image is captured automatically, loaded into the system, enhanced, processed, and finally an output is generated. Based on the output, the subsequent sorting and selection processes are carried out.

Index Terms - Automation, image processing.

I. INTRODUCTION

As the demands of the consumer economy grow exponentially, the ever-increasing requirements of consumers in terms of both quality and quantity are to be satisfied. This places very significant challenges on the manufacturing industry, which is charged with the onerous task of meeting the demand while maintaining the quality of the product being manufactured. Automation in manufacturing improves productivity significantly without compromising the quality of the product. Flexible manufacturing systems constitute a number of automated workstations, their functioning depending upon the product to be manufactured, with all the workstations working in integration with each other and controlled by a central computer. This increases the productivity of the system. Inspection is an important phase of a manufacturing process, and for an FMS setup to function effectively, this phase, whether performed as on-line or off-line inspection, also needs to be automated along with the other processes. Several different techniques, such as machine vision, range sensing, inspection based on surface reflections, and image processing, have been adopted into the fold of automation approaches to inspection, one of the most widely used being image processing. In this work, an attempt has been made to develop a system that automates the process of inspection using a vision and image processing approach. The objective of automating inspection is to facilitate sorting and selection of components for subsequent assembly operations that would be carried out in an FMS. The presented system automatically captures the image of a component placed at a specified location, loads it, carries out processing and analysis of the loaded image, and conveys the nomenclature of the component supplied to the system so as to facilitate sorting and selection.

II. REVIEW OF LITERATURE

Extensive research and analysis have been carried out by many researchers over the last several years in developing automatic inspection systems. Many of these systems have been employed in manufacturing industries, food processing industries and so on. An automated visual inspection station was developed by Lui et al. (2005). This station uses cameras and computers to replace human vision in the evaluation and inspection of bearing diameters, to ensure that defective products are not selected. To achieve the above requirements, the developed automated visual inspection system was equipped with an industrial PC, a CCD camera to capture images, an image processing system and a mechanical move table. The setup allows accurate non-contact measurement while the industrial products are moving continuously on a conveyor belt. Yan and Cui (2006) proposed an on-line crack inspection system for glass bottles that inspects cracks with the help of an intelligent automated system. In order to strengthen the system's ability to learn and adapt itself, a back propagation neural network was introduced in the system and a model for crack inspection of glass bottles was built.
During the past few years there has been a growing demand for improving the overall quality and reducing the production cost of finished products. The recent availability of higher speed, more versatile, lower cost digital computer hardware, coupled with advances in areas such as computer aided design and manufacturing, robotics, computer vision, AI etc., has spurred research and development efforts in the use of these new technologies for the automation of industrial production lines. The research carried out by Pai et al. (1983) is concerned with the visual inspection of holes drilled in aircraft engine combustor assemblies. Using a digital image processor, combustor assemblies were examined and scanned images were processed for analysis of parameters such as hole areas and diameters. Mitchell and Sarhadi (1992) developed a system for the automated visual inspection of carbon fabric work pieces (called plies) in a robotic assembly cell. The main task of the system is to inspect the positional accuracy of each ply as it is laid up. The texture boundary is used as a first estimate of the position of the real ply edge. Birch wood veneer boards have a variety of uses, including furniture, flooring and vehicle sides. To increase the grading accuracy of the veneer boards, attempts were made to replace manual inspection by Automated Visual Inspection (AVI), which employs a camera and image processing routines. Pham and Alcock (1997) developed an AVI system wherein the image of the sheet is first acquired using appropriate cameras and lighting. The image is then segmented into two areas, namely clear wood and defects such as coloured streaks,
hard rot, holes, pin knots, rotten knots, sound knots, splits, streaks and worm holes. Each region was classified into a defect type using the extracted features and a classifier. Fernandez et al. (2002) developed an automated visual inspection system (AVIS) for quality control of preserved orange segments, widely applicable to production processes of preserved fruits and vegetables. Yalcin and Bozma (1998) developed an automated visual inspection system that uses the concept of selective fixation as a basis for all of its visual processing, including inspection. A computer vision system was developed for locating and inspecting integrated circuit chips for their automatic alignment during various manufacturing processes at the General Motors Delco Electronics Division (1978). This system, called SIGHT-I, is currently in production use.
It is evident from the above review that a lot of work has been done to automate inspection so as to apply it to various manufacturing processes, thereby improving their productivity. The focus has been on replacing human vision with machine vision so that it leads to increased flexibility, lower cost, increased reliability and accuracy, and greater speed and serviceability. With the increase in the complexity of and demand for industrial components, the necessity of automating inspection has become inevitable. Based on the study of the work carried out in this area, an automated inspection station using image processing techniques is proposed in the sections below.

III. SYSTEM CONFIGURATION

The configuration of the proposed intelligent system for automatic inspection is as shown in Fig 3.1. The objective of the automated inspection system is to provide the operator with a facility to carry out selection and sorting of the components of an FMS without direct intervention in the process being carried out. The supplied input is processed by the system to report the component nomenclature. The processing of the input image is done through the Image Processing Toolbox built into Matlab. VC++ handles the interface between the operator, the camera and Matlab. The Image Processing Toolbox facilitates the processing of the input image and gives the desired output. The development of this system was taken up in different modules, namely,
a. Image Capturing Module
b. Image Processing Module
c. Interface Module

The system uses a Microsoft LifeCam VX-1000 for image capturing. The LifeCam VX-1000's small, ring-shaped base is stable and works for a variety of mounting situations. The head swivels 360 degrees and tilts 45 degrees down and 90 degrees up. The Microsoft LifeCam VX-6000 comes with drivers and software for taking pictures and video clips. 2-D image processing can be efficiently done using the Image Processing Toolbox built into Matlab, which is a collection of functions that extend the capability of the Matlab numeric computing environment.
As seen from the above, to automate the process of inspection, all the modules of the system need to have a direct user interface in order to enable an unskilled operator (having no acquaintance with the applications used) to control the process of inspection. The work presented herein attempts to link the powerful features of Matlab and the LifeCam. In addition, the VC++ project handling this can be converted into an executable (.exe) file, the execution of which can be automatically timed and invoked from the Windows platform.
On this basis, a dialog box was created in VC++ to accept inputs from the operator and save them to different files. This facilitates the invoking and execution of any Matlab program directly from the VC++ executable file without the operator having to start Matlab separately. This facility offers a good opportunity to automate the whole process, right from taking the images of the components and saving them, to executing the Matlab program and finally giving the result. The dialog box for the operator to work with is shown in Figure 3.2.

Fig. 3.2: Dialog box for operator interface
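The one-click capture-save-process-report sequence that the interface module automates can be sketched as follows. The real system wires a VC++ dialog to a webcam and a Matlab routine; here the same sequence is mocked in plain Python, and `capture_image` and `classify_component` are hypothetical stand-ins, not the paper's actual routines.

```python
# Sketch of the one-click inspection flow: capture -> save -> process
# -> report. The file name and the classification rule are placeholders.

import tempfile
from pathlib import Path

def capture_image(path: Path) -> None:
    # Stand-in for the image capturing module (webcam driver): the real
    # module would save a frame from the LifeCam to disk.
    path.write_text("component-image-data")

def classify_component(path: Path) -> str:
    # Stand-in for the Matlab image-processing routine that derives the
    # component nomenclature from the saved image.
    return "component-A" if "component" in path.read_text() else "unknown"

def run_inspection(workdir: Path) -> str:
    """One click on the dialog box: capture, save, process, report."""
    image_path = workdir / "capture.dat"
    capture_image(image_path)              # image capturing module
    return classify_component(image_path)  # image processing module

with tempfile.TemporaryDirectory() as tmp:
    # The interface module would display this label to the operator.
    print("component nomenclature:", run_inspection(Path(tmp)))
```

The design point this mirrors is that the operator only ever triggers `run_inspection`; the individual modules never require direct interaction.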
accordingly, a support structure with a wooden base is designed that supports a horizontal beam made of polymer material, which has a vertical attachment of cantilever type at its top.

V. IMAGE PROCESSING AND ANALYSIS

As discussed earlier, the proposed system's objective is to select and sort components for assembly. This is achieved by means of image processing and analysis using the Image Processing Toolbox of Matlab. Selection and sorting of components can be done based on various factors, which in turn depend on the variety of components being manufactured. This is a very vast area of work, but the focus here has been only on the following ways of differentiating the components:
Differentiation Based On Shape
Differentiation Based On Size
Differentiation Based On Special Features Specific For A Given FMS
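The shape- and size-based differentiation above can be sketched with a few lines of array code. This is a minimal illustration in Python rather than the paper's Matlab routines: the image is converted to logical (black and white) type, as in Fig. 5.3, and simple region properties separate components. The threshold and the class labels are illustrative assumptions.

```python
# Minimal sketch of size/shape differentiation on a binarized image.
# Threshold value and class labels are illustrative, not the paper's.

import numpy as np

def to_logical(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize: pixels darker than the background become True."""
    return gray < threshold

def differentiate(mask: np.ndarray) -> str:
    """Classify one component by pixel area and bounding-box aspect ratio."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    aspect = max(height, width) / min(height, width)
    if aspect > 2.0:
        return "bar-like component"                        # shape-based
    return "large block" if area > 200 else "small block"  # size-based

# Synthetic 40x40 grayscale "image": bright background, one dark square.
img = np.full((40, 40), 255, dtype=np.uint8)
img[5:25, 5:25] = 10                     # 20x20 dark component
print(differentiate(to_logical(img)))    # prints: large block
```

In the actual system, the same role is played by Matlab Image Processing Toolbox functions operating on the captured image.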
Fig. 5.3 Image converted to logical type, i.e., black and white
VII. CONCLUSION
VIII. REFERENCES
Abstract - With the advent of robotics in surgery, human beings have sparked off another revolution. After the White and Green Revolutions in our country, the time has now come for a robotic revolution, especially in the medical field. Though it may not be possible to predict for sure when this will happen, the idea is rapidly catching fire amongst scientists and the medical community. But even robotic surgery has its set of pros and cons, and it is the patient who has to bear the brunt of the complications of, or benefits from, this new technique. Anesthesiologists, as always, must do their part to be the patient's 'best man' in the preoperative period. In this particular article we have tried to bring out the introduction of surgical robots in the operation theater, their development and future trends. An endeavor has also been made to keep it simple for the layman to understand.
Keywords - Robotic surgery, Non-invasive, da-vinci

Fig. 1 An Operation Theatre Robotic Surgery.

I. INTRODUCTION

Surgery has traditionally been a specialty within the medical profession that has revolved around invasive procedures to treat various maladies. Initially, the trauma induced by the therapeutic procedure was necessary and reasonable to provide benefit to the patient. But now, through the innovation of digital imaging technology, combined with optical engineering and improved video displays, surgeons can operate inside of the necessary organs. Rather than creating large incisions several inches long to gain access to underlying tissues, minimally invasive surgical techniques typically rely on small half-inch incisions encircling the surgical field in order to insert small scopes and instruments. Minimally invasive surgery has caused a change in the route of access and has significantly and irrevocably changed the surgical treatment of most disease processes. Patients still undergo interventions to treat disease, but minimally invasive surgery makes possible a reduction or complete elimination of the 'collateral damage' required to gain access to the organ requiring surgery. While the benefits of this approach were numerous for the patient, early technology limited the application of minimally invasive surgery to some procedures. Specifically, surgeons using standard minimally invasive techniques lost the value of a natural three-dimensional image, depth perception, and articulated movements. Magnification of small structures was often difficult, and instruments were rigid and without joints. Robotic surgery has provided the technology to address these limitations and allow the application of minimally invasive surgery to a broader spectrum of patients and their diseases. Surgical robots relieve some of these limitations by providing fine motor control, magnified three-dimensional imaging and articulated instruments. The use of robotics in surgery is now broad-based across multiple surgical specialties and will undoubtedly expand over the next decades as new technical innovations and techniques increase the applicability of its use. Today's medical robotic system was the brainchild of the United States Department of Defense's desire to decrease war casualties through the development of remote surgery. What prompted this initiative were the wounded soldiers of the Vietnam War: one third of the total deaths were due to exsanguinating hemorrhage in soldiers who had the potential to survive. The first documented use of robotic surgery came in 1985, when the PUMA 560 robotic surgical arm was used to take a neurosurgical biopsy. Robots are amazing models of a virtual workforce that functions with only electricity and software. Robotic surgeons are an amazing advantage to have in the medical field. However, they are still in their infancy, and there are miles to go before they replace humans.
For instance, traditional heart bypass surgery requires that the patient's chest be "cracked" open by way of a 1-foot (30.48-cm) long incision. However, with the da Vinci system, it is possible to operate on the heart by making three or four small incisions in the chest, each only about 1 centimeter in length. Because the surgeon makes these smaller incisions instead of one long one down the length of the chest, the patient experiences less pain, trauma and bleeding, which means a faster recovery. Robotic assistants can also decrease the fatigue that doctors experience during surgeries that can last several hours. Surgeons can become exhausted during those long surgeries and can experience hand tremors as a result. Even the steadiest of human hands cannot match those of a surgical robot. Engineers program robotic surgery systems to compensate for tremors, so if the doctor's hand shakes, the computer ignores it and keeps the mechanical arm steady.

III. WORKING OF THE ROBOTIC SYSTEM

Today's robotic devices typically have a computer software component that controls the movement of the mechanical parts of the device as it acts on something in its environment. The software is "command central" for the device's operation. The surgeon sits at the console of the surgical system several feet from the patient. He looks through the vision system - like a pair of binoculars - and gets a huge, 3-D view of the inside of the patient's body and the area of the operation. The surgeon, while watching through the vision system, moves the handles on the console in the directions he wants to move the surgical instruments. The handles make it easier for the surgeon to make precise movements and operate for long periods of time without getting tired. The robotic system translates and transmits these precise hand and wrist movements to tiny instruments that have been inserted into the patient through small access incisions. This combination of increased view and tireless dexterity helps overcome some of the limitations of other types of less invasive surgery. It also allows minimally invasive surgery to be used for more complex operations.

Detachable Instruments: The EndoWrist detachable instruments allow the robotic arms to maneuver in ways that simulate fine human movements. Each instrument has its own function, from suturing to clamping, and is switched from one to the other using quick-release levers on each robotic arm. The device memorizes the position of the robotic arm before the instrument is replaced, so that the second one can be reset to the exact same position as the first. The instruments' ability to rotate in full circles provides an advantage over non-robotic arms. The seven degrees of freedom (meaning the number of independent movements the robot can perform) offer considerable choice in rotation and pivoting. Moreover, the surgeon is also able to control the amount of force applied, which varies from a fraction of an ounce to several pounds. The Intuitive Masters technology also has the ability to filter out hand tremors and scale movements. As a result, the surgeon's large hand movements can be translated into smaller ones by the robotic device. Carbon dioxide is usually pumped into the body cavity to make more room for the robotic arms to maneuver.

(d) 3-D Vision System: The camera unit, or endoscope arm, provides enhanced three-dimensional images. This high-resolution, real-time magnification showing the inside of the patient gives the surgeon a considerable advantage over regular surgery. The system provides over a thousand frames of the instrument position per second and filters each image through a video processor that eliminates background noise. The endoscope is programmed to regulate the temperature of the endoscope tip automatically to prevent fogging during the operation. Unlike the Navigator Control, it also enables the surgeon to quickly switch views through the use of a simple foot pedal.
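The two control features described above, motion scaling and tremor filtering, can be illustrated numerically. This is a toy sketch, not the actual (proprietary) da Vinci control law: a simple exponential moving average stands in for the real filter, and the scale factor and smoothing constant are assumed values.

```python
# Toy illustration of motion scaling plus tremor filtering. The
# exponential-moving-average filter and all constants are assumptions.

def scale_and_filter(hand_positions, scale=0.2, alpha=0.3):
    """Map large, shaky hand movements to small, smooth tip movements."""
    tip_positions = []
    smoothed = hand_positions[0]
    for x in hand_positions:
        # Low-pass filter: suppress fast, small oscillations (tremor).
        smoothed = alpha * x + (1 - alpha) * smoothed
        # Motion scaling: a 10 mm hand move becomes a ~2 mm tip move.
        tip_positions.append(smoothed * scale)
    return tip_positions

# A slow 0-to-10 mm hand sweep with +/-0.5 mm tremor superimposed:
hand = [i + (0.5 if i % 2 else -0.5) for i in range(11)]
tip = scale_and_filter(hand)
print(f"hand range: {max(hand) - min(hand):.1f} mm, "
      f"tip range: {max(tip) - min(tip):.1f} mm")
```

The tip traverses roughly a fifth of the hand's range, with the tremor component visibly attenuated by the smoothing step.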
Table 1
Comparison of Conventional Laparoscopic Surgery
versus Robot Assisted Surgery
magnetic resonance) and intraoperative video image fusion to better guide the surgeon in dissection and in identifying pathology. These data may also be used to rehearse complex procedures before they are undertaken. Robotic surgery is in its infancy. Many obstacles and disadvantages will be resolved in time, and no doubt many other questions will arise. Many questions have yet to be asked; questions such as malpractice liability, credentialing, training requirements, and interstate licensing for tele-surgeons, to name just a few. Many of the current advantages of robot-assisted surgery ensure its continued development and expansion. For example, the sophistication of the controls and the multiple degrees of freedom afforded by the Zeus and da Vinci systems allow increased mobility and no tremor without compromising the visual field, making micro-anastomosis possible. Many have made the observation that robotic systems are information systems, and as such they have the ability to interface with and integrate many of the technologies being developed. The nature of robotic systems also makes long-distance intraoperative consultation or guidance possible, and it may provide new opportunities for teaching and assessment of new surgeons through mentoring and simulation. Computer Motion, the makers of the Zeus robotic surgical system, is already marketing a device called SOCRATES that allows surgeons at remote sites to connect to an operating room and share video and audio, to use a telestrator to highlight anatomy, and to control the AESOP endoscopic camera. Technically, much remains to be done before robotic surgery's full potential can be realized. Although these systems have greatly improved dexterity, they have yet to reach their full potential in instrumentation or to incorporate the full range of sensory input. More standard mechanical tools and more energy-directed tools need to be developed. Some authors also believe that robotic surgery can be extended into the realm of advanced diagnostic testing with the development and use of ultrasonography, near-infrared, and confocal microscopy equipment. Much like the robots in popular culture, the future of robotics in surgery is limited only by imagination. Many future advancements are already being researched. Some laboratories, including the authors' laboratory, are currently working on systems to relay touch sensation from robotic instruments back to the surgeon. Other laboratories are working on improving current methods and developing new devices for suture-less anastomoses. When most people think about robotics, they think about automation. The possibility of automating some tasks is both exciting and controversial. Future systems might include the ability for a surgeon to program the surgery and merely supervise as the robot performs most of the tasks. The possibilities for improvement and advancement are only limited by imagination and cost.
VII. THE EMERGING TRENDS FOR ROBOTIC SURGERY IN INDIA
Surgical robotics is a new technology that holds significant promise. Robotic surgery is often heralded as the new revolution, and it is one of the most talked about subjects in surgery today. Up to this point in time, however, the drive to develop and obtain robotic devices has been largely driven by the market. There is no doubt that they will become an important tool in the surgical armamentarium, but the extent of their use is still evolving. Robotic surgery is restricted to a few centers in our country. The huge cost of the machine and the setup, along with the spiraling cost of operation, are the major dissuading factors. Escorts Heart Institute and Research Center was the first in India to acquire a surgical robot (da Vinci Surgical System). In India the first robotic surgery was performed in April 2005, and the first robotic thoracic surgery in 2008. Care Foundation is working in collaboration with the Indian Institute of Information Technology (IIIT) Hyderabad to develop an indigenous Robotic Surgical System.
VIII. CONCLUSION
Robotic surgery will replace conventional laparoscopic procedures in times to come, thus revolutionizing the field of growing medical feats. The future of robotic surgery is as promising as the will to initiate better ways to perform medical procedures. The separation of the patient from human contact during surgery may herald an era of 'No infection, No antibiotic'. Robotic surgery is an emerging technology in the medical field. It gives us even greater vision, dexterity and precision than is possible with standard minimally invasive surgery, so we can now use minimally invasive techniques for a wider range of procedures. But its main drawback is high cost. Besides cost, robotic systems still have many obstacles to overcome before they can be fully integrated into the existing healthcare system. More improvements in size, tactile sensation and cost are expected in the future.
Fig.7 Growth Of Robotic Surgery Over The Years
IX. REFERENCES
[1] Satava RM. Surgical robotics: the early chronicles: a personal historical perspective. Surg Laparosc Endosc Percutan Tech. 2002;12:6-16. [PubMed]
[2] Felger JE, Nifong L. The evolution of and early experience with robot-assisted mitral valve surgery. Surg Laparosc Endosc Percutan Tech. 2002;12:58-63. [PubMed]
Abstract - With the advent of the 21st century, the inclusion of electronic systems in almost all aspects of life, including vehicular technologies, is evident. Modern-day cars represent a symbiosis of several electronic subsystems that collaboratively give a safe and sound driving experience. Various electronic driver-assist systems have been employed for ensuring ease and safety of the users. These vehicles incorporate many electronic circuits for efficient vehicle control and monitoring, such as circuits for monitoring speed, fuel level, battery voltage, engine temperature, fire or spark detection in the engine, and detection of different combustible gases. The complexity of the functions implemented in these systems necessitates an exchange of data between them. With conventional systems, data is exchanged by means of dedicated signal lines, but this is becoming increasingly difficult and expensive as control functions become ever more complex. Controller Area Network (CAN) is a high-performance and reliable advanced serial communication protocol which effectively supports distributed real-time control. This paper presents implementation details of a prototype vehicle monitoring system using the CAN protocol. The main feature of the system is the monitoring of various vehicle parameters such as temperature, presence of CO in the exhaust, battery voltage and light due to spark or fire. It uses a PIC-based data acquisition system that uses an ADC to bring all control data from analog to digital format, which is visualized through an LCD display. The communication module used in this project is embedded networking by CAN, which has efficient data transfer. It also takes feedback of various vehicle conditions and is controlled by the main controller. The schematic is prepared using OrCAD. Hardware is implemented and software porting is done.
Keywords - Controller Area Network protocol, battery voltage, CO level, light due to spark or fire, OrCAD, PIC microcontroller.
I. INTRODUCTION
CAN is a serial bus communications protocol developed by Robert Bosch in the early 1980s. The CAN bus was traditionally used in automotive applications (Robert Bosch GmbH, 1991). CAN became an international standard (ISO 11898) in 1994, specially developed for fast serial data exchange between electronic controllers in motor vehicles. By networking the electronics in vehicles with CAN, they could be controlled from a central point, the Engine Control Unit (ECU), thus increasing functionality, adding modularity, and making diagnostic processes more efficient. CAN offers an efficient communication protocol between sensors, actuators, controllers, and other nodes in real-time applications, and is known for its simplicity, reliability, and high performance. The priority-based message scheduling used in CAN has a number of advantages, some of the most important being efficient bandwidth utilization, flexibility, simple implementation and small overhead (Steve Corrigan, 2008).
CAN was designed and improved upon for systems that need to transmit and receive relatively small amounts of information (as compared to Ethernet or USB, which are designed to move much larger blocks of data) reliably to any or all other nodes on the network. Applications of CAN in intelligent vehicular systems are evident. The basis of an intelligent vehicle is the ability of the vehicle to navigate and maneuver in a rapidly changing environment without compromising the safety of the commuters. For this, the vehicle not only has to communicate with an intelligent guiding architecture but also has to provide reliable and fast communication within its internal modules, which work in sync to provide corrective navigation (Renjun Li et al, 2008). The best protocol currently available for this intra-vehicular communication is CAN. The CAN network can thus be considered as the internal architecture which continuously interacts with the external guiding architecture in order to perform the required functions for efficient navigation.
The CAN protocol is based on a bus topology, and only two wires are needed for communication over a CAN bus (Karl Henrik Johansson et al, 2005). The bus has a multi-master structure where each device on the bus can send or receive data. Only one device can send data at any time while all the others listen. If two or more devices attempt to send data at the same time, the one with the highest priority is allowed to send its data while the others return to receive mode. The four different message types, or frames, that can be transmitted on a CAN bus are the data frame, the remote frame, the error frame, and the overload frame (Robert Bosch GmbH, 1991).
The robustness of CAN may be attributed in part to its abundant error-checking procedures. The CAN protocol incorporates five methods of error checking: three at the message level and two at the bit level. Error checking at the message level is enforced by the CRC and the ACK slots. The 16-bit CRC field contains the checksum of the preceding application data for error detection, with a 15-bit checksum and a 1-bit delimiter. The ACK field is two bits long and consists of the acknowledge bit and an acknowledge delimiter bit. At the bit level, each bit transmitted is monitored by the transmitter of the message. If a data bit (not an arbitration bit) is written onto the bus and its opposite is read, an
error is generated. The final method of error detection is the bit-stuffing rule: after five consecutive bits of the same logic level, if the next bit is not a complement, an error is generated.
This paper presents implementation details of a vehicle monitoring system using the CAN protocol that monitors various vehicle parameters such as temperature, CO percentage in the exhaust, battery voltage and light due to spark, using an LDR.
II. HARDWARE IMPLEMENTATION
The block diagram of the vehicle monitoring system is shown in Fig.1. Brief details of the various modules used in the system, including the PIC18F458 microcontroller, are presented below.
Fig. 1 Block diagram of Vehicle Monitoring System
2.1 Salient Features of PIC18F458 microcontroller
The Peripheral Interface Controller (PIC18F458) is a high-performance, enhanced flash microcontroller with a CAN module. Introduced by Microchip Technology, Inc., it possesses an array of features that make it attractive for a wide range of applications. It is packaged in a 40-pin DIP package or a 44-pin surface-mount package. The PIC18 has a RISC architecture that comes with some standard features such as linear program memory addressing up to 2 Mbytes and data memory addressing up to 4 Kbytes. Using the PIC18F to develop a microcontroller-based system requires a ROM burner that supports flash memory; however, a ROM eraser is not needed, because flash is an EEPROM. In a flash memory, the entire contents of ROM must be erased in order to program it again. The erasing of flash is done by the ROM programmer itself, and so a separate eraser is not needed (Mazidi et al, 2008). When operated at its maximum clock rate, a PIC executes most of its instructions in 0.2 µs, or 5 instructions per microsecond. A watchdog timer resets the PIC if the chip ever malfunctions and deviates from its normal operation. Three versatile timers can characterize inputs, control outputs and provide internal timings for program execution. Up to 12 independent interrupt sources can control when the CPU will deal with each source (Peatman, 2009). All the members of the PIC18 family come with an ADC (a 10-bit, up to 8-channel Analog-to-Digital Converter for the PIC18Fxx8), a USART, and I/O ports. The CAN bus module supports message bit rates up to 1 Mbps and conforms to the CAN 2.0B active specifications (Microchip Technology Inc., 2006).
2.2 Operation of MCP2551 CAN Transceiver and its Interfacing Details
The MCP2551 is a high-speed CAN, fault-tolerant device that serves as the interface between a CAN protocol controller and the physical bus. The MCP2551 provides differential transmit and receive capability for the CAN protocol controller and is fully compatible with the ISO-11898 standard, including 24V requirements. It will operate at speeds of up to 1 Mb/s. Typically, each node in a CAN system must have a device to convert the digital signals generated by a CAN controller to signals suitable for transmission over the bus cabling (differential output). It also provides a buffer between the CAN controller and the high-voltage spikes that can be generated on the CAN bus by outside sources. The pin diagram of the MCP2551 is shown in Fig.2 (Microchip Technology Inc., 2007).
The CAN bus has two states: dominant and recessive. A dominant state occurs when the differential voltage between CANH and CANL is greater than a defined voltage (e.g., 1.2V). A recessive state occurs when the differential voltage is less than a defined voltage (typically 0V). The dominant and recessive states correspond to the low and high states of the TXD input pin, respectively. The MCP2551 CAN outputs will drive a minimum load of 45 Ω, allowing a maximum of 112 nodes to be connected. The RXD output pin reflects the differential bus voltage between CANH and CANL. The low and high states of the RXD output pin correspond to the dominant and recessive states of the CAN bus, respectively. The RS pin allows three modes of operation to be selected: 1. High-Speed, 2. Slope-Control, 3. Standby.
Fig. 2 Pin diagram of MCP 2551 CAN Transceiver
The PIC18F458 is connected to the CAN transceiver at the engine side as shown in Fig.3. The temperature sensor and the battery voltage circuit are connected to the RA0 and RA1 ports. The gas sensor and the LDR are connected to the RC0 and RC1 ports. In order to display on the LCD, the data and control lines are connected to port D. CANH and CANL of the engine node are connected to CANH and CANL of the dashboard side.
The second PIC18F458 is connected to the CAN transceiver at the dashboard side as shown in Fig.4. An external key is connected to RB7. By pressing this key, the dashboard can request data from the engine node. This data can be seen on the LCD.
displayed on the screen. The R/W input allows the user to write information to the LCD or read information from it. When the R/W line is low (0), the information on the data bus is being written to the LCD. When R/W is high (1), the program is effectively querying (or reading) the LCD. The enable pin is used by the LCD to latch information presented at its data pins: when data is supplied to the data pins, a high-to-low pulse must be applied to this pin in order for the LCD to latch the data present at the data pins. The LCD has to be prepared properly before a character can be displayed; for this, a number of commands have to be provided to the LCD before inputting the required data. The LCD does not know whether the content supplied to its data bus is data or a command; it is the user who has to specify this. Here, two ports of the PIC18F458, Port B and Port C, are used: Port B provides the control signals and Port C provides the data signals.
2.7 Universal Synchronous Asynchronous Receiver Transmitter (USART)
The USART module is one of the three serial I/O modules incorporated into PIC18FXX8 devices. The USART can be configured in the following modes:
Asynchronous (full-duplex)
Synchronous Master (half-duplex)
Synchronous Slave (half-duplex)
The MAX232 IC is used to convert the TTL/CMOS logic levels to RS232 logic levels during serial communication between microcontrollers and a PC. The controller operates at TTL logic levels (0-5V) whereas serial communication in a PC works on RS232 standards (-25V to +25V). This makes it difficult to establish a direct link between them to communicate with each other. The intermediate link is provided through the MAX232.
III. DESIGN METHODOLOGY
3.1 At Engine side
CAN 2.0B is a network protocol that was specially developed for connecting the sensors, actuators and ECUs of a vehicle. CAN 2.0B supports data rates from 5 Kbps to 1 Mbps, which allows the CAN network to be used to share status information and for real-time control. It can transfer up to 8 data bytes within a single message. In this investigation, two nodes are used for monitoring parameters. The various sensors used are temperature, battery voltage, LDR and CO sensors. Temperature and battery voltage are connected to the ADC; the LDR and CO sensors are connected to digital ports. Values are transferred to the microcontroller in the dashboard from the ADC and digital ports via the CAN protocol at an interval of 10 seconds and are displayed on the LCD. The messages can also be seen on a computer via the UART. If any data is required by the dashboard, the CAN controller at the engine side checks the identifier transmitted by it. After 10 seconds, the engine-side controller sends the required data to the dashboard.
3.2 At Dashboard side
The CAN controller checks if any data is available in the receive buffers. If there is any, that value is displayed on the LCD. By pressing the key placed on the dashboard, data is requested by sending an identifier. Then, after 10 seconds, the message is obtained from the engine section via the CAN protocol.
IV. CONCLUSIONS
In this paper, the implementation details of a vehicle monitoring system that monitors various parameters such as temperature, battery voltage, light due to spark or fire and CO level in the exhaust are presented. For monitoring the above parameters, an LM35 sensor, a 9V battery, an LDR and MQ6 sensors are used. Hardware schematics are drawn using OrCAD. For implementing this system, the programming of LED, ADC and LCD interfacing with the microcontroller is done using Embedded C. The microcontroller interfacing using the CAN protocol is carried out. The system transfers the various measurements, including temperature of the engine, battery voltage, presence of light and CO level in the exhaust, from engine to dashboard via the CAN protocol when a key at the dashboard is pressed, and displays these readings on an LCD on the dashboard.
V. REFERENCES
[1] Karl Henrik Johansson, Martin Törngren, and Lars Nielsen, 2005. Vehicle Applications of Controller Area Network, Handbook of Networked and Embedded Control Systems, Control Engineering, Springer, pp. 741-765.
[2] Knoll, P.M. and Kosmowski, B.B., 2002. Liquid crystal display unit for reconfigurable instrument for automotive applications, Opto-Electronics Review, Vol. 10, No. 1, pp. 75.
[3] Mazidi, M.A., McKinlay, R.D., and Danny Causey, 2008. PIC Microcontroller and Embedded Systems, Pearson Education Inc.
[4] Microchip Technology Inc., 2006. PIC18FXX8 Data Sheet, DS41159E.
[5] Microchip Technology Inc., 2007. MCP2551 High-Speed CAN Transceiver Data Sheet, DS21667E.
[6] Peatman, John B., 2009. Designing with PIC Microcontrollers, Pearson Education.
[7] Renjun Li, Chu Liu and Feng Luo, 2008. A Design for Automotive CAN Bus Monitoring System, IEEE Vehicle Power and Propulsion Conference (VPPC), September 3-5, Harbin, China.
[13] Website 3:
http://electronicswork.blogspot.in/2011/02/ldr555-dark-
sensor.html.
Abstract - Telerobotics is a combination of two major subfields: teleoperation and telepresence. Teleoperation means operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots, but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. This paper gives information regarding telerobotics, its major components and its applications in major fields. This paper will also help in understanding the main requirements for telerobotics.
Keywords - Telepresence, Teleoperation, telemanipulator, Visual and control system
robotics, which is required to perform operations in unknown or hazardous environments, where it replaces a human being to reduce operation cost and time and to avoid loss of life. In this system we control a robot to support operation at a distance. The robot may be in another room or country, or at a different scale to the operator. The goal of teleoperation is to allow an operator to interact with and operate a telerobot in a remote environment via a communication channel. The human operator is responsible for high-level control such as planning, perception and decision making at the operator's site, while the robot performs low-level instructions such as navigation and localization at the remote site.
II. WHAT IS TELEROBOTICS?
environment to the operator in a natural manner. A good degree of telepresence assures the feasibility of the required manipulation task. Visual and control applications are the two major components of telerobotics and telepresence. A remote camera provides a visual representation of the view from the robot. Placing the robotic camera in a perspective that allows intuitive control is a recent technique that had not been fruitful earlier, as the speed, resolution and bandwidth have only recently become adequate to the task of controlling the robot camera in a meaningful way. Using a head-mounted display, the control of the camera can be facilitated by tracking the head. This only works if the user feels comfortable with the latency of the system, the lag in the response to movements, and the visual representation. Any issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of the movement and response, and optical distortion due to the camera lens and head-mounted display lenses can cause the user 'simulator sickness', which is exacerbated by the lack of vestibular stimulation accompanying the visual representation of motion. Mismatches with the user's motions, such as registration errors, lag in movement response due to over-filtering, inadequate resolution for small movements, and slow speed can contribute to these problems. The same technology can control the robot, but then the eye-hand coordination issues become even more pervasive through the system, and user tension or frustration can make the system difficult to use. Ironically, the tendency in building robots has been to minimize the degrees of freedom, because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions. A telerobotic interface can be as simple as a common MMK (monitor-mouse-keyboard) interface. While this is not immersive, it is inexpensive. Telerobotics driven by internet connections are often of this type. A valuable modification to MMK is a joystick, which provides a more intuitive navigation scheme for planar robot movement. Dedicated telepresence setups utilize a head-mounted display with either single or dual eye displays, and an ergonomically matched interface with a joystick and related button, slider and trigger controls. Future interfaces will merge fully immersive virtual reality interfaces and port real-time video instead of computer-generated images. Another example would be to use an omnidirectional treadmill with an immersive display system, so that the robot is driven by the person walking or running. Additional modifications may include merged data displays such as infrared thermal imaging, real-time threat assessment, or device schematics.
III. APPLICATIONS OF TELEROBOTICS
(a) Industrial Robots
As mentioned earlier, telerobots can help human beings to operate in environments where it is not possible for a human to work; this includes space, volcano and underwater applications. Such telerobotic systems give scientists or users the advantage of working from safe places while they monitor the robot using the user interface, through which they can see the environment and make further decisions. ROBOVOLC is made for volcano exploration; it is capable of approaching an active volcano and can perform various kinds of operations while keeping human operators at a safe distance. It was mainly built to minimize risks to volcanologists and to enhance the chances to understand and study conditions during a volcano eruption. Two separate PCs are used to assist the operation. The first system has a user interface through which volcanologists can drive and control the robot remotely in the volcano from the base station; control of the robot is accomplished by two joysticks and a touch screen/mouse. The second PC controls a manipulator that collects samples of rocks using a three-finger gripper, which has a force sensor to measure and control grasping strength, and collects gases from the volcano during an eruption for scientific measurements. The communication between the operator and the remote environment is accomplished by a high-power wireless LAN.
Figure 1 showing the working of ROBOVOLC
(b) Medical Robots
To assist surgeons in various surgical specialities we have medical robots. The research and development of medical robotic systems has grown predominantly in telerobotics during the last two decades. The ZEUS and da Vinci surgical systems are the best examples of medical robots. As precision plays a key role in surgeries, the design requirements for the teleoperation controllers of such a robot are significantly different from other telerobotic applications. Medical robots are equipped with computer-integrated technology and comprise programming languages, advanced sensors and controllers for teleoperation. A surgeon can see and work inside the body by making a tiny hole in the patient and giving instructions to the robot from the surgeon's console, in which he can see 3D images from the stereo cameras. Further development is needed in medical robots due to limitations like poor judgement, limited dexterity and hand-eye coordination, expense, technology in flux, difficulty to construct and debug, and limitation to relatively simple procedures.
Undersea Robots: It is dangerous for human beings to work at deep levels of the sea because of the high pressures and the possibility of harmful chemicals. Undersea robots have gained much importance, with a growing body of telerobotic devices over the last two decades. Remotely operated vehicles (ROVs) allow scientists, oceanologists and companies to monitor water quality in lakes and reservoirs using robotic fish, to measure the Greenland ice sheet using drones, to learn about the world and life of creatures under water, to explore undersea volcanoes, and to activate the Deepwater Horizon blowout preventer (BOP) in the Gulf of Mexico. In the next century, undersea robotics will play an essential role in the exploration and production of the mineral resources that lie in the seas. Operators can communicate with AUVs (Automated Undersea Vehicles) in several different ways, including low-frequency acoustics for long distances and high-frequency coded acoustics for medium distances. Romeo was designed as an operational test-bed. Its aim is to perform research on intelligent vehicles in the real subsea environment and to develop advanced methodologies for marine science. Romeo is intended as a prototype demonstrator for robotics, biological, and geological research.
NASA has proposed the use of highly capable telerobotic systems for future planetary exploration using human exploration from orbit. In a concept for Mars exploration proposed by Landis, a precursor mission to Mars could be carried out in which the human vehicle brings a crew to Mars but remains in orbit rather than landing on the surface, while a highly capable remote robot is operated in real time on the surface. Such a system would go beyond simple long-time-delay robotics and move to a regime of virtual telepresence on the planet. One study of this concept, the Human Exploration using Real-time Robotic Operations (HERRO) concept, suggested that such a mission could be used to explore a wide variety of planetary destinations. Space exploration may have a new direction. In the 1960s, humans did the exploring, but since the last moon landing in 1972, NASA's only explorers beyond low Earth orbit have been semi-autonomous robots. Now the agency is pondering a third approach: sending astronauts who would remain in orbit around alien worlds and explore via robotic rovers. On Earth, human-controlled robots are used for tasks ranging from delicate surgery to exploration of the deep sea. But in space, robotic "telepresence" could be even more promising. Telerobotics would be orders of magnitude more productive for exploration than semi-autonomous robots like the Mars rovers Spirit and
Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.
NCRAM - 2014 (The First National Conference) ISBN: 978-93-83635-01-6
Opportunity. "Nothing beats having human cognition and sometimes referred to as remote-presence devices have been a
dexterity in the field". But there is a hitch in trying to control vision of the tech industry. Until recently, engineers did not have
from Earth a robot that's exploring another planet: the huge time the processors, the miniature microphones, cameras and
lag as the signals travel back and forth. Real-time reactions are sensors, or the cheap, fast broadband necessary to support them.
needed for it to work. For example, surgeons can perform But in the last five years, a number of companies have been
operations well as long as the robot responds to their actions introducing functional devices. As the value of skilled labor
within about half a second. Greater latencies cause problems. rises, these companies are beginning to see a way to eliminate
Latency on Earth is no more than a few hundred milliseconds, the barrier of geography between offices. Traditional
but latency between the Earth and moon is about 3 seconds, and videoconferencing systems and telepresence rooms generally
that delay is enough to slow telerobotics dramatically. "You offer Pan / Tilt / Zoom cameras with far end control. The ability
could use telepresence to tie a knot in 30 seconds on Earth, but it for the remote user to turn the device's head and look around
would take 10 minutes to tie it with 3-second latency."Latency naturally during a meeting is often seen as the strongest feature
for signals to Mars is much longer - from 8 to 40 minutes of a telepresence robot. For this reason, the developers have
depending on the planets' positions - so real-time control from emerged in the new category of desktop telepresence robots that
Earth is impossible. The most plausible way to have robotic concentrate on this strongest feature to create a much lower cost
telepresence on Mars would be to station astronauts in orbit robot. The Desktop Telepresence Robots, also called Head and
around the planet. The first step towards this might be testing out Neck Robots allow users to look around during a meeting and
robotic telepresence on Earth with simulated latencies. Rovers are small enough to be carried from location to location,
on the moon controlled from lunar orbit might come next. eliminating the need for remote navigation.
Rovers could also be controlled in real-time to explore the far
side of the moon, not visited by the Apollo missions. To do this, (f)Other Telerobotic Applications
NASA would have to station astronauts at lunar Lagrangian Remote manipulators are used to handle radioactive materials.
point L2 - a gravitationally neutral area of space which lies about Telerobotics has been used in installation art pieces; Telegarden
60,000 kilometres beyond the moon, in line with Earth.Mars is a is an example of a project where a robot was operated by users
bigger challenge, of course, as is Venus, which is usually through the Web.
considered beyond the scope of human exploration because of
its boiling, corrosive atmosphere. A Venus mission could be IV. CONCLUSION
shorter as it is closer to us than Mars. However, any robots would
require extensive modification to survive in Venus's hostile Telerobotics is one of the most intriguing fields of robotics. It
environment and, even then, they would not last the years that a allows to break down the barriers of scale and distance, enabling
Mars rover might. Nevertheless, having human telepresence human beings to manipulate objects situated in remote locations
would make exploration much more productive than if and/or to interact with micro/nano-scale environments,
autonomous robots had to await commands from Earth. otherwise inaccessible. Research in telerobotics has been very
Telepresence opens up massive opportunities for exploration, active in the last decades and huge theoretical and technological
says Lester. "Once you go to Venus, you can go to a lot more results have been obtained. For space applications, there are now
places," he says. "You could go scuba diving in the methane systems that allow users to remotely perform maintenance
lakes on Titan. operations on the International Space Station. In Minimally
Invasive Robotic surgery, teleoperation systems allow surgeons
(e)Telepresence/Videoconferencing to operate in previously unreachable areas. In industrial
The prevalence of high quality video conferencing using mobile applications, engineers can remotely drive mobile robots for the
devices, tablets and portable computers has enabled a drastic maintenance of parts of machineries located in places too
growth in Telepresence Robots to help give a better sense of dangerous for humans or for inspecting some parts of the plants.
remote physical presence for communication and collaboration Despite of these significant results, there are still many
in the office, home, school, etc. when one cannot be there in challenges that need to be addressed for reaching the ultimate
person. The robot avatar can move or look around at the goal of telerobotics: telepresence. The best teleoperation system
command of the remote person. There have been two primary should be designed and controlled in such a way that the user
approaches that both utilize videoconferencing on a display 1) feels as being directly interacting with the environment s/he is
desktop telepresence robots - typically mount a phone or tablet manipulating, despite of the scale differences, of the distance
on a motorized desktop stand to enable the remote person to look and of the presence of the robots. To achieve telepresence,
around a remote environment by panning and tilting the display several issues have to be addressed. For example, the very nature
or 2) drivable telepresence robots - typically contain a display of the human interaction has to be further explored, new devices
(integrated or separate phone or tablet) mounted on a roaming allowing multimodal interaction have to be developed, new
base Some examples of desktop telepresence robots include control strategies for coupling in the best way possible the action
Kubi by Revolve Robotics, Galileo by Motrr, and Swivl. Some and the perception are required and new ways for fusing the
examples of roaming telepresence robots include Beam by streams of information arriving to the user have to be sought.
Suitable Technologies, Double by Double Robotics, RP-Vita by
iRobot, Anybots, Vgo, TeleMe by Mantarobot, and Romo by
Romotive. For over 20 years, telepresence robots, also
Responsibility of contents of this paper rests upon the Authors and not upon the Publishers or Organizing Committee of the Conference and JNTUH.
NCRAM - 2014 (The First National Conference) ISBN: 978-93-83635-01-6
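The latency figures quoted in the discussion above follow largely from light travel time. A minimal sketch, using mean/approximate orbital distances (assumed typical values, not figures from the paper; real links add processing and relay delay on top of this):

```python
# Round-trip signal delay from light travel time alone.

C_KM_S = 299_792.458        # speed of light in vacuum, km/s

MOON_DISTANCE_KM = 384_400  # mean Earth-Moon distance
MARS_MIN_KM = 54.6e6        # approximate closest Earth-Mars approach
MARS_MAX_KM = 401e6         # approximate farthest Earth-Mars separation

def round_trip_delay_s(distance_km: float) -> float:
    """Command-to-feedback delay: signal out plus video/telemetry back."""
    return 2.0 * distance_km / C_KM_S

print(f"Moon round trip: {round_trip_delay_s(MOON_DISTANCE_KM):.1f} s")
print(f"Mars round trip: {round_trip_delay_s(MARS_MIN_KM) / 60:.0f}"
      f"-{round_trip_delay_s(MARS_MAX_KM) / 60:.0f} min")
```

Light time alone gives roughly 2.6 s for the Moon and several minutes to tens of minutes for Mars, in the same ballpark as the ~3 s and 8-40 min figures cited; both are far above the ~0.5 s limit the text gives for fine manipulation such as telesurgery.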
Abstract - R&D funding in robotics is directed singularly at military and security applications. As predicted by many military stalwarts, robots will be the main and key fighters in future warfare, which will be driven by technology. The number of semi-autonomous robots being deployed by armed forces and police is growing in importance. It is a fact that the future decision-making capability of these robots will be dictated by the choices they make based on calculations, which may make a human soldier just a puppet. This trend raises questions about the direction of both robotics as a field and humanity as a whole. Events throughout the world today indicate that robots are proving themselves extremely valuable, even irreplaceable, in dealing with natural disasters, industrial mishaps and other situations dangerous to humans. They can help prevent problems or monitor risks, carry out tasks in hostile or polluted environments, and support search and rescue operations. This paper focuses on robotic soldiers, which will be a necessity and an asset on the future battlefield. It also aims to inform readers about what must be considered in responsibly introducing advanced technologies into the battlefield and, eventually, into society. With history as a guide, we know that foresight is critical both to mitigate undesirable effects and to best promote or leverage the benefits of technology.

Keywords - National Safety, Security, Military Robotics, Robotic soldier.

I. INTRODUCTION

intervention. A powered exoskeleton would be available and integrated into the soldier system. A very good example is from the movie Iron Man: the suit has the capability to perform maneuvers beyond the capability of any human. The use of VR capability would enable soldiers to interact with robotics, software systems, and hardware platforms via an array of third-generation interfaces relying on natural-language commands, gestures, and virtual display/control systems. Consumer demand and scientific exploration will yield an explosion in cognitive and physical enhancers, including neural prosthetics and permanent physical prosthetics. These could yield dramatic enhancements in soldier performance and provide a tremendous edge in combat [1-2].

II. UNDERSTANDING HUMANS

Military robotics will be such a domain for understanding this process; on one hand, programmers and operators can move out of the range of danger, but on the other, they are distanced from experiencing the consequences of conflict. An international study suggests that remotely piloted drone strikes may trade reduced targeting accuracy for greater pilot safety. In many cases those pilots are thousands of miles from any danger and return home to their families in the evening. Military and security robotics have opened new questions about mediated experience and the line, if there is one, between video games and real-world human struggles [3-4].
negotiate difficult terrain use tank treads. Flying robots look like small airplanes. Some robots are the size of trucks, and they look like trucks or bulldozers. Other, smaller robots have a very low profile to allow for great maneuverability. The plan to have robotic soldiers is a comprehensive strategy to upgrade the nation's military systems across all branches of the armed forces. The plan calls for an integrated battle system: a fleet of different vehicles that will use up to 80 percent of the same parts, new unattended sensors designed to collect intelligence in the field, and unmanned launch systems that can fire missiles at enemies outside the line of sight.

These robots are divided into four categories based on the nature of their employability:

(a) Unmanned Aerial Vehicles (UAV): for surveillance and reconnaissance missions.
(b) Small Unmanned Ground Vehicles (UGV): can enter hazardous areas and gather information without the soldiers coming in contact.
(c) Multifunctional Utility/Logistics and Equipment (MULE): designed for providing combat support in conflict situations.
(d) Armed Robotic Vehicles (ARV): can carry either powerful weapons platforms or sophisticated surveillance equipment.

The above types of robots may or may not include humanoids. The most common robots currently in use by the military are small, flat robots mounted on miniature tank treads. These robots are tough and have the ability to tackle almost any terrain. They usually have a variety of sensors built in, including audio and video surveillance and chemical detection [5-6].

V. AUGMENTING HUMANS

The resilience of any military force depends on the resilience of individual soldiers. Traditionally, this has involved physical and vocational training regimens. However, a new generation of robotic systems is interfacing with soldiers to increase their reach, strength, and effectiveness. These range from remote-controlled robotic systems to body suits that literally amplify individual soldiers' strength and endurance. A few examples are listed below:

(a) The HULC exo-skeletal suit: Lockheed Martin has designed and prototyped an exo-skeletal suit intended to increase the load-carrying capability of soldiers in the field or in operations.

(b) TALON military robots: TALON military robots from QinetiQ are designed for multiple military applications, including reconnaissance and combat. The U.S. military has deployed more than 3,000 TALONs.

Fig 3: The TALON

VI. AUTOMATING TASKS

Initial robotic deployments are likely to be one-sided engagements. But what happens when robots are deployed to fight other robots? As increasingly sophisticated systems take on more capabilities, questions are raised about the nature of war. Will war become a matter of detached robot chess, or will martial combat be replaced with new ways of expressing conflict?

(a) Automated missile defence: The U.S. Army has developed a counter rocket, artillery, and mortar (C-RAM) capability that uses radar to detect incoming rockets and mortar rounds and automatically directs fire against them. This is a kind of active defence system which is nowadays used in many weapons. A classic example is from the first Gulf War, when Patriot missiles were used to counter the Scud missiles fired by the Iraqis. This was the first time that such a technology was demonstrated.
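The detect-then-engage behaviour described for C-RAM can be sketched as a simple decision rule. This is a toy illustration, not the actual C-RAM software; the class, field names, and thresholds below are all hypothetical:

```python
# Hypothetical sketch of automated active-defence logic: engage only
# genuine inbound threats predicted to land inside the defended zone.

from dataclasses import dataclass

@dataclass
class RadarTrack:
    kind: str                   # classified object type, e.g. "mortar"
    inbound: bool               # closing on the defended area?
    predicted_impact_m: float   # predicted impact distance from zone centre

DEFENDED_RADIUS_M = 250.0       # assumed radius of the protected area

def should_engage(track: RadarTrack) -> bool:
    """Automatically direct fire only at inbound rocket/artillery/mortar
    tracks whose predicted impact falls inside the defended zone."""
    is_threat_type = track.kind in ("rocket", "artillery", "mortar")
    will_hit = track.predicted_impact_m <= DEFENDED_RADIUS_M
    return is_threat_type and track.inbound and will_hit

print(should_engage(RadarTrack("mortar", True, 40.0)))  # engage
print(should_engage(RadarTrack("bird", True, 10.0)))    # ignore
```

The point of the sketch is the automation question raised above: once classification and impact prediction are trusted, the fire decision itself reduces to a mechanical rule with no human in the loop.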
(b) Military reconnaissance robot fuelled by biomass: The DARPA-funded Energetically Autonomous Tactical Robot (EATR) is a prototype military reconnaissance robot designed to use a waste-heat engine to continually fuel itself on plants and other biomass, including dead bodies.

Fig 5: The EATR

VII. MILITARY ROBOTS ACROSS THE GLOBE

Military robots have been in use around the world since the second half of the 20th century, when the first unmanned aircraft was developed. Early military robots also appeared during the Second World War, when Germany employed small remotely controlled vehicles known as tracked mines. These tracked mines were some of the first UGVs to appear, though they suffered from weaknesses such as easily destroyed control cables. The Soviet Union also used radio-controlled tank UGVs around this time. At present, the following military robots are being used and developed in various countries [7-10]:

Fig. 6: PACKBOT

(b) MATILDA: Mesa Associates' Tactical Integrated Light-Force Deployment Assembly, made by Mesa Robotics, is similar to other small robot designs but has a higher profile due to its triangular tread shape. It weighs 28 kg with the batteries, can be carried by one or two people, and fits in the trunk of a car. It can be equipped with a mechanical arm or a variety of cameras and sensors, and it can even tow a small trailer. The robot has a top speed of 3 feet per second and a single-charge run time of four to six hours. In the event of tread damage, the quick-change tracks can be swapped in about five minutes.

(c) ACE: It is about the size of a small bulldozer. It can handle many heavy-duty tasks, such as clearing out explosives with a mechanical arm, clearing and cutting down obstacles with a plow blade or a giant cutter, pulling disabled vehicles, hauling cargo in a trailer, and serving as a weapons platform. This robot can roll along with a mine-sweeper attached to the front, clearing a field of anti-personnel mines before any humans have to walk there. One of its most innovative uses is as a firefighting/decontamination platform. Equipped with a pan-and-tilt nozzle, it can pull its own supply of foam retardant or decontaminant in a 1,325-litre tank. A nozzle can also be mounted on a mechanical arm for very precise aiming. This heavy-duty robot has a maximum speed of 10 km/h and runs on a diesel engine.

Fig 8: ACER
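The MATILDA figures quoted above (3 ft/s top speed, four-to-six-hour single-charge run time) bound how far the robot can travel on one charge. A quick back-of-the-envelope check, assuming (optimistically) continuous driving at top speed:

```python
# Maximum single-charge traverse distance from the quoted MATILDA specs.

FT_TO_M = 0.3048
TOP_SPEED_M_S = 3 * FT_TO_M   # 3 ft/s is about 0.91 m/s

def max_range_km(run_time_hours: float) -> float:
    """Distance covered driving continuously at top speed."""
    return TOP_SPEED_M_S * run_time_hours * 3600 / 1000

print(f"4 h run time: {max_range_km(4):.1f} km")   # about 13 km
print(f"6 h run time: {max_range_km(6):.1f} km")   # about 20 km
```

So even under ideal conditions the platform is limited to roughly 13-20 km per charge, consistent with its role as a locally deployed, man-portable robot rather than a long-range vehicle.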
2.2 Mini-AUV
Until recently, AUVs have been used for a limited number of tasks dictated by the technology available. With the development of more advanced capabilities and high-yield power supplies, AUVs are now being used for more and more tasks, with roles and missions constantly evolving. As AUV technology has developed, its application areas have gradually expanded. Its main applications include the following fields:

4.3 Oil and gas industry: ocean survey and resource assessment; construction and maintenance of undersea structures. The Gavia is a commercial AUV known globally for its performance and adaptability, well suited to survey work as well as oil-rig maintenance. The oil and gas industry uses AUVs to make detailed maps of the seafloor before building subsea infrastructure; pipelines and subsea completions can then be installed in the most cost-effective manner with minimum disruption to the environment. AUVs allow survey companies to conduct precise surveys of areas where traditional bathymetric surveys would be less effective or too costly. Post-lay pipe surveys are also now possible.
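The cost-effectiveness argument above can be made concrete with a rough coverage-rate estimate for a lawnmower-pattern sonar survey. The swath width, vehicle speed, and line overlap below are assumed illustrative values, not figures from the paper:

```python
# Rough area coverage rate for a lawnmower-pattern AUV sonar survey.

def coverage_km2_per_hour(swath_width_m: float, speed_m_s: float,
                          overlap: float = 0.1) -> float:
    """Area surveyed per hour when adjacent track lines overlap by
    the given fraction (overlap ensures no gaps between swaths)."""
    effective_swath_m = swath_width_m * (1.0 - overlap)
    return effective_swath_m * speed_m_s * 3600 / 1e6

# e.g. a 100 m sonar swath at 1.5 m/s with 10% line overlap:
print(f"{coverage_km2_per_hour(100.0, 1.5):.2f} km^2 per hour")
```

Estimates like this are what let survey companies price an AUV campaign against ship-based bathymetry for a given seafloor area.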
4.5 Hobby:

4.6 Research:

V. RECENT TRENDS
6.1 Next-Generation Power System

Technology for a new type of fuel-cell battery has been developed to allow long-range navigation in the deep sea without a fuel supply. In 2010, prototype tests were run on a small, high-efficiency three-less (HETL) fuel cell (JAMSTEC, 2011).

Acoustic sonar systems to survey the topography and geology of the ocean floor have also been developed. In 2010, the Goseikaiko sonar system was installed on the URASHIMA, and performance improvement tests were conducted at sea (JAMSTEC, 2011).
Fig. 6.5: Prototype of Compact-Size Inertial Navigation System

VIII. REFERENCES

4. JAMSTEC, 2011, Future AUV System, Technological Development, Marine Technology and Engineering Center.