
Candidate’s Declaration

I hereby declare that the work presented in the Seminar Report entitled
“Artificial Passenger”, in partial fulfillment for the award of the Degree of “Bachelor of
Technology” in the Department of Electronics and Communication Engineering, submitted to the
Department of Electronics and Communication Engineering, Arya College of
Engineering & I.T., Rajasthan Technical University, is a record of my own investigations
carried out under the guidance of Er. Rohitash Singh Chouhan (Assistant Professor), Department
of Electronics and Communication Engineering, Arya College of Engineering & I.T.
I have not submitted the matter presented in this report anywhere else for the award of any other
degree.

Gunjan Goyal (15EAREC048)


B.Tech (ECE)
ACEIT

Counter Signed by:

Er. Rohitash Singh Chouhan
Assistant Professor

Dr. Rahul Srivastava
Head of Department (ECE)
Arya College of Engineering & I.T.

ACKNOWLEDGEMENT
It is a matter of great pleasure for me to express my profound feeling of reverence to my
worthy teachers for their inspiring guidance and everlasting enthusiasm, which have been
valuable assets throughout the course of this work.

The bliss that accompanies the successful completion of any task would not be complete
without an expression of gratitude. So, with great honor, I acknowledge all the guidance
and encouragement that made the successful completion of this journey possible.

It gives me immense pleasure to acknowledge the help I received while preparing this seminar
report on “Artificial Passenger”.

With utmost gratitude, I express my sincere thanks to Prof. (Dr.) Rahul Srivastava, HOD,
Department of Electronics and Communication Engineering, for helping me in the successful
completion of this report.

I would also like to pay a note of gratitude to my parents for their blessings and support.

GUNJAN GOYAL
B.TECH, IV YEAR [VIII SEM.]
ELECTRONICS AND COMMUNICATION

TABLE OF CONTENTS

CONTENT PAGE NO.

Candidate’s Declaration ii

Acknowledgement iii

Table of Contents iv

List of Figures vii

Abstract viii

Chapter-1 INTRODUCTION 1

1.1 Artificial Passenger 1


1.2 Overview 2

Chapter-2 COMPONENTS 3

2.1 Eye Tracker 5

2.1.1 Hardware: Head Mounted System 5

2.1.2 Software 5

2.2 Alarm 6

2.3 Natural Language Processor 6

2.4 Driver Analyzer 6

2.5 Conversational Planner 7

2.6 Automatic Speech Recognition 7

2.7 Microphone 7

2.8 Camera 7

Chapter-3 APPLICATIONS 8

3.1 Applications of Artificial Passenger 8

3.2 Future Application 9

Chapter-4 FUNCTIONS OF ARTIFICIAL PASSENGER 10

4.1 Voice Control Analyzer 10

4.2 Embedded Speech Recognition 12

4.3 Driver Drowsiness Prevention 14

4.4 Workload Manager 16

4.5 Privacy and Social Aspects 18

4.6 Distributive User Interface 19

Chapter-5 WORKING OF ARTIFICIAL PASSENGER 21

5.1 Tracking of Device 21

5.2 Algorithm for monitoring head/eye motion 21

5.3 Method for Detecting Driver Vigilance 23

Chapter-6 FEATURES OF ARTIFICIAL PASSENGER 27

6.1 Conversational Telematics 27

6.2 Improving Speech Recognition 27

6.3 Analyzing Data 28

6.4 Sharing Data 28

6.5 Retrieving Data on Demand 30

CONCLUSION 31

REFERENCES 32

LIST OF FIGURES

S.No. Figure No. Title Pg. No.

1. 2.1 Architecture of Artificial Passenger 04

2. 2.2 Eye Tracker 05

3. 2.3 Monitoring System 05

4. 2.4 Alarm System 06

5. 4.1 Embedded Speech Recognition 13

6. 4.2 Condition Sensor Device 15

7. 4.3 Mobile Indicator Device 17

8. 5.1 Representation of Working 22

9. 5.2 Working of Artificial Passenger 26

ABSTRACT

An artificial passenger (AP) is a device that would be used in a motor vehicle to make sure
that the driver stays awake. IBM has developed a prototype that holds a conversation with the
driver, telling jokes and asking questions intended to determine whether the driver can
respond alertly enough. In the IBM approach, an artificial passenger would use a
microphone for the driver, a speech generator, and the vehicle's audio speakers to
converse with the driver. The conversation would be based on a personalized profile of the
driver. A camera could be used to evaluate the driver's "facial state", and a voice analyzer to
judge whether the driver was becoming drowsy. If the driver seemed to display too much
fatigue, the artificial passenger might be programmed to open all the windows, sound a
buzzer, increase the background music volume, or even spray the driver with ice water. One
way to address driver safety concerns is to develop an efficient system that relies on
voice instead of hands to control Telematics devices.
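
As a purely illustrative sketch of the fatigue-response behavior described above, the decision
logic might be organized roughly as follows. The sensor scores, thresholds, and countermeasure
methods are hypothetical assumptions for this sketch, not part of IBM's prototype.

# Illustrative sketch only: hypothetical fatigue-response logic.
# Sensor scores, thresholds, and countermeasures are assumptions.

class CarControls:
    """Hypothetical stand-in for the vehicle's actuators."""
    def increase_music_volume(self): print("Raising music volume")
    def sound_buzzer(self): print("Sounding buzzer")
    def open_all_windows(self): print("Opening all windows")
    def spray_ice_water(self): print("Spraying ice water")

def estimate_fatigue(facial_state: float, voice_alertness: float) -> float:
    """Combine camera ('facial state') and voice-analyzer scores, each in 0..1."""
    return 0.6 * facial_state + 0.4 * (1.0 - voice_alertness)

def respond_to_fatigue(fatigue: float, car: CarControls) -> None:
    """Escalate countermeasures as the estimated fatigue level rises."""
    if fatigue < 0.4:
        return                      # driver appears alert; keep conversing
    car.increase_music_volume()     # mild countermeasure first
    if fatigue >= 0.6:
        car.sound_buzzer()
        car.open_all_windows()
    if fatigue >= 0.8:
        car.spray_ice_water()       # last-resort measure mentioned above

respond_to_fatigue(estimate_fatigue(facial_state=0.9, voice_alertness=0.2), CarControls())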
One way to reduce a driver's cognitive workload is to allow the driver to speak
naturally when interacting with the car system (e.g. when playing voice games or issuing
commands via voice). It is difficult for a driver to remember a rigid syntax (such as "What is the
distance to JFK?", "How far is JFK?", or "How long to drive to JFK?"). This fact led to the
development of Conversational Interactivity for Telematics (CIT) speech systems at IBM
Research. CIT speech systems can significantly improve the driver-vehicle relationship and
contribute to driving safety. However, the development of full-fledged Natural Language
Understanding (NLU) for CIT is a difficult problem that typically requires significant
computing resources, which are usually not available in the local processors that car
manufacturers provide in their cars. To address this, NLU components should either be located
on a server that cars access remotely, or be downsized to run on local
computing devices (typically based on embedded chips). Some car manufacturers see
advantages in running upgraded NLU and speech processing on the client in the car, since
remote connections to servers are not available everywhere, can introduce delays, and are not
robust.
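
The server-versus-embedded trade-off above can be pictured with a small, hypothetical sketch:
a remote NLU service is preferred when reachable, with a downsized on-board recognizer as the
fallback. All class and method names here are invented for illustration and do not describe a
real IBM or vehicle API.

# Hypothetical sketch of the deployment trade-off discussed above.

class RemoteNLU:
    """Stand-in for a full NLU service on a remote server."""
    def understand(self, utterance: str) -> dict:
        # A real system would send the utterance over a wireless link.
        raise ConnectionError("no network coverage")  # simulate a dropout

class EmbeddedNLU:
    """Stand-in for a downsized recognizer on the in-car embedded chip."""
    def understand(self, utterance: str) -> dict:
        if "JFK" in utterance:
            return {"intent": "distance_query", "destination": "JFK"}
        return {"intent": "unknown"}

def interpret(utterance: str, remote: RemoteNLU, local: EmbeddedNLU) -> dict:
    """Prefer the richer server-side NLU; fall back to the local recognizer."""
    try:
        return remote.understand(utterance)
    except (ConnectionError, TimeoutError):
        return local.understand(utterance)

print(interpret("How far is JFK?", RemoteNLU(), EmbeddedNLU()))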

