
Volume 3, No. 4, November 2013. AUVSI, 2700 S. Quincy St., Suite 400, Arlington, VA 22206, USA

Perception and Cognition

Inside This Issue:

Getting a Feel for Haptics
Can You Pass the Turing Test?
Robots That Sense More Like Humans

MARK YOUR CALENDARS

CONFERENCE 12-15 MAY | TRADE SHOW 13-15 MAY


ORANGE COUNTY CONVENTION CENTER | ORLANDO, FLA. | USA

WE'RE MOVING TO MAY

AUVSISHOW.ORG

CONTENTS

On the Cover
6 Brain in a Box: European Project Kicks Off With Ambitious Goal: Understand and Replicate the Human Brain

4 Essential Components: Perception and Cognition News
10 State of the Art: Thinking About Perception and Cognition Around the World
12 Do You Feel Like I Do? Robots Leverage Haptics for a More Human Touch
16 Technology Gap: From the Mouths of Bots: Natural Language Learning in AI
19 Q&A: Katsu Yamane, Disney Research
20 Timeline: Artificial Intelligence: A Timeline
22 Uncanny Valley: The Turing Test
24 Testing, Testing: Getting Robots to Perceive More Like Humans
26 Spotlight: How Susceptible are Jobs to Automation?
29 End Users: Ghostwriter: Algorithms Write Books

On the Cover: Kinova Research's Jaco robotic arm leverages haptic technology to grasp objects as delicate as an egg. Photo courtesy Kinova Research. Page 12.

Mission Critical is published four times a year as an official publication of the Association for Unmanned Vehicle Systems International. Contents of the articles are the sole opinions of the authors and do not necessarily express the policies or opinion of the publisher, editor, AUVSI or any entity of the U.S. government. Materials may not be reproduced without written permission. All advertising will be subject to publisher's approval, and advertisers will agree to indemnify and relieve publisher of loss or claims resulting from advertising contents. Annual subscription and back issue/reprint requests may be addressed to AUVSI.


Editor's Message

Editorial
Vice President of Communications and Publications, Editor: Brett Davis, bdavis@auvsi.org
Managing Editor: Danielle Lucey, dlucey@auvsi.org

How Can We Have Robots Think Like Us? And Do We Really Want Them To?

Contributing Writers: Rich Tuttle, Ashley Addington

Brett Davis

This issue of Mission Critical tackles some big issues in the world of robotics: How do robots and unmanned systems perceive the world around them? And, having perceived it, how do they respond?

If you take a look at the Timeline story beginning on Page 20, you can see that the modern idea of artificial intelligence has taken several twists and turns since researchers first began thinking about how machines think. We spend a little time in this issue talking about the Turing Test, posited by Alan Turing, which holds that if a machine can fool a human into thinking it's not a machine, then it's intelligent. We also spend some time looking at how researchers have moved beyond that idea, instead pushing toward massive data sets that computers can sift through to, say, beat Garry Kasparov at chess or Ken Jennings at Jeopardy! As Ken Ford, CEO of the Institute for Human and Machine Cognition, says on Page 23, people didn't learn how to fly until they quit trying to fly like birds and instead discovered the laws of aerodynamics.

Perception is a key part of this issue. Beginning on Page 12, writer Rich Tuttle takes a look at haptics, or the science of touch, and how it can lead to robots that are better equipped to navigate the world around them. Beginning on Page 24, we also take a look at how knowledge databases can help robots perceive the world around them by giving them an idea of what to expect when they roll into an office or classroom. That idea continues in the Q&A on Page 19, where a Disney research institute uses a similar idea to help robots interact with people.

The idea of replicating the way humans think hasn't gone away, however. It has just gotten more sophisticated. Beginning on Page 6, we take a look at a major new European initiative to replicate a human brain inside a supercomputer and then figure out how it works. That could lead to better ways to fight brain disease as well as the creation of new types of computers and robots that could be more intelligent. The United States also has kicked off a new research project to help understand the brain.

So, in the near term, computers, and by extension some robots, will probably think along the lines of Deep Blue or Watson, the IBM supercomputers that rely on massive databases. In the future, however, they might think more like humans. Rather than being programmed, they can learn, and they can do that more efficiently. Coupled with new sensors, such as fingertips that can actually feel, mobile robots of the future could be useful in ways we can only imagine today.

Advertising
Senior Business Development Manager: Mike Greeson, mgreeson@auvsi.org, +1 571 255 7787

A publication of

President and CEO: Michael Toscano
Executive Vice President: Gretchen West
AUVSI Headquarters: 2700 S. Quincy St., Suite 400, Arlington, VA 22206 USA
+1 703 845 9671, info@auvsi.org, www.auvsi.org


Brain Scans Digitally Remastered in MRI

Virginia Tech's Carilion Research Institute has found a way to make better use of brain scans with the help of computer imaging. Researchers are using real-time functional magnetic resonance imaging, which allows real-time thought to be immediately transformed into action by noninvasively measuring activity in the brain. The hope is that this technology can help treat a variety of brain disorders with simple mind-reading capabilities.

"Our brains control overt actions that allow us to interact directly with our environments, whether by swinging an arm or singing an aria. Covert mental activities, on the other hand, such as visual imagery, inner language or recollections of the past, can't be observed by others and don't necessarily translate into action in the outside world," Stephen LaConte, an assistant professor at the Virginia Tech Carilion Research Institute, said in a press release.

In the study, scientists were able to use the whole brain to observe how subjects think when given a particular command. Subjects who were in more control of their thoughts produced better brain scans than those who simply let their minds wander. "When people undergoing real-time brain scans get feedback on their own brain activity patterns, they can devise ways to exert greater control of their mental processes," said LaConte. "This, in turn, gives them the opportunity to aid in their own healing. We want to use this effect to find better ways to treat brain injuries and psychiatric and neurological disorders."

Stephen LaConte (right) and members of his lab. Photo courtesy Virginia Tech.

The Key to Teaching Computers to See? Thinking Like a Computer

Researchers at the Massachusetts Institute of Technology have discovered that a key to more successful object recognition is to rework recognition algorithms so researchers can view the software from the computer's perspective. The new process translates the objects being processed and identified into a mathematical representation, then translates that representation back into an image a person can inspect.

This helps researchers better understand why recognition software has a success rate of only 30 to 40 percent. With the goal of creating a smaller error margin, computer recognition software can then continue to make breakthroughs in the realm of artificial intelligence.

The main object detection technique studied is called HOG, for histogram of oriented gradients. HOG breaks an image into many small cells and builds, for each cell, a histogram of the orientations of the image gradients inside it; recognition software then labels objects based on those features. "This feature space, HOG, is very complex," says Carl Vondrick, an MIT graduate student in electrical engineering and computer science. "A bunch of researchers sat down and tried to engineer, 'What's the best feature space we can have?' It's very high dimensional. It's almost impossible for a human to comprehend intuitively what's going on. So what we've done is built a way to visualize this space."

MIT students hope that HOG will be a great research tool in better understanding how algorithms and software recognition intertwine and will improve students' research experience.
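The feature extraction Vondrick describes is a standard computer vision technique and is available off the shelf. A minimal sketch using scikit-image's HOG implementation (an illustration of the general method, not the MIT group's research code):

```python
# Compute HOG features for one image: divide the image into small
# cells, histogram the gradient orientations inside each cell, and
# concatenate the histograms into one high-dimensional vector.
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())  # any grayscale image works

features, hog_image = hog(
    image,
    orientations=9,           # gradient-direction bins per cell
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),   # local normalization window
    visualize=True,           # also return a viewable rendering
)

print(features.shape)  # a very high-dimensional vector, as Vondrick notes
```

The MIT visualization work effectively runs this mapping in reverse, turning vectors like `features` back into images so researchers can see what the algorithm sees.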



Essential Components

Center for Brains, Minds and Machines Founded

The National Science Foundation has funded a new research center at the Massachusetts Institute of Technology that will study artificial intelligence. The Center for Brains, Minds and Machines is an interdisciplinary study center that will focus on how the human brain can be replicated in machines. The center will be a multi-institution collaboration, with professors from MIT, Harvard and Cornell, to name a few. The center will also have industry partners, such as Google, IBM, Boston Dynamics, Willow Garage and Rethink Robotics.

Research will focus on how the human body can be further integrated into computers, with topics revolving around vision, language and motor skills; circuits for intelligence; the development of intelligence in children; and social intelligence. "Those thrusts really do fit together, in the sense that they cover what we think are the biggest challenges facing us when we try to develop a computational understanding of what intelligence is all about," says Patrick Winston, the Ford Foundation Professor of Engineering at MIT and research coordinator for CBMM.

With the research interests so closely linked, the likelihood of progress is higher, according to the researchers. All of the senses work together in the body for it to be able to understand and grasp its surroundings. The center idea was launched on MIT's 150th anniversary in 2011, and the funding will be given out over the next five years.

"We know much more than we did before about biological brains and how they produce intelligent behavior. We're now at the point where we can start applying that understanding from neuroscience, cognitive science and computer science to the design of intelligent machines," says Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT.

The center will also play a key role in the new BRAIN Initiative, an effort by federal agencies and private partners to better understand how the brain works. For more on that effort, see the story beginning on Page 6.

Illustration courtesy Christine Daniloff/MIT.

The Human Brain Easily Tricked by Artificial Finger

A recently published study has found that the brain does not need multiple senses to decide whether an artificial finger belongs to the body. The finding is a major breakthrough for neuroscience research. In an experiment conducted by Neuroscience Research Australia, participants held an artificial finger with their left hand; the finger was placed above their right index finger. The participants' hands were anesthetized so they went numb and feeling in the joints was removed, and their vision was blocked. When the participants were allowed to look while both fingers were moved simultaneously, the body instantly believed that the artificial finger was its own.

"Grasping the artificial finger induces a sensation in some subjects that their hands are level with one another, despite being 12 centimeters apart," Prof. Simon Gandevia, deputy director of NeuRA, said in a press release. "This illusion demonstrates that our brain is a thoughtful, yet at times gullible, decision maker. It uses available sensory information and memories of past experiences to decide what scenario is most likely."

The finding gives a brand new understanding of how the brain identifies its body. Unlike past experiments that focused on the brain's association with the main five senses, this experiment proved that muscle receptors are a key component in communication with the brain.


An image of a neuron cluster, which the Human Brain Project hopes to understand and replicate. Image courtesy HBP.

BRAIN IN A BOX
European Project Kicks Off With Ambitious Goal: Understand and Replicate the Human Brain
By Brett Davis


A new European Commission initiative that kicked off in October seeks to unravel one of the greatest challenges facing science: simulating the human brain in order to understand how it works and replicate it. The results could help develop new computing technologies that would finally allow computers and robotic systems to have brain-like intelligence, meaning they could learn and think much the way we do.

"What we are proposing is to establish a radically new foundation to explore and understand the brain, its diseases and to use that knowledge to build new computer technologies," says Henry Markram, a professor at École Polytechnique Fédérale de Lausanne and coordinator of the project, in a project video.

The Human Brain Project is part of the EC's Future and Emerging Technologies initiative. It will involve thousands of researchers from more than 130 research institutes and universities around the world, and it's intended to be a decade-long effort. The ramp-up phase, which just kicked off in October and runs through March 2016, has been funded at 54 million euros; the overall effort has been earmarked about 1 billion euros by the European Commission.

Put simply, the main goal is to create an artificial replica of a human brain in a supercomputer. The project consists of three broad areas: neuroscience, aimed at understanding how the brain works; medicine, aimed at battling the diseases that can affect it; and computing, aimed at creating electronic versions of brain processes. "It's an infrastructure to be able to build and simulate the human brain, objectively classify brain diseases and build radically new computing devices," Markram says.

The human brain is able to perform computations that modern computers still can't, all while consuming the same energy as a light bulb. "One of the Human Brain Project's most important goals is to develop a completely new category of neuromorphic computing systems," says a project video. "Chips, devices and systems directly inspired by detailed models of the human brain." As Karlheinz Meier, codirector of the project's neuromorphic computing effort, puts it in one project video, "What we build is physical models of human circuits on silicon substrates." Such systems "will transform industry, transportation systems, health care and our daily lives," the video says.

The project kicked off at a conference from 6 to 11 Oct. at the campus of Switzerland's EPFL. The plan is to launch six research platforms and test them for the next 30 months. The platforms will be dedicated to neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing and neurorobotics. Beginning in 2016, the platforms are to be available for use by both Human Brain Project scientists and other researchers around the world. The resources will be available on a competitive basis, similar to the way astronomers compete to use large telescopes.

13 Areas
The three main focus areas are further divided into 13 sub-areas, which include neuromorphic computing and neurorobotics. The computing effort will develop the Neuromorphic Computing Platform, a supercomputer system that will run brain model emulations. The system will consist of two computing systems, one in Heidelberg, Germany, and one in Manchester, England. Platform users "will be able to study network implementations of their choice, including simplified versions of brain models developed on the Brain Simulation Platform or generic circuit models based on theoretical work," says the project's website.

The neurorobotics effort is led by the Technische Universität München, or Technical University of Munich, and EPFL, along with Spain's University of Granada. It will provide a platform for taking brain models and plugging them into a high-fidelity simulator that includes a simulated robot. Researchers can take behaviors based on the human brain model, apply them to the robot, and see if they work and what happens in the brain model when they are carried out. "We consider this a starting point for a completely new development in robotics," says Alois Knoll of Munich. "It will be much more powerful than anything we have had before in robotics simulation."

The computing part of the project won't try to develop classical artificial intelligence. "The challenge in artificial intelligence is to design algorithms that can produce intelligent behavior and to use them to build intelligent machines," the project website says.

"It doesn't matter if the algorithms are realistic in a biological sense, as long as they work." The brain project, however, wants to create processors that actually work like the human brain. "We will develop brain models with learning rules that are as close as possible to the actual rules used by the brain and couple our models to virtual robots that interact with virtual environments. In other words, our models will learn the same way the brain learns. Our hope is that they will develop the same kind of intelligent behavior," the project says on its website. "We know that the brain's strategy works. So we expect that a model based on the same strategy will be much more powerful than anything AI has produced with invented algorithms."

The resulting computer systems would be different from today's computers in that they won't need to be programmed, but instead can learn. Where current computers use stored programs and storage areas that contain precise representations of specific bits of information, the system the project hopes to create will rely on artificial neurons modeled after human ones, with all their built-in capabilities and weaknesses. "Their individual processing elements, artificial neurons, will be far simpler and faster than the processors we find in current computers. But like neurons in the brain, they will also be far less accurate and reliable. So the HBP will develop new techniques of stochastic computing that turn this apparent weakness into a strength, making it possible to build very fast computers with very low power consumption, even with components that are individually unreliable and only moderately precise," the website says.

Ultimately, such systems could be available for daily use, according to project researchers. They could be standalone computers, integrated into other systems, even as brains for robots.
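The neuron-like processing element the website describes trades precision for speed and efficiency. A toy leaky integrate-and-fire unit gives the flavor; this sketch is purely illustrative and has no connection to the HBP's actual designs:

```python
import random

def simulate_lif(inputs, leak=0.9, threshold=1.0, noise=0.05):
    """Toy leaky integrate-and-fire neuron: inputs accumulate, the
    state leaks away over time, and a spike fires at threshold."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        # Noise stands in for the unreliable components the HBP
        # plans to exploit rather than eliminate.
        v = leak * v + current + random.gauss(0.0, noise)
        if v >= threshold:
            spikes.append(t)  # emit a spike ...
            v = 0.0           # ... and reset the membrane potential
    return spikes

print(simulate_lif([0.3] * 50))  # steady input produces regular spiking
```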

An image of the brain, still poorly understood. Image courtesy HBP.

Competition

European researchers aren't the only ones interested in delving into the mysteries of the human brain. Earlier this year, the White House announced a somewhat similar effort, the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative, which also seeks to replicate brain structures and functions. Such cutting-edge capabilities, applied to both simple and complex systems, "will open new doors to understanding how brain function is linked to human behavior and learning, and the mechanisms of brain disease," says the White House news blog.

The effort is launching with more than $100 million in funding for research supported by the National Institutes of Health, DARPA and the National Science Foundation. Foundations and private research institutions are also taking part, including the Allen Institute for Brain Science, which plans to spend about $60 million a year on projects related to the initiative, and the Kavli Foundation, which plans to spend $4 million a year over the next decade, according to the White House.

NIH has announced the initial nine areas of its research, which will be funded at $40 million in 2014. They include generating a census of brain cell types, creating structural maps of the brain and developing large-scale neural network recording capabilities. DARPA plans to allocate $50 million to the work in 2014, mainly with an eye toward creating new information processing systems and mechanisms that could help warfighters suffering from post-traumatic stress, brain injury and memory loss. The NSF plans to spend $20 million to work toward molecular-scale probes that can sense and record the activity of neural networks; help make advances in systems to analyze the huge amounts of data that brain research can create; and understand how thoughts, emotions, memories and actions are represented in the brain.

The Allen Institute, founded in 2003 by Microsoft cofounder Paul Allen, has launched a 10-year initiative to understand neural coding, or a study of how information is coded and decoded in the mammalian brain, according to the White House. It is also a formal partner in the European Human Brain Project.

Brett Davis is editor of Mission Critical.

An infographic of the BRAIN Initiative. Image courtesy the White House.


State of the Art

THINKING ABOUT
PERCEPTION AND COGNITION

AROUND THE WORLD

Researchers around the world are finding ways to create machines that can better sense their environment and react to the people around them, ranging from robotic harp seals that aid elderly patients to golf courses that know when they need to be watered.

UNITED KINGDOM
Robotic seals have been shown to help increase the quality of life and cognitive activity of patients with dementia in a new U.K. clinical study. Paro, a robotic harp seal made in Japan, interacts with patients with artificial intelligence software and sensors that allow it to move, respond to touch and sound and display different emotions.

SPAIN, PORTUGAL
A new, smarter way to water golf courses has arisen in Spain and Portugal. The EU WaterGolf project intends to save water and find a smarter way to keep the playing greens green. New wireless technology laced throughout a course will suggest parameters of irrigation with 3-D mapping, drainage and weather forecasts.

CALIFORNIA
Google researchers have made a breakthrough in recognition software for mobile and desktop computers. The machine vision technique can recognize more than 100,000 different types of objects in photos in a matter of minutes.

ITALY
Smart homes are now becoming realities with new technologies capable of making everyday tasks new and futuristic. Companies like Italy's Hi Interiors are creating concepts to wildly change every aspect of a home, such as the HiCan, or high-fidelity canopy, bed that is built with portable blinds, Wi-Fi, an entertainment system and a projector screen that emerges from the foot of the bed.

EVERYWHERE
Facebook has set up a team to develop applications based on deep learning, the same technique Google's software uses, to improve the news feeds of its users by giving them more relevant content and better-targeted advertising.



GERMANY
Bielefeld University has begun to analyze the body language of customers in bars with its new robotic bartender, James, short for Joint Action in Multimodal Embodied Systems. James is capable of making eye contact with customers and receiving drink orders as well as delivering them with his one arm and four fingers. No word yet on whether James is a good listener or if he will cut off customers who have had too many.

HUNGARY
The Hungarian Academy of Science and Eötvös Loránd University have found that dogs interact better with robots when the robots are socially active towards them. PeopleBot, a human-sized robot, got along better with canines when it behaved the way a human would behave.

ISRAEL
Scientists who work for the Centers for Disease Control and Prevention are finding ways to use artificial intelligence to help prevent the next global flu outbreak. A group of artificial intelligence researchers, including some from Tel Aviv University, is composing algorithms based on past outbreak data to recognize key properties of dangerous new flu strains.

JAPAN
Epsilon, a Japanese rocket that relies heavily on artificial intelligence for its final safety checks before takeoff, recently performed those checks and launched into space. The new software allowed the rocket to take off with only eight people at the launch site instead of the usual 150.

INDIA
IPsoft has created a humanoid robot capable of answering 67,000 phone calls and 100,000 emails every day. She handles the office's dirty work, is capable of solving IT diagnostic problems and can be seen and interacted with on a customer's computer screen.


Do You Feel Like I Do?

Robots Leverage Haptics for a More Human Touch

By Rich Tuttle

Haptics and robots were made for each other. Haptics, the science of touch, allows robots to feel as well as see, making them more effective at jobs they do today, like some kinds of surgery, and potentially able to do things they don't do today, like aerial refueling.

Kinova Research's Jaco robotic arm, which the company is fitting to wheelchairs to assist those with mobility problems. Photo courtesy the company.

"To control a remote system, whether it's in space or just next door, you'd like to be able to interact with things in the same way you interact with things in the real world, so in that sense haptics is a very well suited technology for robotic telemanipulation," says Jason Wheeler, head of Sandia National Laboratories' Cybernetics group.

One way to interact is with tactile sensors. They feel what a robot finger, for instance, is touching and prompt the robot, or its human operator, to react accordingly. But such sensors either haven't existed until recently or have been too costly to be embedded in robots that researchers have been working with so far, says Mark Claffee, principal robotics engineer at iRobot. Now, with sensor technology advancing, and with computers becoming more capable and cheaper, iRobot and others are closing in on robots that, at least on some level, understand how to adjust themselves to better interact with objects, Claffee says. "And they're going to do that through tactile sensing and through haptic-type feedback, either to themselves or to a human operator."
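In practice, adjusting a grip from tactile feedback is a feedback loop between skin and actuator. A minimal sketch, with hypothetical sensor and motor callables standing in for real hardware (no particular robot's API is implied):

```python
def adjust_grip(read_contact_force, set_grip_effort,
                target=2.0, tolerance=0.2, step=0.05, max_steps=200):
    """Tighten or relax the grip until fingertip force is near target."""
    effort = 0.0
    for _ in range(max_steps):
        force = read_contact_force()          # newtons, from tactile skin
        error = target - force
        if abs(error) <= tolerance:
            break                             # firm but gentle grasp reached
        effort += step if error > 0 else -step
        effort = max(0.0, min(1.0, effort))   # clamp to the valid range
        set_grip_effort(effort)               # command the hand
    return effort
```

An egg would get a low target force and a hammer a high one; the same loop serves both.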

Researching Haptics

Canada's Kinova Research is linking tactile sensors and robotics in another way, to help those in wheelchairs. Its robotic manipulator arms, fixed to wheelchairs, increase the mobility of people with upper spinal cord injuries, for example. Fitted with new tactile sensors, high-tech manipulators like Kinova's Jaco and Mico will be even more effective, says Francis Boucher, chief business development officer of the Montreal company. The sensor is the product of work by Kinova and the University of Quebec's École de technologie supérieure. Prof. Vincent Duchaine of ETS says it's "probably the most sensitive tactile sensor. It can feel everything from a gentle breath to very high forces, so it has a very wide range."

Carnegie Mellon University in Pittsburgh is using Kinova's Mico in a program for several national agencies called Smart and Connected Health. The idea, according to a recent government announcement, is to spur next-generation health and healthcare research through high-risk, high-reward advances in the understanding of and applications in information science, technology, behavior, cognition, sensors, robotics, bioimaging and engineering. "Mico is a beautiful piece of technology," says Sidd Srinivasa, associate professor of robotics and director of the Personal Robotics Lab at Carnegie Mellon. "We're going to be building cutting-edge technology for this robot arm that will hopefully very soon reach real people who need this." He also recognizes that while humans take for granted their ability to touch and feel, it's incredibly hard for robots, because they don't have anywhere close to the resolution or fidelity of haptic sensing that humans do.

The interaction between mind and fingertips in a human is a wonderful and, at present, not-duplicable feat, says Frank Tobe of The Robot Report. Henrik I. Christensen, KUKA Chair of Robotics at Georgia Tech, illustrated this interaction in an experiment. He showed that it takes a person about five seconds to strike a match. But with fingertip anesthesia, it takes about 25 seconds and a lot of fumbling. "There's no haptic feedback," Christensen said in a recent TED presentation. "You have your regular muscle control and everything else. I've just taken away your fingertip feeling. We're incredibly [reliant] on this fingertip sensing."

But Srinivasa says human-like fingertip feeling for robots isn't likely to be developed soon. This means that while it's important to continue to develop better haptic technology, it's also important to come up with ways to more effectively compensate for robots' relative lack of touch. Pressure sensors, for instance, could help fill the gap. Srinivasa hypothesizes that humans use this very technique. He says we infer forces through deformation. In other words, "when I fold a piece of paper, if I press too hard, it deforms more than if I press less, and that perceptual channel, deformation channel, is acting as a proxy for the haptic channel. It's the same thing when you're operating on squishy stuff. If the stuff squishes, then you know that you're exerting some amount of force." He says humans "are really good at compensating for missing channels with other channels," and the lab is trying to get robots to do the same thing.

Tactile sensors are one way to close the haptic loop. Another is to put sensors on a robot's human operator. Cambridge Research and Development of Boston has developed a linear actuator called Neo that's about the size of a watch and that a person can wear on a headband or armband. There's no vibration or force feedback in remote surgery or other uses, and adaptation takes mere minutes as the brain rapidly associates the Neo's pressure application with the sense of touch, according to the company. Cambridge CEO Ken Steinberg says surgeons and others can "operate [a] robot freely and they can feel whatever the robot's feeling." Steinberg sees all kinds of applications, including aerial refueling. Today, he says, a refueling boom is guided visually by an operator from a tanker to the plane being refueled. With Cambridge's technology, a boom operator would have "feeler gauges" to make a more deft contact with the other plane. "It's not purely a visual experience," he says. "It's also a haptic feedback experience." The same technique might be even more appealing if both the tanker and the plane being refueled were robots, Steinberg says. The operator in that case could be on the ground.

Haptic Challenge

One big program taking advantage of such technology is DARPA's Robotics Challenge, or DRC, which aims to develop robots that can help victims of natural or man-made disasters. Among other things, the DRC robots will have to be dexterous, which implies an ability to feel. iRobot and Sandia have supplied robotic hands to teams involved in the DRC. The teams are also getting Atlas robots from Boston Dynamics. The hands and other systems will compete on these robots in a series of tasks in December at the Homestead-Miami Speedway. Other teams that have been developing robots from scratch also will compete.

iRobot's hand has three fingers and Sandia's has four, but both feature a skin with embedded tactile sensors. Sandia's has fingerprints to help in gripping, and fingernails; "that's how you can pick up a flat key from a surface," says George "Sandy" Sanzero, manager of Sandia's Intelligent Systems, Robotics and Cybernetics Department. iRobot and Sandia developed the hands for another DARPA program, Autonomous Robot Manipulation-Hardware, or ARM-H. When that program ended a couple of years ago, DARPA decided to use these hands for the DRC, says Sandia's Wheeler.

iRobot's Claffee says hands are "the critical interface between the robot system and the world around it." He says a robot in the competition typically won't relay information to a human operator, but rather understand where it has touched an object and how hard it is touching it at a specific point on the hand, and use its own software to understand what the right grasping strategy is. That would be too much information to try to relay to a human operator, he says. "Our vision in terms of haptics and tactile sensing is let the robots understand the sensory input that's coming in to them and make decisions for themselves on how to adjust their grasp, or how to move their fingers to get a more stable grasp on the object."

Rich Tuttle is a longtime aerospace and defense journalist and contributor to AUVSI's Mission Critical and Unmanned Systems magazines.

The back of DARPA's Atlas robot. The DRC will draw on haptics research to help develop robots that could aid in the wake of disasters. Photo courtesy DARPA.


AUVSI's quarterly publication that highlights special topics in the unmanned systems industry is now in print as a double issue on the back cover of Unmanned Systems.

Each is an in-depth focus on one particular issue with information on the defense, civil and commercial applications of the technology as well as new developments and what the future may hold.

Upcoming Issues:
Automated Vehicles - February 2014 Edition
Advertising deadline: 2 Jan.

Agriculture - May 2014 Edition
Advertising deadline: 25 March

Public Safety - August 2014 Edition
Advertising deadline: 25 June

Commercial UAS - November 2014 Edition
Advertising deadline: 25 Sept.
If your company has technology in any of these arenas, then you can't afford to miss this opportunity. To book your advertising space today, contact Ken Burris at +1 571 482 3204 or kburris@auvsi.org.

Technology Gap

From the Mouths of Bots

Natural Language Learning in AI


IBM's Watson can beat out any human competitor on Jeopardy! It can mine patient data to help doctors make more accurate diagnoses. It can even analyze thousands of pages of financial data published every day to make more informed investment choices. But does Watson need a teenager to help translate phrases like OMG?

Natural language learning is a big challenge in artificial intelligence, but it has seen some success with applications like speech-to-text typing programs. Researchers at IBM wanted to take that a step further with Watson, introducing it to slang phrases. And the initial results weren't quite what researchers bargained for.

Eric Brown, the researcher in charge of the project, introduced Watson to Urban Dictionary, a wiki-style online lexicon of common, and oftentimes not so common, vernacular inputted by visitors to the site. The intention was to make Watson understand that OMG meant "Oh, my God," and that "hot mess" doesn't mean there was an accident in the kitchen.

The actual result was that Watson accidentally picked up a potty mouth. Watson couldn't distinguish between polite language and profanity, which the Urban Dictionary is full of, said Brown in an interview with Fortune magazine. Watson picked up some bad habits from reading Wikipedia as well. In tests, it even used the word "bulls---" in an answer to a researcher's query. After that incident, the 35-person team working on the project had to construct a filter to wash Watson's proverbial mouth out with soap. They also scrapped Urban Dictionary from its memory entirely.

Natural Language at the Press of a Button

Arguably the natural language computing most familiar to technology consumers today is Apple's Siri personal assistant. Siri originated in an artificial intelligence project by SRI International, developed for DARPA, that Apple bought in 2010 for $200 million. The actual voice of Siri was recently revealed to be Susan Bennett, a voice-over actress living in Atlanta, Ga. Apple has not confirmed this; however, audio forensics experts have verified the voice is hers. For four hours a day in July 2005, she read phrases documenting the English language's myriad vowel sounds, consonants, blends, diphthongs and glottal stops.

However, simply recording a person's voice and playing it back is not how Siri works. The first step is understanding the user's command. When you press the home button on an iPhone and speak, your voice gets digitally coded and the signal gets relayed through a cell tower to a cloud server that has speech models ready to analyze the noise. The phone also performs a local speech evaluation to identify whether the command can be handled by the phone, like cueing up a song stored on the device, and if that's the case, it informs the cloud-bound signal it is no longer needed. Otherwise, a server compares the sounds in the speech pattern to its series of known human language sounds, then runs the result through a language model to estimate words. It then determines what the most probable commands might mean.

The second step, Siri's response, comes in the form of computer-generated speech, which leverages much of the same knowledge as analyzing the speech. And although this process can sound rather dry, like Watson, Siri is also prone to interesting replies. "There were many conversations within the team about whether it should have an attitude," says Norman Winarsky, vice president of SRI International, talking about the pre-Apple work on Siri to The Wall Street Journal. And sometimes this sass is intended to make those hip to artificial intelligence smirk. For instance, if a user tells Siri, "Open the pod bay doors," a reference to a command to the malcontent sentient computer HAL 9000 of 2001: A Space Odyssey, the program will respond, "We intelligent agents will never live that down, apparently."
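A toy version of the decoding step described above: the acoustic stage proposes candidate word sequences, and a language model picks the most probable one. The vocabulary and probabilities below are invented for illustration and have nothing to do with Apple's proprietary models:

```python
import math

# Bigram log-probabilities; "<s>" marks the start of an utterance.
LM = {
    ("<s>", "play"):   math.log(0.10),
    ("play", "music"): math.log(0.20),
    ("play", "fusic"): math.log(1e-6),  # sounds similar, linguistically unlikely
}

def score(words, lm=LM, floor=math.log(1e-9)):
    """Log-probability of a word sequence under the bigram model."""
    return sum(lm.get(pair, floor) for pair in zip(["<s>"] + words, words))

candidates = [["play", "music"], ["play", "fusic"]]  # from the acoustic model
print(max(candidates, key=score))  # ['play', 'music']
```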

Watson loaded onto a smartphone. Photo courtesy Jon Simon/Feature Photo Service for IBM.


ACCESS the most cost-effective, comprehensive and searchable unmanned systems and robotics directory in the industry

SEARCH more than 30 variables with 100,000 data points with the confidence that records are updated with the most current data

PROFIT from unparalleled access to data spanning academic, civil, commercial and military markets, including prototypes and full production systems

CONNECT your company with the competition, customers with their products, manufacturers with developers and researchers with data

100,000 DATA POINTS | 3,800 PLATFORMS | 1,200 COMPANIES

SEE US IN BOOTH 4005 FOR AN INTERACTIVE DEMO AND AN INTRODUCTORY PRICE FOR AUVSI'S UNMANNED SYSTEMS 2013 ATTENDEES!

ROBOTDIRECTORY.AUVSI.ORG

WEBINAR series
UNMANNED IN THE ARCTIC
WHEN: 13 November, 3:00-4:00 p.m. EDT (U.S. and Canada)
SPEAKER: Greg Walker, director, Alaska Center for Unmanned Aircraft Systems Integration
Unmanned systems are proving to be invaluable tools for Arctic monitoring and research. From wildlife tracking to oil spill remediation, the systems are quickly becoming a staple in a region of the world where humans rarely venture. AUVSI's Unmanned in the Arctic webinar will feature a presentation from Greg Walker discussing the University of Alaska Fairbanks' recent involvement in Arctic Shield 2013 and more.

RESERVE YOUR SPOT FOR AUVSI'S NOVEMBER WEBINAR TODAY!


AUVSI Members: FREE | Nonmembers: $39

FOR REGISTRATION AND SPONSORSHIP INFORMATION VISIT WWW.AUVSI.ORG/WEBINAR

www.auvsi.org

Q&A

Katsu Yamane

Senior Research Scientist at Disney Research

Katsu Yamane is senior research scientist at Disney Research in Pittsburgh, Pa., where he has worked since 2008. His main area of research is humanoid robot control and motion synthesis. In a recent study, he worked on ways for robots to perceive that humans are handing them objects.
A test subject interacts with a robot that can receive an object handed to it. Image courtesy Disney Research.

Q: What problems does this research help solve?
A: This research helps robots interact physically with humans naturally. Motion-planning algorithms are becoming quite powerful, but they still have to spend a long time to generate robot motions compared to normal human reaction time. Robots would have to react to human motions much more quickly to make interactions natural.

Q: By using a database of human motions, do you sidestep the need for greater perception or intelligence on the part of the robot?
A: Yes, that's exactly the idea of this research. Obviously, humans can react to other persons' motions instantly, and humans expect the same speed when they interact with robots. By using a human motion database, we can leverage the human motion planning process as a black box and just use its results.

Q: Is the robot able to learn to accept differently shaped objects, or is this mostly focused on allowing it to know when it is being offered something and to synchronize its motion?
A: We are currently focusing on recognizing the handoff motion and synchronizing the robot arm motion. However, if the different object shapes result in different arm motions, the robot can recognize those shapes. In the future, we could combine this technique with a vision system to recognize the shape of the object being handed.

Q: What is the benefit of teasing the robot as shown in a video?
A: This demo was just to demonstrate that the robot can quickly react, even if the human motion changes abruptly.

Q: Can you describe the hierarchical data structure that you developed? How does that work, and how does it help the robot?
A: The data structure is an extension of the classical binary tree data structure from information theory. Conventionally, this data structure has been used for quickly searching and sorting data that can be easily ordered, such as text. Human motion, on the other hand, is in a very high-dimensional space and therefore not easy to order. We developed a method for organizing human poses into a binary data structure. The data structure allows the robot to search for poses in the database that are similar to what it is seeing right now.

Q: What robot motions and components would you like to add in the future?
A: We would like to add natural hand motions to grab the handed object. We would also like to have the robot do additional tasks after receiving an object, such as putting it into a bag.

Q: Is there a need for robots to have haptic sensors, or would that help?
A: Haptic sensors would certainly help the robot recognize that the object is indeed in the hand.

Q: What other technology advances could aid in this research or in its eventual use by robotics in a variety of roles (in a factory, in a home, etc.)?
A: Computer vision technology to recognize the human motion and object shape would be essential to put this research into practical use, because we can't use motion capture systems in such environments.

Q: Why is Disney Research interested in this work?
A: My expertise is in motion synthesis and control of humanoid robots. We are interested in exploring autonomous, interactive robots and physical human-robot interaction using whole-body motions.

Q: Why is this research important for the future?
A: If robots are to work in factories and homes in the future, interactions with humans must be intuitive, seamless and natural. By learning from human-human interaction, we can model how humans interact, which in turn makes the robot motions and interactions look natural to humans.
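The pose lookup Yamane describes can be approximated with an off-the-shelf spatial index. A minimal sketch using a k-d tree, a relative of the binary structure he mentions (an illustration only, not Disney Research's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Stand-in motion database: 10,000 poses, each a 30-joint angle vector.
pose_db = rng.uniform(-3.14, 3.14, size=(10_000, 30))

tree = cKDTree(pose_db)  # built once, offline

# A noisy observation of the human's current pose.
observed = pose_db[42] + rng.normal(0.0, 0.05, size=30)

dist, idx = tree.query(observed, k=3)  # the 3 most similar stored poses
print(idx)  # indices into the motion database, retrieved in real time
```

The retrieved neighbors point at stored motion clips, which the robot can play back and blend instead of planning a response from scratch.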


Artificial Intelligence: A Timeline

1950: Mathematician and codebreaker Alan Turing devises the Turing Test, which involves a computer attempting to trick a person into believing it is another human.

1951: Christopher Strachey writes one of the first machine learning game programs, which plays checkers. Studies found that games helped scientists learn how to train computers to think for themselves.

1956: The term artificial intelligence is coined, and the field is recognized as an academic discipline, at a conference at Dartmouth College.

1966: MIT's Joseph Weizenbaum creates one of the earliest natural language processing programs, ELIZA. The program took users' answers and processed them into scripts with human-like responses. Versions of the program are still available today.

1969: The Stanford Research Institute conducts experiments with Shakey, one of the first mobile robot systems. It had the ability to move and observe its environment as well as do simple problem solving.

1979: Jack Myers and Harry Pople at the University of Pittsburgh develop the INTERNIST knowledge-based medical diagnosis program, which was based on clinical knowledge. It is able to make multiple diagnoses related to internal medicine.

1985: The first autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI national conference.

1993: Ian Horswill advances behavior-based robotics with Polly, the first robot capable of navigating with the use of vision. Polly was able to move at a speed of 1 meter per second.

1995: ALVINN, or Autonomous Land Vehicle In a Neural Network, steers a car coast to coast under computer control. ALVINN is a semiautonomous perception system that could learn to drive by watching people do it.

1996: Chess champion Garry Kasparov defeats IBM's Deep Blue computer in a chess match. In 1997, an upgraded Deep Blue defeats Kasparov.

2000: The Nomad robot explores remote parts of Antarctica looking for meteorite samples. Nomad autonomously finds and classifies dozens of terrestrial rocks and five indigenous meteorites.

2005: Honda's ASIMO robot gains the ability to walk at the same gait as a human while delivering trays to customers in a restaurant setting.

2011: IBM's Watson defeats the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings. Watson is an artificially intelligent computer capable of answering questions in natural language. It was developed by David Ferrucci in IBM's DeepQA project.

2011: Apple releases Siri (Speech Interpretation and Recognition Interface) for the first time, on the iPhone 4S. Siri is a spinoff of DARPA's CALO project, which stands for Cognitive Assistant that Learns and Organizes.

Uncanny Valley

The Turing Test


Party Game, Turned Philosophical Argument, Turned Competition

Illustration courtesy iStock Photo.

The Turing Test, introduced by legendary mathematician and codebreaker Alan Turing, is a means of determining a machine's ability to exhibit intelligent behavior that is at least equal to a human being's. As introduced in 1950, the test involves a person who is communicating with another person and a machine. Both attempt to convince the subject that they are human through their responses. If the subject can't tell the difference, then the computer wins the game, dubbed the Imitation Game, which was based on a party game of the time. The Turing Test has been held up as the very definition of artificial intelligence, although it arguably hasn't been met yet.

That's not for lack of trying. Work on Turing's idea led, in the 1960s, to the development of chatbots, or computer programs that would communicate back and forth with human interrogators. The best known is ELIZA, developed in 1966, which replicated the communication of a psychotherapist. One modern descendant of ELIZA is Apple's Siri, which may have helped give you traffic directions this morning.

A yearly competition, the controversial International Loebner Prize in Artificial Intelligence, seeks to find the best of such chatbots and has been rewarding them since 1991. So far, all have won a bronze medal and $4,000. Should any program fool two or more judges when compared to two or more humans, the competition will then begin requiring multimodal entries that incorporate music, speech, pictures and videos. Should a computer program win that, by fooling half the judges, its creators will win a $100,000 grand prize and the competition will end.

This year's winner, not of the big prize but of another bronze medal, is a chatbot named Mitsuku, programmed by Steve Worswick of Great Britain, who told the BBC that he initially built a chatbot to attract users to his dance music website, only to discover they were more interested in parrying with the chatbot.

In a blog entry on Mitsuku's website in September, after the award was announced, he noted that winning even the annual competition has its benefits: people are paying attention. "Today has been a bit of a strange day for me," he wrote. "I usually have around 400-500 people a day visit this site, and at the time of writing this, I have had 9,532 visitors from all corners of the globe, as well as being mentioned on various sites around the net. I was even interviewed by the BBC this morning."

In prepping for the prize, he wrote that he had been working to remove Mitsuku's robotic responses to some questions, along the lines of, "I have no heart, but I have a power supply" or, "Sorry, my eye is not attached at the moment." Funny, but not likely to fool humans into thinking she's a real girl.
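Chatbots in the ELIZA lineage lean on pattern matching and canned templates rather than understanding. A minimal sketch of the technique (illustrative only, not Weizenbaum's or Worswick's actual code):

```python
import re

# Each rule pairs a pattern with a template that reflects the user's
# own words back; real systems add thousands of rules plus pronoun
# swaps ("my" -> "your") to sound less robotic.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default keeps the conversation moving

print(respond("I am worried about my job."))
```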

was indistinguishable from a human. I believe that was a mistake. At this point, very few serious AI researchers believe they are trying to pass the Turing Test. The Turing Test is both too easy and too hard at the same time, he says. Early ight pioneers tried to mimic birds. The laws of aerodynamics were never discovered by birdwatching. The blind mimicry of the behavior of the natural system without understanding the underlying principles usually leads one astray. We didnt have ight until we built things that didnt y like birds or look like birds. This is very much analogous to AI. Ford says now, as the industry matures, we are more in the Wright brothers stage. We are going in that direction rather in birdwatching and bird building. If we could build an AI that always passed the Turing Test, and it could do it perfectly, I cant imagine what use it would be. In recent days, there has been sort of a Turing Test in reverse. The quirky Twitter feed of @Horse-ebooks, thought to be a malfunctioning spambot that for years appeared to re off random passages of electronic books, was revealed to instead be the work of two artists from the website BuzzFeed. ABC News reported that thousands who followed the account were surprised to learn today that the account was not run by a spambot or a robot but by two human beings as an elaborate conceptual art installation.

Birdwatching
The Turing Test has been controversial throughout its history. Ken Ford, CEO of the Institute for Human and Machine Cognition, a not-for-prot research institute of the Florida University System, says the test was part of a maturing process for the eld of AI. AI went through a maturing process where the initial goals for AI were sort of philosophical and posed by Alan Turing, in some ways, as a thought experiment. They werent technical goals, he says. He was arguing with philosophers about the possibility of an intelligent machine. The philosophers said not its not possible. We would never admit it. In the early days, the focus was very much on building mechanical simulacrums of human reasoning. It became a notion of building a machine whose dialogue

Further Advances
Several deep-pocket companies today are funding what they hope will be breakthroughs in AI, which initially may resemble chatbots such as Mitsuku, albeit much smarter ones. In the most recent example, Microsoft cofounder Paul Allen recently announced that he had hired Oren Etzioni as executive director of the Allen Institute for Artificial Intelligence in Seattle. Etzioni was previously director of the Turing Center at Seattle neighbor the University of Washington. In a press release, Allen said he hired Etzioni because he shares "my vision and enthusiasm for the exciting possibilities in the field of AI, including opportunities to help computers acquire knowledge and reason."


Testing, Testing

From Top to Bottom

Getting Robots to Perceive More Like Humans

Where are you? It's a simple question for a human to answer, but passing that knowledge on to a robot has long been a complex perception task. Cognitive Patterns aims to leverage what researchers already know about how humans perceive their environment and exploit that information when building autonomous systems. The prototype software, developed by Massachusetts-headquartered Aptima Inc., leverages the open-source Robot Operating System, or ROS, to enable this kind of processing on any type of platform.

"One of the things that's not particularly commonly known outside of cognitive science is that humans perceive and really think about very selective aspects of the world and then build a whole big picture based on things we know," says Webb Stacy, a cognitive scientist and psychologist working for Aptima. In essence, humans derive much of their sense of where they are from their brains, not from their surroundings. Past experiences feed future expectations about, for instance, what objects might be in a room called an office or a cafeteria. "You don't have to really pick up information about exactly what a table, phone or notebook looks like; you already kind of know those things are going to be in an office, and as a result the process of perceiving what's in an office or making sense of the setting is as much coming from preexisting knowledge as it is directly from the senses," he says.

This way of perceiving is called top-down, bottom-up processing. Machine perception, however, is typically driven by what Stacy terms bottom-up processing alone, where data from sensors are streamed through mathematical filters to make sense of what an object might be and where a robot is in relation to it. Cognitive Patterns is revolutionary, says Stacy, because it provides a knowledge base for a robot, so it is able to combine that knowledge base with visual data provided by sensors to extrapolate information about its environment in a fashion more akin to top-down, bottom-up processing.

Aptima secured funding for Cognitive Patterns through a small business innovative research grant from DARPA, and Stacy stresses that what the company is doing is practical and can be done without a multimillion-dollar budget. The first phase of the project was limited to simulated models, whereas the second phase put the software on a robot.

Aptima's Cognitive Patterns architecture enables robots to make sense of their environment, much like humans do. This robot has Cognitive Patterns integrated onto it for a test. Photo courtesy Aptima.
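A minimal sketch of the top-down, bottom-up combination Stacy describes: prior knowledge about what a room should contain reweights noisy detector scores. The object lists and numbers below are invented for illustration, not taken from Aptima's model:

```python
# Top-down knowledge: what an "office" is expected to contain.
OFFICE_PRIOR = {"phone": 0.30, "notebook": 0.40, "weapon": 0.01}

def fuse(detector_scores, prior):
    """Scale bottom-up detector scores by top-down expectations."""
    posterior = {obj: detector_scores.get(obj, 0.0) * p
                 for obj, p in prior.items()}
    total = sum(posterior.values()) or 1.0
    return {obj: v / total for obj, v in posterior.items()}

# The detector alone is undecided; the room prior breaks the tie.
scores = {"phone": 0.5, "notebook": 0.5, "weapon": 0.5}
print(fuse(scores, OFFICE_PRIOR))  # notebook most likely in an office
```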



"When you move from simulation to physical hardware, you're dealing with an awful lot of uncertainty," he says. "So it's one thing in the simulation to say, here are some features that get presented to the system for recognition. It's another thing on a real robot wandering around perceiving things." However, Stacy says using ROS makes it easier to go from simulation to real-world application.

In phase two, Aptima integrated the cognitive aspects of its software with visual information from sensors built by iRobot. To test the prototype, Aptima secured the use of a local Massachusetts middle school, so the company could test the principles of its software in a real-world environment. For this second phase, Aptima's DARPA agent worked at the Army Research Laboratory, which allowed the company to integrate Cognitive Patterns with a robotic cognitive model the ARL had, called the SubSymbolic Robot Intelligent Controlling System, or SS RICS. That program combines language-based learning with lower-level perception; Stacy says Cognitive Patterns sits somewhere between the two.

The system has an operator interface that Stacy says allows the user to communicate with the robot at a high level. "The operator might say, 'I don't have a map. Go find the cafeteria and see if there are kids there,'" he says. "Now that's a very high level of command, because for a machine there's all kinds of stuff that needs to get figured out there. And one of the really interesting things we did here is we looked to see if we couldn't generate new knowledge whenever we encountered something that we didn't understand."

The operator can place augmented reality, or AR, tags on certain objects, and they act as a proxy for using computer vision to recognize those objects. The robot is then able to learn those features and apply them to future scenarios. For instance, the team did this once when a robot using Cognitive Patterns was in a room with a weapons cache, so the next time it entered a similar scenario, it would have a cognitive model on which to base its perceptions. The operator can also tell the robot to ignore certain surroundings or to label special things in its environment, such as a student desk, which is a blend of a traditional school chair and a separate desk. This higher level of communication benefits the operator, says Stacy, because, aside from software developers, most operators won't care about the detailed features being extracted from the robot's camera system.
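That teach-and-reuse loop can be sketched in a few lines. The sketch is hypothetical: the function, dictionaries and tag IDs are invented, and a real robot would receive tag sightings from a fiducial-tracking package running under ROS rather than from hard-coded integers.

```python
# Hypothetical sketch of AR tags standing in for object recognition.
# Names and structures are invented, not taken from Cognitive Patterns.

KNOWN_OBJECTS = {}  # tag_id -> label, grows as the operator teaches the robot
ROOM_MODELS = {}    # room type -> set of labels to expect there next time

def observe_tag(tag_id, room_type, operator_label=None):
    """Resolve a sighted AR tag to a label, learning it if it is new."""
    label = KNOWN_OBJECTS.get(tag_id)
    if label is None and operator_label is not None:
        KNOWN_OBJECTS[tag_id] = operator_label  # operator names the object
        label = operator_label
    if label is not None:
        # Fold the sighting into this room type's model so the expectation
        # is available top-down on the next visit to a similar room.
        ROOM_MODELS.setdefault(room_type, set()).add(label)
    return label

observe_tag(17, "classroom", operator_label="student desk")  # taught once
print(observe_tag(17, "classroom"))  # recognized on later visits
print(ROOM_MODELS["classroom"])      # {'student desk'} now expected there
```

The dictionaries are stand-ins, but the data flow is the point: a single operator label turns one sighting into a reusable, top-down expectation.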

Finding New Uses

Now Aptima is working on another prototype as a follow-on to its DARPA contract, implementing Cognitive Patterns on prosthetics. The company is developing an intelligent prosthetic through a contract with the Office of Naval Research and the Office of the Secretary of Defense, in which the prosthetic would be controlled by the neurosignatures of a person's brain, interfacing the limb with the user's mind. "The arm's going to have sensors on it, and so it's going to be doing the same kind of thing the robot is doing with Cognitive Patterns, which is perceiving its environment, knowing the things it can reach for," he says. Through collaboration with a prosthetics expert, Aptima is using its software to let the arm communicate with its user in a more natural way, such as through muscle twitches.

Perfecting Perception

The end goal of this work addresses the current disconnect over how robots could best perceive their environments: The machine vision community is pushing optimal mathematical models to perfect bottom-up processing, while the cognitive science community is seeking the best cognitive model to get a robot to think, says Stacy. "We are starting to see the need to hook up with each other, and Cognitive Patterns really is the intersection between those two. If that happens, we'll have robots that can really see and understand the world the way that humans do, and that's been elusive for a long time in robotics."


Spotlight

The Future of Employment: How Susceptible are Jobs to Automation?

By Ashley Addington

Personal assistant robots, like PAL Robotics' REEM-H1, could be used for service jobs in the future. Photo courtesy PAL Robotics.

A new Oxford study has emerged addressing how the job market will change in the future.

The study, conducted by Carl Benedikt Frey and Michael A. Osborne and entitled "The Future of Employment: How Susceptible Are Jobs to Computerisation?" breaks down the likely evolution of jobs and what it could mean for workers around the world. "Before I was even conducting this study, I was really interested in how technology was advancing, especially in the job market," Osborne said.

The conclusion: Over the next two decades, 47 percent of the jobs in the more than 700 occupations studied could be affected by automation. The main jobs at risk are low-level jobs that don't require higher education. Workers such as librarians, telemarketers and database managers will be among the first that could be replaced by automation, the researchers say. Jobs requiring higher education and creativity, such as those in health care, media, education and the arts, would be less likely to be handled by computers because of the need for spontaneous thought. "People need to be aware of how important education is. People with jobs that require more education will be least likely to lose their job," Osborne said.

Now, for the first time, low-wage manual jobs will be at risk. Jobs in agriculture, housekeeping, construction and manufacturing could be some of the areas handed over to computers and robots. Jobs that require perception and manipulation are safer, because they involve interacting with other human beings and because much of the work changes on a daily basis; it would be close to impossible to program a computer with every possible scenario for dealing with situations and communications between individuals. "Most management, business and finance occupations, which are intensive in generalist tasks requiring social intelligence, are largely confined to the low-risk category," the study says. Science and engineering jobs are also not at risk, because they require a large amount of creative intelligence. The higher the salary and education demands, the less likely a job is to be taken over by a machine. (A toy sketch of this scoring logic appears at the end of this story.)

"Awareness is essentially what we have tried to produce from this study. We want to make more people aware of what the future potentially holds and that education is the key for people to keep their careers," Osborne said.

Jobs in transportation are also at risk due to the automation of vehicles. Sensor technology has improved vastly over the past few years, which has led to increased safety and security in vehicles, and it has produced enough data to help engineers get past long-standing problems in robotic development. "These will permit an algorithmic vehicle controller to monitor its environment to a degree that exceeds the capabilities of any human driver. Algorithms are thus potentially safer and more effective drivers than humans," the study says.

As with any technological change, for every change to a current job there is the possibility of new growth in other areas. For computers and robots to do particular jobs, someone must set them up and monitor them to make sure everything functions accordingly, and jobs will also be added to assist and manage the technology. "First, as technology substitutes for labor, there is a destruction effect, requiring workers to reallocate their labor supply; and second, there is the capitalization effect, as more companies enter industries where productivity is relatively high, leading employment in those industries to expand," the study says.

Computerization has the potential to change the job market and society by being more thorough and performing a vast majority of tasks faster; algorithms can make decisions without bias and reach conclusions more quickly than human operators. Even though the job market is changing, there is no need to panic. The best thing people can do is stay current with technology and get a thorough education, Osborne says. There are plans to continue the Oxford study, focusing on the wide range of jobs with the potential to change, which ones are most susceptible and how the transition should be handled.
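The scoring idea behind the study's rankings can be shown with a toy model. The paper itself fit a Gaussian process classifier to O*NET task data; the sketch below merely mimics the logic with invented weights, so that high scores on the bottleneck traits of perception and manipulation, creativity and social intelligence push the estimated risk down.

```python
import math

# Toy version of the study's scoring logic. The weights and scores are
# invented for illustration; the real study trained a classifier on
# measured task data rather than using hand-picked numbers.
BOTTLENECKS = ("perception_manipulation", "creativity", "social_intelligence")

def automation_risk(scores, weights=(-2.0, -2.5, -2.2), bias=2.0):
    """Map 0-1 bottleneck scores to a rough probability of computerization."""
    z = bias + sum(w * scores[k] for w, k in zip(weights, BOTTLENECKS))
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash into (0, 1)

telemarketer = {"perception_manipulation": 0.1, "creativity": 0.1,
                "social_intelligence": 0.3}
surgeon = {"perception_manipulation": 0.9, "creativity": 0.7,
           "social_intelligence": 0.8}
print(f"telemarketer: {automation_risk(telemarketer):.2f}")  # high risk
print(f"surgeon:      {automation_risk(surgeon):.2f}")       # low risk
```

Swap the invented weights for a model trained on real task data and this is, in essence, what the researchers did across more than 700 occupations.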




End Users

Ghostwriter: Algorithms Write Books

Philip Parker, a marketing professor at the international graduate business school INSEAD, is also a published author, a very published author. He has been published so many times, in fact, that he makes prolific writers like Shakespeare or Stephen King seem downright lazy. Parker has penned more than one million report-length books, which he self-publishes as paperbacks or print-on-demand titles. Unlike Shakespeare's or Stephen King's, they tend not to be about love or fear. Instead, they might be about such topics as "The 2007-2012 Outlook for Tufted Washable Scatter Rugs, Bathmats and Sets That Measure 6-Feet by 9-Feet or Smaller in India" or "The 2009-2014 World Outlook for 60-Milligram Containers of Fromage Frais." There are some dictionaries in there too, along with medical sourcebooks and textbooks. He also writes poetry: 1.4 million poems so far.

He doesn't produce these articles, books and poems just to see his name on the shelf, but rather to serve very small niche markets, particularly in the developing world, that traditional publishing has ignored because they are so small. Parker doesn't sit and crank these out himself, at least not in the traditional book-writing sense. Instead, he uses a series of computer programs to compile information on a given topic and then combine it into book form.

The book titles above might seem extremely arcane, but they fill a real gap, he noted in a recent Seattle TED Talk. "If you're a small business in a developing country serving a specific market, you can't purchase market data for some of these products. It's out there, but no traditional publisher is going to package it. I discovered there is a demand. It's a very narrow demand, but the key problem is the author," he said. "Authors are very expensive. They want food and things like that. They want income."

The methodology for each type of book is studied and then copied into algorithms. As for how they work, "it depends on the subgenre," he writes. "For example, the methodology for a crossword puzzle follows the logic of that genre. Trade reports follow a completely different methodology. The methodology of the genre is first studied, then replicated using code." The programs are able to provide the whole value chain of the publishing industry, he says, by automating the collecting, sorting, cleaning, interpolating, authoring and formatting. (A toy sketch of this template-driven pipeline appears at the end of this story.)

Parker says he began working on the programs-as-writer idea in the 1990s, trying various techniques, "but around 1999-2000 I was able to create the first commercial applications. The idea came from being able to do genres that otherwise would not have happened." He hit on the economic model of selling very expensive, high-end, completely computer-generated market research studies to subsidize the creation of language-learning materials, mathematics books and the like for underserved languages. "We've published about a million titles; most of them are high-end industry studies, but a lot of them are language learning," he said in the talk, while showing a video of what it looks like when a computer writes a book (many Web pages pop up in quick succession).

EVE, Parker's authoring system, is capable of creating textbooks for students in Africa who have never seen one in their language, collating weather reports for farmers in remote locations, even helping develop video games for agriculture extension agents so they can plan planting regimens for regions they have never seen. Much of the poetry is aimed at helping non-English speakers learn the language. It is a way of corralling existing data in new ways to serve tiny markets.

The practice could be used in a variety of other ways, he noted. Say, for example, you're a football player and you don't like your physics book because you can't relate to it. "Why not have a football player physics book? Why not have a ballet dancer's physics book?" he asked. Going forward, he'd like to do even more, such as having virtual professors who can "diagnose, teach and write original research in fields that do not have enough scientists or researcher[s] available."

Fun stuff.
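That template-driven pipeline can be cartooned in a few lines of Python. This sketch is invented for illustration and bears no relation to Parker's actual EVE code; the template, field names and figures are all assumptions.

```python
# Invented cartoon of genre-as-template authoring; not Parker's EVE system.
TEMPLATE = (
    "The {start}-{end} Outlook for {product} in {region}\n\n"
    "Estimated demand for {product} in {region} is {demand:,} units, "
    "growing at {growth:.1f} percent per year."
)

def write_report(record):
    """'Author' one niche report: clean the record, then format the genre."""
    record = dict(record, product=record["product"].strip().title())  # clean
    return TEMPLATE.format(**record)

print(write_report({
    "start": 2007, "end": 2012, "region": "India",
    "product": "  tufted washable scatter rugs  ",  # raw, uncleaned input
    "demand": 182000, "growth": 3.4,
}))
```

Scale a template like this across a product-and-country database and a million-title catalog stops looking mysterious: the marginal cost of each new "author" is a database row.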

