
Military Applicability of Artificial Intelligence

A Study

By

K. Shiv Anand

III – B. Tech

Department of
Computer Science

VARDHAMAN COLLEGE OF ENGINEERING


ABSTRACT

As with many other fields of scientific study, the military has picked up on the use of
Artificial Intelligence. The possibilities of military use of AI are boundless, exciting,
intimidating, and frightening. While today's military robots are used mainly to find roadside
bombs, search caves, and act as armed sentries, they have the potential to do so much more.

Not all military uses of AI relate directly to the battlefield, however; the military can also apply
Artificial Intelligence to more passive purposes. For example, it has developed a
computer game that uses AI to teach new recruits how to speak Arabic. The program requires
soldiers to complete game missions during which they must be able to understand and speak
the language. This system gives the soldiers a more realistic, easy, and effective way to learn
the new tongue. This particular game works by using speech recognition technology that
evaluates the soldier's words and detects common errors. It can then create a model of the
soldier, keeping track of what he's learned and what he hasn't in order to provide
individualized feedback for the soldier's specific problems. Those who are working on this
project believe that it will change the face of all language learning and similar programs will
become mainstream sometime in the near future.

Introduction:

This paper reviews the potential use of autonomous weapons in future warfare and examines
the ethical issues they raise, using utilitarian and rights-based philosophical arguments. From
these arguments we judge that if these weapons are permitted in an unbridled fashion then
they are morally less acceptable than conventional technology. Moreover, at a deeper level
we argue that their use in life and death situations gives rise to profound objections that are
emotional rather than logical in nature.

It is therefore recommended that doctrine, concepts of operation and policy be given urgent
attention by all those responsible for the use of these weapons.

The military and the science of computing have always been closely tied; in fact, the early
development of computing was almost exclusively limited to military purposes. The very first
operational use of a computer was the gun director used in the Second World War to help
ground gunners predict the path of an aircraft from its radar data. Famous names in AI, such
as Alan Turing, were scientists heavily involved with the military. Turing, recognized as one
of the founders of both contemporary computer science and artificial intelligence, was the
scientist who broke the German Enigma code through the use of computers.
Applications – Autonomous Weapon Systems

By dictionary definition, to be 'autonomous' is to act independently, or to have the freedom
to do so. Thus, I define AW systems as those that operate without human intervention and,
as such, are able to complete their tasks by processing, responding to and acting on the
environment they operate in. The key feature of an autonomous weapon is the ability to 'pull
the trigger' — to attack a selected target without human initiation or confirmation of either
the target choice or the attack command.

Autonomous weapon systems are increasingly being used in modern warfare, notably in
recent NATO campaigns in the Middle East and by other countries, including India. The
improvement in electronics is resulting in the development of weapon systems with ever
greater computing power. Current systems developed for surveillance only are now being
replaced with combat capable vehicles where the decision to attack resides in an intelligent
computer rather than a human operator. It is envisaged that this will increasingly become the
norm in the digitised battlespace of the future because the human operator will simply be too
slow to take effective decisions except at the highest levels.

However, the use of such technology raises real concerns about whether the decision to take
human life should reside with an artificially intelligent machine. Concerns range from the
implications for proportionality and discrimination to the fundamental issue of whether it is
morally right to permit computers to take such decisions.

As computing power increased and pragmatic programming languages were developed,
more complicated algorithms and simulations could be realized. For instance, computers
were soon used to simulate nuclear escalations and wars, or how arms races would be
affected by various parameters. The simulations grew powerful enough that the results of
many of these 'wargames' became classified material, and the 'holes' they exposed were
integrated into national policies.
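
To give a flavour of the kind of parametric simulation described above, the sketch below implements
the classic Richardson arms-race model, in which each side's spending reacts to the other's. The
choice of model and all coefficients are illustrative assumptions, not taken from the wargames
referred to in the text.

    # Illustrative sketch only: the Richardson arms-race model, showing how a
    # simple simulation can explore the effect of parameters on an arms race.
    # All coefficients below are hypothetical.

    def simulate_arms_race(a=0.9, b=0.8, m=0.5, n=0.5, g=1.0, h=1.0,
                           x0=10.0, y0=10.0, dt=0.1, steps=200):
        """Euler integration of dx/dt = a*y - m*x + g and dy/dt = b*x - n*y + h."""
        x, y = x0, y0
        history = [(x, y)]
        for _ in range(steps):
            dx = a * y - m * x + g   # side X reacts to Y's arsenal, minus fatigue
            dy = b * x - n * y + h   # side Y reacts to X's arsenal, minus fatigue
            x, y = x + dx * dt, y + dy * dt
            history.append((x, y))
        return history

    if __name__ == "__main__":
        final_x, final_y = simulate_arms_race()[-1]
        print(f"spending after simulation: X={final_x:.1f}, Y={final_y:.1f}")

Varying the reaction and fatigue coefficients shows whether spending settles down or escalates
without bound, which is exactly the kind of question such wargames were built to answer.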

Artificial Intelligence applications in the West started to become extensively researched
when the Japanese announced in 1981 that they were going to build a Fifth Generation
computer, capable of logic deduction and other such capabilities.

Inevitably, the Fifth Generation project failed, due to the inherent problems that AI faces.
Nevertheless, research continued around the globe to integrate more 'intelligent' computer
systems into the battlefield. Enthusiastic generals foresaw battles fought by hordes of entirely
autonomous buggies and aerial vehicles, robots that would have multiple goals and whose
missions might last for months, driving deep into enemy territory. The problems in developing
such systems are obvious: the lack of functional machine vision systems has led to
problems with obstacle avoidance, friend/foe recognition, target acquisition and much more.
Problems also occur in getting the robot to adapt to its surroundings, the terrain, and other
environmental aspects.

Nowadays, developers seem to be concentrating on smaller goals, such as voice recognition
systems, expert systems and advisory systems. The main military value of such projects is to
reduce the workload on a pilot. Modern pilots work in incredibly complex electronic
environments, receiving information not only from their own radar but from many others (the
principle behind J-STARS). Not only is the information load high, the multi-role aircraft of
the 21st century have highly complex avionics, navigation, communications and weapon
systems. All this must be organized in a highly accessible way. Through voice recognition,
systems could be checked and modified without the pilot looking down into the cockpit.
Expert/advisory systems could predict what the pilot would want in a given scenario and
automatically decrease the complexity of a given task.
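
As a rough, purely illustrative sketch of the advisory idea, such a system can be thought of as a
set of condition-action rules evaluated over the current aircraft state; the state fields and rules
below are invented for the example and do not describe any real avionics suite.

    # Minimal rule-based advisory sketch (hypothetical rules and state fields):
    # given the current aircraft state, suggest the actions the pilot is likely
    # to need, without requiring them to look down into the cockpit.

    from dataclasses import dataclass

    @dataclass
    class AircraftState:
        fuel_fraction: float        # remaining fuel, 0..1
        missile_warning: bool       # radar warning receiver active
        altitude_m: float
        distance_to_base_km: float

    def advise(state: AircraftState) -> list[str]:
        advice = []
        if state.missile_warning:
            advice.append("deploy countermeasures and begin evasive manoeuvre")
        if state.fuel_fraction < 0.25 and state.distance_to_base_km > 150:
            advice.append("set course for nearest tanker or divert field")
        if state.altitude_m < 300 and not state.missile_warning:
            advice.append("terrain warning: climb")
        return advice or ["no action required"]

    print(advise(AircraftState(fuel_fraction=0.2, missile_warning=True,
                               altitude_m=5000, distance_to_base_km=400)))

Real advisory systems would of course draw on far richer state and rule bases, but the principle of
mapping situation to suggested action is the same.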

Aside from research in this area, various paradigms in AI have been successfully applied in
the military field. For example, an EA (evolutionary algorithm) can be used to evolve
algorithms that detect targets given radar/FLIR data, and neural networks can differentiate
between mines and rocks given submarine sonar data. I will look into these examples below.
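
A minimal sketch of the evolutionary-algorithm idea is given below. It evolves a simple linear
detector (a weight vector over sensor features) against labelled examples; the data is synthetic
and the fitness function is an assumption made for illustration, not the actual radar/FLIR pipeline.

    # Illustrative evolutionary algorithm: evolve a linear target detector from
    # labelled feature vectors. Synthetic data stands in for radar/FLIR features.

    import random

    NUM_FEATURES = 8

    def fitness(weights, examples):
        """Fraction of examples classified correctly by a thresholded dot product."""
        correct = 0
        for features, is_target in examples:
            score = sum(w * f for w, f in zip(weights, features))
            if (score > 0) == is_target:
                correct += 1
        return correct / len(examples)

    def mutate(weights, rate=0.1):
        return [w + random.gauss(0, rate) for w in weights]

    def evolve(examples, pop_size=50, generations=100):
        population = [[random.uniform(-1, 1) for _ in range(NUM_FEATURES)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda w: fitness(w, examples), reverse=True)
            elite = population[: pop_size // 5]           # keep the best 20%
            population = elite + [mutate(random.choice(elite))
                                  for _ in range(pop_size - len(elite))]
        return max(population, key=lambda w: fitness(w, examples))

    # Synthetic demonstration data: "targets" have higher mean feature values.
    examples = ([([random.gauss(1.0, 1.0) for _ in range(NUM_FEATURES)], True)
                 for _ in range(100)] +
                [([random.gauss(-1.0, 1.0) for _ in range(NUM_FEATURES)], False)
                 for _ in range(100)])
    best = evolve(examples)
    print("accuracy of evolved detector:", fitness(best, examples))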

Neural Networks
Neural networks (NN) are another excellent technique for mapping inputs to outputs.
Unlike the EA, though, they will only output certain results. A NN is normally pre-trained
with a set of input vectors and a 'teacher' that tells it what the output should be for each
given input. The NN can then adapt to a series of patterns. Thus, when fed with
information after being trained, the NN will output the result whose trained input most
closely resembles the input being tested.

This was the method that some scientists took to identify sonar sounds. Their goal was
to train a network to differentiate between rocks and mines - a notoriously difficult task
for human sonar operators to accomplish.

The network architecture was quite simple: it had 60 inputs, one hidden layer with 1-24
units, and two output units. The output would be <0,1> for a rock and <1,0> for a
mine. The large number of input units was needed to incorporate 60 normalized energy
levels of frequency bands in the sonar echo. What this means is that a sonar echo would be
detected and then fed into a frequency analyzer, which would break the echo down into 60
frequency bands. The energy levels of these bands were measured and converted into
numbers between 0 and 1.
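
The preprocessing just described can be sketched in a few lines (the echo here is synthetic noise
and the band edges are arbitrary; a real system would use calibrated sonar returns):

    # Sketch of the described preprocessing: split a sonar echo into 60
    # frequency bands and normalise the band energies to the range 0..1.

    import numpy as np

    def echo_to_features(echo, num_bands=60):
        spectrum = np.abs(np.fft.rfft(echo)) ** 2        # power spectrum of the echo
        bands = np.array_split(spectrum, num_bands)      # 60 frequency bands
        energies = np.array([band.sum() for band in bands])
        return energies / energies.max()                 # normalised energies, 0..1

    rng = np.random.default_rng(0)
    echo = rng.normal(size=1024)                         # stand-in for a real echo
    features = echo_to_features(echo)
    print(features.shape)                                # (60,) -- one value per band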

A fairly simple training method (gradient descent) was used, as the network was fed
examples of mine echoes and rock echoes. After the network had made its
classifications, it was told whether it was correct or not. Soon, the network could
differentiate as well as or better than its human counterpart.
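
A compact sketch of the network and training loop as described (60 inputs, one small hidden
layer within the 1-24 unit range mentioned above, two outputs, plain gradient descent) is shown
below. The data is synthetic and merely stands in for the original rock/mine sonar set.

    # Sketch of the described network: 60 inputs, one hidden layer, two outputs,
    # trained with plain gradient descent on synthetic "sonar" feature vectors.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hidden, n_out = 60, 12, 2

    # Synthetic training set: "mine" echoes get slightly higher band energies.
    X = np.vstack([rng.uniform(0.0, 0.8, size=(100, n_in)),    # rocks
                   rng.uniform(0.2, 1.0, size=(100, n_in))])   # mines
    Y = np.vstack([np.tile([0.0, 1.0], (100, 1)),              # rock -> <0,1>
                   np.tile([1.0, 0.0], (100, 1))])             # mine -> <1,0>

    W1 = rng.normal(0, 0.1, size=(n_in, n_hidden))
    W2 = rng.normal(0, 0.1, size=(n_hidden, n_out))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(2000):
        H = sigmoid(X @ W1)                     # hidden activations
        O = sigmoid(H @ W2)                     # output activations
        err = O - Y                             # actual output minus teacher signal
        # Gradient descent on the squared error, propagated through both layers.
        dW2 = H.T @ (err * O * (1 - O)) / len(X)
        dH = (err * O * (1 - O)) @ W2.T
        dW1 = X.T @ (dH * H * (1 - H)) / len(X)
        W2 -= lr * dW2
        W1 -= lr * dW1

    pred = sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
    print("training accuracy:", (pred == Y.argmax(axis=1)).mean())

The sketch only shows the shape of the training procedure; the human-level accuracy reported
above came from the real sonar data set.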

The network also beat standard data classification techniques. Data classification
programs could successfully detect mines 50% of the time by using parameters such as
the frequency bandwidth, onset time, and rate of decay of the signals. Unfortunately, the
remaining 50% of sonar echoes did not always follow the rather strict heuristics that the
data classification used. The network's power came from its ability to focus on the more
subtle traits of the signal and use them to differentiate.

International Guidelines
There are no current international guidelines for, or even discussions about, the use of
autonomous robots in warfare. These are needed urgently. If there were a political will to use
them, there would be no legal basis on which to complain. This is especially the case if
they could be released somewhere where there is a fairly high probability that they would kill
a considerably greater number of enemy combatants (uniformed and non-uniformed) than
innocents – i.e. the civilian death toll was not disproportionate to the military advantage.

Ethical Issues
The key ethical issue concerning AW systems is responsibility, which spans three
sub-problems — control, i.e. who is controlling the system, man or machine; consciousness,
i.e. does the system interfere with human orders, or is it self-aware; and crimes, i.e. can a
machine commit one, and who is accountable for it. Responsibility is the reason why I want
to distinguish semi- and fully autonomous weapon systems.

Morality: A Quick Thought


All these systems are quite impressive, and perfected models could prove incredible
assets on the battlefield. Artificial Intelligence may only be developed to a certain level,
however, due to the threat humans feel as computers grow more and more intelligent. Fears
like those behind movies such as Terminator, in which our robotic military technology
backfires and destroys us, are rampant. Are there moral issues that we must confront as
artificial military intelligence develops? As Gary Chapman puts it:

Autonomous weapons are a revolution in warfare in that they will be the first machines
given the responsibility for killing human beings without human direction or supervision.
To make this more accurate, these weapons will be the first killing machines that are
actually predatory, that are designed to hunt human beings and destroy them.

Conclusion

Although science fiction authors have done their guesswork and experts in the field have
done their predicting, the future of AI and its place in global society is still a big question
mark. The current problems posed by AI are certainly the most pressing to resolve, but they
are not nearly as spectacular or as far-reaching as those predicted and anticipated in the
future.

AI is still a budding technology, and few of the known problems have been worked on, let
alone resolved. That will probably change as the technology grows, raising more numerous
and more pressing conundrums. Until then, the question of responsibility discussed below is
worth keeping in mind.
Defining autonomous weapon systems as fully autonomous machines brings the ethical
issues down to the key point of responsibility for the actions of such machines. It is the
responsibility of the owners to maintain the correct execution of orders and the lawful
application of AW. The owners in this case are the cumulative group of people involved in the
creation and control of the AW. Humans already develop systems that can malfunction, and it
is their responsibility, not the machines', to correct these errors. It is also humans'
responsibility to control what the machines are doing. However, despite the nightmare visions
of several authors, it is equally likely that war fought with AW systems will be no less humane
than conventional human conflict.

References:
[1] Stanford University CSE, "Autonomous Weapons", S. Chen, T. Hsieh, J. Kung, V. Beffa
(http://cse.stanford.edu/classes/cs201/Projects/autonomous-weapons/)
[2] The Risks Digest Volume 3: Issue 64, "Towards an effective definition of 'autonomous'
weapons"
(http://catless.ncl.ac.uk/risks/3.64.html#subj4)

[3] Journal of Applied Philosophy, Vol. 24, No. 1, 2007, "Killer Robots", Robert Sparrow
