
Computer-Music Interfaces: A Survey

BRUCE W. PENNYCOOK
Department of Music and Department of Computing and Information Science, Queen's University, Kingston, Ontario, Canada

This paper is a study of the unique problems posed by the use of computers by composers
and performers of music. The paper begins with a presentation of the basic concepts
involved in the musical interaction with computer devices, followed by a detailed
discussion of three musical tasks: music manuscript preparation, music language
interfaces for composition, and real-time performance interaction. Fundamental design
principles are exposed through an examination of several early computer music systems,
especially the Structured Sound Synthesis Project. A survey of numerous systems, based
on the following categories, is presented: composition and synthesis languages, graphics
score editing, performance instruments, digital audio processing tools, and computer-
aided instruction in music systems. An extensive reference list is provided for further
study in the field.

Categories and Subject Descriptors: J.5 [Computer Applications]: Arts and Humanities-music

General Terms: Design, Languages

Additional Key Words and Phrases: Composition and synthesis languages, computer-aided instruction in music systems, design principles, graphic score editing, real-time performance systems

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1985 ACM 0360-0300/85/0600-0267 $00.75

CONTENTS

INTRODUCTION
  The User Interface in a Music Context
  Partitioning the Discussion
  Technical Considerations
1. MUSIC USER INTERFACE CONCEPTS
  1.1 An Examination of Some Musical Tasks
2. DESIGN PRINCIPLES
  2.1 Early Systems
  2.2 The Structured Sound Synthesis Project
  2.3 Other Real-Time Control Devices
3. SYSTEMS SURVEY
  3.1 Composition and Synthesis Languages
  3.2 Graphics Score Editing
  3.3 Performance Instruments
  3.4 Digital Audio Processing Tools
  3.5 Computer-Aided Instruction in Music Systems
4. CONCLUSIONS
ACKNOWLEDGMENTS
REFERENCES

INTRODUCTION

The User Interface in a Music Context

There have been numerous studies addressing a variety of difficult aspects of man-machine communications and user-interface design. The types of tasks examined in these articles are primarily familiar computing-environment activities such as text editing and formatting [Embley and Nagy 1981; Furuta et al. 1982; Meyrowitz and van Dam 1982], interactive programming [Sandewall 1978], natural language interfaces [Ledgard et al. 1980], and input strategies involving nonkeyboard input devices such as mice, light pens, and tablets, and the various display format strategies associated with them [Baecker 1980]. A special issue of Computing Surveys that focused on psychological aspects of human-computer interaction appeared in 1981 [Moran 1981].

A recent article by William Buxton [1983], whose contributions to music interface design are discussed below, offers some pertinent observations regarding the selection of input devices. Buxton summarizes his report as follows:

   When we have developed a methodology which allows us to determine the gesture which best suits the expression of a particular concept, then we will be able to build the user interfaces which today are only a dream. [Buxton 1983]

This comment serves as a useful starting point for this survey. Attempts to design and implement new user interfaces for computer music applications have indeed required a fresh examination of the musical activities for which they are intended.

Musicians develop musical skills through formal study and practical application. Musical knowledge is translated into sound through the physical gestures of vocal and instrumental performance or, in the case of electroacoustic music, through signal generation, amplifiers, and finally loudspeakers. Much of the musician's craft is absorbed unconsciously as part of the music-making experience. Unraveling these complex interrelationships of knowledge, experience, and gesture poses a formidable challenge. Codifying the web of musical attributes loosely referred to as musicianship is further compounded by the fact that each and every musical style is a product of unique sociotemporal forces.

From these notions, some critical observations may be drawn regarding the nature of music information as it applies to the design and implementation of user interfaces:

(1) Little is known about the deep structures of musical cognition.

(2) Many of the interface requirements for a music system are unique to music and have no equivalent within the general realm of computer usage. Furthermore, composition of music, performance of digital musical instruments, and manipulation of recorded sound each pose new and substantially different problems for the user-interface designer.

(3) Composers, performers, and recording engineers usually exhibit highly idiosyncratic work habits, which ultimately must be accommodated by the user interface(s) on an individual basis.

(4) As a result in part of the rich, descriptive vocabulary of musical concepts, information about music and sound is not easily translated into computer data that are accessible to and readily manipulated by musicians.

The work reported in this survey represents only a small fraction of the diverse and relatively voluminous efforts in this field. It is important to note that most of the contributions to user-interface design and implementation have been developed by musicians with varying degrees of training as computer scientists, according to their own needs and musical perspectives.

It is difficult to align the information in a survey such as this with each individual reader's background and interests. The path chosen here is somewhat biased toward a general treatment of the relationship between musicians and machines rather than a documentation of the various technologies that have been used to construct computer music interfaces. The objective is to establish some basic criteria for music interface design and to support these criteria with discussions of a few implemented systems.

Partitioning the Discussion

For the purposes of this survey, the discussion of music interface design has been subdivided into the following categories:

• technical considerations,
• music user interface concepts,
• design strategies,
• systems survey.



[Figure 1. Throughput rates for real-time music systems. The figure shows a hierarchy of levels and their approximate timing requirements: formal music structures (several seconds to several minutes); translation of formal structures; performance execution rates (about 0.05 seconds to several seconds); input device interaction (0.1 milliseconds to 0.01 second); control updates to the synthesizer (hardware instruction rates); synthesis (sound sample output); and sound sample conversion (about 44K samples/second/channel).]

The section on technical considerations presents certain factors that must be satisfied in real-time computing environments, although many of the interfaces presented in this survey are software packages operating in non-real time. In Sections 1 and 2, first the unique nature of musical tasks as opposed to general computing tasks is illustrated, and then four innovative systems are discussed to establish some basic principles. Section 3 has been subdivided into five broad categories: composition and synthesis languages, graphics score editing tools, real-time performance systems, digital audio processing tools, and computer-aided instruction systems. There is a great deal of overlap within these five divisions; for example, graphics score editing tools are used for composing as well as for transcribing and printing music notations. This multiplicity of purpose is most pronounced in recent commercial instruments such as the Fairlight Computer Music Instrument (manufactured by Fairlight Instruments Limited, Sydney, Australia).

Technical Considerations

The system requirements for music interfaces vary according to the task and to the nature of the musical environment. In a real-time setting the following hierarchy of timing requirements exists, as shown in Figure 1: (1) control of formal music structures such as instrumentation and the number of output channels; (2) performance execution from the input devices; (3) merging the performance and synthesis data into a continuous stream and transmitting it to the synthesis device; (4) computation of the sound samples; and (5) output to and from audio conversion subsystems.
Integrating the minimum response times at all levels of the system can pose substantial throughput problems. Throughput reliability is further complicated by the nature of musical gesture, in that performance data often arrive in "bursts."

Consider the example of a pianist playing a different chord once each second. Although the average bandwidth is relatively low, the instantaneous bandwidth of each chord is directly proportional to the total number of elements which, in combination, produce the actual sound. Thus only the worst-case throughput can be considered in the interface design.
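To make the burst problem concrete, the following sketch contrasts the average and worst-case input rates for the chord example above. It is an illustration only; the event size, chord size, and burst window are assumed values, not measurements from any system discussed in this survey.

```c
/* Worst-case versus average input bandwidth for a keyboard burst.
 * Illustrative sketch only; the figures below are assumptions,
 * not measurements from any system discussed in the survey.
 */
#include <stdio.h>

int main(void)
{
    const double bytes_per_event = 3.0;   /* e.g., key number, velocity, status */
    const double chord_notes     = 10.0;  /* ten keys struck together */
    const double burst_window_s  = 0.005; /* all ten arrive within 5 ms */
    const double average_rate_hz = 1.0;   /* one chord per second */

    double average_bps = chord_notes * bytes_per_event * average_rate_hz;
    double burst_bps   = chord_notes * bytes_per_event / burst_window_s;

    printf("average input rate: %.0f bytes/s\n", average_bps);
    printf("worst-case burst rate: %.0f bytes/s\n", burst_bps);
    printf("ratio (burst/average): %.0f\n", burst_bps / average_bps);
    return 0;
}
```

Even though the average rate is trivial, under these assumptions the interface must be sized for a burst rate roughly two hundred times higher.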
Provisions must also be made for the integrity of real-time requests at all levels of the system, as shown in Figure 1. A detailed study of the requirements for real-time control of all of the contributing factors outlined above exceeds the scope of this article. The reader is directed to Mathews and Bennett [1978], Moorer [1982a, 1982b], and Loy and Abbott [1985] for information on the technical issues.

1. MUSIC USER INTERFACE CONCEPTS

Interface specifications vary dramatically with respect to the five levels of communication shown in Figure 1. This makes it impossible to devise evaluation schemes that can be applied generally to all music interface specifications. Unlike text-editing environments, in which measures of productivity can be gathered empirically, in most musical settings productivity and aesthetic value become hopelessly confused. How can we measure the effectiveness of a piano keyboard? Measures used in evaluating text-editing environments (e.g., correct keystrokes per unit time [Embley and Nagy 1981]) do not apply. Instead, the interface designer must rely on individual assessments by users performing a variety of tasks. The choice of tasks is necessarily limited by the biases inherent in the system and by the musical preferences of the user. A user interface that satisfies the needs of one musician in an efficient, well-ordered way may be awkward or even counterproductive for another musician.

Attempts to identify and codify the relationship of musical activities and computer music interfaces have appeared in Smoliar [1973], Buxton [1978], Vercoe [1975], Chadabe [1977], Laske [1977, 1978], and Hanes [1980].

So far, no generally applicable theories have emerged. There is, however, a consistent theme: Musical actions (whether encoded as musical symbols or performed on instruments) and music cognition are both based on a hierarchy of temporal structures. The design of a computer music interface or performance system therefore requires careful examination on many temporal levels.

1.1 An Examination of Some Musical Tasks

1.1.1 Music Manuscript Preparation

Certain musical tasks can be described as a sequence of simple actions with specifiable goals. For example, transcribing individual parts for the members of an ensemble from the composer's master score could be summarized as follows (a minimal data-structure sketch of this checklist appears after the list):

• Select suitable manuscript paper for the current instrument (considering the number of staves, staff size, spacing between staves, etc.).
• Determine if the instrument requires a transposed part (i.e., a part written in a different key, as required by transposing instruments such as clarinets and French horns).
• Choose the correct number of measures per page, so that page turns are preceded by a rest of sufficient duration.
• Choose the correct number of measures per line of music to facilitate reading and visual interpretation.
• Select the correct pen nib sizes for the manuscript.
• Prepare the manuscript paper by adding the appropriate clef, key, and time signatures.
• Copy the part from the score.
• Check the transcription against the original part in the score and make corrections as necessary.
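The checklist above lends itself to an explicit data representation, which is one reason this task maps comparatively well onto software. The sketch below is hypothetical: the field names, the instrument, and the default values are illustrative and are not drawn from any system described in this survey.

```c
/* A part-transcription job expressed as explicit, checkable fields.
 * Minimal sketch; the names and example values are hypothetical. */
#include <stdio.h>
#include <stdbool.h>

struct part_spec {
    const char *instrument;          /* e.g., "clarinet in B-flat" */
    int         staves;              /* staves per system on the chosen paper */
    int         transpose_semitones; /* written part relative to concert pitch */
    int         measures_per_line;
    int         measures_per_page;
    const char *clef;
    const char *key_signature;
    const char *time_signature;
    bool        checked_against_score;
};

int main(void)
{
    /* B-flat clarinet: written a major second (2 semitones) above concert pitch. */
    struct part_spec clarinet = {
        "clarinet in B-flat", 1, 2, 4, 40, "treble", "D major", "4/4", false
    };
    printf("%s: transpose up %d semitones, %d measures/line\n",
           clarinet.instrument, clarinet.transpose_semitones,
           clarinet.measures_per_line);
    return 0;
}
```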

This process is essentially a specialized form of document preparation. Although the general problem of music manuscript preparation is somewhat difficult because of the idiosyncrasies of music typesetting, the basic tasks of the music copyist should require an interface similar to display-oriented text editors and formatters. Yet there are very few music manuscript preparation systems general enough to accommodate a wide variety of musical styles and forms and produce camera-ready copy (see Smith [1973], Byrd [1977], Maxwell and Ornstein [1983], and Hamel [1984]).

One of these systems, MS, has taken Leland Smith, an accomplished performer, composer, and pioneer in the development of score-processing software, over a decade to develop and refine [Smith 1973]. Part of the problem stems from the rapid changes in display technology and the resultant expectations of the users. The principal difficulty, however, lies in the nature of musical notation. The size, location, and orientation of each musical symbol must be adjusted so that the visual appearance of the manuscript is not only accurate, but also contributes to the performer's understanding of the work and his or her ability to execute the symbols as musical gesture. (It is often said by musicians that one can usually guess the composer of a work simply by the appearance of the score.)

What appears to be a relatively straightforward (although very long) document-processing problem actually embodies the same aesthetic difficulties as other music-processing tasks. Composers and copyists have their own notions of how a particular musical passage ought to look on the page. Of course, there are certain standards and conventions in the realm of music publishing that are reducible to sets of rules and procedures. These may work well enough for traditional music notation, such as a Schubert symphony, but are wholly inadequate for many contemporary scores or for scores of non-Western music such as South Indian classical music. The visual appearance of the score must effectively convey the stylistic idiom and the composer's intentions. Providing the copyist with a computer music typesetting system thus becomes an immensely difficult problem, one that must take into account such vague and flexible notions as musical style and layout aesthetics.

In Section 3.2, several score editors are presented. Each offers certain unique features.

1.1.2 Music Language Interfaces for Composition

The problems of musical style and personal preference are most acute in the design of music languages for composition. There are countless anecdotes surrounding the work habits of the great composers. Beethoven would fret over a crucial melodic passage for months and then, having solved it, could complete the entire movement in a very short time. Stravinsky's work scores are not available for study, but it is known that he used liberal amounts of colored ink as a private code for organizing musical structures. One suspects that these examples are not extreme cases, and that individual style and aesthetics are acute considerations in designing music languages for composition.

The relationship of the tasks comprising the compositional process to the manipulation of the materials is not readily apparent. The goal, a composition, emerges from seemingly arbitrary processes that the composer constructs to suit his or her immediate creative needs. The interface between the composer's imagination and the finished product is a tool that allows the composer to experiment with musical materials while being restricted to the requirements of the medium. For some, paper and pencil suffice; for others a piano serves the purpose. Many contemporary composers work with multitrack sound recording systems and a complex of sophisticated electronic devices (see Chadabe [1977, 1983]).

Identifying composition tasks in terms of goals seems fruitless. We can, however, identify the kinds of tools that composers find useful: musical instruments, music manuscript, graphics, sound recording systems, programming languages, device controllers, and so on. Thus the criteria for evaluating the effectiveness of the user interface must be based on a measure of its capacity to adapt to the needs of the composer throughout the compositional process.


1.1.3 Performance Systems

For centuries instrument builders have labored to provide performers with acoustically precise, responsive instruments that enable them to play the best music with the least amount of effort. A poorly constructed violin can be coerced to produce beautiful tones, but it requires much greater exertion and skill by the player. Instrument inventors have also been motivated by a desire to provide composers with new sonic capabilities. The pianoforte was considered to be a novelty in the middle of the eighteenth century, but the expressive qualities offered by control of hammer velocity soon made it preferred over the harpsichord by composers and performers. (Sachs [1940] contains a detailed historical survey of musical instruments.)

Electroacoustic instruments pose new problems for the instrument designer. In addition to certain traditional modes of instrumental performance, electronic instruments offer a wide variety of new possibilities. Many synthesizers are "hard-wired," presenting the performer with a set of mechanisms such as keyboards, potentiometers, switches, foot pedals, joysticks, and thumb wheels, which control specified sound-generating and processing modules. More sophisticated synthesizers, including several new entries based entirely on digital technology, permit the user to assign the output of a controlling device (a time-varying direct-current control voltage or a stream of binary data) to different inputs of the modules.

This capacity to reconfigure the controls of the instrument manually or under program control radically alters our notions of performance. For example, advances in the design of electronic piano keyboards, such as those developed by R. Moog,¹ which track pressure sensitivity and motion in two planes along with key position and key depression velocity, provide the performer with unprecedented control mechanisms. These new mechanisms require the performer to develop new technical skills and, more important, new modes of artistic expression.

¹ The 100 Series Keyboard Controller, Big Briar, Inc., Leicester, N.C., 1982.

Computer-controlled analog and digital synthesizers offer another level of performer control. Performance data, synthesis configuration, device assignment, output channel distribution, and so on can all be prepared in advance. A performance, then, consists of varying degrees of intervention over the automatic control of the synthesizer; the operator becomes performer, composer, and conductor of all the musical forces at once.

2. DESIGN PRINCIPLES

2.1 Early Systems

The earliest interactive computer music systems were Groove, developed by Max Mathews and his co-workers at AT&T Bell Laboratories between 1968 and 1970 [Mathews and Moore 1969, 1970; Mathews and Rosler 1969], and A Computer Aid for Musical Composers, developed at approximately the same time at the National Research Council of Canada [Pulfer 1970, 1971a, 1971b; Tanner 1971]. These systems addressed the problems of the user environment in terms of real-time interaction. Both systems provided a basic set of tools for describing, manipulating, playing, and storing musical information. The devices may seem rudimentary compared with current standards of technology. As Figure 2 shows, however, each system provided the composer with a flexible and easily understood work space. In an address to the American Society of University Composers, Mathews characterizes Groove as follows: "The desired relationship between the performer and the computer is not that between a player and his instrument, but rather that between the conductor and his orchestra" [Mathews and Moore 1969a]. Groove provides instantaneous feedback through the visual display device and the loudspeakers. The musician's responses to the feedback are recorded and saved for future replays.


Figure 2. A comparison of the Groove and NRC music systems (user interface devices).

    DEVICE    GROOVE                        NRC
    VISUAL    CRT: menus, waveforms         CRT: menus, scores
    MANUAL    TTY, knobs, switches          TTY, light pen, clavier, 3-D wand
    AUDIO     real-time (analog), stereo    real-time (digital), stereo

The NRC system provided similar types of real-time interaction, but with limited digital rather than analog audio synthesis. A number of important features were incorporated which indicate that this system was very much in the vanguard of today's experimental user-interface designs:

(1) Four voices of real-time digital sound output.
(2) A color display that enabled individual music passages (voices) to be color coded during the editing procedure (red for the current voice being edited, blue for all of the background).
(3) Optional simultaneous control of an analog synthesizer called the Paramus that could play from the same score as the digital synthesis devices.
(4) Direct hard-copy output of the music in the form of a conventional musical score.
(5) All four voices displayed in conventional musical notation, scrolled horizontally across the screen at the same time that the score was being digitally synthesized. This is all the more astonishing given the state of graphics systems, computer power, and synthesis capabilities in 1971.
(6) A 61-note organ keyboard that could be performed in real time, producing four voices of notation on the screen (to the best of the system's ability).
(7) Two orthogonally mounted thumb wheels and a mouse for controlling the cursor on the graphics screen (the first application of a mouse controller in music interface design).
(8) A two-handed data entry system, with the left hand controlling a key set selecting note durations for pitches placed on the screen by the right hand, which controlled the cursor position.

The NRC system was well ahead of its time, not only with respect to the production of musical scores and sounds, but in the overall integration of computer graphics, input devices, real-time control, and digital synthesis.

There were other approaches to real-time interaction, which produced Piper [Gaburo 1973], Musys [Grogono 1973], and EMS [Wiggen 1968], all relying on computer control of analog sound generators rather than the somewhat limited real-time digital synthesis capabilities available at that time. In all cases the user had direct control over sound-modifying parameters.

2.2 The Structured Sound Synthesis Project

An important attempt to systematically investigate man-machine communication within the musical domain was made by William Buxton and a team of research assistants at the University of Toronto in 1978. The Structured Sound Synthesis Project (SSSP) borrowed from Buxton's experiences with the NRC system, POD [Truax 1977], and Piper, and from certain concepts proposed by Otto Laske [1978]. Buxton's efforts have established some fundamental criteria for effective music interface design.


These first appeared in the report Design Issues in the Foundation of a Computer Based Tool for Music Composition [Buxton 1978]. This report was followed by Buxton [1981], Buxton et al. [1978a, 1978b, 1979, 1980, 1982], and Fedorkow et al. [1978], all of which address user-interface issues to some extent. The major features of the user interface are summarized in the following sections.

2.2.1 SSSP Graphics Display

The graphics interface provides the basis for nearly all interaction with the music system. Commands and graphics actions such as score editing and sound synthesis specification have been organized into a menu-driven format combining iconic selection and typed responses. Since one of the primary objectives of the SSSP has been to minimize the learning curve for musically sophisticated but computer-naive users, most of the dialog is system initiated and is couched in familiar musical terms and symbols.

Most communications are executed through manipulation of a cursor controlled by a tablet input device, thus reducing the need for typed input. Actions by the user always invoke a response from the system (unlike UNIX² [Ritchie and Thompson 1974], for example, where an absence of response from the operating system usually means that the input was syntactically correct and that the request is being serviced). The most significant feature of the graphic interface is that all of the actions needed to compose and listen to a musical event appear as symbolically informative images. An example of the display during a session of editing waveforms for the synthesis process is shown in Figure 3.

² UNIX is a trademark of AT&T Bell Laboratories.

[Figure 3. The SSSP interactive display, showing waveform and function menus, wave-shaping and note-mode controls, pitch/volume/duration fields, and the current working objects. (Reproduced from Buxton [1981, p. 54].)]

2.2.2 SSSP Sound Synthesis Interface

Audio feedback from the SSSP is instantaneous. User input and internal data are organized and stored in efficient data structures so that performance specifications, called m-events, can be interpreted directly by the synthesizer control processor and subsequently by the synthesizer units. An important feature of the synthesis interface is the generous use of defaults in the data specifications. Users may focus on certain aspects of the composition process, such as entering, testing, and correcting the notes and rhythms, without having to be concerned with other parameters. This approach is very helpful for the novice as well as convenient for the experienced user.

2.2.2.1 The SSSP Conduct System. Several unique devices and control structures have been designed and implemented by the SSSP team that can be used to control the output of the synthesizer during playback of prepared musical events. Live performance using the SSSP Conduct system generally involves the manipulation of these devices in a fashion similar to Mathews's Groove system. As it would be highly impractical to move the graphics display system to each performance site, programs were devised that enabled the user to rapidly execute commands at a standard video display terminal from the following motion-sensitive input sources:

(1) two continuously variable sliders;
(2) an x-y mouse (actually a digitizing pad and tracker);
(3) an x-y touch-sensitive pad;
(4) a clavier (piano) keyboard;
(5) four variable-slope, straight-line segments for control of output rates or amplitude contours, implemented in the Conduct software.

Other real-time user variables include simple switches, set by placing the cursor over a label and depressing the appropriate button on the mouse, and continuously variable parameters set by direct typing, by depressing a button to invoke the "last-typed" value or the "default" value, or by dragging, whereby the cursor is placed over a parameter field and, by moving the mouse vertically, the current value is shifted appropriately.

The most interesting feature of Conduct is that the parameters that describe musical attributes such as pitch, duration, and timbre selection may be arbitrarily grouped together. Each motion detected from one of the many input channels can modify a set of parameters.

According to Buxton, "thus, any transducer can control many parameters, all having different instantaneous values, without any concern for context" [Buxton 1981, p. 178]. These features provide the performer a great deal of freedom and substantially reduce the effort and time needed to effect a direct, real-time response from the synthesis/playback system.
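The grouping idea can be sketched as a table that routes one input channel to an arbitrary set of entries in a shared parameter array. This is only an illustration of the concept under assumed names and scale factors; it is not the SSSP Conduct implementation.

```c
/* One transducer driving an arbitrary group of performance parameters.
 * Sketch of the idea only; parameter names and scale factors are assumed,
 * not taken from the SSSP Conduct software. */
#include <stdio.h>

#define N_PARAMS 4
enum { PITCH, DURATION, AMPLITUDE, TIMBRE };

static double params[N_PARAMS] = { 60.0, 0.5, 0.8, 0.0 };

struct mapping {
    int    targets[N_PARAMS]; /* which parameters this channel modifies */
    int    n_targets;
    double scale[N_PARAMS];   /* per-target scaling of the incoming motion */
};

/* Apply one unit of motion from an input channel to its parameter group. */
static void apply_motion(const struct mapping *m, double delta)
{
    for (int i = 0; i < m->n_targets; i++)
        params[m->targets[i]] += delta * m->scale[i];
}

int main(void)
{
    /* A slider that raises amplitude and shortens duration together. */
    struct mapping slider = { { AMPLITUDE, DURATION }, 2, { 0.1, -0.05 } };

    apply_motion(&slider, 1.0);   /* one increment of slider motion */
    printf("amplitude %.2f, duration %.2f\n", params[AMPLITUDE], params[DURATION]);
    return 0;
}
```

Because each channel carries its own target list and scaling, the same physical motion can mean different things under different groupings, which is the freedom the quoted passage describes.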
2.2.3 Contributions of the SSSP to User-Interface Design

The achievements of the SSSP team may be assessed by the four basic criteria established in the introduction to this article:

(1) Rather than assume a priori knowledge about music interfaces and music creativity in general, the authors designed a system that could be easily adapted according to observed user preferences. The "quick and dirty" approach [Buxton 1978a, p. 5] reduces the commitment to work that has been completed, thus encouraging the designers to improve the system in response to the user community.

(2) Unique, special-purpose interface mechanisms were designed and implemented. The Conduct system offers a set of interface techniques which have no direct counterpart outside of music. Of particular interest are Buxton's experiments with nonkeyboard and "one-handed" input devices, which are useful in live-performance settings with multiple input devices, and with a position-sensitive surface, which in one mode serves as a "drum" and in another as a set of "potentiometers" for controlling performance parameter levels.
edge about music interfaces and music crea- mode serves as a “drum” and in another as

(3) The system is presented to the user in the form of a "software onion," permitting the composer access to as many of the details of the processes as is deemed suitable. Wherever possible, default values are provided, permitting the user to defer decisions. The wide range of interface methods also contributes to accommodating individual work habits.

(4) The use of iconic display menus, familiar music terminology, highly streamlined command sequences, and gestural input mechanisms (tablet, mouse, sliders, etc.) greatly reduces the barriers between music and computers.

2.3 Other Real-Time Control Devices

A comprehensive approach to computer music production is under development at Lucasfilm, Inc. The objectives of the digital audio group (under the direction of James A. Moorer) encompass all aspects of musical activity: composition, synthesis, analysis, signal processing, sound recording, editing, processing and mixing, and score preparation. The first step was the design and construction of the Lucasfilm Audio Signal Processor [Moorer 1982a]. A general-purpose interface was required to serve musicians and recording engineers alike. Two articles have been published that report on the human engineering problems, "Remembering Performance Gestures" [Abbott 1982] and "The Lucasfilm Real-Time Console for Recording Studios and Performance of Computer Music" [Snell 1982].

Abbott recognizes that, in general, studio engineers have ". . . evolved ways of thinking about memorizing and editing gestures" [Abbott 1982, p. 1] and may be reluctant to formulate new work habits, whereas computer music composers are more often interested in exploring new means for musical expression. The basic approach has been to identify parameter definition problems, defined as the kinds of processing between the user input and the synthesis/processing device, and to divide these into a gesture objectification problem and a parameter-mapping problem. As a general strategy, gesture objectification (the acquisition and utilization of real-time gestures) has been modeled as a set of time-varying functions, much like the Groove system described above. An important provision of the Lucasfilm system is that all actions can be "remembered" (i.e., stored in memory) for recall at a later time. Although other systems have included this feature, it has been central to the design strategies of the digital sound project at Lucasfilm.

The operator of the digital recording system at Lucasfilm is provided with a high-resolution bit-mapped screen that can be used for displaying attributes of the acoustical signal (time-domain and spectral-domain representations), musical scores, and various aspects of the mixing procedure. An example of the latter is the display of icons representing prerecorded events stored on disk, which can be visually time-synchronized to other events and to the film frame count.

A device demonstrated at the 1983 International Computer Music Conference at the Eastman School of Music in Rochester, New York, offered a different approach to music control devices. Steve Hafflich displayed a prototype system [Hafflich and Burns 1983] in which the motions of an orchestral conductor were detected from high-frequency audio signals emitted by a specially built baton. The sonar signals were tracked and displayed on a graphics terminal. Thus the motion of the conductor's baton through space was converted into time-varying vectors, which could then be used to control various aspects of the synthesis process in real time.
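The common thread in Groove-style "remembered" actions and the baton tracker is that a physical motion is sampled into a stored time-varying function which can later be read back as control data. The sketch below illustrates that idea only; the 100-Hz control rate and the synthetic gesture are assumptions, not parameters of either system.

```c
/* Recording a control gesture as a sampled time-varying function
 * and reading it back later.  Sketch only; the 100-Hz control rate
 * and the synthetic gesture are assumptions, not system parameters. */
#include <stdio.h>
#include <math.h>

#define CONTROL_RATE 100          /* control samples per second */
#define SECONDS      2
#define N_SAMPLES    (CONTROL_RATE * SECONDS)

static double gesture[N_SAMPLES]; /* the "remembered" function */

/* Stand-in for reading a slider, wheel, or baton position. */
static double read_transducer(double t)
{
    const double PI = 3.141592653589793;
    return 0.5 + 0.5 * sin(2.0 * PI * 0.5 * t);  /* a slow swell */
}

int main(void)
{
    for (int i = 0; i < N_SAMPLES; i++)          /* record the gesture */
        gesture[i] = read_transducer((double)i / CONTROL_RATE);

    /* replay: look up the stored value at an arbitrary time */
    double t = 1.25;
    double v = gesture[(int)(t * CONTROL_RATE)];
    printf("stored gesture value at t = %.2f s: %.3f\n", t, v);
    return 0;
}
```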
3. SYSTEMS SURVEY

3.1 Composition and Synthesis Languages

Numerous software packages have been written for applications in music composition, music analysis, sound synthesis, and sound manipulation. In most cases musical or acoustical information is specified as alphanumeric data. The increased availability of medium- and high-resolution display terminals has contributed to the development of all aspects of music-related interface design, especially score representations. New products for the IBM PC and Apple Macintosh computers offer a wide variety of applications software for music notation display and editing, real-time control of music synthesizers, and the manipulation of digitally recorded sound.


3.1.1 Sound Synthesis Languages

Sound synthesis languages have been surveyed elsewhere in this publication and do not properly belong to a discussion of music interfaces (see Loy and Abbott [1985]). The capabilities of synthesis software (or hardware), however, affect the complexity and run-time requirements of the music system, as well as the complexity of the programs or devices that supply the necessary parametric data. A general-purpose, programmable synthesis language, such as Music10 [Tovar and Smith 1977] or MUSBOX [Loy 1981], offers much greater flexibility than "hard-wired" systems with a limited number of active parameters, such as the SSSP synthesizer. Whether implemented in hardware or software, though, the purpose of the user interface is to provide time-ordered lists of synthesis descriptors, time-varying functions, and other performance parameters used to control the synthesis process.

3.1.2 Music Data Encoding and Manipulation

A list and brief description of the principal music data-encoding and data-manipulation languages follows. The mode of communication in each of these schemes is alphanumeric text, but their input specifications and operation syntax are designed for markedly different user strategies. More extensive discussion of music languages and preprocessors can be found in Pennycook [1983a], Hiller and Isaacson [1970], and Roads [1985].

DARMS. DARMS [Brinkman 1983; Erickson 1975] is a music-encoding scheme used primarily by music theorists to enter data for statistical, thematic, and structural analyses. Musical symbols such as pitches, rhythms, articulations, and measures are assigned codes and/or values.

FORMES. Another approach to music language design is represented by FORMES [Cointe and Rodet 1983], a music-programming language written in VLISP [Chailloux 1978]. This object-oriented language adds a time-dimension component called a dynamic-calculation tree to program and data constructs borrowed from other object-oriented languages such as Smalltalk [Kay and Goldberg 1976]. A discussion of Smalltalk in a music context appears in Lieberman [1982].

MPL. "Musical Program Library is a comprehensive system for processing musical information" [Nelson 1980]. MPL is based on APL [Iverson 1962] and consists of a work space and a set of functions for organizing and transforming musical data in single-dimension vectors and in 4 × n matrices, where the four rows represent a set of parameters used to control a synthesis process. Musical graphics can be displayed on a Tektronix 4013 or printed on a Calcomp 563 plotter and a Diablo Hyterm II printer/plotter.

PlaComp. The PlaComp system is a set of integrated languages and command structures for composing, editing encoded scores, describing and storing synthesis data, and synthesizing music [Murray and Beauchamp 1978; Peters 1975]. PlaComp utilizes the PLATO resources developed at the University of Illinois to facilitate user input and to display music and synthesis data.

Pla. Pla [Schottstaedt 1983] is a music programming language based primarily on SAIL [Reiser 1976] and LISP [Weinreb and Moon 1981] constructs. The principal contribution of Pla to user-interface designs is the incorporation of structured programming constructs and the LISP construct Flavors [Wood 1982]. Further discussion of Pla can be found in Pennycook [1983a], Roads [1985], and Loy and Abbott [1985].

POD. The POD system was developed by Truax [Truax 1977; Truax and Barenholtz 1977] at the Institute of Sonology, Utrecht, on a PDP-15. The current version forms the basis of a PDP-11/DMX-1000 [Walraff 1979] real-time composition and synthesis system called PODX [Truax 1985].


POD is based on an aesthetic and pedagogical preference for compositional strategies that focus on the specification of time-varying functions, which control stochastic distributions of several musical parameters (pitch, timbre, vertical density, amplitude, spatial distribution). This strategy provides the composer with strong, active parameters that produce immediate auditory results, as opposed to weak, general systems that often frustrate the composer with system complexities and poor response turnaround times. Figures 4 and 5 illustrate two different sets of probability masks controlling sound density, frequency, and object (sound synthesis routine) selection.

[Figure 4. POD mask: time-varying boundaries for envelope/object selection and frequency (Hz). (Reproduced from Truax [1977, p. 30].)]

[Figure 5. POD mask: time-varying boundaries for frequency (Hz) and density (sounds/sec). (Reproduced from Truax [1977, p. 30].)]
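The mask idea can be illustrated with a short sketch: at each event time a parameter value is drawn uniformly between a lower and an upper boundary function, so the envelope pair bounds the stochastic distribution. The boundary curves and times below are assumptions for the example and are not the masks of Figures 4 and 5.

```c
/* Drawing event frequencies from a time-varying probability mask:
 * at time t the value falls uniformly between a lower and an upper
 * boundary curve.  Sketch only; the boundary curves are assumptions,
 * not the masks shown in Figures 4 and 5. */
#include <stdio.h>
#include <stdlib.h>

/* Lower and upper frequency boundaries (Hz) over a 90-second span. */
static double lo_bound(double t) { return 100.0 + 2.0 * t; }
static double hi_bound(double t) { return 400.0 + 8.0 * t; }

static double uniform(double a, double b)
{
    return a + (b - a) * ((double)rand() / RAND_MAX);
}

int main(void)
{
    srand(1985);
    for (double t = 0.0; t <= 90.0; t += 15.0) {
        double f = uniform(lo_bound(t), hi_bound(t));
        printf("t = %4.0f s  mask [%6.1f, %6.1f] Hz  chosen %6.1f Hz\n",
               t, lo_bound(t), hi_bound(t), f);
    }
    return 0;
}
```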
Score. Score [Smith 1972] is a FORTRAN program that reads a file of codes and values derived from musical symbols and produces a time-ordered list of parametric data for a variety of synthesis languages. Although the input formats are rigid and somewhat restrictive, Score has been successfully used for many important computer music compositions. There are several similar music-encoding and -manipulation languages modeled on Score, such as Scot [Good 1978] and Score11 [Brinkman 1981]. Score and its descendants use terminology that is familiar to musicians and automate repetitive tasks. This simplifies the work of translating musical information into data that can drive a graphics device or control a synthesis device. Sample input and output files are shown in Figure 6.

Figure 6. Score input and output files. (Reproduced from Pennycook [1983a, p. 9].)

    <input to Score>
    TOOT 0 1 8;                   instrument 1 plays 8 notes from time 0.
    P2 RHY/8/16//8//4//2;         8th, 2 16ths, 2 8ths, 2 quarters, 1 half note.
    P3 NOTES/C4/D/E/F/G/A/B/C5;   C-major scale starting on middle C.
    P4 500;                       amplitude (2048 maximum conversion word size).
    P5 F3;                        F1 and F3 are names of function tables.
    P6 F1;
    END;

    <output - first note only>
    PLAY;
    TOOT 0.00 .125 C F3 F1;       parameter fields designate the instrument name,
                                  start time in seconds, note duration in seconds,
                                  pitch, functions.

Further discussion of music languages and their properties can be found in Hiller [1972], Krasner [1980], Pennycook [1983a], Roads [1985], and Loy and Abbott [1985].

3.2 Graphics Score Editing

Reduced cost and increased resolution of bit-mapped display systems are major factors in the further development of interactive score-editing tools. Pointing systems (mice, track balls, etc.) offer increases in user throughput similar to those achieved in menu-driven text-editing systems.

MS. MS was developed by Smith in the 1970s [Smith 1973]. Musical notation is graphically displayed on a vector screen. Commands and data are entered directly from the keyboard or indirectly from files. The output can be produced several times larger than life size, which yields sharp, high-quality print when photoreduced.

Scriva. Scriva [Buxton et al. 1979] is the score-preparation software incorporated in the SSSP system (see Buxton et al. [1982]). The user employs a digitizing tablet and mouse to manipulate symbols and text displayed on an interactive vector terminal. Figure 7 shows an example of the notational flexibility of the screen formats.

[Figure 7. Example of a Scriva screen format. (Reproduced from Buxton et al. [1979, p. 23].)]

Mockingbird. Mockingbird [Maxwell and Ornstein 1983] is described as a powerful scribe rather than a creative tool (see Figure 8; see also Roads [1981]). Music is performed on and played back through a Yamaha CP-30 synthesizer, and displayed on the high-resolution Dorado [Lampson et al. 1981] terminal. High resolution, large memory space, very fast processing speeds, and mouse-activated editing capabilities similar to those developed for text at Xerox Palo Alto Research Center provide the musician with a powerful music notation editor. The system is somewhat restrictive in that only piano-type scores (one treble clef and one bass clef) are supported. The elegance, efficiency, and great speed of Mockingbird, however, make it a worthy model for further developments.

[Figure 8. Mockingbird notation. (Reproduced from Roads [1981, p. 58].)]

Gregory's Scribe. A well-designed, inexpensive system for encoding and printing traditional music notation is described by Crawford and Zeef [1983]. The user interacts with an Apple computer through a low-cost digitizing tablet. Output is printed on a dot-matrix printer at a resolution high enough for performance applications. This system offers several traditional music notation formats and facilities for transcribing them to modern music notation.

Musprint. Musprint [Hamel 1984] is a new software package written by Keith Hamel for the Apple Macintosh. The author constructs musical symbols using the MacPaint program and then (like the SSSP tools) uses a mouse to place them on the bit-mapped screen. Although the resolution of the Macintosh is much lower than that of a terminal such as the Dorado, the vertical resolution of the output is a function of the printer. Hence dot-matrix output that shows discontinuities in oblique and curved lines appears smooth when printed on a laser-jet type device. A reproduction from one of the Musprint manual pages appears in Figure 9.

[Figure 9. Musprint notation (dot-matrix output). (Reproduced from Hamel [1984, p. 5-4].)]

Personal Composer. Personal Composer [Miller 1984] is a software package for the IBM PC. It requires a Hercules™ monochrome graphics card, which increases the screen resolution of an IBM PC to 820 × 640 pixels, to obtain sufficient display quality for music notation. Of greater interest here, though, is that this product includes a direct interface to MIDI data (see Section 3.3). Music performed on a piano-type keyboard that can generate MIDI data can be directly displayed as musical notation, a highly desirable feature. Conversely, any of the 32 channels of music notation can be converted to MIDI data and heard through a synthesizer equipped to receive MIDI.

3.3 Performance Instruments

Digital performance devices can be divided into categories based on their degrees of complexity and user access to the internal operations of the hardware.

Digital Music Systems. Digital music systems, such as the New England Digital Synclavier II [Appleton 1977; Lehrman 1984], the Fairlight Computer Music Instrument, and the Crumar General Development System [Kaplan 1981], offer the performer a wide range of user-accessible capabilities. These include digital recording, graphics score editing and signal manipulation tools, fully programmable synthesis, and composition software. Although they are rather expensive ($10,000-$50,000), these systems are being used in computer music research studios and commercial recording studios. The capacity to record, store, edit, process, and play back musical tones has made machines like the Fairlight and Synclavier attractive as cost-effective commercial tools.

Portable Digital Keyboard Devices. Commercial synthesizers range from simple, inexpensive portables from major manufacturers such as Yamaha, Casio, and Tandy Corporation to sophisticated programmable models for professional studio use, such as the Crumar Synergy [Kaplan 1981] and the new Yamaha digital keyboard pianos and synthesizers. Most of the professional models permit the player to store preset signal treatments, record and play back performance actions, and augment the acoustical resources and control mechanisms from external sources. A recent market entry, the Kurzweil Piano,³ achieves a nearly perfect reproduction of an acoustic piano through a patented method. (Information on digital synthesizers is presented regularly in Keyboard Magazine and The Computer Music Journal. A broad survey of commercial synthesis equipment appeared in the February 1981 issue of Studio Sound.)

³ Kurzweil Music Systems, 411 Waverly Oaks Rd., Waltham, Mass.

Microcomputer Peripherals. Several manufacturers offer music systems for personal computers. These range from software options, which use the internal speaker of the machine, to digital synthesis boards that use piano keyboards and graphics packages. Although very few design innovations for the user interface have accompanied these products, they have integrated keyboard performance, programmable sound-synthesis specification, composition aids, multitrack recording and playback of performance gestures, and music instruction in streamlined, low-cost packages, making computer music available to the microprocessor consumer.
The most popular of these systems are MusicSystem, manufactured by Mountain Computer Corporation; Alpha Syntauri, from Syntauri Corporation; and Soundchaser, by Passport Designs Incorporated. An interesting design for an interactive network of microprocessors to produce music appeared in Bischoff et al. [1978], but this is not offered commercially.

Nonkeyboard Instruments. Products such as the Drumulator, Lynn Drum, and Drummatics offer a range of drum performance capabilities, including programmable sequencing (from reiterative patterns to entire songs), digital or analog drum synthesis, and programmable playback of digitized drums (see Anderton [1983]). The user interfaces for these machines vary from switch and rotary-knob controls to touch-sensitive surfaces played with the hands or drum sticks. As with the keyboard devices, performed gestures can be captured, stored, and replayed under program control in some machines. Active parameters include tempo, the amplitude of each drum or cymbal sound in the pattern, accent selection, rhythmic characteristics, and, in the more expensive units, pitch and signal-treatment controls.

Custom-Built Devices. Many noncommercial performance devices and peripherals have been constructed to fulfill unique compositional needs. Buxton's SSSP Conduct system described above represents one strategy for controlling musical actions in a live-performance setting. Another interesting system is Drapes/Events [Scheidt 1983], in which a set of active musical parameters such as timbral characteristics, note density, rhythmic rate, and volume are displayed as horizontal bars on an Apple II terminal. The user can manipulate these graphs/parameters with a few keystrokes, which result in instantaneous modifications to the sounds produced by a custom-built digital synthesizer. Numerous other devices have been constructed and demonstrated [Bartlett 1979; Mathews and Abbott 1980; Buxton, personal communication, 1984]. Another interesting development is a set of electronic, solid-body, stringed instruments developed by R. Armin in Toronto [Emerson 1984]. Although the output is analog, these fine instruments offer unprecedented precision and audio signal quality in the area of violin, viola, and cello simulation.

MIDI. The Musical Instrument Digital Interface is described by Junglieb [1983] and Wright [1983]. A more probing analysis of the problems and promises of the MIDI system appeared in Milano [1984]. MIDI is a hardware and software interface system for connecting a microprocessor to several music synthesis instruments over a daisy-chained 31.25-kbaud serial link. At the lowest level, the music interface is a protocol specification for distributing musical data. At the user level, it is a set of software packages for describing sequences of musical events. Up to 16 channels of control data can be distributed at a rate fast enough to avoid timing discrepancies among the various devices. Many new software products for Commodore, Apple, and IBM computers are currently available from major electroacoustic instrument manufacturers such as Roland, Yamaha, Sequential Circuits, Korg, and Oberheim. The MIDI interface has permitted the decoupling of the computer from the performance/synthesis instrument, thus opening up the market for more specialized products. Ultimately, the presence of a mechanism that permits manufacturers to concentrate on the technology of performance devices, free from the concerns of the computer interface, will yield better products.
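At the protocol level a MIDI channel message is only a few bytes: a status byte carrying the message type and a 4-bit channel number, followed by one or two data bytes. The sketch below assembles a note-on message to show the byte layout; it performs no actual serial output, and the note and velocity chosen are arbitrary.

```c
/* Assembling a 3-byte MIDI note-on message.  At 31.25 kbaud with
 * 10 bits per byte on the wire, each byte takes 320 microseconds,
 * so a complete note-on occupies about 1 ms of link time.
 * Sketch only; no actual serial output is performed. */
#include <stdio.h>

typedef unsigned char byte;

/* channel 0-15, note and velocity 0-127 */
static void midi_note_on(byte channel, byte note, byte velocity, byte msg[3])
{
    msg[0] = (byte)(0x90 | (channel & 0x0F)); /* status: note-on + channel */
    msg[1] = (byte)(note & 0x7F);             /* key number, middle C = 60 */
    msg[2] = (byte)(velocity & 0x7F);         /* key velocity */
}

int main(void)
{
    byte msg[3];
    midi_note_on(0, 60, 100, msg);            /* middle C on channel 1 */
    printf("note-on bytes: %02X %02X %02X\n", msg[0], msg[1], msg[2]);
    return 0;
}
```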
3.4 Digital Audio Processing Tools

Interfaces developed for music and speech signal processing serve a wide range of purposes. In most cases, interactive graphics displays of signal representations form the basis of the interface. From the software-based "all-digital" sound studio developed at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford [Moorer 1977], to real-time systems such as the Lucasfilm audio system [Moorer 1982a, 1982b], the primary interface goals have been to provide error-free digital signal editing, digital signal processing, splicing, and mixing tools.

Other approaches include the New York Institute of Technology Digital Editor [Kowalski and Glassner 1982]; a system designed in conjunction with the British Broadcasting Corporation [McNally 1979]; interactive tools designed for the Institut de Recherche et Coordination Acoustique/Musique, Paris [Abbott 1978]; Gcomp [Banger and Pennycook 1983], an interactive graphics package for specifying and editing time-varying sound, file processing, and mixing control functions; and various commercial packages described above that include digital recording and playback.

Another class of interfaces has emerged from research in acoustics, psychoacoustics, and speech recognition. These tools provide graphics displays of time-domain and frequency-domain representations (especially those derived from the Fourier transform and phase vocoder analyses [Moorer 1977; Piszczalski 1979]). An example of a violin tone analyzed by the phase vocoder is shown in Figure 10. Several significant advances in our understanding of psychoacoustics are in part attributable to the development of interactive graphics tools for manipulating signal representations [Chafe et al. 1982; Grey 1975; Strawn 1980].

[Figure 10. Perspective plot of amplitude × harmonic number × time for a violin tone. The fundamental harmonic is plotted in the background, with higher frequency harmonics in the foreground. (Reproduced from Grey [1975, p. 122].)]

The speech recognition research team at the Massachusetts Institute of Technology has been developing powerful graphics aids on a LISP machine [Roads 1983; Weinreb and Moon 1981], which, when coupled with high-resolution hard-copy printers, greatly enhance productivity. Noninteractive real-time display capabilities have been added to digital instrumentation and signal analysis products, such as those offered by leading audio instrumentation manufacturers like Bruel and Kjaer and Texas Instruments.


like Bruel and Kjaar and Texas Instru- detailed study, The Role of Graphics in
ments. Computer Aided Instruction in Music
A trend toward small, powerful UNIX- [Pennycook 1983b], offers a detailed dis-
based computers has provided the means cussion of the most widely distributed
to produce specialized audio workstations packages.
with capabilities previously limited to large A new product built in Canada, the Ex-
computers. One such system, the ORFIAS cercette [Sallis 19831, offers instantaneous
32/320 [Pennycook et al. 19851, utilizes pitch detection for training in solfkge and
a Texas Instruments TMS32020 digital- voice intonation. A pitch is displayed in
signal-processing microprocessor within Common Musical Notation on a medium-
a commercially available 32-bit com- resolution screen and played by the synthe-
puter. The signal-processing power of the sis mechanism. The student responds to a
TMS32020 is enhanced by other circuitry request for a new pitch by singing into a
that provides interfaces for disks and audio microphone. The response is promptly dis-
conversion systems as well as numerous played with an indication of accuracy (cor-
oscillators. In addition to an extended rect, high, low). The user can select four
UNIX command structure, the user inter- degrees of error resolution ranging from
face offers graphics aids for signal manip- +/- one-quarter tone to “just detectable.”
ulation. The capacity of the device is currently
There is a host of commercially available digital sound-processing products in use in both the recording studio and in live performance. These include digital delay units, phasors, flangers, reverberation systems, harmonizers (which generate one or more pitches in parallel at a specifiable interval to the input signal), and units that combine one or more of these effects. Manufacturers' attempts to provide user-friendly control panels are of particular interest. For example, the Ursa Major Space Station, manufactured by Ursa Major, Belmont, Massachusetts, offers both a variety of reverberation and delay effects and custom-built circuit boards that simulate the reverberation characteristics of specific rooms. The front panel reduces the complex description of reverberation specifications to a set of rotary knobs that control the delay taps and switches for selecting programmed settings.
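The parallel voices generated by the harmonizers mentioned above follow a simple equal-tempered pitch relation. The short Python sketch below is purely illustrative; the function names are assumptions and do not correspond to any commercial unit's interface. It shows the frequency ratio implied by an interval specified in semitones.

    # Illustrative sketch of the pitch relation a harmonizer realizes:
    # one or more voices at specifiable intervals from the input pitch.
    def interval_ratio(semitones: float) -> float:
        # Equal-tempered frequency ratio, e.g. +7 semitones -> a fifth up.
        return 2.0 ** (semitones / 12.0)

    def parallel_pitches(input_hz: float, intervals: list[float]) -> list[float]:
        # Frequencies of the parallel voices for a given input frequency.
        return [input_hz * interval_ratio(s) for s in intervals]

    # A 440-Hz input harmonized a fifth up and a fourth down:
    print(parallel_pitches(440.0, [7, -5]))   # approximately [659.3, 329.6]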
3.5 Computer-Aided Instruction in Music Systems

Several entries in the computer-aided instruction field have used interface techniques developed for text editing and graphics display of musical notation. Menu-driven systems substantially reduce the learning curve for the user, an extremely important consideration in the design of interactive instructional software. A detailed study, The Role of Graphics in Computer Aided Instruction in Music [Pennycook 1983b], offers a discussion of the most widely distributed packages.

A new product built in Canada, the Excercette [Sallis 1983], offers instantaneous pitch detection for training in solfège and voice intonation. A pitch is displayed in Common Musical Notation on a medium-resolution screen and played by the synthesis mechanism. The student responds to a request for a new pitch by singing into a microphone. The response is promptly displayed with an indication of accuracy (correct, high, low). The user can select four degrees of error resolution ranging from +/- one-quarter tone to "just detectable." The capacity of the device is currently being expanded to include pitch sequences (melodies).
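The accuracy judgment described above can be summarized in a few lines of code. The Python sketch below is a hypothetical reconstruction, not the Excercette's actual software: it expresses the deviation of the sung pitch from the requested pitch in cents and reports it as correct, high, or low under a selectable tolerance (a quarter tone is 50 cents; the intermediate settings and the "just detectable" threshold are assumed values).

    import math

    # Assumed error-resolution settings, in cents; only the quarter-tone
    # endpoint is documented above, the remaining values are placeholders.
    TOLERANCES = {"quarter-tone": 50.0, "medium": 25.0,
                  "fine": 12.0, "just-detectable": 5.0}

    def cents_off(sung_hz: float, target_hz: float) -> float:
        # Signed deviation of the sung pitch from the target, in cents.
        return 1200.0 * math.log2(sung_hz / target_hz)

    def judge(sung_hz: float, target_hz: float, resolution: str = "quarter-tone") -> str:
        # Classify the response as the display would: correct, high, or low.
        deviation = cents_off(sung_hz, target_hz)
        if abs(deviation) <= TOLERANCES[resolution]:
            return "correct"
        return "high" if deviation > 0 else "low"

    # A student sings 452 Hz against a requested A (440 Hz):
    print(judge(452.0, 440.0))                     # "correct" at quarter-tone resolution
    print(judge(452.0, 440.0, "just-detectable"))  # "high"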
Musicland [Lamb 1982] is a music instruction system available for the Alpha Syntauri system. (A description of the University of Canterbury computer music system in which Lamb first formulated certain aspects of his methodologies appears in Frykberg and Bates [1978].) The Musicland system is unique in that musical constructs are assembled by the manipulation of colored boxes with a mouse input device. The boxes contain musical fragments that can be presented to the student from a library or constructed with free-hand graphics drawing routines. The interface allows a young person to experiment with compositional constructs before encountering the complex vocabulary and syntax of music.
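One way to picture the constructs Musicland manipulates is as boxes that carry note lists and can be combined into larger units. The Python sketch below is an assumed, simplified model; the type names and the chaining operation are illustrative and are not drawn from Lamb's system.

    from dataclasses import dataclass, field

    @dataclass
    class Note:
        pitch: int        # numeric pitch code
        duration: float   # duration in beats

    @dataclass
    class Box:
        color: str
        notes: list = field(default_factory=list)   # the musical fragment the box contains

    def chain(a, b):
        # Sequence one box's fragment after another's to form a larger construct.
        return Box(color="combined", notes=a.notes + b.notes)

    # A library fragment combined with a freely drawn fragment:
    library_box = Box("red", [Note(60, 1.0), Note(64, 1.0), Note(67, 2.0)])
    drawn_box = Box("blue", [Note(65, 0.5), Note(62, 0.5)])
    print(len(chain(library_box, drawn_box).notes))   # 5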
4. CONCLUSIONS

In this survey a number of fundamental criteria in the design of music interfaces have been established. The success and effectiveness of each of the implemented systems described here are the result of expertise and insight in both music and computer engineering. Many of the interface designs developed in other fields, especially document preparation, have contributed to music interface design by defining the man-machine communications issues.

Current research trends suggest that the role of the "artist in the laboratory" [Buxton 1984] has been grossly underestimated, and that the extreme demands placed on technology by musical requirements should serve as a suitable model for the man-machine interface.

The design requirements of musical information and performance, however, have required unique solutions. Although achievements in reliable real-time music hardware implementations have had the most pronounced commercial impact, efforts to solve all aspects of music interface specification have exposed some general interface problems and pointed the way toward substantive solutions for all user interface designers.

ACKNOWLEDGMENTS

The author wishes to thank Curtis Abbott, Guest Editor of this issue of Computing Surveys, for the opportunity to serve as a contributor and for his advice and encouragement, and William Buxton for his valuable comments, information, and access to his personal library.

REFERENCES

ABBOTT, C. 1978. A software approach to interactive processing of musical sounds. Comput. Music J. 2, 2, 19-23.
ABBOTT, C. 1982. Remembering Performance Gestures. Lucasfilm Ltd., San Rafael, Calif.
ANDERTON, C. 1983. Digital drums: An overview. Polyphony 8, 5, 22-26.
APPLETON, J. 1977. Problems of designing a composer's language for digital synthesis. Audio Eng. Soc. Preprint No. 1230, Audio Engineering Society, New York, pp. 1-5.
BAECKER, R. 1980. Human-computer interactive systems: A state-of-the-art review. In Processing of Visible Language II, P. Kolers, E. Wrolstad, and H. Bouma, Eds. Plenum, New York, pp. 423-444.
BANGER, C., AND PENNYCOOK, B. 1983. Gcomp: Graphics control of mixing and processing. Comput. Music J. 7, 4, 33-39.
BARTLETT, M. 1979. A microcomputer controlled synthesis system for live performance. Comput. Music J. 3, 1, 25-37.
BISCHOFF, J., GOLD, R., AND HORTON, J. 1978. Music for an interactive network of microprocessors. Comput. Music J. 2, 3, 24-29.
BRINKMAN, A. L. 1982. Data structures for a Music-11 preprocessor. In Proceedings of the 1981 International Computer Music Conference (Denton, Tex.). Computer Music Association, San Francisco, pp. 178-196.
BRINKMAN, A. 1983. A design for a single-pass scanner for the DARMS coding language. In Proceedings of the 1983 International Computer Music Conference (Venice, Italy). Computer Music Association, San Francisco, pp. 7-30.
BUXTON, W. 1978. Design issues in the foundation of a computer-based tool for music composition. Tech. Rep. CSRG-97, Computer Systems Research Group, Univ. of Toronto, Toronto, Ontario, Canada.
BUXTON, W. 1981. Music software user's manual. Tech. Rep. CSRG-22, Computer Systems Research Group, Univ. of Toronto, Toronto, Ontario, Canada.
BUXTON, W. 1983. Lexical and pragmatic considerations of input structures. Comput. Graphics 17, 1, 31-37.
BUXTON, W. 1984. The role of the artist in the laboratory. Unpublished manuscript, Computer Systems Research Institute, Univ. of Toronto, Toronto, Ontario, Canada.
BUXTON, W., REEVES, W., BAECKER, R., AND MEZEI, L. 1978a. The use of hierarchy and instance in a data structure for computer music. Comput. Music J. 2, 4, 10-20.
BUXTON, W., FOGELS, E. A., FEDORKOW, G., SASAKI, L., AND SMITH, K. C. 1978b. An introduction to the SSSP digital synthesizer. Comput. Music J. 2, 4, 23-38.
BUXTON, W., SNIDERMAN, R., REEVES, W., PATEL, S., AND BAECKER, R. 1979. The evolution of the SSSP score editing tools. Comput. Music J. 3, 4, 14-25.
BUXTON, W., REEVES, W., FEDORKOW, G., SMITH, K. C., AND BAECKER, R. 1980. A microcomputer-based conducting system. Comput. Music J. 4, 1, 8-21.
BUXTON, W., PATEL, S., REEVES, W., AND BAECKER, R. 1982. Scope in interactive score editors. Comput. Music J. 5, 3, 50-56.
BYRD, D. 1977. An integrated computer music software system. Comput. Music J. 1, 2, 55-60.
CHADABE, J. 1977. The nature of the landscape within which computer music systems are constructed. Comput. Music J. 1, 3, 5-11.
CHADABE, J. 1983. Interactive composing. Albany, New York. In Proceedings of the 1983 International Computer Music Conference (Rochester, N.Y.). Computer Music Association, San Francisco.
CHAFE, C., MONT-REYNAUD, B., AND RUSH, L. 1982. Toward an intelligent editor of digital audio: Recognition of musical constructs. Comput. Music J. 6, 1, 30-41.
CHAILLOUX, J. 1978. VLISP: 10.3 Manuel de référence. RT 16-78, Université de Paris 8-Vincennes.

COINTE, P., AND RODET, X. 1983. FORMES: A new object-language for managing a hierarchy of events. Institut de Recherche et de Coordination Acoustique-Musique, Paris.
CRAWFORD, D., AND ZEEF, J. 1983. Gregory's Scribe: Inexpensive graphics for pre-1600 music notation. Comput. Music J. 7, 1, 21-24.
EMBLEY, D. W., AND NAGY, G. 1981. Behavioral aspects of text editors. ACM Comput. Surv. 13, 1 (Mar.), 33-70.
EMERSON, M. 1984. The shock of the new. The Strad 94, 1126, 698-701.
ERICKSON, R. F. 1975. The DARMS project: A status report. Comput. Humanities 9, 6, 291-298.
FEDORKOW, G., BUXTON, W., AND SMITH, K. C. 1978. A computer-controlled sound distribution system for the performance of electroacoustic music. Comput. Music J. 2, 3, 33-42.
FRYKBERG, S. K., AND BATES, R. H. 1978. The computer and the composer: A description of a developing, working system. Tech. Rep., Dept. of Electrical Engineering, Univ. of Canterbury, Christchurch, New Zealand.
FURATA, R., SCOFIELD, J., AND SHAW, A. 1982. Document formatting systems: Survey, concepts, and issues. ACM Comput. Surv. 14, 3 (Sept.), 417-472.
GABURO, J. 1973. An analog/hybrid instrument for electronic music synthesis. Ph.D. dissertation, Univ. of Toronto, Toronto, Ontario, Canada.
GOOD, M. 1978. SCOT: A score translator for Music-11. Undergraduate thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Mass.
GREY, J. M. 1975. An exploration of musical timbre. Stan-M-2, Ph.D. dissertation, Dept. of Music, Stanford Univ., Stanford, Calif.
GROGONO, P. 1973. Software for an electronic music studio. Softw. Pract. Exper. 3, 369-383.
HAFFLICH, S., AND BURNS, M. 1983. Following a conductor: The engineering of an input device. M.I.T. Experimental Music Studio. Presented at the International Computer Music Conference, Rochester, N.Y.
HAMEL, K. 1984. Musprint Manual. Triangle Resources, Cambridge, Mass.
HANES, S. 1980. The musician-machine interface in digital sound synthesis. Comput. Music J. 4, 4, 23-44.
HILLER, L. 1972. Computer programs used to produce the composition HPSCHD. Tech. Rep. 4, Dept. of Music, State Univ. of New York, Buffalo, N.Y.
HILLER, L., AND ISAACSON, L. 1970. Music composed with computers: A historical survey. In The Computer and Music, H. Lincoln, Ed. Cornell Univ. Press, Ithaca, N.Y., pp. 42-96.
IVERSON, K. E. 1962. A Programming Language. Wiley, London.
JUNGLIEB, S. 1983. MIDI hardware fundamentals. Polyphony 8, 4, 34-38.
KAPLAN, S. J. 1981. Developing a commercial digital synthesizer. Comput. Music J. 5, 3, 62-73.
KAY, A., AND GOLDBERG, A., EDS. 1976. Smalltalk-72 instruction manual. SSL 76-6, Xerox Palo Alto Research Center, Palo Alto, Calif.
KOWALSKI, M. J., AND GLASSNER, A. 1982. The N.Y.I.T. digital sound editor. Comput. Music J. 6, 1, 66-73.
KRASNER, G. 1980. Machine Tongues VII: The design of a Smalltalk music system. Comput. Music J. 4, 4, 4-14.
LAMB, M. 1982. Musicland-A network of educational games. Options, vol. 13, Spring. Office of Educational Development, Univ. of Toronto, Toronto, pp. 12-13.
LAMPSON, B. W., PIER, K. P., ORNSTEIN, S. M., CLARK, D. C., AND MCDANIEL, G. 1980. The Dorado: A high-performance personal computer (three papers). Rep. CSL-80-2, Xerox Palo Alto Research Center, Palo Alto, Calif.
LASKE, O. E. 1977. Toward a theory of interfaces for computer music systems. Comput. Music J. 1, 4, 53-60.
LASKE, O. E. 1978. Considering human memory in designing user interfaces for computer music. Comput. Music J. 2, 4, 39-45.
LEDGARD, H., WHITESIDE, J., SINGER, A., AND SEYMOUR, W. 1980. The natural language of interactive systems. Commun. ACM 23, 10 (Oct.), 556-563.
LEHRMAN, P. D. 1984. Inside the Synclavier. Studio Sound 26, 2, 68-74.
LIEBERMAN, H. 1982. Machine Tongues IX: Object-oriented programming. Comput. Music J. 6, 3, 8-21.
LOY, G. D. 1981. Notes on the implementation of MUSBOX: A compiler for the Systems Concepts digital synthesizer. Comput. Music J. 5, 1, 34-50.
LOY, G. D., AND ABBOTT, C. 1985. Programming languages for computer music synthesis, performance, and composition. ACM Comput. Surv. 17, 2 (June), 235-265.
MATHEWS, M., AND ABBOTT, C. 1980. The sequential drum. Comput. Music J. 4, 4, 45-59.
MATHEWS, M., AND BENNETT, G. 1978. Real-time synthesizer control. Rapport IRCAM 5/78, Institut de Recherche et Coordination Acoustique-Musique, Paris.
MATHEWS, M., AND MOORE, F. R. 1969. GROOVE, a program for realtime control of a sound synthesizer by a computer. In Proceedings of the 4th Annual Conference of the American Society of University Composers (Santa Barbara, Calif., Apr.). ASUC, Music Dept., Columbia Univ., New York, pp. 22-31.
MATHEWS, M., AND MOORE, F. R. 1970. GROOVE-A program to compose, store, and edit functions of time. Commun. ACM 13, 12 (Dec.), 715-721.

MATHEWS, M., AND ROSLER, L. 1969. Graphical language for the scores of computer-generated sounds. In Music by Computers, H. von Foerster and J. W. Beauchamp, Eds. Wiley, New York, pp. 84-114.
MAXWELL, J. T., AND ORNSTEIN, S. M. 1983. Mockingbird: A composer's amanuensis. CSL-83-2, Xerox Corporation, Palo Alto, Calif.
MCNALLY, D. 1979. Microprocessor mixing and processing of digital audio signals. J. Audio Eng. Soc. 27, 10, 793-803.
MEYROWITZ, N., AND VAN DAM, A. 1982. Interactive editing systems: Parts I and II. ACM Comput. Surv. 14, 3 (Sept.), 321-416.
MILANO, D. 1984. Turmoil in MIDI-Land. Keyboard, 42-63, 106.
MILLER, J. 1984. Personal Composer. American Bullycode (tm), Washington, D.C.
MOORER, J. A. 1977. Signal processing aspects of computer music: A survey. Proc. IEEE 65, 8, 1108-1132.
MOORER, J. A. 1982a. The Lucasfilm audio signal processor. Comput. Music J. 6, 3, 22-32.
MOORER, J. A. 1982b. The ASP System: A Tutorial. Lucasfilm Ltd., San Rafael, Calif.
MORAN, T. P., ED. 1981. Special issue: The Psychology of Human-Computer Interaction. ACM Comput. Surv. 13, 1 (Mar.).
MURRAY, D. J., AND BEAUCHAMP, J. 1978. A user's guide to the PLATO/1980 music synthesis system: Using the PlaComp language. Manual, Dept. of Music, Univ. of Illinois, Urbana.
NELSON, G. 1980. MPL: Musical program library manual. Dept. of Music, Oberlin College, Oberlin, Ohio.
PENNYCOOK, B. W. 1983a. Music languages and preprocessors: A tutorial. In Proceedings of the 1983 International Computer Music Conference (Berkeley, Calif.). Computer Music Association, San Francisco, pp. 275-298.
PENNYCOOK, B. 1983b. The role of graphics in computer aided music instruction. Paper presented at the 1983 Nicograph Conference, Tokyo, Japan, Oct.
PENNYCOOK, B., KULICK, J., AND DOVE, D. 1985. The Image and Audio Systems audio workstation. In Proceedings of the 1985 International Computer Music Conference. Computer Music Association, San Francisco, pp. 145-149.
PISZCZALSKI, M. 1979. Spectral surfaces from performed music. Comput. Music J. 3, 1, 18-24.
PULFER, J. K. 1970. Computer aid for musical composers. Bull. Radio Electr. Eng. Div. 20, 2. National Research Council, Ottawa, Canada, pp. 44-48.
PULFER, J. K. 1971a. The NRC computer music system. National Research Council Rep., Ottawa, Canada.
PULFER, J. K. 1971b. Man-machine interaction in creative applications. Int. J. Man-Mach. Stud. 3, 1-11.
REISER, J., ED. 1976. SAIL. Rep. STAN-CS-574, Stanford Artificial Intelligence Laboratory, Stanford Univ., Stanford, Calif.
RITCHIE, D. M., AND THOMPSON, K. 1974. The UNIX time-sharing system. Commun. ACM 17, 7 (July), 365-375.
ROADS, C. 1981. A note on printing music by computer. Comput. Music J. 5, 3, 57-59.
ROADS, C. 1983. A report on Spire: An interactive audio processing environment. Comput. Music J. 7, 2, 70-74.
ROADS, C. 1985. Research in music and artificial intelligence. ACM Comput. Surv. 17, 2 (June), 163-190.
SACHS, C. 1940. The History of Musical Instruments. Norton, New York.
SALLIS, F. 1984. LIM. Unpublished manuscript, Univ. Laval, Quebec City, Canada.
SANDEWALL, E. 1978. Programming in an interactive environment: The "LISP" experience. ACM Comput. Surv. 10, 1 (Mar.), 35-71.
SCHEIDT, D. 1983. The blue card. Unpublished manuscript, Kingston, Canada.
SCHOTTSTAEDT, B. 1983. Pla: A composer's idea of a language. Comput. Music J. 7, 1, 11-20.
SMITH, L. C. 1972. Score-A musician's approach to computer music. J. Audio Eng. Soc. 20, 1, 7-14.
SMITH, L. C. 1973. Editing and printing music by computer. J. Music Theory 17, 2, 292-309.
SMOLIAR, S. W. 1973. A data structure for an interactive music system. Interface 2, 2, 127-140.
SNELL, J. 1982. The Lucasfilm real-time console for recording studios and performance of computer music. Comput. Music J. 6, 3, 33-45.
STRAWN, J. 1980. Approximation and syntactic analysis of amplitude and frequency functions for digital sound synthesis. Comput. Music J. 4, 3, 3-24.
TANNER, P. 1971. Some programs for the computer generation of polyphonic music. Rep. ERB-862, National Research Council, Ottawa, Canada.
TOVAR, AND SMITH, L. 1977. Music10 manual. Unpublished user's manual, Center for Computer Research in Music and Acoustics, Stanford Univ., Stanford, Calif.
TRUAX, B. 1977. The POD system of interactive composition programs. Comput. Music J. 1, 3, 30-39.
TRUAX, B. 1985. The PODX system: Interactive compositional software for the DMX-1000. Comput. Music J. 9, 1, 29-38.
TRUAX, B., AND BARENHOLTZ, J. 1977. Models of interactive computer composition. In Computing in the Humanities: Proceedings of the Third ICCH, S. Lusignan and J. North, Eds. Univ. of Waterloo Press, Waterloo, Ontario, Canada, pp. 209-219.
VERCOE, B. 1975. Man-computer interaction in creative applications. Experimental Music Studio, Massachusetts Institute of Technology, Cambridge, Mass.

WALRAFF, D. 1979. The DMX-1000 signal processing computer. Comput. Music J. 3, 4, 44-49.
WEINREB, D., AND MOON, D. 1981. Lisp machine manual. Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Mass.
WIGGEN, K. 1968. The electronic music studio at Stockholm: Development and construction. Interface 1, 127-165.
WOOD, R. J. 1982. Franz Flavors: An implementation of abstract data types in an applicative language. Rep. TR-1174, Maryland Artificial Intelligence Group, Univ. of Maryland, College Park, Md.
WRIGHT, J. 1983. What MIDI means for musicians. Polyphony 8, 4, 8-15.

Received April 1984; final revision accepted August 1985.
