
University of Derby

Faculty of Arts, Design and Technology

Creating a hardware granular synthesizer

Author

Vladimir Coman-Popescu

Supervisor

Dr. Bruce Wiggins

Academic Year
2012-2013

Submitted in support of a degree in


MA Music Production

Abstract
This report describes an experiment in creating a hardware granular synthesizer, with the purpose of assessing whether the technique has the potential to become accessible to musicians outside of the computer music area; it also covers the author's personal motives for exploring granular synthesis to this end. After establishing the rationale behind the experiment and making predictions about the project's success, a short review of relevant literature on the subject is undertaken, to provide context and a better understanding of the types of granular synthesis and the tools currently available to handle it. A detailed description of the design and build processes follows, together with an evaluative examination of the finished instrument and an assessment of the experiment's success. In closing, avenues for future improvement of the design and technical specifications are suggested.

Table of Contents
1. Introduction
   a. Granular synthesis in a nutshell
   b. Rationale
   c. The plan
2. Granular synthesis over time, tools and theory
   a. Literature Review - A Short History - Theory and Types
   b. Granular synthesis tools available
3. Building the Grainscape
   a. Finding the Design
   b. The Build Process
4. Issues, Achievements and Room for Improvement
5. References
6. Appendix - internet links

Introduction
Granular Synthesis in a nutshell
Out of the multitude of techniques for sound creation and manipulation available through the use
of technology at this time, granular synthesis is considered by some to be one of the most powerful
(Price, 2005).
Based on physicist Dennis Gabor's idea about the quantum of sound, proposed in his 1947 paper "Acoustical quanta and the theory of hearing", granular synthesis was first suggested as a computer music technique for producing complex sounds by Iannis Xenakis and Curtis Roads (Truax, 2010).
The main principle of granular synthesis is the division of sound into multiple very small slices, or
grains, which are then played back repeatedly to produce a continuous sound, and using the pitch,
size or other properties of these grains to achieve a significant number of possible sonic results. For
instance, this technique is used to sever the connection between the pitch and duration of sound
recordings, thus offering the possibility of changing a sound's pitch without affecting its duration
and vice-versa. While a lot of the granular synthesis software instruments available today are based
on sampling recorded sound and slicing it into grains which can then be played back while varying
their source position from within the overall sound recording, sound grains can be just as well
generated by mathematical functions. Another, arguably more artistic, use of the granular synthesis technique is randomizing the playback position ("scrub") of each grain being played back in order to produce a completely new sound texture, with a number of musical performance and sound design applications.
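The slicing-and-overlap principle described above can be sketched in a few lines of code. The following Python fragment is an illustrative sketch of my own (it is not part of any tool mentioned in this report, and the function name and parameters are invented for the example): it time-stretches a recording without changing its pitch by overlap-adding short Hann-windowed grains, with a "spray" parameter randomizing each grain's source position.

```python
import numpy as np

def granulate(src, sr=44100, grain_ms=50, stretch=2.0, spray=0.0, seed=0):
    """Time-stretch `src` by `stretch` without changing its pitch by
    overlap-adding short Hann-windowed grains taken from the source."""
    rng = np.random.default_rng(seed)
    glen = int(sr * grain_ms / 1000)              # samples per grain
    hop = glen // 2                               # 50% overlap in the output
    window = np.hanning(glen)
    out = np.zeros(int(len(src) * stretch) + glen)
    for out_pos in range(0, len(out) - glen, hop):
        # the source read position advances `stretch` times slower than
        # the output write position, so pitch is left untouched
        src_pos = int(out_pos / stretch)
        # optional grain-position randomization ("spray", in seconds)
        src_pos += int(rng.uniform(-spray, spray) * sr)
        src_pos = int(np.clip(src_pos, 0, len(src) - glen))
        out[out_pos:out_pos + glen] += src[src_pos:src_pos + glen] * window
    return out

# a 1-second 440 Hz tone stretched to twice its length keeps its pitch
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
stretched = granulate(tone, stretch=2.0)
```

Because the grains are copied at their original rate, the spectral content stays near the source pitch even though the output lasts twice as long; a naive resampling stretch would instead drop the tone by an octave.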

Rationale
Although some of the applications of what is now called granular synthesis were achieved to a certain degree with hardware relatively soon after its theorization, in the form of tools such as Pierre Schaeffer's and Jacques Poullin's Phonogene (Poullin, 1954) or Dennis Gabor's Gaboret (Barnias, 2004), advances in the realm of computing have made the technique significantly more accessible as software running on computers. However, confining it to software also drastically limits the number of people that have access to it as a musical composition or performance tool. While computer musicians and electronic music artists might be familiar with the concept of granular synthesis, a great number of other creative people, such as rock musicians playing in more traditional contexts (bands based on guitars, drums and bass), are arguably unaware of its benefits. Having notable experience with a multitude of more traditional approaches to making music, within a number of rock, pop and jazz bands, I'm personally able to relate to musicians in this situation.
I first set out to explore the sonic possibilities and practicality of granular synthesis with the
purpose of bettering myself as a musician and producer by amassing knowledge on tools likely to
lead to sonic results that I personally found unusual and exotic. However, through learning about the approach, my initial expectations of great complexity were replaced with surprise at its apparent lack of creative use outside computer music circles. Its basic principles weren't hard for me to grasp, and I found the sounds I could create with granular synthesis not only very satisfying to me as an artist, but also very usable in music of a significant variety of genres.
As a result, spreading knowledge of this sound technique and making it more accessible to
musicians outside of the electronic music scene or academia circles became an important interest
for me. I believe that its versatility in producing a wide range of sounds should be as widely known as how to tweak the EQ on a mixer, the Tone knob on an electric guitar or the Gain knob on a distortion pedal. I believe that as music evolves over the years and more tools become available to musically creative individuals, accessibility to and knowledge about these tools should grow along with them.
Therefore, the purpose of this project is to explore, from a technical standpoint, how accessible granular synthesis can be when implemented on a hardware platform. This can be investigated by creating a hardware
granular synthesizer ideally aimed at musicians and individuals that don't label themselves as
computer-music-literate. For the sake of clarity throughout this paper, as well as for the love of
nomenclature, this particular granular synthesizer will be called the Grainscape.
Also for the sake of absolute clarity, the research question being explored is:
How technically accessible can granular synthesis be as a musical approach when implemented on
a hardware platform?

The plan
I've previously dealt with the concept of creating a hardware granular synthesizer in a student
paper submitted for the Research Methods module in December 2012. As such, I found it
appropriate to use some of the text material from that particular paper in this section of the report,
as the concepts and their argumentation used in that context are highly relevant to the subject
approached in this one.
When considering the research question posed previously, there is a possibility that confusions can
arise regarding the terms accessible and hardware platform. In order to make the research aims
as clear as possible, the two terms must clearly be defined in the context of the design and build
stages about to be undergone.
"Accessible", in this case, references two elements: the budget allocated to the resources used to create the Grainscape (or the price range of the instrument), and its ultimate ease of use once it is completed.
At this stage in development, estimating a price range for the instrument itself as if it were meant
for mass production wouldn't be practical. The current project focuses more on the technical aspects
that need to be resolved, rather than problems related to the current electronic instrument market, as
the limited time allocated to this project does not allow for effective market research or other non-technical investigations. However, a good indicator of its possible accessibility in terms of price is the money spent on the resources used to create the Grainscape. In this case, my aim is the arbitrary sum of £150, if for no other reason than that this is the amount I'd personally be inclined to spend on a synthesizer, should I somehow be put in a position where I'd be obligated to buy one.
The definition of the term "ease of use" implies that the instrument should pose no difficulties to the average user, and be of the (in layman's terms) plug-and-play variety.
But what is plug-and-play? For that matter, what is ease of use? Again, due to the time constraints posed by the one-year full-time study context, creating a survey aimed at various musicians, implementing it, interpreting the results and then implementing these interpretations into the interface of the instrument might end up taking too much time away from the quantitative research phases, platform testing, design and build stages.
Therefore, I've formed a subjective, yet simple definition of ease of use. For the purpose of this
project, this will mean no more than 3 button presses between plugging the instrument in and
hearing the first sound, no switching between pages and banks of controls, and no more than one
function for each button, knob, slider or other means of human interfacing on the front panel of the
instrument. This is, of course, a completely subjective view of ease of use and is by no means
considered a universal definition of the term when it comes to instrument design and construction.
The term "hardware" is also used fairly loosely. Because of the price range mentioned earlier, as
well as the rigid time constraints, implementing granular synthesis on an analogue platform is not
expected to be the approach chosen to create the instrument at the end of the design and
construction stages. Therefore, microcomputers running open source software are expected to be the
solution found for the implementation of granular synthesis.
This implementation contains both hardware and software elements, but it will be considered as
falling under the definition of a hardware approach for the purpose of this Independent
Scholarship, since the user does not interface with the software via a display, and the final version
of the instrument will have the microcomputer do nothing else besides sound generation through
granular synthesis.
As a result of this reasoning, the Grainscape will serve two purposes. The first is finding out how cheaply and accessibly a hardware granular synthesizer can be built. If the
resulting tool is deemed functional and potentially useful from a musician's standpoint, testing this
first aspect has the potential of provoking further research in this area and ultimately influencing the
electronic musical instrument market some time in the foreseeable future. The second purpose is to
explore the functionality achievable with such low-cost means and produce a general idea about the
feasibility of such a design based on software and microcomputers.
As a short aside, my personal predictions at the start of this project are that this design is the
starting point of a genuinely enlightening exploration into the possibilities of electronic instrument
design, and that the Grainscape project created at the end of it can at least prove that granular
synthesis is achievable on platforms outside of the computer music area.
Therefore, the electronic instrument will be a practical proof-of-concept that's accompanied by a
complete breakdown of the design and build processes, as well as an evaluative appraisal of its
abilities, advantages and flaws and proposed methods of improving upon the approach tested
herein.
(Coman-Popescu, 2012)

Granular synthesis over time, tools and theory


Literature Review - A Short History
"In dense portions of the Milky Way, stellar images appear to overlap, giving the effect of a near-continuous sheet of light... The effect is a grand illusion. In reality... the nighttime sky is remarkably empty. Of the volume of space, only one part in 10^21 is filled with stars."
(Kaler, 1997)
The above quote is used by Curtis Roads in his 2001 book Microsound as a parallel to the way humans perceive clouds of sound as a single auditory event, alongside the example of a series of impulses repeated at a rate of 20 Hz being heard as a single, continuous tone.
This parallel and subsequent example are a perfect explanation of the basis of granular synthesis
and how sound textures and other sonic events can be created by manipulating swarms of grains
(or atoms) of sound.
The notion that all matter is made out of small individual elements goes back as far as the 5th century BC, when the philosophers Democritus and Leucippus put forth the idea of atoms: indivisible particles of which, along with empty space, both matter and energy are composed (Barnias, 2004). The idea of atoms of sound was touched upon by others in the following centuries, including Isaac Beeckman's "corpuscular" theory of sound (Roads, 2001), and was heavily debated by many scientific minds, including the likes of Rene Descartes, but only in the 20th century did technology advance to a level suitable for effectively testing sound events imperceptible to the human ear.
British Nobel Prize winner Dennis Gabor proposed that any sound can be decomposed into thousands of different elementary sound grains, rather than being entirely representable by sound waves of infinite duration as suggested by the use of Fourier analysis: "[...] it is our most elementary experience that sound has a time pattern as well as a frequency pattern" (Gabor, 1947). Gabor suggested that the time and frequency of sound are intrinsically connected, and even created a machine with which he conducted experiments in changing the duration of sounds without affecting their pitch and vice-versa. The technique was expanded further with the experiments of
Jacques Poullin and Pierre Schaeffer and their Phonogene, but actual musical uses of granular synthesis first appeared in the works of Iannis Xenakis, who was the first to coin the term "grains of sound", to make extensive use of overlapping sound particles in creating new ones, and to use what is now known as grain position randomization, which he called "ataxy" (Barnias, 2004).
Also influential in this area are the works of Curtis Roads, who has used granular synthesis in a significant number of his compositions while developing the technique of asynchronous granular streams (or, more plainly, sound grain clouds). His practical influence here is less relevant than his teachings and papers on the theory behind electronic music composition and performance, published in his books The Computer Music Tutorial and Microsound.
Currently, granular synthesis is more accessible to computer musicians than ever before. With powerful computers and incredibly versatile software tools like Reason's Malstrom, the Alchemy VST plugin and countless others, the technique is no longer confined to laboratories and studios equipped with technology powerful enough to handle it, but is implementable on any modern laptop, which makes it just as suitable for implementation on musician-friendly hardware.
Theory
There are two basic approaches to granular synthesis:
The first approach involves analysing an existing sound recording and then resynthesizing selected
grains to generate a new sound. In this approach, grains are slices or selections of the recorded
sound being used.
The second approach involves generation of sound grains through computer algorithms and
doesn't involve any analysis stage or pre-recorded sound. (Barnias, 2004)
The two approaches can be further divided into the following sub-categories:
- Pitch-synchronous granular synthesis: analysing a sound and then resynthesizing slices of it to produce a granularity, one slice at a time.
- Synchronous and quasi-synchronous streams: multiple streams of grains produced from the analysis of a recorded sound are generated at the same time.
- Asynchronous clouds: grains are produced in irregular streams and quasi-organized into clouds of audio. Grain waveforms are directly generated rather than produced through analysis and resynthesis.
(Truax, 2010)
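The second, generative approach can be illustrated directly in code. The sketch below is my own illustration (it is not taken from any of the cited tools, and all names and parameter values are invented for the example): it builds an asynchronous cloud by scattering short Hann-windowed sine grains at random onset times and frequencies, with no recorded source involved.

```python
import numpy as np

def grain_cloud(dur=2.0, sr=44100, density=200, grain_ms=30,
                freq_range=(200.0, 2000.0), seed=1):
    """Asynchronous granular cloud: short Hann-windowed sine grains are
    generated (not resynthesized from a recording) and scattered at
    random onsets; `density` is the average number of grains per second."""
    rng = np.random.default_rng(seed)
    n = int(dur * sr)
    glen = int(sr * grain_ms / 1000)
    t = np.arange(glen) / sr
    window = np.hanning(glen)
    out = np.zeros(n + glen)                    # room for grains near the end
    for _ in range(int(density * dur)):
        onset = rng.integers(0, n)              # irregular, unsynchronized onsets
        freq = rng.uniform(*freq_range)         # each grain gets its own pitch
        out[onset:onset + glen] += np.sin(2 * np.pi * freq * t) * window
    return out / np.max(np.abs(out))            # normalize to +/-1

cloud = grain_cloud()
```

Varying the density and frequency range moves the result between sparse, pointillistic textures and dense, noise-like washes, which is exactly the compositional territory Xenakis and Roads explored.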

Hardware granular tools currently available


There isn't a large number of hardware platforms available at the moment that support granular
synthesis, and those that are currently being manufactured are either inaccessibly priced (even if
reasonably so), or offer very limited granular capability. Mentioning them in this context can not
only offer inspiration for approaches to building the Grainscape, but also provides further
justification for the attempted experiment itself.
The ones I'm personally aware of are the following:
- The Virus TI: a hardware synthesizer capable of a number of sound synthesis approaches, including granular synthesis, with a high price tag and a limited granular implementation.
- The Gotharman deMOON: a synthesizer that can also act as an effects processor with granular effects, for a very limited granular implementation.
- The Symbolic Sound Kyma: essentially a powerful computer designed to fit into a rack, with incredibly expressive and spectacular implementations of a number of sound synthesis techniques, including granular synthesis, and a price tag to match.
- The Make Noise Phonogene: named after Pierre Schaeffer's and Jacques Poullin's 1940s machine, this is a rack synthesizer module designed for resampling, sound slicing and looping, with a limited but creative granular synthesis implementation.

Building The Grainscape


Finding the Design
Work on building the Grainscape commenced by testing the granular synthesis capabilities of a
series of software tools, to try and determine the type of functionality I wanted to achieve and
explore with the hardware about to be built. At this stage, I set my sights on trying to answer the
following two questions:

What should the Grainscape actually do in terms of functionality?

and

Would I personally be inclined to use it outside of a computer music environment?

As previously mentioned, DSP algorithms based on granular synthesis are the basis of any tools
dealing with time stretching or compressing audio clips without altering pitch, so I first set out to
investigate the way that DAWs such as Pro Tools from Avid and Ableton Live deal with warping
audio. Although I discovered a multitude of what I considered to be creative uses for this type of
audio warping, I ultimately concluded that I personally wouldn't be inclined to use this type of
functionality outside of a studio production situation.
I then spent a considerable amount of time experimenting with different types of plugins that supported granular synthesis in Ableton Live, up to and including Live's native Grain Delay device, as well as trying out granular synthesis patches built in the visual programming environments Max and PureData. Eventually, I decided that the most spectacular and versatile tool out of all the software I tested was a Max-based device designed by Robert Henke, a.k.a. Monolake, called the Granulator. Its functionality includes real-time recording of audio, typical ADSR controls, grain position randomization and the option of selecting the number of voices, as well as more advanced features: the option of applying amplitude or frequency modulation to the grains, a per-grain amplitude curve, a Scan control allowing the grain position to move progressively forward over the selected audio snippet, and an assignable LFO (Henke, 2013).
I decided to try to achieve part of the functionality of the Granulator for my Grainscape tool, while also trying to simplify it to a certain degree, both for the sake of efficient time allocation and because of the Granulator's inherently modular, Max-based nature: once the basic features are implemented, it is possible to add more functionality without having to start again from the ground up. Also, fewer initial features would make its basic functionality easier to master.
The Grainscape would have the following functionality: the user would record a few seconds of
audio into an audio buffer via a microphone or RCA/phono input, which could then be played back
with the keys on a MIDI controller and looped immediately, similar to a regular sampler. The user
could then adjust the size of the looped section of audio (or "grain") and its position within the
audio buffer, as well as having typical Attack, Decay, Sustain and Release controls. Another
function would be a knob controlling the rate of randomization of the position of the grain, relative
to the position initially set by the user. A shorter and more accessible explanation would be that a
performer could sing into the Grainscape and then create lush, spacious soundscapes or harsh
granular stretches out of what they recorded, thus achieving a form of quasi-synchronous granular
synthesis.
With this in mind, I started looking for a platform suitable for implementing the Grainscape patch
on. After a somewhat lengthy process of finding a number of hardware manufacturers, ruling some
of them out, corresponding with the rest and establishing endorsement connections and support, I
finally received two platforms for the purpose of testing the creation of the Grainscape: a Raspberry
Pi microcomputer and a Cubox microcomputer. The selection and correspondence process is not
covered here on account of it not being directly tied to the granular tool being built and assessed as
part of this project.
After receiving the Raspberry Pi and the Cubox, I started testing both of them to see how they handle sound-based applications. Unfortunately, upon closer examination, I found that the more powerful Cubox apparently wasn't designed for handling audio, and either the process of installing the operating system on it was beyond my grasp, or the unit I received was faulty. This left me with the single option of using the less powerful Raspberry Pi. However, in contrast to the very complicated-to-set-up (and potentially inaccessible) Cubox, the Raspberry Pi "was designed for children to learn programming" (Raspberry Pi FAQ, 2012), and it came with a pre-installed version of the operating system on an SD card. After making sure it could output audio, I advanced to
the software design stage.
My initial plan was to build a Max patch based on Robert Henke's Granulator and implement it on the Raspberry Pi. However, since the Raspbian operating system on the Pi is based on Linux (Raspberry Pi FAQ, 2012) and Max isn't available for Linux (Max FAQ), the only other solution
that I found was to replace Max with a very similar visual programming environment called
PureData (also referred to as Pd). Although I was confident I would be able to learn Pd in time to
create the Grainscape, I discovered that in spite of their similarities (both are object-based graphical
programming environments) the differences in approaching certain functionalities between Max and
Pd are quite substantial and that the method of achieving granular synthesis used in the Granulator
Max patch that inspired me was in no way valid when it came to Pd. The solution was to find a
different method, based on Pd, to achieve the same functionality that I was after.

The Build Process


After several weeks of exploring Pd and learning its quirks and possible approaches, I managed to
create a patch that very closely achieved what I had in mind.

Illustration 1
How it works:
In Pd, audio can be recorded as point values in a graphical representation called an Array. Sending integer number values to that Array plays back the point corresponding to each number received. Point values (in this case used as digital audio samples) can be written to the Array with the tabwrite~ object, and a tabread4~ object is used to receive the individual values or continuous stream of numbers necessary to play it back.
Multiple tabread4~ objects can read from the same Array, and for the sake of efficiency and simplicity, I ultimately decided on limiting the Array to 2 seconds of audio capacity. For reasons discussed in a bit more detail further on, there are 4 tabread4~ objects reading from the Array, each bundled with a group of other objects into a Pd abstraction (a whole separate patch contained within the overall patch, grouped into a single Pd object), which I titled voicevoicevoice.
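In conventional code terms, the Array plus the tabwrite~/tabread4~ pairing amounts to a sample buffer with indexed, interpolating reads. The sketch below is my own Python analogy, not Pd code, and for brevity it uses linear interpolation where tabread4~ uses a 4-point interpolating read:

```python
import numpy as np

SR = 44100
buffer = np.zeros(SR * 2)        # the Pd Array: a fixed 2-second capacity

def tabwrite(block, write_pos=0):
    """Write an incoming audio block into the Array (like tabwrite~),
    clipping at the end of the 2-second buffer; returns the next position."""
    end = min(write_pos + len(block), len(buffer))
    buffer[write_pos:end] = block[:end - write_pos]
    return end

def tabread(index):
    """Read one sample at a (possibly fractional) index. Pd's tabread4~
    does the same job with a 4-point rather than linear interpolation."""
    i = int(index)
    frac = index - i
    nxt = min(i + 1, len(buffer) - 1)
    return buffer[i] * (1.0 - frac) + buffer[nxt] * frac
```

Driving tabread with a steadily rising sequence of indices plays the buffer back, and the rate at which those indices rise sets the playback pitch; generating that rising sequence is exactly the role the phasor~ object plays in the patch.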

Illustration 2
How voicevoicevoice works:
To generate the continuous stream of numbers necessary to play back audio from the Array via tabread4~, I used the phasor~ object to generate a rising sawtooth-shaped oscillator. At normal playback speed, 2 seconds of audio translates to approximately 88,200 sample points (2 seconds at a 44,100 Hz sample rate), which meant I had to multiply the output values of the phasor~ (normally between 0 and 1) by 88,200. After setting up the patch to receive MIDI note values from a MIDI controller (via a notein object in the mother-patch), I attached the MIDI note values to the rate (speed) of the phasor~, thus allowing the user to play the pitch of the recorded audio on a MIDI keyboard. From there, it was a simple matter of setting up user-controlled division and addition to modify the phasor~'s minimum and maximum values (translating to grain size), start position ("scrub") and start position randomization relative to the user-set start position ("spray").
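The per-voice signal flow just described can be approximated in ordinary code. The sketch below is an illustrative Python analogue of this design, not a transcription of the patch: MIDI note 60 is assumed as the unison pitch, linear interpolation stands in for tabread4~'s 4-point read, and all names and defaults are my own. A rising ramp (the phasor~ role) repeatedly sweeps a grain-sized window of the buffer, with "scrub" as the start offset and "spray" as per-grain randomization:

```python
import numpy as np

def voice(buf, midi_note, sr=44100, grain_len=4410, scrub=0,
          spray=0, dur=1.0, seed=2):
    """One synth voice: a rising ramp (phasor~) repeatedly sweeps a
    grain of `buf`; MIDI note 60 is assumed to give unison playback."""
    rng = np.random.default_rng(seed)
    ratio = 2.0 ** ((midi_note - 60) / 12.0)   # semitones -> speed ratio
    rate = ratio * sr / grain_len              # ramp cycles per second
    n = int(dur * sr)
    phase = (np.arange(n) * rate / sr) % 1.0   # the phasor~ output, 0..1
    out = np.empty(n)
    offset = scrub
    for i in range(n):
        if i and phase[i] < phase[i - 1]:      # ramp wrapped: a new grain starts
            offset = scrub + (rng.integers(-spray, spray + 1) if spray else 0)
        idx = offset + phase[i] * (grain_len - 1)
        j = int(idx)                           # linear interpolation stands in
        frac = idx - j                         # for tabread4~'s 4-point read
        out[i] = buf[j] * (1.0 - frac) + buf[j + 1] * frac
    return out

# a 440 Hz tone in the buffer, played an octave up (MIDI note 72)
buf = np.sin(2 * np.pi * 440 * np.arange(88200) / 44100)
octave_up = voice(buf, 72)
```

Raising the MIDI note makes the ramp sweep the same grain faster, transposing the sound upward, which mirrors how attaching the note values to the phasor~ rate lets the keyboard control pitch.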


Therefore, each voicevoicevoice abstraction represents a single synthesizer voice. I had settled early on creating a polyphonic synthesizer, so I implemented 4 of these abstractions in order to make a 4-voice polyphonic synth. I deemed the synth's ability to play 4 notes at a time sufficient to demonstrate the functionality that I was after, while not overloading the Raspberry Pi's central processing unit, as each additional voice increases the processing power required.

Illustration 3
There is also an adsr abstraction that applies a volume envelope to each of the voicevoicevoice patches.
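For context, the envelope that the adsr abstraction applies can be expressed as four line segments per note. The sketch below is my own illustration; the segment times and sustain level are arbitrary defaults, not values taken from the patch:

```python
import numpy as np

def adsr(n_samples, sr=44100, attack=0.01, decay=0.1,
         sustain=0.7, release=0.2):
    """Linear ADSR amplitude envelope: ramp up, fall to the sustain
    level, hold, then ramp down to silence; times are in seconds."""
    a, d, r = (int(x * sr) for x in (attack, decay, release))
    s = max(n_samples - a - d - r, 0)                  # sustain fills the rest
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack: 0 -> 1
        np.linspace(1.0, sustain, d, endpoint=False),  # decay: 1 -> sustain
        np.full(s, sustain),                           # sustain hold
        np.linspace(sustain, 0.0, r),                  # release -> 0
    ])
    return env[:n_samples]

env = adsr(44100)   # a one-second note; multiply a voice's output by this
```

Multiplying each voice's output sample-by-sample with such a curve is what removes the hard clicks that would otherwise occur when a note starts or stops abruptly.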
The relationships between all of these objects are visible in Illustrations 1, 2 and 3; the patch is also documented on the Grainscape blog and available for download. All links are included in the Appendix.
With the patch completed, there still remained the problem of accessing it on the Raspberry Pi without using a monitor, the graphical user interface, a mouse and a computer keyboard. The patch would need to launch automatically as soon as the microcomputer is plugged in, just as the functionality of any hardware tool is available as soon as it's turned on.
This was achieved by combining a set of software functions in both Pd and the Raspberry Pi operating system itself, as I had no knowledge of creating Python scripts, which would have been an alternative solution. Pd was set to run the Grainscape patch at launch, while the patch itself had a loadbang object triggering a message that turned on the DSP algorithm. Pd itself was set to start automatically when the Raspberry Pi was plugged in, by having the Pi skip the normally obligatory login and launch the graphical user interface, where an autolaunch Linux folder was set up containing a shortcut to the Pd software.
The physical side of the design was resolved by integrating both the Raspberry Pi and the audio interface into the generously-spaced casing of a 49-key MIDI controller. Two USB cables were used to connect the Pi to the controller and the interface, while preserving access to the microcomputer's SD card containing the operating system and the patch software, the power adapter, and the audio interface's two RCA audio inputs and two RCA audio outputs.


Issues, Achievements and Room for Improvement


After several tests, the current iteration of the Grainscape revealed a series of problems that need to be resolved in order to move the platform towards being a viable option for expanding access to the granular synthesis technique.
The main issue is audio quality. I've discovered that the transformer/charger used to provide power to the Raspberry Pi significantly affects the output audio. While there are no digital artefacts, there is a constant, regular clicking that never completely disappears, regardless of the transformer used.
The second problem is latency, which occurs both when recording audio into the Array and
between pressing a key and hearing the note it corresponds to.
To my knowledge, both of these problems can be fixed by properly calibrating the USB audio interface being used. On the software side, a solution may be found at the level of drivers or equivalent software in order to gain full access to the performance of the interface. I'm confident that a solution can be found to better integrate it with the Raspberry Pi, and I will pursue it myself in the future. Regarding the clicking, I'm also convinced this can be fixed after a thorough examination of the hardware (both the interface and the Raspberry Pi) by a person more electronically proficient than myself. The Appendix contains links to a number of internet sources claiming achievements in audio clarity with the Raspberry Pi, which, although they demonstrate the potential of the Grainscape to evolve from a viable tool idea into a marketable instrument, I haven't included as references on account of their somewhat unofficial nature.
Despite these problems, however, this prototype Grainscape is proof that granular synthesis is
indeed possible on a platform that is very cheap compared to all other current solutions on the
market, and easy to comprehend and use. The Grainscape has been used successfully and was easily
understood by a number of people with at least a marginal connection to the musical world, and the
comments I received from these tryouts ranged from "This is so awesome!" to "I don't like the noise it makes, but I can easily get how it works."
As a short aside, the Grainscape is also proof of the potential of this platform to become much more than a host for accessible granular synthesis: if this CPU-intensive sound technique is possible on the Raspberry Pi, then surely a simpler PureData patch based on a less demanding sound generation or manipulation method, like frequency modulation or equalization, can be achieved as well? This raises the question of the possibility of creating a universal synthesizer/effects processor based on Pd and the Pi, where the tool can become whatever its user base can program in PureData. However, regardless of the exciting nature of this idea, experiments in this direction are the potential subject of another report.
So, regarding the research question posed at the beginning of this report: "How technically accessible can granular synthesis be as a musical approach when implemented on a hardware platform?"
If the definitions of ease of use and accessible budget given at the beginning of the argument are considered viable, and taken as the central factors for the purpose of this report, the answer is that granular synthesis can become just as accessible as any other currently popular musical technique or instrument. Given time and effort invested in development, something similar to the Grainscape could be produced, offering a very cheap alternative to conventional synthesizers currently on the market and making the sounds of granular synthesis a household occurrence as widespread as the electric guitar.


References
1. Simon Price - "Granular Synthesis: How It Works & Ways To Use It" - Sound on Sound, December 2005.
2. Barry Truax - "Granular Synthesis" - http://www.sfu.ca/~truax/gran.html, 2010.
3. Curtis Roads - Microsound - MIT Press, 2001.
4. Jacques Poullin - "The Application of Recording Techniques to the Production of New Musical Materials and Forms. Applications to Musique Concrète" ("L'apport des techniques d'enregistrement dans la fabrication de matières et de formes musicales nouvelles : applications à la musique concrète") - Ars Sonora, no. 9, 1954. Original text in French: http://www.arssonora.org/html/numeros/numero09/09f.htm
5. Robert Henke - "Granulator II" - http://roberthenke.com/technology/granulator.html, 2013.
6. Raspberry Pi FAQ - http://www.raspberrypi.org/faqs, 2012.
7. Max FAQ - http://cycling74.com/support/faq_max6/
8. Kaler, J. - Cosmic Clouds - New York: Scientific American Library, 1997.
9. Dimitris Barnias - "Granular Synthesis" - sonicspace.org article, 2004.
10. Dennis Gabor - "Acoustical quanta and the theory of hearing" - Nature, 1947.
11. Vladimir Coman-Popescu - "Research Plan Development" - University of Derby student paper, 2012.


Appendix
The Grainscape Project Diary blog - http://grainscape.wordpress.com/
Audio issues on the Raspberry Pi:
http://puredata.info/docs/raspberry-pi
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=35&t=49803
http://www.electronicsweekly.com/news/design/embedded-systems/getting-hi-fi-sound-from-raspberry-pi-2013-07/
