
Parallel & Distributed Computer Systems

Dr. Mohammad Ansari


Course Details
 Delivery
◦ Lectures/discussions: English
◦ Assessments: English
◦ Ask questions in class if you don’t understand
◦ Email me after class if you do not want to ask in
class
◦ DO NOT LEAVE QUESTIONS TILL THE DAY BEFORE THE
EXAM!!!
 Assessments (this may change)
◦ Homework (~1 per week): 10%
◦ Midterm: 20%
◦ 1 project + final exam OR 2 projects: 35%+35%
Course Details
 Textbook
◦ Principles of Parallel Programming, Lin & Snyder
 Other sources of information:
◦ COMP 322, Rice University
◦ CS 194, UC Berkeley
◦ Cilk lectures, MIT
 Many sources of information on the
internet for writing parallelized code
Teaching Materials & Assignments
 Everything is on Jusur
◦ Lectures
◦ Homeworks
 Submit homework through Jusur
 Homework is given out on Saturday
 Homework is due the following Saturday
 You lose 10% for each day late
 No homework this week! 
Outline
 This lecture:
◦ Why study parallel computing?
◦ Topics covered in this course
 Next lecture:
◦ Discuss an example problem
Why study parallel computing?
 First, WHAT is parallel computing?
◦ Using multiple processors (in parallel) to solve a
problem faster than a single processor
 Why is this important?
◦ Science/research usually has two parts:
theory and experimentation
◦ Some experiments just take too long on a single
processor (days, months, or even years)
◦ We do not want to wait for so long
◦ Need to execute experiments faster
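As a tiny concrete sketch of that definition (not from the slides; the language, thread count, and data are illustrative assumptions), here is a plain-Java program that splits a sum across several worker threads:

public class ParallelSum {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        final int nThreads = 4;                 // illustrative worker count
        long[] partial = new long[nThreads];    // one result slot per thread
        Thread[] workers = new Thread[nThreads];
        int chunk = (data.length + nThreads - 1) / nThreads;

        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            final int lo = id * chunk;
            final int hi = Math.min(lo + chunk, data.length);
            workers[t] = new Thread(() -> {
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i];
                partial[id] = sum;              // private slot: no data race
            });
            workers[t].start();
        }

        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join();                  // wait for worker t to finish
            total += partial[t];
        }
        System.out.println("Total = " + total); // prints 1000000
    }
}

Each thread handles its own slice of the array and writes into its own result slot, so no synchronization is needed beyond the final join().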
Why study parallel computing?
 BUT, parallel computing is very specialized
◦ Few computers in the world with many procs.
◦ Most software not (very) parallelized
◦ Typically parallel programming is hard
◦ Result: parallel computing taught at Masters
level
 Why study it at undergraduate level?
◦ The entire computing industry has shifted to
parallel computing. Intel, AMD, IBM, Sun, …
Why study parallel computing?
 Today:
◦ All computers are multi-core, even laptops
◦ Mobile phones will also be multi-core
◦ Number of cores keeps going up
◦ Intel/AMD:
 ~2004: 2 cores per processor
 ~2006: 4 cores per processor
 ~2009: 6 cores per processor
 If you want your software to use all
those cores, you need to parallelize it.
 BUT, why did this happen?
Why did this happen?
 We need to look at history of
processor architectures
 All processors made of transistors
◦ Moore’s Law: number of transistors per chip
doubles every 18-24 months
◦ Fabrication process (manufacture of chips)
improvements made transistors smaller
◦ Allows more transistors to be placed in the
same space (transistor density increasing).
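A quick worked example (not on the slide) of what that doubling rate implies, assuming a fixed doubling period T:

N(t) ≈ N(0) x 2^(t/T)
With T = 2 years: after 10 years, 2^(10/2) = 32x more transistors per chip
With T = 1.5 years: after 10 years, 2^(10/1.5) ≈ 100x more transistors per chip

This compounding is exactly what the transistor-count chart below shows.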
Transistor Counts
[Chart: transistor counts per processor, 1980-2010, plotted on a log scale from 20,000 to 2,000,000,000 transistors. Data points include the Intel 80286, 80386, 80486, Pentium, Pentium II, Pentium III, Pentium 4, AMD K5, Athlon, Athlon 64, Athlon X2, Cell, Core 2 Duo, Core i7 (Quad), Six-Core Opteron 2400, and Six-Core Xeon 7400.]
Why did this happen?
 What did engineers do with so many
transistors?
◦ Added advanced hardware that made your code
faster automatically
 MMX, SSE, superscalar, out-of-order execution
 Smaller transistors change state faster
◦ This enables higher clock speeds
 Old view:
◦ “Want more performance? Get new processor.”
◦ New processor more advanced, and higher speed.
◦ Makes your software run faster.
◦ No effort from programmer for this extra speed.
 Don’t have to change the software.
Why did this happen?
 But now, there are problems
◦ Engineers have run out of ideas for advanced
hardware.
◦ Cannot use extra transistors to automatically
improve performance of code
 OK, but we can still increase the
speed, right? WRONG!
Why did this happen?
 But now, there are problems
◦ Higher speed processors consume more power
 Big problem for large servers: need their own
power plant
◦ Higher speed processors generate more heat
 Dissipating (removing) the heat requires
increasingly sophisticated equipment; heat
sinks cannot do it anymore
◦ Result: not possible to keep increasing speed
 Let’s look at some heat sinks
Intel 386 (25 MHz) Heatsink
 The 386 had no heatsink!
 It did not generate much heat
 Because it ran at a very low clock speed
486 (~50 MHz) Heatsink
Pentium 2 Heatsink
Pentium 3 Heatsink
Pentium 4 Heatsink
Why study parallel computing?
 Old view:
◦ “Want more performance? Get new processor.”
◦ New processor will be more advanced, with a
higher speed. Makes your software run faster.
◦ No effort from programmer for this extra speed.
 New view:
◦ Processors will not be more advanced
◦ Processors will not have higher speed
◦ Industry/academia: Use extra transistors for
multiple processors (cores) on the same chip
◦ This is called a multi-core processor
 E.g., Core 2 Duo, Core 2 Quad, Athlon X2, X4
Quotes
◦ “We are dedicating all of our future product
development to multicore designs. … This is a
sea change in computing”
 Paul Otellini, President, Intel (2005)
◦ Number of cores will ~double every 2 years
Why study parallel computing?
 What are the benefits of multi-core?
◦ Continue to increase theoretical performance:
 A quad-core processor with each core at 2 GHz
is like a 4 x 2 GHz = 8 GHz processor
◦ Decrease speed to reduce temperature, power
 16 cores at 0.5 GHz = 16 x 0.5 = 8 GHz
 8 GHz, but at lower temperature and lower power
 Multi-core is attractive because it sidesteps
the power and heat problems
 No limit (yet) to number of cores
Effects on Programming
 Before:
◦ Write sequential (non-parallel) program.
◦ It becomes faster with newer processor
 Higher speed, more advanced
 Now:
◦ New processor has more cores, but each is slower
◦ Sequential programs will run slower on the new processor
 They can only use one core
◦ What will run faster?
 Parallel program that can use all the cores!!!
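As a small hedged sketch (again not from the slides) of a program that can use all the cores: the Java parallel-streams version below lets the runtime spread the same computation over every available core, while the sequential version stays on one core. Actual timings depend entirely on the machine.

import java.util.stream.LongStream;

public class UseAllCores {
    public static void main(String[] args) {
        System.out.println("Cores available: "
                + Runtime.getRuntime().availableProcessors());

        long n = 50_000_000L;

        long t0 = System.nanoTime();
        long seq = LongStream.rangeClosed(1, n)
                             .map(i -> i * i % 7).sum();   // uses one core
        long t1 = System.nanoTime();
        long par = LongStream.rangeClosed(1, n).parallel()
                             .map(i -> i * i % 7).sum();   // spread over all cores
        long t2 = System.nanoTime();

        System.out.printf("sequential: %d ms, parallel: %d ms, same result: %b%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, seq == par);
    }
}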
Why study parallel computing?
 You need knowledge of parallelism
◦ Future processors will have many cores
◦ Each core will have a lower clock speed
◦ Your software will only achieve high
performance if it is parallelized
 Parallel programming is not easy
◦ Many factors affect performance
◦ Not easy to find source of bad performance
◦ Usually requires deeper understanding of
processor architectures
◦ This is why there is a whole course for it
Course Topics
 Foundations of parallel algorithms
◦ How do we make a parallel algorithm?
◦ How do we measure its performance?
 Foundations of parallel programming
◦ Parallel processor architectures
◦ Threads/tasks, synchronization, performance
◦ What are the trade-offs, and overheads?
 Experiment with real hardware
◦ 8-way distributed supercomputer
◦ 24-core shared memory supercomputer
 If we have time:
◦ GPGPUs / CUDA
Skills You Need
 Basic understanding of processor
architectures
◦ Pipelines, registers, caches, memory
 Programming in C and/or Java
Summary
 Processor technology cannot continue
as before. Changed to multi-cores.
 Multi-cores require programs to be
parallelized for high performance
 This course will cover core theory
and practice of parallel computing
