
GIS Work Book
(Technical Course)
written by Shunji Murai
© Shunji Murai 1999

GIS Work Book (Technical Course)


Written by :
Shunji Murai, Professor
Institute of Industrial Science,
University of Tokyo
7-22-1 Roppongi, Minatoku,
Tokyo 106, Japan
Telephone 3-3402-6231
Fax 3-3479-2762

Published by :
Japan Association of Surveyors (JAS)
1-3-4 Koishigawa, Bunkyo-ku
Tokyo 112, Japan
Telephone 3-5684-3354
Fax 3-3816-6870
Price : 20 US Dollars

Published October 1996 (Fundamental Course), January 1997 ( Technical Course)

Reedited August 1998

Copyright for the materials is held by Shunji Murai. Permission to use these materials for any
purpose must be obtained in writing from Japan Association of Surveyors or Shunji Murai.
Permission is granted provided the source is clearly acknowledged together with the mark as
given below.
© Shunji Murai 1998

CONTENTS
Technical Course
Preface
Resume of the author

Chapter 1 Coordinate System and Map Projection


1.1 Coordinate System
1.2 The Shape of the Earth
1.3 Map Projection
1.4 Coordinate Transformation
1.5 Distance
1.6 Scale, Accuracy and Resolution

Chapter 2 Interpolation
2.1 Principle of Interpolation
2.2 Pointwise Interpolation
2.3 Curve Fitting
2.4 Surface Fitting
2.5 Least Square Method
2.6 Interpolation of Image Data
Chapter 3 Digital Terrain Model (DTM)
3.1 DEM and DTM
3.2 Triangulated Irregular Network (TIN)
3.3 Generation of Contour Lines
3.4 Interpolation of Elevation from Contours
3.5 Automated Generation of DEM
3.6 Orthoimage Generation
3.7 Extraction of Terrain Information
3.8 Shade and Shadow
Chapter 4 Spatial Analysis
4.1 What is Spatial Analysis?
4.2 Query
4.3 Reclassification
4.4 Coverage Rebuilding
4.5 Overlay of Raster Data
4.6 Overlay of Vector Data
4.7 Connectivity Analysis
4.8 Shape Analysis and Measurement

Chapter 5 Digital Image Processing


5.1 Flow of Digital Image Processing
5.2 Radiometric Correction
5.3 Geometric Correction
5.4 Image Enhancement
5.5 Spatial Filtering
5.6 Feature Extraction
5.7 Classification Methods
5.8 Maximum Likelihood Classifier
Chapter 6 Visualization of Geospatial Data
6.1 Graphic Variables
6.2 Gray Scaling
6.3 Color Map
6.4 Relief Map
6.5 Bird's Eye View-Parallel Projection
6.6 Bird's Eye View-Central Projection
Glossary
References

Preface
Shunji Murai, Professor and Doctor of Engineering
Institute of Industrial Science
University of Tokyo, Japan
Chair Professor, STAR Program
Asian Institute of Technology, Thailand
Recognizing that a well organized textbook for education and training is a key issue for successful GIS, the Asian Association on Remote Sensing (AARS) has established a working group on a GIS Text Book. As the General Secretary of AARS, I decided to demonstrate such a textbook myself, as a sample that the working group members can improve upon; otherwise it would take much time to complete a perfect one.
"GIS Work Book" includes two volumes; Volume 1: Fundamental Course and Volume 2:
Technical Course. Fundamental Course focuses on the concept and role of GIS, data model and
structure, data input, database, hardware/software, installation of GIS, and successful GIS
applications in Japan.
On the other hand, Volume 2: Technical Course summarizes several technicalities that support GIS, including coordinate systems, map projection, interpolation, digital terrain models (DTM), spatial analysis, digital image processing and visualization. I feel very happy to note that my teaching experience of over thirty years on photogrammetry, DTM, remote sensing, computer assisted cartography, GIS and global change study has been really useful in completing this technical course. While writing this book, I realized that GIS is a multi-disciplinary science supported by many different technologies, so there are many things to learn and to teach.
In 1996 and 1997, I published GIS Work Book - Fundamental Course and Technical Course respectively, bilingual in English and Japanese. As some readers requested an English-only publication, I reedited the two volumes into a single book in English only.
I believe that this textbook, with its two parts, "fundamental course" and "technical course", will be useful and helpful not only to students, trainees, engineers and salesmen but also to top managers and decision makers.
I would like to thank Mr. Minoru Tsuzura, Japan Association of Surveyors, for his administrative support in making this English version possible.
August 1998
Tokyo, Japan
The production of this CD-ROM was funded by the National Space Development Agency of Japan (NASDA) and the Remote Sensing Technology Center of Japan (RESTEC). The conversion into electronic form was implemented by the Asian Center for Research on Remote Sensing (ACRoRS) of the Asian Institute of Technology (AIT), Thailand, in March 1999. The editing team members were Professor Shunji Murai (Team Leader), Mr. Tin Aung Moe, Ms. Wandee Kijpoovadol and Mrs. Nancy Canisius.
March, 1999
AIT, Thailand

Resume of the Author


Shunji Murai, Professor and Dr. Eng.
Born in Tokyo, Japan in 1939
Professional Career :
1997-Present  Chair Professor, STAR Program, Asian Institute of Technology
1995-1997     Professor, Institute of Industrial Science, University of Tokyo
1992-1995     Professor, Asian Institute of Technology
1983-1992     Professor, Institute of Industrial Science, University of Tokyo
1971-1992     Associate Professor, Institute of Industrial Science, Univ. of Tokyo
1970          Awarded Doctor of Engineering, University of Tokyo
1963          Graduated from Civil Engineering Department, University of Tokyo

International Activities :
1996-2000     First Vice President, International Society for Photogrammetry and Remote Sensing (ISPRS)
1992-1996     President, International Society for Photogrammetry and Remote Sensing (ISPRS)
1981-Present  General Secretary, Asian Association on Remote Sensing (AARS)
1992-Present  President, Japan Association of Remote Sensing (JARS)
1979-Present  Executive Board Member, Japan Association of Surveyors (JAS)
1989-Present  Editor in Chief, The Journal of Survey, Japan Association of Surveyors
1997.7        Academician, International Eurasian Academy of Sciences
1994.10       Honorary Professor, Wuhan Technical University of Surveying and Mapping, China
1993.4        Honorary Fellow, International Institute of Aerospace Survey and Earth Sciences (ITC), The Netherlands

Publications :
- Three Dimensional Measurement by Photogrammetry, Editor, Japan Society of Photogrammetry and Remote Sensing, Kyoritsu Publishing Co., 1983 (Japanese version)
- Applications of Remote Sensing in Asia and Oceania, Editor, AARS, Geocarto International, Hong Kong, 1992
- Remote Sensing Note, Editor, JARS, JAS, 1992
- The World of Geoinformatics, Author, JAS, 1995 (Japanese version)
- Toward Global Planning for Sustainable Use of the Earth; Proceedings of the 8th TOYOTA Conference, Editor, Elsevier Science, 1995
- Survey High Technologies - 100 Collections, Editor, JAS, 1996 (Japanese version)

Chapter 1 Coordinate Systems and Map Projection
1-1 Coordinate System
Geospatial data should be geographically referenced (called georeferenced or geocoded) in a common coordinate system.
Plane Orthogonal Coordinates
One of the most convenient ways of locating points is to use plane orthogonal coordinates with x (horizontal) and y (vertical) axes as shown in Figure 1.1 (a) and (b). Mostly a right hand system, with the thumb assigned to x and the forefinger to y, is used as shown in Figure 1.1 (a), while a left hand system may be used in specific cases as shown in Figure 1.1 (b).
In case of raster data, image coordinates (i, j), with the pixel number in the horizontal direction (column i or pixel i) and the line number in the vertical direction (row j or line j) as shown in Figure 1.1 (c), are commonly used.
Polar Coordinates
A polar coordinate system, with the angle (θ) measured from the polar axis (x axis) and the distance (r) from the pole, is used in some cases as shown in Figure 1.2 (a).
In geodetic surveys, a point is located with the azimuth (A) measured from the North and the distance (D) from a geodetic point as shown in Figure 1.2 (b).
3D Orthogonal Coordinates
Three dimensional (3D) orthogonal coordinates are also used to locate points with the plane
coordinates (x, y) and height or depth (z) as shown in Figure 1.3 (a) and (b).
In case of locating points on the Earth under the assumption of a sphere, latitude (φ), the angle measured between the equatorial plane and the point along the meridian, and longitude (λ), the angle measured on the equatorial plane between the meridian of the point and the Greenwich meridian (the prime meridian), are used as shown in Figure 1.3 (c). Longitude has values ranging from 0 (Greenwich, U.K.) to +180 degrees (east) and from 0 to -180 degrees (west).

1-2 The Shape of the Earth


The shape of the Earth can be represented by an ellipsoid of rotation (also called a spheroid) with the lengths of the major semi-axis (a) and the minor semi-axis (b) as shown in Figure 1.4 (a).
The amount of polar flattening (or ellipticity) is expressed by

f = ( a - b ) / a

The approximate values for the Earth are:

a = 6,378 km, b = 6,357 km, f = 1/298

However, the major and minor semi-axes have been measured precisely by many scientists and organizations, as listed in Table 1.1, and different values have been adopted in different countries.
The following coordinate systems are used to represent points on the surface of the Earth.

Geodetic Coordinate System (see Figure 1.4 (b))
Longitude (λ) is the angle measured from the Greenwich meridian. Latitude (φ) is the angle measured between the equatorial plane and the normal line of the ellipsoid.
h : ellipsoid height

Geocentric Coordinate System (see Figure 1.4 (c))
Longitude (λ) is the same as in the geodetic coordinate system.
Latitude (φ') is the angle measured at the center between the equatorial plane and a point on the surface of the Earth, under the assumption that the Earth is approximated as a sphere with radius R.

1-3 Map Projection


A map projection is a process of transforming locations on the curved surface of the Earth, given in geodetic coordinates (φ, λ), to planar map coordinates (x, y).
More than 400 different map projections have been proposed. Map projections are classified by the following parameters:
- projection plane: perspective, conical, cylindrical
- aspect: normal, transverse, oblique
- property: conformality, equivalence, equidistance
Perspective Projection
Perspective projections are classified based on the projection center or viewpoint, as shown in Figure 1.5. One of the most popular perspective projections is the polar stereo projection, with the projection plane tangent to the North or South Pole and the viewpoint at the opposite pole. This polar stereo projection is used in NOAA GVI (Global Vegetation Index) data for global studies.
Conical Projection
Conical projections are classified by the aspect as well as by the cone size, as shown in Figure 1.6 and Figure 1.7 respectively.
One of the popular conical projections is Lambert's conformal conic projection, which is conformal, preserving angle and distance well within an area of about 300 km East-West and 500 km North-South. The shortest distance is given approximately as a straight line. The projection is used in the world aeronautical charts of 1:1,000,000 scale.
Cylindrical Projections
Cylindrical projections are classified in the same way as conical projections, as shown in Figure 1.8 and Figure 1.9 respectively.
One of the most popular cylindrical projections is the Universal Transverse Mercator (UTM), with a transverse axis, a secant cylinder and conformality (equal angles). UTM is commonly used for topographic maps of the world, divided into 60 zones, each with a width of 6 degrees of longitude.
Figure 1.10, Figure 1.11 and Figure 1.12 show the polar stereo projection, Lambert's conformal conic projection and UTM respectively.
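The 6-degree zone arithmetic of UTM can be sketched as a small helper. This is an illustrative sketch (the names are my own); zone 1 spans 180°W to 174°W with numbers increasing eastward, and the exceptional zones around Norway and Svalbard are ignored.

```python
def utm_zone(lon_deg):
    """UTM zone number (1..60) for a longitude in degrees.

    Zones are 6 degrees wide; zone 1 covers 180 W .. 174 W and the
    numbers increase eastward."""
    return min(int((lon_deg + 180.0) // 6) + 1, 60)  # clamp +180 into zone 60

def central_meridian(zone):
    """Central meridian of a UTM zone, in degrees of longitude."""
    return -183.0 + 6.0 * zone
```

For example, Tokyo at about 139.7 degrees East falls in zone 54, whose central meridian is 141 degrees East.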


1-4 Coordinate Transformation


Coordinate transformation converts one coordinate system (x, y) to another coordinate system (u, v). The transformation is needed in the following cases:
- to transform different map projections of many GIS data sources into a unified map projection in a GIS database,
- to adjust errors which occur at map digitization due to shrinkage or distortion of the measured map, and
- to produce geocoded images by so-called geometric correction of remote sensing imagery with geometric errors and distortions.
Coordinate transformation is executed with a selected transformation model (or mathematical equation) and a set of reference points (or control points), which are selected as tic marks at the corner points, reseau or ground control points as shown in Figure 1.13.
The following transformations are commonly used in GIS as well as in photogrammetry and remote sensing. Figure 1.14 shows the major transformations.
- Helmert transformation (scale, rotation and shift)
- Affine transformation (skew, scale of x and y, and shift)
- Pseudo affine transformation (bi-linear distortion)
- Quadratic transformation (parabolic distortion)
- Cubic transformation (cubic distortion)
- Perspective projection (rectification of aerial photo)

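Applying the Helmert transformation named above can be sketched as follows. This is a minimal illustration (the function name and parameter order are my own), expressing scale s, rotation angle t and shifts (tx, ty).

```python
import math

def helmert(x, y, scale, theta_deg, tx, ty):
    """Helmert (similarity) transformation of a point (x, y):
    u = s (x cos t - y sin t) + tx
    v = s (x sin t + y cos t) + ty"""
    t = math.radians(theta_deg)
    u = scale * (x * math.cos(t) - y * math.sin(t)) + tx
    v = scale * (x * math.sin(t) + y * math.cos(t)) + ty
    return u, v
```

In practice the four parameters are estimated from control points by the least square method of section 2-5; the affine transformation generalizes this to six parameters, allowing independent scales in x and y plus skew.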

1-5 Distance
Distance is one of the important elements in measuring spatial objects in GIS. Several different concepts of distance are defined as follows.
Euclidean Distance
Euclidean distance D is defined as the distance measured along a straight line from point (x1, y1) to point (x2, y2) in a Cartesian coordinate system (see Figure 1.15 (a)).
D^2 = ( x1 - x2 )^2 + ( y1 - y2 )^2
Manhattan Distance
Manhattan distance D is defined as the rectilinear route measured along parallels to the X and Y axes as shown in Figure 1.15 (b).
D = | x1 - x2 | + | y1 - y2 |
Great Circle Distance
Great circle distance D is defined as the distance along the great circle of the spherical Earth's surface from one point (φ1, λ1; latitude and longitude) to another point (φ2, λ2) as shown in Figure 1.15 (c).

cos ( D / R ) = sin φ1 sin φ2 + cos φ1 cos φ2 cos ( λ1 - λ2 )

where R is the radius of the Earth (R = 6370.3 km), on the assumption that the Earth is a sphere.
Mahalanobis Distance
Mahalanobis distance D is a normalized distance in the normal distribution from the center (μ) to a point (X) in case of an n-dimensional normal distribution. Mahalanobis distance is used in the maximum likelihood method for the classification of multi-spectral satellite images.

D^2 = ( X - μ )^T S^-1 ( X - μ )

where S : variance-covariance matrix
Time Distance
Time distance is defined as the time required to move from point B to point A by using specific
transportation means.
Figure 1.15 shows major distances.
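The first three distances can be sketched directly from their definitions. A minimal illustration; the function names are my own, and the great circle distance assumes the spherical radius given above.

```python
import math

def euclidean(p, q):
    """Straight-line distance between two points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    """Rectilinear distance along parallels to the X and Y axes."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def great_circle(lat1, lon1, lat2, lon2, r=6370.3):
    """Great circle distance (km) on a spherical Earth of radius r."""
    f1, f2 = math.radians(lat1), math.radians(lat2)
    c = (math.sin(f1) * math.sin(f2)
         + math.cos(f1) * math.cos(f2) * math.cos(math.radians(lon1 - lon2)))
    return r * math.acos(max(-1.0, min(1.0, c)))  # clamp rounding noise
```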


1-6 Scale, Accuracy and Resolution


The scale of a map refers to the ratio of a distance on the map to the corresponding distance on the ground. The scale is represented as 1 : M or 1/M, where M is called the scale denominator. The larger the scale, the more detail is described by the map, with higher accuracy.
In GIS, the largest map scale would be about 1/500, which is used in cadastral surveys. The smallest scale would be about 1/1,000,000, which is used in world maps and global studies.
Accuracy is defined as the closeness of measurements or estimates by computation to the true values. Accuracy is generally represented by the standard deviation of the errors, that is, of the differences between the measurements and the true value.

σ = √ ( Σ ε^2 / n )

where ε : error of a measurement
n : number of measurements
In GIS, errors result from the map itself, map digitizing and coordinate transformation, which sum up to about 0.5 mm on the map.
In a digital GIS database there is no concept of scale, but rather of resolution, expressed as pixel size (interval or dots per inch), grid cell size or grid interval, ground resolution for satellite images and so on.
There is a rough relationship between scale and resolution: a grid interval on the ground corresponds to about 0.1 mm on the map, that is,

grid interval = 0.1 mm x M

where M : scale denominator
Table 1.2 shows the relationships between scale, accuracy and resolution. Height accuracy is usually one third of the contour interval according to international standards. Most pixel sizes of scanned raster data will be 200 ~ 400 d.p.i. (dots per inch), or about 0.1 mm interval on the map.
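The scale-resolution arithmetic above can be sketched as a small helper (an illustrative function of my own naming):

```python
def ground_pixel_size(dpi, scale_denominator):
    """Ground interval (metres) of one scanned map pixel.

    A scan at dpi dots per inch gives a pixel of 25.4/dpi mm on the map;
    multiplying by the scale denominator M converts it to the ground."""
    pixel_mm = 25.4 / dpi
    return pixel_mm * scale_denominator / 1000.0  # mm -> m
```

For example, a 254 d.p.i. scan (0.1 mm pixels) of a 1:25,000 map gives a ground pixel of 2.5 m.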


Chapter 2 Interpolation
2-1 Principle of Interpolation
Interpolation is the procedure of estimating the value of properties at unsampled points or areas
using a limited number of sampled observations.
Figure 2.1 and Figure 2.2 show the principle of curve fitting and surface fitting respectively to
interpolate the value at an unsampled point using surrounding sampled points.
In case a single function of the curve or surface fitting is determined, the interpolation is called
global interpolation, and in case different functions are adopted locally and repeatedly in a small
portion of the total area, it is called local interpolation.
When curve or surface fitting is executed so as to honor all the sampled observations, the interpolation is called exact interpolation, whereas when the fitted curve or surface does not pass through all the sampled observations, because of expected errors, it is called approximate interpolation. Approximate interpolation is sometimes used in spatial prediction of trends or in representation of grid cells or unit areas.
Figure 2.3 shows prediction of trend with an approximate curve interpolation and the variation
from the trend. Figure 2.4 shows an example of representation at a grid cell based on majority
rule.


2-2 Pointwise Interpolation


Pointwise interpolation is used when the sampled points are not densely located and have limited influence on or continuity with surrounding observations, for example climate observations such as rainfall and temperature, or ground water level measurements at wells.
The following two methods are commonly used for pointwise interpolation.
Thiessen Polygons
Thiessen polygons can be generated using a distance operator, as shown in Figure 2.5 (a), which creates the polygon boundaries as the intersections of radial expansions from the observation points. This method is also known as Voronoi tessellation.
Pointwise interpolation within a Thiessen polygon is based on the nearest neighbor, which estimates the value as being equal to that of the sampled observation in the polygon.
Weighted Average
A circular window with radius dmax is drawn around the point to be interpolated, so as to involve six to eight surrounding observed points as shown in Figure 2.6 (a).
Then the value at the point is calculated as the summation of the products of the observed values zi and weights wi, divided by the summation of the weights:

z = Σ wi zi / Σ wi

The weight functions commonly used are functions of the distance di, for example wi = 1 / di or wi = 1 / di^2.
Table 2.1 shows the general properties of the weight function.
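The weighted average above is often called inverse distance weighting. A minimal sketch under the assumption of a 1/d^power weight function (the names are my own):

```python
def idw(x, y, samples, power=2):
    """Inverse-distance-weighted estimate at (x, y).

    samples is a list of (xi, yi, zi) observations; the weight is
    w = 1 / d**power, and an exact hit returns the observation itself."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d = ((x - xi) ** 2 + (y - yi) ** 2) ** 0.5
        if d == 0.0:
            return zi           # the point coincides with an observation
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den
```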


2-3 Curve Fitting


Curve fitting is an important type of interpolation in many applications of GIS. Curve fitting is divided into two categories:
- exact interpolation : the fitted curve passes through all given points
- approximate interpolation : the fitted curve does not always pass through all given points
Exact Interpolation
There are three methods;
- nearest neighbor : the same value as that of the observation is given within the proximal
distance, as shown in Figure 2.7.
- linear interpolation: a piecewise linear function is applied between two adjacent points as
shown in Figure 2.8.

- cubic interpolation : a third order polynomial is applied between two adjacent points under the condition that the first and second order differentials are continuous. Such a curve is called a "spline" (see Figure 2.9).
y = ax^3 + bx^2 + cx + d
In case the curve is not a single-valued function of x, as shown in Figure 2.10, an auxiliary variable u should be introduced, with x = x(u) and y = y(u) each expressed as cubic polynomials in u.

Approximate Interpolation
There are three methods;
- moving average : a window with a range of -d to +d is set to average the observations within the region, as shown in Figure 2.11
- B-spline : a cubic curve is determined by using four adjacent observations, as shown in Figure 2.12
- curve fitting by the least square method : see section 2-5
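Of the exact methods above, piecewise linear interpolation is the simplest to sketch. Illustrative code with names of my own; the sample points are assumed sorted by x.

```python
def linear_interp(xs, ys, x):
    """Piecewise linear interpolation through points (xs[i], ys[i]).

    xs must be sorted in increasing order; outside the sampled range
    the nearest end value is returned."""
    if x <= xs[0]:
        return ys[0]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    return ys[-1]
```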


2-4 Surface Fitting


Surface fitting is widely used for interpolation of points on continuous surfaces such as digital
elevation model (DEM), geoid, climate model (rainfall, temperature, pressure etc.) and so on.
Surface fitting is classified into two categories: surface fitting for regular grid and for random
points.
Surface Fitting for Regular Grid
Following two methods are commonly used.
Bilinear Interpolation
A bilinear function is used to interpolate z with the following formula, with respect to normalized coordinates (u, v) of the original coordinates (x, y) as shown in Figure 2.13:

z = (1-u)(1-v) z00 + u(1-v) z10 + (1-u)v z01 + uv z11

where z00, z10, z01 and z11 are the values at the four surrounding grid points.
Bicubic Interpolation
A third order polynomial is used to fit a continuous surface using the 4 x 4 = 16 adjacent points as shown in Figure 2.14.
z is calculated with a polynomial of the form

z = Σ Σ aij u^i v^j    (i, j = 0, 1, 2, 3)

whose 16 coefficients aij are determined from the 16 points.
Surface Fitting for Random Points
A triangular network, called a Triangulated Irregular Network (TIN), is applied as shown in Figure 2.15. Each triangle forms a plane with straight contour lines. The details of TIN are described in Chapter 3.
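The bilinear case can be sketched directly. A minimal illustration; the corner naming z00 .. z11 follows the formula above.

```python
def bilinear(u, v, z00, z10, z01, z11):
    """Bilinear interpolation on the unit square.

    (u, v) are normalized coordinates in [0, 1]; z00 is the value at
    (u, v) = (0, 0), z10 at (1, 0), z01 at (0, 1) and z11 at (1, 1)."""
    return ((1 - u) * (1 - v) * z00 + u * (1 - v) * z10
            + (1 - u) * v * z01 + u * v * z11)
```

At the corners the function reproduces the grid values exactly, and at the center it returns the mean of the four corners.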


2-5 Least Square Method


The least square method (sometimes called a regression model) is a statistical approach to estimate an expected value or function with the highest probability from observations containing random errors. In the least square method, seeking the highest probability is replaced by minimizing the sum of the squares of the residuals.
A residual is defined as the difference between an observation and the estimated value of the function.
In GIS, the least square method is widely used for spatial data analysis rather than only as an interpolation technique.
The least square method is commonly applied in the following two cases in GIS.
Least square method is commonly applied for the following two cases in GIS.
Curve Fitting
In case measurements (xi, yi) are given, the relationship between x and y is estimated by a function, for example y = f(x) = ax + b. By minimizing the sum of the squares of the residuals, the unknown parameters a and b are determined.
The unknown parameters in the case of y = ax + b are determined as follows.
Observation equations : AX = B, or a xi + b = yi
Least square solution : X = ( A^T A )^-1 A^T B
Coordinate Transformation
For example, when a digitizer is used to digitize map data on a paper map sheet in the digitizer's coordinate system, as shown in Figure 2.17 (a), users want to transform it into the map coordinate system, as shown in Figure 2.17 (b), using the four tic marks at the corners. The rotation, scale and shift can be determined mathematically with only two points, but two additional redundant measurements are strongly recommended because of measurement errors. In such cases the least square method is applied.
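For the line y = ax + b, the normal equations can be solved in closed form. A minimal sketch (the function name is my own):

```python
def fit_line(points):
    """Least square fit of y = a*x + b to (x, y) observations.

    Solves the normal equations of the observation equations A X = B
    in closed form for X = (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```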


2-6 Interpolation of Image Data
Interpolation of image data is required in resampling for the geometric correction of imagery in remote sensing and photogrammetry.

Figure 2.18 shows the process of geometric correction in which the original image data with
distortions are transformed into geocoded image through resampling and interpolation.
There are three interpolation methods.
Nearest Neighbor
Resampled image data Q (x, y) is replaced by the nearest original image data Pkl :
Q (x, y) = Pkl , where k = IFIX ( x + 0.5 ), l = IFIX ( y + 0.5 )
Bi-linear
Resampled image data Q (u, v) is a weighted average of the four surrounding original image data, as follows:
Q (u, v) = (1-u)(1-v) Pi,j + (1-u) v Pi,j+1 + u (1-v) Pi+1,j + u v Pi+1,j+1
where (u, v) : normalized coordinates; 0 ≤ u ≤ 1, 0 ≤ v ≤ 1
Cubic Convolution
Cubic convolution is an image enhancement technique to stretch the contrast and sharpen the
edges using the following spatial filter as shown in Figure 2.19 (c).


Resampled image data are computed using 4 x 4 = 16 surrounding original image data as shown
in Figure 2.19 (d).


Chapter 3 Digital Terrain Model (DTM)


3-1 DEM and DTM
A DEM (digital elevation model) is a digital representation of the topographic surface with the elevation or ground height above a geodetic datum. Various types of DEM have already been described in 2-8 and Figure 2.11 of Chapter 2, Volume 1, as well as in 2-4 and Figures 2.13 ~ 2.15 of Chapter 2, Volume 2, including grid cell surfaces, TIN (triangulated irregular network), contour lines and profiles.
Figure 3.1 shows three major DEMs that are widely used in GIS.
A DTM (digital terrain model) is a digital representation of terrain features including elevation, slope, aspect, drainage and other terrain attributes. Usually a DTM is derived from a DEM or elevation data. In this book, a DEM refers to a model with elevation data in digital format by which the elevation at an arbitrary location in the area can be interpolated, while a DTM refers to terrain features in digital format that can be derived from the elevation data.
Figure 3.2 shows several terrain features including the following DTMs.
Slope and Aspect
Drainage network
Catchment area
Shading
Shadow
Slope stability


3-2 Triangulated Irregular Network (TIN)


A triangulated irregular network or TIN is a DEM with a network of triangles at randomly located terrain points. Irregularly spaced sample points are measured, with more points in areas of rough terrain and fewer in smooth terrain. These sample points are connected by lines to form triangles under the Delaunay criterion: the circle drawn through the three points of a triangle contains no other points, as shown in Figure 3.3. Such a triangle is called a Delaunay triangle.
Delaunay triangles can also be created from Thiessen polygons (see Figure 2.5), in such a way that two vertices are connected to form a triangle edge if their Thiessen polygons share an edge, as shown in Figure 3.4.
There are three data structures for storing a TIN model (see Figure 3.5 and Table 3.1).
Triangle-based structure (see Table 3.1 (a)) : efficient for slope analysis
- triangle ID
- three node IDs and coordinates
- neighbors of the triangle
Point-based structure (see Table 3.1 (b)) : efficient for contouring and other traversing
- point ID
- coordinates
- neighbors of the point
Side-based structure (see Table 3.1 (c)) : also efficient for contouring
- point file with ID and coordinates
- triangle file with ID and three point IDs
- side file with ID, two point IDs and neighbor triangles (left and right)
Contouring of TINs is based on the following procedure (see Figure 3.6).
step 1 : find the intersection of the contour and a side.
step 2 : assign a "reference point", with the symbol r, to each vertex above the contour height and a "sub-point", with the symbol s, to each vertex below the contour height.
step 3 : traverse from triangle to triangle, finding the third vertex of each triangle and checking whether it is a reference point (r) or a sub-point (s).
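The Delaunay criterion of this section (no other point inside a triangle's circumcircle) can be tested with the standard 3 x 3 determinant predicate. A minimal sketch (name my own; vertices assumed counter-clockwise):

```python
def in_circumcircle(a, b, c, p):
    """True if point p lies inside the circumcircle of triangle abc.

    a, b, c must be given in counter-clockwise order; each point is an
    (x, y) pair. Uses the standard incircle determinant."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0
```

A triangulation satisfies the Delaunay criterion when this test fails for every sample point against every triangle it does not belong to.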


3-3 Generation of Contour Lines


Contour lines are terrain features which represent the relief of the terrain by lines of equal height.
There are two types of contour representation in visualizing GIS data: vector line drawing and raster image.
Vector Line Drawing
In case the terrain points are given in a grid, the simplest method is to divide each square cell into two triangles mechanically, as shown in Figure 3.6 a and b. However, such mechanical division will cause some loss of smoothness in the contour lines. Figure 3.7 shows strategic division, which forms the triangles depending on the slope.
In case the terrain points are given randomly, TINs will be created as described in 3-2 (see Figure 3.6).
Raster Image
A contour image with painted contour terraces, belts or lines instead of vector lines can be generated in raster form.
In case of a grid, densely spaced subgrid points are generated, and the interpolated height values, simply sliced into contour intervals, are assigned the corresponding colors as shown in Figure 3.8 (a). In case of irregularly spaced points, a two-step interpolation, that is, a coarse grid and then a dense subgrid, is applied as shown in Figure 3.8 (b). The procedure is known as densification.
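Both the vector and raster procedures rest on finding where a contour level crosses an edge between two height points. A minimal sketch of that linear interpolation step (name my own):

```python
def contour_crossing(p1, z1, p2, z2, c):
    """Point where the contour of height c crosses the edge p1-p2.

    p1, p2 are (x, y) pairs with heights z1, z2; returns None when the
    level c does not lie between the two heights."""
    lo, hi = min(z1, z2), max(z1, z2)
    if z1 == z2 or not lo <= c <= hi:
        return None
    t = (c - z1) / (z2 - z1)            # proportional distance along the edge
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
```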


3-4 Interpolation of Elevation from Contours


A digital elevation model (DEM) is very often generated by measuring terrain points along contour lines using a digitizer. A DEM with contour points should be provided with an algorithm to interpolate the elevation at arbitrary points.
There are several interpolation methods as follows.
Profile Method
A profile passing through the point to be interpolated is generated and a linear or spline curve applied, as shown in Figure 3.9 (a). However, improper profiles, as shown in Figure 3.10 (a), are sometimes introduced. In case of a spline curve, wave-shaped profiles are a problem in transition areas from steep to gentle slopes.
Proportional Distance Method
The elevation is interpolated proportionally with respect to the ratio of the distances to the two adjacent contour lines, as shown in Figure 3.9 (b). However, a point inside an island contour, as shown in Figure 3.10 (b), is a problem.
Window Method
A circular window is set up around a point to be interpolated as shown in Figure 3.9 (c) and
adjacent terrain points are used to interpolate the value using second order or third order
polynomials.
The interpolation accuracy is better than other methods, but searching of adjacent points within
the window is time consuming.
TIN Method
TINs are generated using the terrain points along contour lines. The interpolation is very easy, but TINs within an island contour, as shown in Figure 3.10 (c), are a problem.
Buffering based on the proportional distance method, with additional independent terrain points, would be the best interpolation method.


3-5 Automated Generation of DEM


Automated generation of DEM is achieved by photogrammetric methods based on stereo aerial photography and satellite stereo imagery.
The principle is the same for aerial photography and satellite stereo imagery: both use the parallax of a stereo pair, as shown in Figure 3.11.
Parallax is defined as the difference between the left and right photograph or image coordinates. The higher the elevation, the bigger the parallax. If the parallax is constant, lines of equal elevation, or contour lines, are produced.
Determination of the three dimensional coordinates of terrain points is achieved by searching for the corresponding points, called conjugate points, on the image planes of the left and right images of the stereo pair. Intersection of the two corresponding 3D rays gives the 3D coordinates of a terrain point. The 3D rays are given by the photogrammetric geometry, called the collinearity equations, that connects the camera lens position (X0, Y0, Z0), the image point (x, y) and the terrain point (X, Y, Z) as follows.

x = -c [ m11 (X - X0) + m12 (Y - Y0) + m13 (Z - Z0) ] / [ m31 (X - X0) + m32 (Y - Y0) + m33 (Z - Z0) ]
y = -c [ m21 (X - X0) + m22 (Y - Y0) + m23 (Z - Z0) ] / [ m31 (X - X0) + m32 (Y - Y0) + m33 (Z - Z0) ]

where c : focal length of the camera
mij : elements of the rotation matrix determined by κ, φ and ω, the rotation angles of the camera around the Z, Y and X axes respectively

Automated generation of DEM is achieved by automatic searching for conjugate points on the stereo pair, called image matching or stereo matching.
Image matching is usually executed in two steps.
step 1 : resampling along epipolar lines, as shown in Figure 3.12, because conjugate points are theoretically located on epipolar lines.
step 2 : the image correlation between a fixed window of n x n pixels on the left image and a moving window of the same size on the right image is maximized to determine the conjugate point, as shown in Figure 3.13.
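The window correlation of step 2 is typically a normalized cross-correlation. A minimal sketch on two equal-size grey-value windows (name my own):

```python
import math

def ncc(left, right):
    """Normalized cross-correlation of two equal-size image windows.

    left and right are 2-D lists of grey values; the result lies in
    [-1, 1], and the conjugate point is taken where it is maximal."""
    a = [v for row in left for v in row]
    b = [v for row in right for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0
```

Identical windows correlate to 1.0; in stereo matching the moving window position with the largest value is accepted as the conjugate point.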


3-6 Orthoimage Generation


Once a DEM is given, an orthoimage can be generated automatically from the rasterized image data of an aerial photograph based on central projection.
There are two procedures, as follows.
Orthoimage from Stereo Matched DEM
In case of stereo matching, the left image is fixed as a regularly spaced grid, while the corresponding grid on the right image is irregularly spaced because of the terrain relief. The product, that is the DEM, is accordingly based on an irregularly spaced grid on the ground. Therefore resampling and interpolation should be executed to convert it to a regularly spaced grid on the ground, as shown in Figure 3.14.
Orthoimage from Regularly Spaced DEM
Some areas will have a regularly spaced DEM, possibly produced from existing topographic maps. In such cases, the regularly spaced DEM, or three dimensional coordinates (X, Y, Z), can be projected onto the image plane as shown in Figure 3.15. The projected grid will be irregularly spaced, and must be interpolated to form a regularly spaced image data file.
Any image based on other geometries, such as mechanical scanner, pushbroom scanner, radar etc., can also be converted to an orthoimage using the same principle.


3-7 Extraction of Terrain Information


Terrain information or topographic features can be extracted from a DEM.
The following terrain information is useful for various applications of spatial analysis in GIS.
Slope and Aspect
The steepest slope s and its direction θ measured from the east can be computed from a 3 x 3 window as shown in Figure 3.16. The aspect, that is, the azimuth the slope faces, is 180° opposite to the direction θ (see Figure 3.15).
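The slope and aspect computation can be sketched as follows; the central-difference scheme used here is a common choice but an assumption on our part, since Figure 3.16 may define the 3 x 3 operator differently.

```python
import math

def slope_aspect(z, d):
    """Steepest slope (degrees) and its downhill direction from a 3 x 3
    elevation window z (list of 3 rows, north at the top) with grid
    interval d, using simple central differences."""
    dzdx = (z[1][2] - z[1][0]) / (2 * d)   # gradient west -> east
    dzdy = (z[0][1] - z[2][1]) / (2 * d)   # gradient south -> north
    slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    # direction of steepest descent, counter-clockwise from east
    theta = math.degrees(math.atan2(-dzdy, -dzdx))
    return slope, theta
```

For a plane rising 1 m per 1 m grid step toward the east, the function reports a 45-degree slope descending toward the west.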
Convex and Concave
Convexity and concavity of the terrain shape are represented by the second order differentials, computed by the Laplacian operator (see Figure 3.17). Convex gives a positive Laplacian while concave gives a negative one.
Surface Specific Points
+ is assigned if the height of the central point is higher than that of a neighbor, and - if lower. A peak can be detected if all the eight neighbors are lower as shown in Figure 3.18 (a), while a pit is formed if all the eight neighbors are higher as shown in Figure 3.18 (b). A pass can be extracted if the + and - signs alternate around the central point with at least two complete cycles as shown in the examples of Figure 3.17 (c) and (d).
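A minimal sketch of surface specific point detection from the +/- signs of the eight neighbors; the sign-change count used for a pass follows the "two complete cycles" rule above.

```python
def classify_point(z):
    """Classify the centre of a 3 x 3 window as 'peak', 'pit', 'pass' or
    'other' from the +/- signs of the eight neighbours (clockwise order)."""
    centre = z[1][1]
    ring = [z[0][0], z[0][1], z[0][2], z[1][2],
            z[2][2], z[2][1], z[2][0], z[1][0]]
    signs = ['+' if centre > h else '-' for h in ring]
    if all(s == '+' for s in signs):
        return 'peak'
    if all(s == '-' for s in signs):
        return 'pit'
    # two complete +/- cycles around the ring means at least 4 sign changes
    changes = sum(1 for i in range(8) if signs[i] != signs[(i + 1) % 8])
    return 'pass' if changes >= 4 else 'other'
```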
Drainage Network and Watershed
The lowest of the eight neighbors is compared with the height of the central point to determine the flow direction as shown in Figure 3.18 (a).
The accumulated count of the flow paths through a point will give the catchment area or watershed as shown in Figure 3.18 (b).
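The flow-direction rule can be sketched as the steepest-descent ("D8") test below; dividing the drop by the neighbor distance is our assumption for diagonal cells, while the simplest variant just takes the lowest neighbor.

```python
import math

def d8_flow_direction(z, row, col):
    """Return the (drow, dcol) offset of the steepest-descent neighbour of
    cell (row, col), or None if no neighbour is lower (a pit/sink)."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < len(z) and 0 <= c < len(z[0]):
                dist = math.hypot(dr, dc)   # diagonal neighbours are farther
                drop = (z[row][col] - z[r][c]) / dist
                if drop > best_drop:
                    best_drop, best = drop, (dr, dc)
    return best
```

Following these offsets from cell to cell traces the drainage network; counting how many upstream cells pass through each cell gives the flow accumulation.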
Hill Shading
The effect of hill shading on the assumption of an ideally diffusely reflecting surface (called a Lambertian surface) can be computed as follows.
Relative shading = cos β
where β : angle between the incident light vector s and the surface normal n as shown in Figure 3.19.
For human psychological recognition of the relief effect, hill shading is performed with incident light at a 45° elevation angle from the North West, which results in brighter (larger cos β) North West facing surfaces and darker (smaller cos β) South East facing surfaces. The detail is described in 3-8.
Various kinds of extracted terrain information are demonstrated in the front pages.


3-8 Shade and Shadow


Shade is defined as reduced reflection depending on the angle between the terrain surface and the incident light such as the sun.
The effect of hill shading is based on the assumption that the terrain is a Lambertian surface as explained in 3-7.
Shadow consists of projected areas that the incident light cannot reach because of the visual hindrance of objects or terrain relief, as shown in Figure 3.20.
Computation of Hill Shading
Hill shading = |cos β| = |nx sx + ny sy + nz sz| ≤ 1.0
where β : angle between the surface normal vector n = (nx, ny, nz) of the terrain surface and the incident light vector s = (sx, sy, sz), both of unit length (see Figure 3.19)

Usually the azimuth and elevation of the incident light are given as -45° (NW) and 45° respectively
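A minimal sketch of the hill shading computation; the azimuth convention in light_vector (measured from north, clockwise positive, so NW is -45°) is an assumption for illustration.

```python
import math

def unit(v):
    """Normalize a 3-vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def hill_shading(normal, light):
    """Hill shading = |cos beta| = |nx*sx + ny*sy + nz*sz| for unit vectors."""
    n, s = unit(normal), unit(light)
    return abs(sum(a * b for a, b in zip(n, s)))

def light_vector(azimuth_deg, elevation_deg):
    """Incident-light direction with x east, y north, z up; azimuth measured
    from north, clockwise positive (an assumed convention)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return [math.sin(az) * math.cos(el),
            math.cos(az) * math.cos(el),
            math.sin(el)]
```

For flat terrain (normal straight up) under NW light at 45° elevation, the shading equals sin 45° ≈ 0.707.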


Shadow Algorithm
Usually the incident light is regarded as the sun with the solar zenith angle (90° - α) and the azimuth angle measured from the south, positive to the east.
The shadow algorithm to determine whether the central point Z5 (see Figure 3.15 (a)) is sun lit or in sun shadow is given as follows;
Step 1: According to the sun azimuth with respect to the eight zones as shown in Figure 3.22 (a), the resultant height H with the weight p at the corner point and (1-p) at the side point is computed for comparison with the height of the central point Z5 (see Figure 3.22 (b)).
H = pZm + (1-p)Zn, where p, m and n are given in Figure 3.22 (a).
Step 2: If the following inequality is true, the central point is in shadow (see Figure 3.22 (c)).
H - Z5 > D tan α
where D : grid interval, α : sun elevation angle.
Step 3: If the central point is assigned to be in shadow, Z5 is replaced by H - D tan α and the above procedure is repeated for the neighboring points.


Chapter 4 Spatial Analysis


4-1 What is Spatial Analysis?
The most important function of GIS is to enable the analysis of the spatial data and their
attributes for decision support.
Spatial analysis is done to answer questions about the real world including the present situation
of specific areas and features, the change in situation, the trends, the evaluation of capability or
possibility using overlay technique and/or modeling and prediction. Therefore spatial analysis
ranges from simple arithmetic and logical operation to complicated model analysis.
Spatial analysis is categorized as follows.
Query: retrieval of attribute data without altering the existing data by means of arithmetic and
logical operations.
Reclassification: reclassification of attribute data by dissolving a part of the boundaries and
merging into new reclassified polygons.
Coverage Rebuilding: rebuilding of the spatial data and the topology by "update", "erase",
"clip", "split", "join" or "append".
Overlay: overlaying of two or more layers, including rebuilding the topology of the merged points, lines and polygons, and operations on the merged attributes for suitability studies, risk management and potential evaluation.
Connectivity Analysis: analysis of connectivity between points, lines and polygons in terms of distance, area, travel time, optimum paths etc. Proximity analysis by buffering, searching for optimum paths, network analysis, etc. are included.
Figure 4.1 shows examples of spatial analysis.


4-2 Query
Query is to retrieve the attribute data without altering the existing data according to
specifications given by the operator.
The specifications include the following three items, usually given in Structured Query Language (SQL).
SELECT: attribute name (s)
FROM: table
WHERE: condition statement
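A minimal sketch of such a query using Python's built-in sqlite3 module; the parcel table and its columns are invented for illustration.

```python
import sqlite3

# In-memory database with a hypothetical parcel attribute table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parcel (id INTEGER, landuse TEXT, area REAL)")
con.executemany("INSERT INTO parcel VALUES (?, ?, ?)",
                [(1, "forest", 120.0), (2, "urban", 45.5), (3, "forest", 30.0)])

# SELECT attributes / FROM table / WHERE conditions combined with AND
rows = con.execute(
    "SELECT id, area FROM parcel WHERE landuse = 'forest' AND area > 50"
).fetchall()
print(rows)   # -> [(1, 120.0)]
```

Note that the query only retrieves attribute data; the table itself is left unaltered.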
The conditional statement is represented by the following three types of operator.
relational: >, <, =, ≥, ≤
arithmetic: +, -, ×, ÷
Boolean (logical): AND, OR, NOT, XOR (exclusive OR)
The Boolean operators are used to combine two or more conditions as shown in Figure 4.2.
The Boolean operators are based on 0 and 1; 0 if the attributes do not meet the condition and 1 if they do, as shown in Figure 4.3.
AND: multiply
OR: add (2 is reclassified to 1)
NOT: subtract (-1 is reclassified to 0)
XOR: (add)- (multiply)
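The arithmetic equivalents above can be sketched on two binary (0/1) raster layers as follows.

```python
def overlay(a, b, op):
    """Combine two binary (0/1) raster layers cell by cell, using the
    arithmetic equivalents of the Boolean operators listed above."""
    ops = {
        "AND": lambda x, y: x * y,                    # multiply
        "OR":  lambda x, y: min(x + y, 1),            # add, 2 reclassified to 1
        "NOT": lambda x, y: max(x - y, 0),            # subtract, -1 reclassified to 0
        "XOR": lambda x, y: min(x + y, 1) - x * y,    # (add) - (multiply)
    }
    f = ops[op]
    return [[f(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

For example, overlaying a = [[1, 0], [1, 0]] and b = [[1, 1], [0, 0]] with "AND" keeps only the cells that satisfy both conditions.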


4-3 Reclassification
Reclassification is to reassign new thematic values or codes to units of spatial features, which will result in merging polygons.
A set of "reclassify attributes", "dissolve the boundaries" and "merge the polygons" are used
frequently in aggregating area objects, as already shown in Figure 4.1 (b).
Reclassification is executed in the following cases.
Generalization: reassignment of existing data into a smaller number of classes. Generalization will result in a reduction of the level of detail.
Ranking: valuation of attributes based on an evaluation model or table specified by users.
Reselection: selection of features to be kept and removal of unselected features.
Figure 4.4 shows examples of the above three cases.


4-4 Coverage Rebuilding


Coverage rebuilding is a boundary operation to create new coverages that are identified and
selected by users.
Boundary operations include the following six commands.
- Clip: to identify and preserve features within the boundary of interest specified by users. It is
called a "cookie cutter".
- Erase: to erase features inside the boundary while preserving features outside the boundary.
- Update : to replace features within the boundary by cutting out the current polygons and
pasting in the updated polygons.
- Split: to create new coverages by clipping geographic features with divided borders.
- Append: to merge the same feature classes of points and lines from the adjacent coverages.
- Map Join: to join the adjacent polygon features into a single coverage and to rebuild the topology. It is called mosaicking.
Figure 4.5 shows the concept of coverage rebuilding.


4-5 Overlay of Raster Data


Overlay of raster data with two or more layers is rather easy compared with overlay of vector data, because it involves no topological operations, only pixel by pixel operations.
Generally there are two methods of raster-based overlay.
Weighting point method: basically two layers with the values of P1 and P2 respectively are
overlaid with the weight of w1 and w2 respectively as follows.
P = w1 P1 + w2 P2
where w1 + w2 = 1.0
The weighting point method is only applicable when the attributes have numerical values which can be operated on arithmetically.
Ranking method: at first the attributes of the two layers are categorized into five ranks, for example excellent (5), better (4), good (3), poor (2) and bad (1), according to the specific purpose of the overlay.
Then the two different layers of A and B are overlaid by following one of the three ranking tables
as shown in Table 4.1.
Minimum Ranking
Lower rank is taken as the new rank of the overlaid pixel as the safety rule.
Multiplication Ranking
The two ranks are multiplied when their combined effect is multiplicative rather than additive.
Selective Ranking
Experts can set up combined ranks depending on professional experience.
For practical purposes, a model of overlay with many layers and the hierarchical structure should
be built by users as shown in Figure 4.6.
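The weighting point method and minimum ranking can be sketched as follows (pure Python, with layers as lists of rows).

```python
def weighted_overlay(p1, p2, w1, w2):
    """Weighting point method: P = w1*P1 + w2*P2, with w1 + w2 = 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return [[w1 * a + w2 * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(p1, p2)]

def minimum_ranking(r1, r2):
    """Minimum ranking: the lower of the two ranks wins (the safety rule)."""
    return [[min(a, b) for a, b in zip(x, y)] for x, y in zip(r1, r2)]
```

A multiplication-ranking variant would simply replace min(a, b) with a * b.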


4-6 Overlay of Vector Data


Overlay of vector data is more complicated because it must update the topological tables of spatial relationships between points, lines and polygons.
Overlay of vector data results in the creation of new line and area objects with additional
intersections or nodes, that need topological overlay.
There are three types of vector overlay.
point in polygon overlay: points are overlaid on polygon map as shown in Figure 4.7 (a).
Topology of point in polygon is "is contained in" relationship. Point topology is a new attribute
of polygon for each point.
line on polygon overlay: lines are overlaid on polygon map with broken line objects as shown
in Figure 4.7 (b). Topology of line on polygon is "is contained in" relationship. Line topology is
the attribute of old line ID and containing area ID.
polygon on polygon overlay: two layers of area objects are overlaid, resulting in new polygons and intersections as shown in Figure 4.7 (c). The number of new polygons is usually larger than that of the original polygons. Polygon topology is a list of original polygon IDs.


4-7 Connectivity Analysis


Connectivity analysis is to analyze the connectivity between points, lines and areas in terms of
distance, area, travel time, optimum path etc.
Connectivity analysis consists of the following analyses.
Proximity Analysis: proximity analysis is the measurement of distances from points, lines and boundaries of polygons. One of the most popular proximity analyses is based on "buffering", by which a buffer can be generated around a point, line or area with a given distance as shown in Figure 4.8. Buffering is easier to generate for raster data than for vector data.
Proximity analysis is not always based on distance; it may also be based on time. For example, proximity analysis based on access time or travel time will give the distribution of time zones indicating the time needed to reach a certain point.
Figure 4.9 shows walking distance in time (contour lines of every 10 minutes) to the railway
station.
Network Analysis: network analysis includes determination of optimum paths using specified
decision rules. The decision rules are likely based on minimum time or distance, maximum
correlation occurrence or capacity and so on.
Figure 4.10 shows two examples of optimum paths based on minimum distance and time
respectively.
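As a sketch, an optimum path under a minimum-distance (or minimum-time) decision rule can be found with Dijkstra's algorithm; the small network below is invented for illustration.

```python
import heapq

def optimum_path(graph, start, goal):
    """Shortest path on a weighted network (Dijkstra). `graph` maps a node to
    a list of (neighbour, cost) pairs, cost being distance or travel time."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [], goal                  # walk the predecessor chain back
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

network = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(optimum_path(network, "A", "D"))   # -> (['A', 'B', 'C', 'D'], 3.0)
```

Replacing the distance weights with travel times gives the minimum-time path of Figure 4.10 under the same algorithm.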


4-8 Shape Analysis and Measurement


Shape analysis and measurement are very important to analyze the shape of area objects in GIS.
The following parameters are computed from vector data.

Area: A = (1/2) |Σ (xi yi+1 - xi+1 yi)|, summed for i = 1 to n,
where xn+1 = x1, yn+1 = y1

Perimeter: L = Σ sqrt((xi+1 - xi)² + (yi+1 - yi)²), summed for i = 1 to n

Centroid (Center of Gravity) (see Figure 4.11)
xg = (1/(6A)) Σ (xi + xi+1)(xi yi+1 - xi+1 yi), yg = (1/(6A)) Σ (yi + yi+1)(xi yi+1 - xi+1 yi)
where A : the area of the polygon

The above parameters of area and the centroid can be computed easily from raster data by simply
summing the number of pixels of the polygon and averaging the x and y coordinates respectively.
The perimeters can be computed by counting the chain codes around the boundary of the
polygon.
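The vector-based measures above (shoelace area, perimeter and centroid) can be sketched as:

```python
import math

def polygon_measures(pts):
    """Area (shoelace formula), perimeter and centroid of a simple polygon
    given as a list of (x, y) vertices; the last vertex wraps to the first."""
    n = len(pts)
    a2 = cx = cy = per = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        cross = x1 * y2 - x2 * y1          # signed cross term of the shoelace sum
        a2 += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
        per += math.hypot(x2 - x1, y2 - y1)
    area = a2 / 2.0                        # signed area
    return abs(area), per, (cx / (6 * area), cy / (6 * area))
```

For the unit square the function returns area 1.0, perimeter 4.0 and centroid (0.5, 0.5).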
The following parameters indicating the shape of the polygon are also used widely in the shape
analysis as shown in Figure 4.12.
Horizontal maximum chord: CHORD H
Vertical maximum chord: CHORD V
Horizontal Feret's Diameter: FERE H
Vertical Feret's Diameter: FERE V
Maximum Length: MAXLING
Breadth: BRDTH
Orientation:
The following shape factors are also used for shape analysis as shown in Figure 4.13.
Roundness Factor:
Unevenness:
Flatness:


Chapter 5 Digital Image Processing


5-1 Flow of Digital Image Processing
Digital image processing is required for the following cases in GIS.
- Integration of remote sensing and GIS, particularly with satellite imagery of Landsat, SPOT,
JERS-1, ERS-1, Radarsat, IRS
etc.
- Automated digitization from rasterized map data using scanners
- Computer mapping with color output in raster format
- Visualization in three dimensional bird's eye view
- Editing of image database
Digital image processing includes the following procedures, as shown in Figure 5.1.
Image input: to acquire digital image data by scanning analog films or maps. Satellite imagery
is provided in computer compatible tape (CCT) or CD-ROM.
Preprocessing: two procedures of radiometric correction and geometric correction are required.
Image Transformation: two procedures of image enhancement and feature extraction will be implemented depending on the specific purposes as shown in Figure 5.2.
Classification: thematic maps such as land cover/land use, soil, forest, geology etc. will be
produced.
Image output: two products, analog image output and a digital image database, will be the result of digital image processing.


5-2 Radiometric Correction


Radiometric correction is a pre-processing technique to reconstruct physically calibrated values by correcting the spectral distortions caused by sensors, sun angle, topography and the atmosphere as shown in Figure 5.3.
Radiometric correction is classified into two types; absolute and relative correction.
Absolute correction: correct radiance or reflectance should be measured or converted by using the sensor calibration data, the sun angle and view angle, atmospheric models and ground truth data. The incident energy input to sensors as shown in Figure 5.4 should be analyzed correctly by radiometric correction. However, because the atmospheric model is complicated and the exact measurement of atmospheric conditions is difficult, absolute correction cannot be applied in most applications; relative correction is applied instead.
Relative Correction: Relative correction is to normalize multi-temporal data taken on different
dates to a selected reference data at specific time.
The following techniques will be typical.
- Adjustment of average and standard deviation values.
- Conversion to a normalized index: for example the normalized difference vegetation index NDVI = (NIR - R) / (NIR + R).

- Histogram matching: the histograms per band and/or per sensor are calculated and the cumulative histogram with cut-offs at 1% and 99% will be relatively adjusted to the reference histogram as shown in Figure 5.5.
- Least square method: linear function of y = ax + b is determined, where y is reference data and
x is data to be normalized.
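The least square normalization can be sketched as an ordinary least-squares fit of y = ax + b over paired samples from the two dates.

```python
def fit_linear(x, y):
    """Least-squares fit of y = a*x + b, where y holds reference-image
    samples and x the samples of the image to be normalized."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    b = (sy - a * sx) / n                           # intercept
    return a, b
```

Applying a*x + b to every pixel of the later image then normalizes it to the reference date.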


5-3 Geometric Correction


Geometric correction is to correct the geometric distortions, internal and external, as shown in Figure 5.6.
internal distortions: caused by the sensor, such as lens distortion, misalignment of detectors, variation of the sampling rate etc.
external distortions: caused by parameters other than the sensor, including variation of the attitude and position of the platform, earth curvature, topographic relief etc.
Geometric correction is made according to the following steps.
step 1: Data input of uncorrected image data
step 2: Selection of the geometric correction method and transformation equation depending on the sensor geometry and the estimated types of geometric distortions
step 3: Determination of transformation parameters using ground control points (GCP) (see Figure 5.7)
step 4: Resampling and interpolation (see 2.6, Chapter 2)
step 5: Output of the georeferenced image, known as a geo-coded image
There are three types of geometric correction.
System correction: geometric correction using the geometry of the sensor with sensor calibration data and attitude/position measurements. This alone is not enough to produce a geo-coded image with high accuracy.
Mapping transformation: determination of a transformation function, usually second or third order polynomials, to transform the image to the map coordinate system based only on ground control points (see Figure 5.7).
Combined method: at the first stage, system correction is adopted to remove major systematic distortions, and then a mapping transformation with simpler functions such as affine or pseudo-affine is applied with a smaller number of GCPs.


5-4 Image Enhancement


Image enhancement is the conversion of the original imagery into a form that is spectrally easier to understand, for feature extraction or image interpretation.
The following techniques are typical in image enhancement.
Gray scale conversion (see Figure 5.8.)
Contrast stretch (linear), fold conversion, saw conversion etc. are involved.
Histogram conversion (see Figure 5.9.)
The histogram of the original image is converted to other types of histogram as specified by
users. Histogram equalization with flat histogram or linear cumulative histogram and histogram
normalization with normalized histogram are popular.
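A minimal sketch of histogram equalization via the cumulative histogram; the gray-level range and image values are illustrative.

```python
def equalize(pixels, levels=256):
    """Histogram equalization of a flat list of gray values in 0..levels-1,
    using the linear cumulative histogram as the look-up table."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    cdf, acc = [], 0
    for h in hist:                 # cumulative histogram
        acc += h
        cdf.append(acc)
    # map each input level to round((levels - 1) * cdf / total)
    lut = [round((levels - 1) * c / total) for c in cdf]
    return [lut[p] for p in pixels]
```

A constant low-contrast image is stretched to the top of the range; an already uniform histogram is left essentially unchanged.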
Color composition
Color composition is the assignment of three primary colors; red (R), green (G) and blue (B) to
three selected bands from multispectral bands usually available for satellite remote sensing
image data.
Table 5.1 shows several color compositions and their characteristics.
Color composition is demonstrated in the front pages of this book.
Color Conversion
Though R, G and B are the primary colors and more convenient for computer processing, the three color elements of hue (H), intensity (I) and saturation (S) are more easily understood by the human visual sense. Color conversion between the RGB and HIS color systems is useful to obtain better color quality or variety for interpretation. Multi-sensor fusion, for example of optical multi-spectral bands with another sensor's black-and-white image such as SAR (synthetic aperture radar) or a high resolution panchromatic band, is implemented by replacing I (intensity) with the B/W band after color conversion from RGB (with the optical multi-bands only) to HIS.


5-5 Spatial Filtering


Spatial filtering is commonly used for the following purposes.
to restore imagery by removing noise
to enhance the imagery for better interpretation
to extract features such as edges and lineaments
There are two methods;
Filtering in the domain of image space
Generally local convolution with a window operator of an n x n matrix is used. Table 5.2 shows several typical 3 x 3 window operators and their effects.
Sobel, Laplacian and Highpass filters are useful to detect or extract linear features and edges.
Mean and Median filters are used to remove high frequency noise, for example in images of water surfaces with subtle tone changes.
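Local convolution with a 3 x 3 window operator can be sketched as follows; the Mean and Laplacian kernels shown are standard examples and may differ in detail from those in Table 5.2.

```python
def convolve3x3(img, kernel):
    """Local convolution of an image (list of rows) with a 3 x 3 window
    operator; border pixels are omitted from the result."""
    rows, cols = len(img), len(img[0])
    out = []
    for r in range(1, rows - 1):
        row = []
        for c in range(1, cols - 1):
            s = sum(kernel[i][j] * img[r - 1 + i][c - 1 + j]
                    for i in range(3) for j in range(3))
            row.append(s)
        out.append(row)
    return out

MEAN = [[1 / 9] * 3 for _ in range(3)]                 # smoothing operator
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]         # edge / second-derivative operator
```

On a flat image with a single bright pixel, the Laplacian responds strongly at the spike while the mean filter smears it out.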
Filtering in the domain of spatial frequency
The Fourier transformation is conventionally used to convert from the image space domain to the spatial frequency domain, in which the frequency components are controlled by low pass, high pass and band pass filters. After such frequency domain filtering, the image is reconstructed by the inverse Fourier transformation.
Low pass filters cut off high frequencies to output only the low frequency image, while high pass filters cut off low frequency noise such as stripe noise or shading.
As the Fourier transformation cannot localize the frequency domain filtering, the Wavelet transformation has become more useful, particularly to detect edges, because it enables an optimum window size to be selected locally.
Examples of images applied by various filters are shown in the front pages of this book.


5-6 Feature Extraction


Feature extraction is the operation to extract various image features for identifying or
interpreting meaningful physical objects from images.
Features are classified into three types.
Spectral features: color, tone, ratio, spectral index etc. Principal components and the normalized vegetation index are widely used.
The first and second principal components computed from satellite multi-spectral scanner data such as Landsat TM give "brightness" and "greenness" respectively.
Normalized difference vegetation index (NDVI) is often used for vegetation classification.

Geometric features: edges, lineaments etc.
Spatial filtering for edge detection is commonly used to extract linear features such as roads, geological lineaments, boundaries of agricultural fields etc.
Generally spatial filtering (see 5-5) is applied as follows.
Step 1: smoothing filter such as mean or median is applied to avoid high frequency noises.
Step 2: edge detection filter such as Sobel, Laplacian or Highpass is applied to detect edges.
Step 3: line edges are detected by thinning and sometimes edge closing.
Textural features: pattern, homogeneity, spatial frequency, etc.
Though many computer approaches have been tried, human pattern recognition is much better
than the computer results.
Table 5.3 summarizes major operations of feature extraction.


5-7 Classification Methods


Computer assisted classification of multispectral imagery in remote sensing is useful for thematic mapping of land use, vegetation, soil, geology etc.
Classification methods are classified into two categories.
Supervised classification: classification with the use of ground truth data in the form of sample sets. The maximum likelihood classifier is one of the typical supervised classification methods.
Unsupervised classification: classification using only spectral features, without ground truth data. Clustering is an unsupervised classification in which a group of spectral values is regrouped into a few clusters with spectral similarity.
The following classification methods are widely used depending on the spectral characteristics
and availability of ground truth data as shown in Figure 5.10.
Rationing: classification between vegetation and non-vegetation is possible.
Box classifier: very easy to apply level slicing but accuracy is not very high.
Discriminant function: useful in the case when the number of classes is not many.
Clustering: unsupervised classifier with spectral similarity or distance between clusters.
Minimum distance method: several statistical distance measures such as the Euclidean, Mahalanobis, Bhattacharyya and Jeffries-Matusita distances are used to determine the class.
Maximum likelihood classifier: see 5-8.
Knowledge-based classification: classification, including the decision tree classifier, will be specified as a classification model by users.


5-8 Maximum Likelihood Classifier


Maximum likelihood classifier is one of the most popular methods for thematic mapping with
satellite multispectral imagery.
An unknown pixel X with multispectral values (n bands) will be classified into the class k that has the maximum likelihood max Lk(X).
The likelihood function is given as follows on the assumption that the ground truth data of class k form a Gaussian (normal) distribution (see Figure 5.11).

Lk(X) = (2π)^(-n/2) |Sk|^(-1/2) exp{ -(1/2)(X - μk)^T Sk^(-1) (X - μk) }

where:
μk : mean vector of the ground truth data in class k
Sk : variance-covariance matrix of class k produced from the ground truth data
|Sk| : determinant of Sk
For practical computation, the above likelihood is converted to a discriminant function in the form of a logarithm.
Gk(X) = ln |Sk| + d²k
where d²k = (X - μk)^T Sk^(-1) (X - μk), the squared Mahalanobis distance.
Instead of maximizing Lk(X), the class k that makes Gk(X) minimum is searched for among the classes.
The maximum likelihood classifier is popular because of its robustness and simplicity. But there will be errors in the results if the number of sample data is not sufficient, if the distribution of the population does not follow the Gaussian distribution, and/or if the classes overlap considerably in their distributions, resulting in poor separability.
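A minimal two-band sketch of the maximum likelihood decision via the discriminant Gk(X) = ln|Sk| + d²k; the training samples are invented for illustration.

```python
import math

def stats(samples):
    """Mean vector and 2 x 2 variance-covariance matrix of two-band samples."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    sxx = sum((s[0] - mx) ** 2 for s in samples) / n
    syy = sum((s[1] - my) ** 2 for s in samples) / n
    sxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    return (mx, my), [[sxx, sxy], [sxy, syy]]

def discriminant(x, mean, cov):
    """Gk(X) = ln|Sk| + d2k, with d2k the squared Mahalanobis distance."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],     # inverse of a 2 x 2 matrix
           [-cov[1][0] / det, cov[0][0] / det]]
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    d2 = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.log(det) + d2

def classify(x, classes):
    """Assign x to the class whose discriminant Gk(X) is minimum."""
    return min(classes, key=lambda k: discriminant(x, *classes[k]))
```

Here `classes` maps a class name to its (mean, covariance) pair computed from ground truth samples by `stats`.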


Chapter 6 Visualization of Geospatial Data


6-1 Graphic Variables
Graphic representation of spatial data or maps, thematic data, tables and network with
geographic reference and topology is very important to communicate geospatial data and the
results of spatial analysis to all users.
Graphic representation in GIS is implemented in the form of graphs, maps and images with XY plotters, dot printers, color monitors, color plotters etc., based on knowledge of cartography, computer graphics, color theory, semiology and psychology.
The following graphic variables are used to display quantity, order, difference or similarity.
Location: geographic location and spatial relation of points, lines and areas are displayed in 2D
space or map.
Size: size of symbols and thickness of lines represent quantitative difference. Physical difference does not always coincide with psychological impression.
Density: density, intensity or gray scale is used to represent order and difference. Density or
spacing of dot pattern or screen mesh should be carefully selected for the optimum gray scaling.
Texture: a cyclic or repeated pattern of dots, lines or symbols will represent difference as well as similarity.
Color: hue (H), intensity (I) and saturation (S) are aesthetically selected.
Orientation: directional pattern with hatching will represent difference as well as similarity.
Symbol: form of symbols will represent similarity of class or group.
Figure 6.1 shows the schematic concept of graphic representation.


6-2 Gray Scaling


In the case of black and white graphs or images, gray scaling with different densities of black ink, black dots and black screen mesh should be carefully designed to represent density differences well.
Psychological perception of the gray scale is not proportional to the physical difference.
Table 6.1 and Figure 6.2 show the ratio of black area with respect to intensity based on psychological perception.
In case of dot map or bit map, users should design dot patterns to generate the gray scale by
assigning either 0 (white) or 1 (black) to n x n dot matrix.
One should note the following points about the psychological effect.
a. a homogeneous, regularly spaced dot pattern will give the impression of continuous tone or density rather than texture.
b. discrete patterns with specially recognized forms, symbols and/or directional lines will give a stronger impression of texture rather than continuous tone.
c. horizontal and vertical patterns in squarely spaced dots (0-90 degree) will give a more static and stable vision of the dot structure, while obliquely crossed patterns of 45-135 degree give the human eye a sort of dynamic confusion in its visual field, which results in a more tonal impression. In designing a gray scale, horizontal and vertical patterns should be assigned alternately with obliquely crossed patterns in order to make a visual difference between neighboring classes.
Figure 6.3 shows examples of 4 x 4 dot patterns for density and textural representation.
Figure 6.4 shows a dot map using the 4 x 4 dot patterns shown in Figure 6.3.


6-3 Color Conversion between RGB and HIS


While color output by computer is based on the three primary colors of red (R), green (G) and blue (B) and their mixture depending on bits or gray scale, the human visual sense of color relies on hue (H), intensity (I) and color purity or saturation (S), a relationship that has already been established with the Munsell color system, one of the most popular color appearance systems in the world.
The Munsell color system as shown in Figure 6.5 consists of a hue ring with forty colors, 11 intensity levels from 0 (black) to 10 (white), and saturation ranging differently from low (mixed; 0~2) to high (pure; 10~20) depending on the hue and intensity. Any color in the Munsell color system is identified by a combination of HIS, for example 2.5R 6/4, that is 2.5R (H), 6 (I) and 4 (S).
The Munsell color samples are available in the commercial market or publications.
When users want to change the brightness or intensity of a certain color, it is really difficult to change the RGB combination directly, but it is very easy to change only the I of HIS if RGB is converted to HIS.
Color conversion between RGB and HIS is physically established, though there are several HIS color space systems such as the cube, cone, hexagon and double hexagon color spaces, which are a little different from the Munsell color system.
Figure 6.6 shows the relationship between the RGB and HIS color spaces.
Table 6.2 shows the algorithm of the conversion from RGB to HIS and from HIS to RGB with the range of H (0, 60), I (0, 1) and S (0, 1); R (0, 1), G (0, 1), B (0, 1) for the hexagon color space, where H = 0 or 60 (Red), H = 10 (Yellow), H = 20 (Green), H = 30 (Cyan), H = 40 (Blue), H = 50 (Magenta).
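As a sketch, Python's standard colorsys module implements the closely related hexcone (HSV) model; scaling its 0-1 hue by 60 gives values comparable to the 0-60 hue range used here (red 0, yellow 10, green 20, and so on), which is our reading of the correspondence.

```python
import colorsys

# Pure red in RGB (all components in the range 0..1)
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h * 60, s, v)        # -> 0.0 1.0 1.0 (hue scaled to the 0-60 range)

# Halve only the intensity, keeping hue and saturation, and convert back
r, g, b = colorsys.hsv_to_rgb(h, s, v * 0.5)
print(r, g, b)             # -> 0.5 0.0 0.0 (a darker red)
```

This is exactly the workflow described above: convert to HIS-like coordinates, adjust I alone, and convert back to RGB.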


6-4 Graphic Representation of Attributes


Attributes in GIS are usually given in the form of relational tables. The attributes are classified
into three categories.
class: difference and sometimes similarity should be represented by difference of color, tone and
texture.
quantity: number, length, size, density, area, volume, ratio etc. should have proper graphic
forms with respect to the physical unit.
spatial relation: order, connectivity, flow, paths, network etc. will be represented in a form of
flow chart, arrow vector map, dendrograph etc.
Most statistical graphs are related to class and quantity, which can be displayed in various types of graphs such as bar, column, belt and circle graphs, as shown in Figure 6.7 for the statistics shown in Table 6.3. Additional graphic decoration using shade, texture, pictures, color etc. is recommended for producing impressive output.
Thematic maps are representations of attributes or classes with respect to the georeferential relation.
Several graphic representations are possible depending on the characteristics of attributes as
follows.
color map with 5~20 color codes
dot map with 5 ~10 dot patterns
texture map with 5~10 texture patterns
contour map for continuous values
profile map for continuous values along profiles
statistical map with various types of graph
pictomap for symbolic representation
3D map for easier understanding (bird's eye view, prism map etc.)


6-5 Color Map


Visualization using color is now available with a color monitor or color printer connected to a computer. However, selecting the R, G, B color codes that produce a beautiful and meaningful color image is not easy.
The objective of color map is categorized into the following two types.
Representation of similarity
There are two types;
1. Numerical values that are all positive or all negative, in a certain order. Elevation or ground height on the land is usually all positive; water depth is all negative. Such values are represented continuously by similar color codes, varying particularly in intensity or brightness.
2. Numerical values with plus and minus, or over and under the average. Temperature is an example. Red is used for higher values and blue for lower values; the average will be yellow or green.
Representation of separability
Different color codes are better used to enhance the difference of attributes, for example land use, soil, geology, vegetation etc. With more than twenty colors it would be difficult to identify the color differences on the color image; about ten colors are recommended.
Because in GIS many letters and lines will be added on the color map, the brightness of the colors should be rather high. In most cases, different hues with almost equal intensity and saturation give a better color combination on the color map.
Table 6.4 shows the R, G, B color codes of three color palettes for similarity and two color palettes for separability visualization.
The color samples are shown in the front page of this book.


6-6 Relief Map


A relief map represents height variation as a three-dimensional structure. There are several techniques for producing relief maps based on psychological effects. The basic idea is that the human eye recognizes three-dimensional form, or depth, when shade or shadow is cast by an illumination from the northwest (upper left) at an elevation of 45 degrees, as explained in 3-8 of this book.
The following techniques are used in many GIS applications.
- Contour map with shade: the contour lines on southeast-facing slopes are thickened, which produces a relief effect.
- Hill shading with hatched lines: a traditional cartographic representation, produced manually by professional cartographers.
- Prism map: a kind of bird's eye view in which each polygon is extruded to a constant height.
- Shaded image: a hill shading effect given by the cosine of the angle between the surface normal vector and the incident light.
- Stereoscopic view: a 3D scene viewed stereoscopically with a pair of stereo images whose horizontal parallaxes depend on the height or depth distance.
Figure 6.8 shows some examples of relief maps.
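The shaded-image technique can be sketched as follows. The light direction (northwest azimuth, 45 degree elevation) follows 3-8; the central-difference normal estimation and the grid layout (row 0 at the northern edge) are implementation assumptions:

```python
import math

def hillshade(dem, cell, azimuth_deg=315.0, elevation_deg=45.0):
    """Shaded image: cosine of the angle between the surface normal
    and the incident light (northwest azimuth, 45 degree elevation).
    dem is a list of rows of heights; cell is the grid spacing."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Unit vector pointing toward the light source.
    lx = math.sin(az) * math.cos(el)
    ly = math.cos(az) * math.cos(el)
    lz = math.sin(el)
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Central differences; row 0 is assumed to be north.
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
            dzdy = (dem[i - 1][j] - dem[i + 1][j]) / (2 * cell)
            # Surface normal is (-dzdx, -dzdy, 1) before normalization.
            n = math.sqrt(dzdx ** 2 + dzdy ** 2 + 1.0)
            out[i][j] = max(0.0, (-dzdx * lx - dzdy * ly + lz) / n)
    return out
```

With this light direction, slopes facing northwest come out brighter than southeast-facing ones, which is exactly the psychological cue described above.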


6-7 Bird's Eye View-Parallel Projection


A bird's eye view is an oblique projection in which the landscape is looked down on obliquely from the sky. There are two projections for producing a bird's eye view.
- Parallel projection: the projected landscape has constant scale in an oblique but parallel coordinate system. Mathematically the projection is simpler, but it departs a little from visual reality.
- Central projection: a perspective projection is applied to the 3D coordinates of the DEM, which reproduces what the human eye sees: closer objects have a larger scale and farther objects a smaller scale.
Figure 6.9 shows the concept of these two projections. A bird's eye view in parallel projection is produced on either a vertical plane or a plane perpendicular to the looking direction. A looking direction with a depression angle of 30~40 degrees is widely used.
The mathematics of the parallel projection with depression angle θ and azimuth φ measured from the north (Y axis), as shown in Figure 6.10, is as follows:
x = s (X cos φ - Y sin φ)
y = s (X sin φ tan θ + Y cos φ tan θ + Z)
in the case of the vertical plane
y = s (X sin φ sin θ + Y cos φ sin θ + Z cos θ)
in the case of the perpendicular plane
where (x, y): coordinates in the projected plane
s: scale
φ: azimuth
θ: depression angle
Hidden points can be detected by checking whether yi is larger than the current maximum ymax among the preceding (i-1) points along a profile, for i = 1 to i = n, as shown in Figure 6.11.
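The projection formulas and the profile-wise hidden-point test can be sketched as follows; the default angles and the profile values in the usage note are illustrative:

```python
import math

def parallel_project(X, Y, Z, s=1.0, azimuth_deg=30.0, depression_deg=35.0,
                     perpendicular=False):
    """Parallel (oblique) projection of a terrain point onto a vertical
    plane, or onto the plane perpendicular to the looking direction."""
    phi = math.radians(azimuth_deg)       # azimuth from north (Y axis)
    theta = math.radians(depression_deg)  # depression angle
    x = s * (X * math.cos(phi) - Y * math.sin(phi))
    if perpendicular:
        y = s * (X * math.sin(phi) * math.sin(theta)
                 + Y * math.cos(phi) * math.sin(theta)
                 + Z * math.cos(theta))
    else:  # vertical projection plane
        y = s * (X * math.sin(phi) * math.tan(theta)
                 + Y * math.cos(phi) * math.tan(theta)
                 + Z)
    return x, y

def visible_points(profile_y):
    """Hidden-point test along one profile ordered from near to far:
    point i is visible only if its projected y exceeds the running
    maximum ymax of the (i-1) points already drawn."""
    visible, ymax = [], -math.inf
    for i, y in enumerate(profile_y):
        if y > ymax:
            visible.append(i)
            ymax = y
    return visible
```

For example, for projected profile heights [1.0, 3.0, 2.0, 4.0] the third point lies below the running maximum and is hidden, so only points 0, 1 and 3 are drawn.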


6-8 Bird's Eye View-Central Projection


A bird's eye view in the central projection is called a perspective. The mathematical principle is the same as that of camera and film based photogrammetry.
The mathematics of the perspective is given as follows (see Figure 6.12):
x = c U / W
y = c V / W
(U, V, W) = Rω Rκ (X - X0, Y - Y0, Z - Z0)
where (x, y): perspective coordinates
c: scale factor
(X0, Y0, Z0): center of projection or eye point
(X, Y, Z): terrain point to be converted
ω: rotation angle around the X axis
κ: rotation angle around the Z axis
ω = 0: vertical look (depression angle = 90 degrees)
ω = 90 degrees: horizontal look (depression angle = 0 degrees)
First the rotation κ around the Z axis is applied, then the rotation ω around the X axis.
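One consistent realization of this transform can be sketched as follows; sign conventions for the rotations and the depth vary between textbooks, and the eye point and angles used in the usage note are hypothetical:

```python
import math

def perspective(X, Y, Z, X0, Y0, Z0, omega_deg, kappa_deg, c=1.0):
    """Central projection: translate to the eye point, rotate by kappa
    around the Z axis, then by omega around the X axis, and divide by
    the depth along the looking direction."""
    k = math.radians(kappa_deg)
    w = math.radians(omega_deg)
    dx, dy, dz = X - X0, Y - Y0, Z - Z0
    # Rotation kappa around the Z axis (applied first).
    u1 = dx * math.cos(k) + dy * math.sin(k)
    v1 = -dx * math.sin(k) + dy * math.cos(k)
    w1 = dz
    # Rotation omega around the X axis (applied second).
    u2 = u1
    v2 = v1 * math.cos(w) + w1 * math.sin(w)
    w2 = -v1 * math.sin(w) + w1 * math.cos(w)
    # Depth along the looking direction; with omega = 0 (vertical look)
    # this is Z0 - Z, the height of the eye above the terrain point.
    depth = -w2
    return c * u2 / depth, c * v2 / depth
```

For a vertical look (omega = 0) from an eye point 100 units above the terrain, a point directly below the eye projects to the image origin, and offsets shrink in proportion to c divided by the flying height, as in a vertical aerial photograph.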
A bird's eye view as a raster image is produced by combining a DEM with the corresponding thematic data or a satellite color image.
Hidden points are usually processed by one of the following two methods.
- From far to close projection: all points along a profile are projected onto the image plane (perspective coordinate system) in order from the farthest to the nearest, so the nearest visible point is the one finally output (see Figure 6.13 (a)).
- From close to far check: the Z value along a ray is checked at small intervals against the terrain height; the first intersection is output on the image plane (see Figure 6.13 (b)).
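The from-far-to-close method can be sketched as a painter-style loop, here reduced to a single image row with hypothetical projected points:

```python
def paint_far_to_near(points, width):
    """From-far-to-close projection: points are (column, value) pairs
    ordered from the farthest to the nearest. Nearer points simply
    overwrite farther ones in the output row, so no explicit
    visibility test is needed."""
    row = [None] * width
    for col, value in points:  # farthest point first
        if 0 <= col < width:
            row[col] = value
    return row
```

Overwriting in distance order trades extra projections of hidden points for a very simple loop; the from-close-to-far ray check avoids that extra work but must step along each ray at small intervals.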
Examples of bird's eye view are shown in the front page of this book.


The end
